The New Drone Order – Part III_Intro: At the Advent of Winged Drones, Research Progresses Forward

Biology-inspired Engineering and Morphing Technology

Drones with wings? But why?! While some Dronesters are dwelling on the metallic, the plastic, and the 3D-printed, other roboticists and researchers are harkening back to the whims of the natural world. There are birds that can maneuver like no human-built aircraft can. Researchers have found that, relative to its body size, the courtship dive of the Anna's hummingbird is faster than a jet fighter at full throttle or the space shuttle re-entering the atmosphere. Anyone who has tried knows how frustratingly hard it is to catch a fly, much less swallow one. I once knew an old lady who swallowed a fly. It's a good thing it wasn't a drone fly, or she may have sputtered and wheezed. Perhaps she could’ve sued Lockheed Martin if she survived?

The third edition of the New Drone Order series will introduce readers to projects like those underway at the Lentink Lab at Stanford University, along with other related research.

…………………………………………………………………..

To read the exclusive analysis click here (BFP Community Members Only)

Subscribe & Join BFP Activist Community here

*Read Part 2 here

The New Drone Order – Part II_Intro: Dronetopia: Lessons and Parallels from the Insect World

Drone Warfare, Propaganda, Proliferation, Mutualism, Symbiosis & Biomimicry

What can insect societies teach us about our own? Sure, we bug out from time to time, but we’re intelligent, and they’re not. Right? Well, it turns out that humans share common traits with ants, bees, and other insects. We even go to war in similar ways. This edition of the New Drone Order series will explore how drone technology fits into our world system and question where it’s taking us, drawing on lessons from the insect world as well as an interview with an expert on insect societies and autonomous robotics. Propaganda, proliferation, global sales, the military industrial complex, and the concept of biomimicry will all be examined. Go on, read it, give it a chance! If you think you’re so different from insects, you’ve got ants in your pants…

…………………………………………………………………..

To read the exclusive analysis click here (BFP Community Members Only)

Subscribe & Join BFP Activist Community here

The New Drone Order is Only Beginning: Intro- All is Buzzing on the Geopolitical Front

Drone technology is moving forward, whether we like it or not. MQ-9 Reapers manufactured by General Atomics are sold to the U.S. Air Force, fitted with Hellfire missiles provided by Lockheed Martin. The military industrial complex is ticking, unmanned aerial vehicles are soaring, and all is not quiet on the Western front. Few places are quiet on the Eastern half of the world. Drone strikes pepper Pakistan, Yemen, Somalia, Libya, and Afghanistan, as the world has become an all-access battlefield where remote-controlled homicide can be carried out with minimal effort, for the first time in human history.

Things are changing. Warfare has been altered forever. Machines are learning...how to learn. Humans are doing less of the hunting and killing, delegating these duties to tougher, colder customers. The purpose of this series is to examine the players, characters and ideologies that are deeply influencing the way our future is shaping up, in both negative and positive ways. While one drone strike kills an innocent child in a foreign village, another drone is used for ocean exploration and hurricane detection. We will enter into the eye of the storm of controversial issues and attempt to chart a course through territory that pits the right to due process against the rich vein of untapped A.I. (artificial intelligence) technology, which kicks up dirt on greedy politicians, lobbyists and arms dealers who would rather push a button than fight a war themselves. If you think the United States is winning... I'll only tell you this once. The new drone order is only just beginning, and all is buzzing on the geopolitical front.

Editor’s Note- BFP welcomes Erik Moshe to its team. Future articles in Erik’s new series will be available only to BFP activist members.

The New Drone Order: Part I- A.I. Entities, Our Future Friends or Enemies?

Steve is a scientist, entrepreneur, and a jack of many trades. He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He can be seen online contributing to a wide variety of podcasts, discussions, conferences and foundations. One of his goals is to ensure the smooth transition of autonomous robots into our lives without mucking up our own livelihoods in the process. His company Self-Aware Systems started out to help search engines search better, but gradually, he and his team built a system for reading lips and a system for controlling robots. If he ever owns a cyborg in the near future and he's able to program it himself, it will not be cold-hearted. I'm confident it would be a warm, hospitable homemaker with culinary and family therapeutic skills to boot.

"The particular angle that I started working on is systems that write their own programs. Human programmers today are a disaster. Microsoft Windows for instance crashes all the time. Computers should be much better at that task, and so we develop systems that I call self-improving artificial intelligence, so that's AI systems that understand their own operation, watch themselves work, and envision what changes to themselves might be improvements and then change themselves," Steve says.
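The loop Steve describes - watch yourself work, envision a change to yourself, and keep the change only if it measures as an improvement - can be sketched in miniature. The class below is a deliberately toy illustration of that idea, not Omohundro's actual system: its only tunable piece of "self" is a single sort cutoff, and every name and metric in it is invented for this example.

```python
import random

class SelfImprovingSorter:
    """Toy sketch of a system that watches itself work and keeps
    only those self-modifications that measurably improve it."""

    def __init__(self):
        # The one tunable piece of "self": input sizes at or below this
        # cutoff are handled by insertion sort instead of merge sort.
        self.cutoff = 1

    def _comparisons(self, data):
        """Instrumented sort: returns the number of comparisons used."""
        count = 0

        def sort(xs):
            nonlocal count
            if len(xs) <= self.cutoff:  # insertion sort for small inputs
                for i in range(1, len(xs)):
                    j = i
                    while j > 0:
                        count += 1
                        if xs[j - 1] <= xs[j]:
                            break
                        xs[j - 1], xs[j] = xs[j], xs[j - 1]
                        j -= 1
                return xs
            mid = len(xs) // 2
            left, right = sort(xs[:mid]), sort(xs[mid:])
            merged = []
            while left and right:
                count += 1
                merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
            return merged + left + right

        sort(list(data))
        return count

    def improve(self, workload, rounds=10):
        """Watch itself work: benchmark, envision a change, keep it if better."""
        best = sum(self._comparisons(w) for w in workload)
        for _ in range(rounds):
            old = self.cutoff
            self.cutoff = max(1, old + random.choice([-2, -1, 1, 2]))
            score = sum(self._comparisons(w) for w in workload)
            if score < best:
                best = score      # keep the self-modification
            else:
                self.cutoff = old  # revert it
        return best
```

Run against a workload of lists, `improve()` benchmarks the current configuration, tries small self-modifications, and keeps only the ones that reduce the comparison count - a one-parameter caricature of software that "understands its own operation" and edits itself.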

In addition to his scientific work, Steve is passionate about human growth and transformation. He holds the vision that new technologies can help humanity create a more compassionate, peaceful, and life-serving world. He is one of the men and women behind the scenes doing their very best to ensure that killer robots never become operational - or at least not before we're ready to handle them as a species. His "safe AI scaffolding strategy" is one of his main proposed solutions, and a positive way forward.

You can call him an expert in the field of FAI, or friendly artificial intelligence, which is "a hypothetical artificial general intelligence that would have a positive rather than negative effect on humanity.” The term was coined by Eliezer Yudkowsky to discuss superintelligent artificial agents that reliably implement human values.

Getting an entity with artificial intelligence to do what you want is a task that researchers at the Machine Intelligence Research Institute (MIRI), in Berkeley, California are taking on. The program’s aim is to make advanced intelligent machines behave as humans intend even in the absence of immediate supervision. In other words, “take initiative, but be like us.”

Yudkowsky realized that the more important challenge was figuring out how to do that safely by getting AI to incorporate our values in their decision making. "It caused me to realize, with some dismay, that it was actually going to be technically very hard," Yudkowsky says. “Even if an AI tries to exterminate humanity,” it is “outright silly” to believe that it will “make self-justifying speeches about how humans had their time, but now, like the dinosaur, have become obsolete. Only evil Hollywood AIs do that.”

Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible to ever guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power.

Steve differs in that he is wholly optimistic about the subject. He thinks that intelligent robotics will eliminate much human drudgery and dramatically improve manufacturing and wealth creation. Intelligent biological and medical systems will improve human health and longevity, and educational systems will enhance our ability to learn and think (pop quizzes won’t stand a chance). Intelligent financial models will improve financial stability, and legal models will improve the design and enforcement of laws for the greater good. He feels that it's a great time to be alive and involved with technology. With the safety measures he has developed, Steve hopes to merge machine intelligence with positive psychology - a field that's only a few decades old but has already given us many insights into human happiness.

Cautious attitudes in an evolving drone age

In an article on Vice’s Motherboard entitled "This Drone Has Artificial Intelligence Modelled on Honey Bee Brains", we can see firsthand how bizarre science can get, and how fast we are progressing with machine intelligence.

Launched in 2012, the Green Brain Project aims to create the first accurate computer model of a honey bee brain, and transplant that onto a UAV.

Researchers from the Green Brain Project—which recalls IBM’s Blue Brain Project to build a virtual human brain—hope that a UAV equipped with elements of a honey bee’s super-sight and smell will have applications in everything from disaster zone search and rescue missions to agriculture.

Experts, from physicist Stephen Hawking to software architect Bill Joy, warn that if artificial intelligence technology continues to be developed, it may spiral out of human control. Tesla founder Elon Musk calls artificial-intelligence development simply “summoning the demon.”

British inventor Clive Sinclair told the BBC: "Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive. It's just an inevitability."

"I am in the camp that is concerned about super intelligence," Bill Gates wrote. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Are we jumping the gun with all of this talk of sentient robots triggering an apocalypse? Rodney Brooks, an Australian roboticist and co-founder of iRobot, thinks so. He views artificial intelligence as a tool, not a threat. In a blog post, he said:

Worrying about AI that will be intentionally evil to us is pure fear mongering. And an immense waste of time.

In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch. It is going to take a lot of deep thought and hard work from thousands of scientists and engineers. And, most likely, centuries.

In an interview with The Futurist, Steve talked about the best and worst case scenarios for a fully powerful AI. He said:

I think the worst case would be an AI that takes off on its own, its own momentum, on some very narrow task and works to basically convert the world economy and whatever matter it controls to focus on that very narrow task, and that it, in the process, squeezes out much of what we care most about as humans. Love, compassion, art, peace, the grand visions of humanity could be lost in that bad scenario. In the best scenario, many of the problems that we have today, like hunger, diseases, the fact that people have to work at jobs that aren't necessarily fulfilling, all of those could be taken care of by machine, ushering in a new age in which people could do what people do best, and the best of human values could flourish and be embodied in this technology.

Autonomous technology for the greater human good

Steve’s primary concern has been to incorporate human values into new technologies to ensure that they have a beneficial effect. In his paper, “Autonomous Technology and the Greater Human Good”, the most downloaded academic article ever in the Journal of Experimental and Theoretical Artificial Intelligence, Steve summarized the possible consequences of a drone culture that’s moving too swiftly for its own good:

Military and economic pressures for rapid decision-making are driving the development of a wide variety of autonomous systems. The military wants systems which are more powerful than an adversary's and wants to deploy them before the adversary does. This can lead to ‘arms races’ in which systems are developed on a more rapid time schedule than might otherwise be desired.

A 2011 US Defense Department report with a roadmap for unmanned ground systems states that ‘There is an ongoing push to increase unmanned ground vehicle autonomy, with a current goal of supervised autonomy, but with an ultimate goal of full autonomy’.

Military drones have grown dramatically in importance over the past few years both for surveillance and offensive attacks. From 2004 to 2012, US drone strikes in Pakistan may have caused 3176 deaths. US law currently requires that a human be in the decision loop when a drone fires on a person, but the laws of other countries do not. There is a growing realization that drone technology is inexpensive and widely available, so we should expect escalating arms races of offensive and defensive drones. This will put pressure on designers to make the drones more autonomous so they can make decisions more rapidly.

Thoughts on Transhumanism

In an interview featured on Bullet Proof Exec, Steve briefly expressed his views on transhumanism, which is a cultural and intellectual movement that believes we can, and should, improve the human condition through the use of advanced technologies:

My worry is that we change too rapidly. I guess the question is, how do we determine what changes are like, “Yeah, this is a great improvement that’s making us better.” What are changes like, let’s say, you have the capacity or the ability to turn off conscience and to be a good CEO, well, you turn off your conscience so you could make those hard decisions. That could send humanity down into a terrible direction. How do we make those choices?

Interview with Dr. Steve Omohundro

I had the privilege of speaking with Steve, and here's what he had to say.

BFP: Thanks for taking the time to speak with us today. You have an interesting last name. If I may ask, where does it come from?

Steve: We don't know! My great grandfather wrote a huge genealogy in which he traced the name back to 1670 in Westmoreland County, Virginia. The first Omohundro came over on a ship and had dealings with Englishmen but we don't know where he came from or the origins of the name.

BFP: How have drones changed our world?

Steve: I think it's still very early days. The military uses of drones, both for surveillance and for attack, have already had a big effect. Here's an article stating that 23 countries have developed or are developing armed drones and that within 10 years they will be available to every country.

On the civilian side, agricultural applications like inspecting crops have the greatest economic value currently. They are also being used for innovative shots in movies and commercials and for surveillance. They can deliver life-saving medicine more rapidly than an ambulance can. They can rapidly bring a life preserver to a drowning ocean swimmer. They are being used to monitor endangered species and to watch out for forest fires. I'm skeptical that they will be economical to use for delivery in situations which aren't time-critical, however.

BFP: Do you think artificial intelligence is possible in our lifetime?

Steve: I define an "artificial intelligence" as a computer program that can take actions to achieve desired goals. By that definition, lots of artificially intelligent systems already exist and are rapidly becoming integrated into society. Siri's speech recognition, self-driving cars, and high-frequency trading all have a level of intelligence that existed only in research systems a decade ago. These systems still don't have a human-level general understanding of the world, however. Researchers differ in when that might occur. A few believe it will be impossible but most predict it will happen sometime in the next 5 to 100 years. Beyond the ability to solve problems are human characteristics like consciousness, qualia, creativity, aesthetic sense, etc. We don't yet know exactly what these are and some people believe they cannot be automated. I think we will learn a lot about these qualities and about ourselves as we begin to interact with more intelligent computer systems.
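Steve's working definition - a computer program that can take actions to achieve desired goals - is broad enough to cover even a trivial agent. The sketch below (a hypothetical one-dimensional world, invented here purely for illustration) shows the bare minimum that definition requires: sense the current state, pick an action that moves toward the goal, and repeat until the goal is achieved.

```python
def act(state, goal):
    """One step of a minimal goal-directed agent on a number line:
    choose the action that reduces the distance to the goal."""
    if state < goal:
        return +1
    if state > goal:
        return -1
    return 0  # goal achieved; no action needed

def run_agent(state, goal, max_steps=100):
    """Take actions until the desired goal is achieved (or steps run out)."""
    trajectory = [state]
    for _ in range(max_steps):
        a = act(state, goal)
        if a == 0:
            break
        state += a
        trajectory.append(state)
    return trajectory

# run_agent(0, 3) → [0, 1, 2, 3]
```

By this definition the agent above is "intelligent" in a thin, formal sense, which is exactly Steve's point: the interesting questions begin once systems of this shape acquire a general understanding of the world, not before.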

BFP: According to a report published in March by the Association for Unmanned Vehicle Systems International, drones could create as many as 70,000 jobs and have an overall economic impact of more than $13.6 billion within three years. That means, the report says, that each day U.S. commercial drones are grounded is a $28-million lost opportunity. If these economic projections prove to be accurate, do you see a prosperous industry on the horizon for them as well?

Steve: I believe they could have that impact but $13.6 billion is a small percentage of the GDP. The societal issues they bring up around surveillance, accidents, terrorism, etc. are much larger than that, though. For there to be a prosperous industry, the social issues need to be carefully thought through and solved.

BFP: Do you think that autonomous robot usage will spin out of control without implementation of the Safe-AI Scaffolding Strategy that you and your colleagues formulated?

Steve: Autonomous robots have the potential to be very powerful. They may be used for many beneficial applications but also could create great harm. I'm glad that many people are beginning to think carefully about their impact. I believe we should create engineering guidelines to ensure that they are safe and have a positive impact. The "Safe-AI Scaffolding Strategy" is an approach we have put forth for this but other groups have proposed alternative approaches as well. I'm hopeful that we will develop a clear understanding of how to develop these systems safely by the time that we need it.

BFP: Drones have landed on the White House lawn and in front of Angela Merkel. Where they might land next is unpredictable, but this uncertainty is a reminder that governments around the world are still trying to find their balance when it comes to an emerging technology of this scale and wide application. What positive ways do you posit that drones can affect the world, or affect the work that you are involved in?

Steve: Flying drones are just one of many new technologies that have both positive and harmful uses. Others include drone boats, self-driving vehicles, underwater drones, 3-D printing, biotechnology, nanotechnology, etc. Human society needs to develop a framework for managing these powerful technologies safely. Nuclear technology is also dual-use and has been used to both provide power and to create weapons. Fortunately, so far there hasn't been an unintended detonation of a nuclear bomb. But the recent book "Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety" tells a terrifying cautionary tale. Among many other accidents, in 1961 the U.S. Air Force inadvertently dropped two hydrogen bombs on North Carolina and 3 out of 4 of the safety switches failed.

If we can develop design-rules that ensure safety, drones and other autonomous technologies have a huge potential to improve human lives. Drones could provide a rapid response to disaster situations. Self-driving vehicles could eliminate human drudgery and prevent many accidents. Construction robots could increase productivity and dramatically lower the cost of homes and manufactured goods.

BFP: Have you read any science fiction books that expanded your perspective on A.I.? In general, what would you say got you into it?

Steve: I haven't read a lot of science fiction. Marshall Brain's "Manna: Two Views of Humanity's Future" is an insightful exploration of some of the possible impacts of these technologies. I got interested in robots as a child because my Mom thought it would be great to have a robot to do the dishes for her, and I thought that might be something I could build! I got interested in AI as a part of investigating general questions about the nature of thought and intelligence.

BFP: You recently showed me a video of a supercharged drone with advanced piloting tech that could reach speeds of 90 miles per hour, and that costs about $600. Could you see yourself going out and buying a quadcopter like that, maybe having a swarm of drones spell out "Drones for the Greater Good" in the sky? Or would you rather keep your distance from the "Tasmanian devil" drone?

Steve: I haven't been drawn to experimenting with drones myself, but I have friends who have been using them to create aerial light shows and other artistic displays. The supercharged 90 mph drone is both fascinating and terrifying. Watching the video, you clearly get the sense that controlling the use of those will be a lot more challenging than many people currently realize.

BFP: I've also seen a quadrotor with a machine gun.

Steve: Wow, that one is also quite scary. What's especially disturbing is that it doesn't appear to require huge amounts of engineering expertise to build this kind of system and yet it could obviously cause great harm. These kinds of systems will likely pose a challenge to our current social mechanisms for regulating technology.

# # # #

*Watch Steve's TEDx video from May 2012: Smart Technology for the Greater Good

Erik Moshe is BFP investigative journalist and analyst. He is an independent writer from Hollywood, Florida, and has worked as an editor of alternative news blog Media Monarchy and as an intern journalist with the History News Network. He served in the U.S. Air Force from 2009-2013. You can visit his site here.

 

BFP Update: Our New Partners, Video-Podcast Shows & Exclusive Report-Analysis Series

BFP Welcomes SpyCulture’s Tom Secker, PorkinsPolicy’s Pearse Redmond & Journalist-Analyst Erik Moshe

We are entering an exciting new phase here at Boiling Frogs Post. I launched this website with a firm commitment to becoming a network of truly independent authors, analysts and producers to serve the community of critical-thinking irate minorities - those whom we know we can depend on for moving toward needed changes; those who truly count: ‘You.’

We have been doing exactly that since our inception - slowly, but with sure steps. Our website has evolved from a small one-woman forum to a multi-faceted network of stellar, independent authors, analysts and producers showcasing one-of-a-kind reports, podcast and video programs, editorial cartoons and analyses.

I am truly honored to announce and introduce our new partners and soon-to-begin reports and multimedia programs - which will be available exclusively to BFP member activists:

SpyCulture’s Tom Secker

I am sure many of you are familiar with author, analyst and filmmaker Tom Secker and his incredible research on macro topics ranging from Gladio to significant global false flag operations. Starting the first week in May, Tom will present his exclusive podcast series on highly significant topics and controversial issues ranging from conspiracy theories to the Deep State - all paired with much-needed history, examples and context. The title of Tom’s podcast series: Disinfowars. Here is a brief bio for Tom, which does not begin to encompass all his accomplishments.

Tom Secker- BFP Partner Producer & Host, Disinfowars
Tom Secker is a researcher, filmmaker and the author of Secrets, Spies and 7/7.  His work focuses on the intelligence services, particularly their roles in international terrorism and popular culture.  He is based in North Yorkshire, England.  Visit Tom’s website here.

Please welcome Tom, and stay tuned for his first podcast episode coming up the first week in May!

PorkinsPolicy’s Pearse Redmond

Pearse Redmond is another name and producer many of you are familiar with. Pearse has been producing some incredible podcast shows through his site, and we have published his latest discussion series with Tom Secker and me on Operation Gladio B and related topics here at Boiling Frogs Post. He is a rare talent who combines informed critical thinking with articulate and natural delivery (and a beautiful voice as well!).

Pearse will be joining Guillermo Jimenez and me as a partner producer and host for our coming BFP Roundtable Video Series. Just like Guillermo and me, Pearse favors spontaneous and interactive delivery. Meaning: just as with our new podcast series Probable Cause, we are going to invite your feedback, comments and questions posted under each video episode, participate in the discussion, and structure the following episode based on those comments and questions.

We have had several episodes experimenting with the Roundtable Video series, and now I think we are ready to make it a regular and permanent feature at BFP. The format and length will be pretty much the same. In addition to our trio we will be featuring rotating guests- including our BFP partners, contributing authors and others. However, after our initial experimentation I have decided to get out of the YouTube forums and make these episodes available to BFP activist members through BFP’s own streaming video server. Of course we will post very brief preview clips on our YouTube channel and homepage in order to provide a general overview of each episode, but the full show will be available only to our activist members and supporters @ Boiling Frogs Post.

Here is a brief bio for Pearse Redmond:

Pearse Redmond- BFP Partner Producer & Host, BFP Roundtable Video Series
Pearse Redmond is an alternative researcher and podcaster based in New York City.  He covers a wide variety of topics including geopolitics, terrorism, cults, and deep-state events.  He is the host of Porkins Policy Radio, and co-host of Porkins Great Game with Christoph Germann, as well as The CIA & Hollywood with Tom Secker.  Visit Pearse’s website here.

Let’s welcome Pearse to our BFP activist community. I am looking forward to our first Round Table episode!

Investigative Journalist & Analyst Erik Moshe

A few weeks ago I began communicating with writer, researcher and journalist Erik Moshe. Erik had been following our coverage of issues at BFP and realized the synergy between the areas/topics we cover and his areas of interest. He served in the U.S. Air Force from 2009-2013, and his areas of interest and research include world history, international conflicts and cover-ups. Erik is currently working on his first multi-part series of research-analysis on the issue of drones: the history of the drone, where the idea came from, and what it foreshadows for our future. Additionally, he will be analyzing the issue of drones from a psychological perspective: perceptions related to the use of drones in today's society, why drones are revolting to us, why we oppose them, and much more.

I am looking forward to part I of Erik’s series, which will be published at BFP very soon and will be available exclusively to our activist members.

Here is Erik’s bio:

BFP Partner Journalist & Analyst
Erik Moshe is BFP investigative journalist and analyst. He is an independent writer from Hollywood, Florida, and has worked as an editor of alternative news blog Media Monarchy and as an intern journalist with the History News Network. He served in the U.S. Air Force from 2009-2013. You can visit his site here.

Welcome to BFP Erik!

For those of you who have not read my recent comments under our latest episode of Probable Cause: I will be continuing our podcast series; however, for the next four months, due to my quarterly contract work and some travel (June-July), on average we will have a new episode every other week. I love our podcast series, especially the discussions that follow each episode, and I would love nothing more than to be able to do this full-time, multiple times a week, including bonus episodes based on current events and headlines. Unfortunately, we were not able to meet our subscription drive-campaign goal for the last quarter, so in order to make ends meet I had to sign up for another quarterly contract involving analyses and translations. Hopefully, one day soon, I will be able to quit my part-time job and focus solely on BFP.

That’s one of the items on my wish list. As you all know, another major item for BFP is to be able to enroll full-time investigative journalists to do major exposé work. I still have my hopes intact - together we shall get there; hopefully sooner rather than later. Until then 😉