Straight out of the science fiction film The Terminator, a 72-page Pentagon document lays out a plan for the future of combat and war, one that will utilize artificial intelligence (AI), robotics, information technology and biotechnology.
by Carolanne Wright, January 19th, 2018
Proponents of advanced technology — such as robot soldiers and artificial intelligence — argue both can be made ethically superior to humans, drastically reducing, if not eliminating, atrocities such as rape, pillaging and the destruction of towns in fits of rage. Many in the scientific community are casting a wary eye toward this technology, however, warning that it could easily surpass human control, leading to unpredictable — and even catastrophic — consequences.
Defense Innovation Initiative — The Future of War
The Department of Defense (DoD) has announced the United States will be entering a brave new world of automated combat in a little over a decade, where wars will be fought entirely with advanced weaponized robotic systems. We’ve already had a glimpse of what’s to come with the use of drones. But, according to the DoD, we haven’t seen anything yet.
In a quest to establish “military-technological superiority”, the Pentagon ultimately has its sights set on monopolizing “transformational advances” in robotics, artificial intelligence and information technology — otherwise known as the Defense Innovation Initiative, a plan to identify and develop pioneering technological breakthroughs for use in the military.
Disturbingly, a new study from the National Defense University — a higher education institution funded by the Pentagon — has urged the DoD to take drastic action in order to avoid the downfall of US military might, even though the report also warns that accelerating technological advances will “flatten the world economically, socially, politically, and militarily” and “could also increase wealth inequality and social stress.”
The NDU report explores several areas where technological advances could benefit the military — one of which is mass collection of data from social media platforms that is then analyzed by artificial intelligence instead of humans. Another is “embedded systems [in] automobiles, factories, infrastructure, appliances and homes, pets, and potentially, inside human beings, [where] the line between conventional robotics and intelligent everyday devices will become increasingly blurred.” These systems will help the government to monitor individuals and the population and “will provide detection and predictive analytics.”
Armies of “Kill Bots that can autonomously wage war” are also a real possibility as unmanned robotic systems are becoming increasingly intelligent and less expensive to manufacture. These robots could be placed in civilian life as well, to execute “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”
To counteract public outcry about autonomous robots having the capacity to kill on their own, the authors recommend that the Pentagon be “highly proactive” in ensuring “it is not perceived as creating weapons systems without a ‘human in the loop.’”
Strong AI, which simulates human cognition — including self-awareness, sentience and consciousness — is just on the horizon, some say as early as the 2020s.
But not everyone is over the moon about these advances, especially where AI is concerned. Leaders in the field of technology, journalists and inventors are all sounding the alarm about the devastating consequences of AI technology that’s allowed to flourish unchecked.
AI Technology — What Could Possibly Go Wrong?
As the DoD charges ahead with its plan to dominate the military and surveillance sphere with unbridled advances in technology, many are questioning the serious ramifications of such a path.
Journalist R. Michael Warren writes:
“I’m with Bill Gates, Stephen Hawking and Elon Musk. Artificial intelligence (A.I.) promises great benefits. But it also has a dark side. And those rushing to create robots smarter than humans seem oblivious to the consequences.
Ray Kurzweil, director of engineering at Google, predicts that by 2029 computers will be able to outsmart even the most intelligent humans. They will understand multiple languages and learn from experience.
Once they can do that, we face two serious issues.
First, how do we teach these creatures to tell right from wrong — in our own self defense?
Second, robots will self-improve faster than we slow evolving humans. That means outstripping us intellectually with unpredictable outcomes.” [source]
During a conference of AI experts in 1999, attendees were polled on when they thought a computer would pass the Turing test (in which a machine’s responses become indistinguishable from a human’s). The consensus was roughly 100 years, and many believed it could never be achieved. Today, Kurzweil thinks we are already on the brink of intellectually superior computers.
British theoretical physicist and Cambridge University professor Stephen Hawking doesn’t mince words about the dangers of artificial intelligence:
“I think the development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC. “Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate.” He adds, “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
At the MIT Aeronautics and Astronautics department’s Centennial Symposium in October 2014, Tesla founder Elon Musk issued a stark warning about unregulated development of AI:
“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”
Furthermore, in a 2014 tweet, Musk warned: “We need to be super careful with AI. Potentially more dangerous than nukes.” The same year, he said on CNBC that he believes a Terminator-like scenario could actually come to pass.
Likewise, British inventor Clive Sinclair believes artificial intelligence will be the downfall of mankind:
“Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he told the BBC. “It’s just an inevitability.”
Microsoft billionaire Bill Gates agrees.
“I am in the camp that is concerned about super intelligence,” he says. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
That said, Gates’ Microsoft Research has devoted “over a quarter of all attention and resources” to artificial intelligence development, whereas Musk has invested in AI companies in order to “keep an eye on where the technology is headed”.
About The Author
Carolanne enthusiastically believes if we want to see change in the world, we need to be the change. As a nutritionist, natural foods chef and wellness coach, Carolanne has encouraged others to embrace a healthy lifestyle of organic living, gratefulness and joyful orientation for over 13 years. Through her website Thrive-Living.net she looks forward to connecting with other like-minded people from around the world who share a similar vision. Follow Carolanne on Facebook, Twitter and Pinterest.