The Russian president has become the latest prominent figure to warn of the dangers of artificial intelligence (AI), predicting that whoever masters the technology first could rule the world.
by JD Heyes, September 8th, 2017
Addressing students last week, Vladimir Putin said that there are legitimate concerns about AI and that its development will produce “colossal opportunities and threats that are difficult to predict now.”
Going further, Putin warned that “the one who becomes the leader in this sphere will be the ruler of the world.”
He added: “Artificial intelligence is the future, not only for Russia but for all humankind,” according to Russia Today.
Putin added that he does not want to see the technology “monopolized,” and added that Russia would share it with the world if Moscow develops advanced AI first.
“If we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies now,” he told students around the country, via satellite link-up, as he spoke to them from the Yaroslavl region.
Putin’s ‘lesson’ also included discussion of other topics including medicine, space, and human brain capabilities.
“The movement of the eyes can be used to operate various systems, and also there are possibilities to analyze human behavior in extreme situations, including in space,” he said, as reported by RT.
He also made another prediction that ties directly to his comments about AI: he believes future wars will be fought by drones, noting that “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”
Putin is not the first one to make dire predictions about a future dominated by AI technology. Tesla and SpaceX CEO Elon Musk has become a vocal, and regular, critic of the technology, which is rapidly advancing thanks to companies like Google and Microsoft.
As reported by Robotics.news, Musk last month urged the tech companies working to develop the technology to slow down their research and allow time for a regulatory regime to be established, before the technology gets away from its human creators.
“AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that,” Musk said during a presentation at the National Governors Association summer meeting in Providence, Rhode Island, The Independent reported.
Musk has already established a nonprofit organization, OpenAI, that seeks to set up AI standards and rules.
“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” the Tesla CEO said. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
But he’s fighting against developmental headwinds. As the Silicon Valley Business Journal reports, the time for AI is coming, and quickly.
Facebook founder Mark Zuckerberg is one of the technology’s leading cheerleaders; he’s called Musk’s warnings “pretty irresponsible.”
“I have pretty strong opinions on this. I am optimistic,” Zuckerberg said. “And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.”
Others agree with Zuckerberg. Max Versace, an AI expert, says that people like Musk are “selling fear, and it’s working.”
As CNBC reports, Versace says it’s too early to regulate AI now because doing so would slow down innovation — which is what Musk seeks.
“It’s not appropriate to regulate AI until you know what you’re working on,” he said. “AI will not kill us. That’s science fiction.”
But one of the world’s great-power leaders clearly doesn’t think so.
J.D. Heyes is a senior writer for NaturalNews.com and NewsTarget.com, as well as editor of The National Sentinel.