by Brett Tingley, October 3rd 2016
Robots are getting scarier and scarier every day. It was only a matter of time, I suppose. Already, Chinese police forces have begun testing weaponized police-bots, and the U.S. Marine Corps has several machine gun-wielding robots in prototype and R&D stages. While most of these robots can only respond to pre-programmed commands and must be controlled by a human overseer, advances in artificial intelligence might enable these weapon-wielding robots to begin acting autonomously – even if that means using weapons on humans. Just this week, an announcement by two computer science students at Carnegie Mellon University (CMU) has made science fiction’s visions of deadly robots a little closer to reality.
|The system has so far been taught to play the game DOOM, but could be extended to real world applications.|
According to a press release issued by CMU, computer science students Devendra Chaplot and Guillaume Lample have developed an artificial intelligence system that has outperformed both humans and in-game AI at killing other players. The system uses deep learning and reward-based reinforcement learning to encourage the AI to get better at murdering other players – in the classic first-person shooter game DOOM, that is.
The researchers behind this potentially deadly development have published their results on arXiv.org, but their methods have yet to be peer reviewed. According to their research, their artificial intelligence can learn different aspects of the game simultaneously, enabling it to learn and develop optimal strategies much faster than humans can:
Our architecture is also modularized to allow different models to be independently trained for different phases of the game. We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as humans in deathmatch scenarios.
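The core idea behind reward-based reinforcement learning is simple: the agent tries actions, receives a numerical reward (e.g. for a kill) or penalty, and gradually adjusts its behavior toward whatever earns the most reward. The CMU system learns from raw DOOM screen pixels with deep neural networks, but the underlying update can be illustrated with a much simpler tabular Q-learning sketch on a toy corridor world. This is an illustrative example only, not the authors' actual architecture; the environment, constants, and `train` function below are all invented for the demonstration.

```python
import random

# Toy 1-D corridor: the agent starts at cell 0 and earns a reward of +1
# for reaching the goal at cell N-1. This stands in for DOOM's far
# richer reward signal (kills, items, survival).
N = 6
ACTIONS = [-1, +1]  # step left / step right

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: expected future reward for each (state, action) pair.
    q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N - 1:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N - 1)
            r = 1.0 if s2 == N - 1 else 0.0
            # The Q-learning update: nudge the estimate toward the
            # reward plus the discounted value of the next state.
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the greedy policy should move right in every
# non-goal state, since only the rightmost cell pays out.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N - 1)]
print(policy)
```

The reward signal alone shapes the behavior; nothing in the code says "go right." Scaled up with deep networks reading game frames instead of a lookup table, the same principle produces agents whose learned behavior – here, winning deathmatches – was never explicitly programmed.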
One of the researchers, Devendra Chaplot, was quick to point out in CMU’s press release that the system is not specifically designed to kill people:
We didn’t train anything to kill humans. We just trained it to play a game.
However, some computer science journalists have already pointed out that this system is easily portable, meaning it could be used in scenarios other than playing video games. Were the system's visual inputs attached to cameras, it could navigate, say, city streets, and shoot real humans rather than in-game avatars. But hey, the military-industrial complex probably isn't already developing its own killer AI systems, right?
|Israel Aerospace Industries’ Harop drones are able to autonomously detect and take down enemy air-defense systems.|