(Dr Michael Salla) A February 8 video by the Anti-Defamation League (ADL) promotes a new Artificial Intelligence (AI) based algorithm it calls the “Online Hate Index,” which is aimed at identifying hate speech. The ADL believes the algorithm can be used by social media platforms such as Facebook, YouTube, and Twitter to identify and quickly remove hate speech.
by Dr Michael Salla, March 8th, 2018
In the video, Brittan Heller, the Director of the ADL Center for Technology & Society, says the goal of the index is to:
Help tech platforms better understand the growing amount of hate on social media, and to use that information to address the problem. By combining Artificial Intelligence and machine learning and social science, the Online Hate Index will ultimately uncover and identify trends and patterns in hate speech across different platforms.
In its “Phase I Innovation Brief,” published in January 2018 on its website, the ADL further explains how “machine learning,” a form of Artificial Intelligence based on algorithms, can be used to identify and remove hate speech from social media platforms:
The Online Hate Index (OHI), a joint initiative of ADL’s Center for Technology and Society and UC Berkeley’s D-Lab, is designed to transform human understanding of hate speech via machine learning into a scalable tool that can be deployed on internet content to discover the scope and spread of online hate speech. Through a constantly-evolving process of machine learning, based on a protocol developed by a team of human coders as to what does and does not constitute hate speech, this tool will uncover and identify trends and patterns in hate speech across different online platforms, allowing us to push for the changes necessary to ensure that online communities are safe and inclusive spaces.
The ADL’s Online Hate Index is described as “a sentiment-based analysis that runs off of machine learning.” The ADL Brief goes on to say:
All the decisions that went into each step of creating the OHI were done with the aim of building a machine learning-enabled model that can be used to identify and help us understand hate speech online.
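The ADL brief does not publish the Online Hate Index's code, but the approach it describes — human coders label examples of what does and does not constitute hate speech, and a model learns to score new text — is a standard supervised text-classification setup. The following is a minimal, hypothetical sketch of that general technique using scikit-learn; the example texts, labels, and model choices are invented for illustration and are not the ADL's actual data or model.

```python
# Hypothetical sketch of a supervised text classifier of the kind the
# ADL brief describes. Toy data and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in for a human-coded training set (1 = coded as hateful, 0 = not).
texts = [
    "I hate this entire group of people",
    "those people should all disappear",
    "what a lovely day in the park",
    "great game last night, well played",
]
labels = [1, 1, 0, 0]

# TF-IDF features feed a logistic regression, which assigns new text a
# probability of resembling the flagged training examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score an unseen sentence; output is a probability between 0 and 1.
score = model.predict_proba(["I hate those people"])[0][1]
print(score)
```

In a real deployment the labeled corpus would contain thousands of examples and the "constantly-evolving" aspect the brief mentions would come from periodically retraining on newly coded data; the sketch above only shows the core label-train-score loop.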
What the ADL and other promoters of AI-based algorithms fail to grasp is the potential of AI to evolve, through its programmed capacity for “machine learning,” into the kind of fearsome interconnected sentient intelligence featured in franchises such as The Terminator and Battlestar Galactica.
It is well known that scientists and inventors such as Stephen Hawking and Elon Musk have loudly warned about the long-term threat posed by AI. They and others believe that AI poses an existential threat to humanity and needs to be closely controlled and monitored. In a 2014 speech Musk said:
I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence…. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish… With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and … he’s sure he can control the demon? Doesn’t work out.
Musk’s view was echoed by Stephen Hawking who warned against the danger of AI in an interview with the BBC in December 2014:
The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever increasing rate.… Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.
Similarly, Corey Goode, an alleged insider revealing the existence of multiple secret space programs, claims that AI is already a threat in deep space operations. When he first emerged in early 2015, Goode focused a great deal of attention on the AI threat, and continues to warn about it today.