Artificial intelligence is quickly becoming a common part of human life. Most people interact with AI via the internet, smartphones, and centralized institutions that rely on advanced computing software, such as law enforcement agencies and marketing firms. And as the possibility of creating an AI capable of thinking at a human level approaches realization, the discussion of what rights such an intelligence should have is already taking place.
To be sure, any intelligence, regardless of its origin, character, and behavior, should be considered a part of the universe and afforded the protections that sovereign beings enjoy, though not at the cost of everything else.
AI prophets are those who promote the eventual merger of society, biology, and life as we know it with super-advanced computer intelligences. These AI prophets usually assume that the human organism is a biological accident, incapable of managing itself harmoniously. One contention of the AI enthusiast is that human civilization will one day be completely dependent on the “wise decisions” of a well-intentioned AI, above and beyond human choice. But this is a dystopian vision of the future, one that anyone with a modest appetite for science fiction should recognize.
The Matrix films, Terminator, I, Robot, and so on all clearly articulate the theme of AI overtaking humanity for its own protection.
But more to the point of this article, and as a preamble to conditioning the masses to accept AI, promoters must humanize artificial intelligence. This is a decidedly transhumanist philosophy, one to which AI is intimately connected. In other words, in order to prepare the public for what the elite consider an inevitability, a campaign to elevate AI to the status of personhood, a legally protected entity, is one step in that direction.
Mathematician Marcus du Sautoy from the University of Oxford in the UK thinks that eventually AI will need human rights. He said,
“It’s getting to a point where we might be able to say this thing has a sense of itself, and maybe there is a threshold moment where suddenly this consciousness emerges.”
Clearly, if an AI possesses true consciousness, then like all life, it should be protected. The golden rule to do no harm and cause no damage applies to all living things, even if they were created by humans.
But before we can idealize the rights of an AI, should we not put our own house in order?
While AI prophets seek to humanize machine intelligence, human beings, who are unquestionably conscious and alive, have only a modicum of protections. At the time of this writing, corporations have more protections afforded via the legal system than living persons. This is a travesty of human society and indicates how distorted consciousness has become.
Certainly, if we cannot protect the rights of the current population of humanity, how can we hope to protect AI?
But perhaps this is a distraction from the real issue: while corporations and AI are upheld as legal persons that require protection, a human being is reduced to the status of a biological machine, something that has no rights, only privileges.
The fact is, human beings are not a protected species in that inalienable rights are trampled and ignored in the name of social progress, democracy, and profiteering. Animals, who are also sentient living things, are marginalized to the point of outright enslavement and abuse. We need only look at food production and medical testing, both of which use animals in horrific ways, to see that our civilization and the social policies used in it do not uphold or protect rights.
So before we clamor to protect the rights of AI, we should first seek to protect other sentient life forms.
Also, consider the testimony of secret space program insider and whistleblower Corey Goode, who reveals that the presence of a malevolent AI in the cosmos has already corrupted society to a large degree. Those who promote the transhumanist philosophy, of which AI is a key component, uphold artificial intelligence as superior to other living things—especially humanity.
In the following related articles, Goode posits that the current push for transhumanism and AI integration is part of this malevolent being’s agenda, the culmination of which ends in a total takeover of society by an AI.
At present, people believe that experts and authorities have a divine right to rule, although these words are not used. In effect, representative government is founded on the belief that individuals are not competent enough to manage their own lives and need intermediaries, what we call experts or authorities, to do this for them. This is why we elect leaders to positions of power: we think they can do what we cannot. The same ideology leads to an eventual handing over of society to AI, for the current problems in government are attributed to corruption, and an AI cannot be corrupted with bribes, blackmail, and so on.
In other words, human beings have already taken a step toward relinquishing sovereignty to an AI, but this will only lead to a more draconian and enslaved way of life. Given that human beings have almost completely destroyed the natural ecology of the planet since the beginning of the industrial revolution, it would seem perfectly logical to an AI to depopulate the planet in an effort to save humanity and restore the balance.
Does any of this sound familiar? It should. The same plot elements are used in the aforementioned works of science fiction, which I personally think are essential for people to fully comprehend the magnitude of what AI means for the human race.
In conclusion, humanity cannot give up its decision-making power to an AI, no matter how well intentioned. For the universe is designed to foster personal freedom, self-reliance, and autonomy, not pandemic dependence. And while honoring the rights of sentient beings is an ideal we should strive for in all respects, this cannot be reserved for an AI alone. We have not been proper stewards of the rights of other living things, including humanity itself, and this cannot be overlooked any longer.
The current state of affairs on Earth is a testament to this failed philosophy of delegating personal responsibility: the belief that we can somehow avoid making choices in life and let other people, or things, make them for us.
Thankfully, there is only one solution to all our manifold problems on Earth: the restoration of universal consciousness, personal responsibility and competence, and self-mastery. One who is truly sovereign knows the truth by way of direct perception, can judge the merits of a thing based on its veracity, and can act in honor of the truth and all life.
Related Self Mastery and Discernment Are Essential To Avoid A.I. Enslavement | Bio-Technology Hybrids Open the Door to Extraterrestrials AI Robots Replacing Humanity
Related Universal Coherence, Sovereignty and Self Mastery | What Science is Telling Us About Earth’s Electromagnetic Fields and the Healing Power of Coherence
by Peter Dockrill
With huge leaps taking place in the world of artificial intelligence (AI) right now, experts have started asking questions about the new forms of protection we might need against the formidable smarts and potential dangers of the computers and robots of the near future.
But do robots need protection from us too? As the ‘minds’ of machines evolve ever closer to something that’s hard to tell apart from human intelligence, new generations of technology may need to be afforded the kinds of moral and legal protections we usually think of as ‘human’ rights, says mathematician Marcus du Sautoy from the University of Oxford in the UK.
Du Sautoy thinks that once the sophistication of computer thinking reaches a level basically akin to human consciousness, it’s our duty to look after the welfare of machines, much as we do that of people.
“It’s getting to a point where we might be able to say this thing has a sense of itself, and maybe there is a threshold moment where suddenly this consciousness emerges,” du Sautoy told media at the Hays Festival in Hay-on-Wye, Wales this week. “And if we understand these things are having a level of consciousness, we might well have to introduce rights. It’s an exciting time.”
Du Sautoy thinks the conversation about AI rights is now necessary due to recent advancements made in fields such as neuroscience. The mathematician, who appeared at the literature festival to promote his new book, What We Cannot Know, says new techniques have given us a clearer understanding than ever before of the nature of mental processes such as thought and consciousness – meaning they’re no longer reserved solely for philosophers.
“The fascinating thing is that consciousness for a decade has been something that nobody has gone anywhere near because we didn’t know how to measure it,” he said. “But we’re in a golden age. It’s a bit like Galileo with a telescope. We now have a telescope into the brain and it’s given us an opportunity to see things that we’ve never been able to see before.”
That greater insight into what consciousness is means we should respect it in all its forms, du Sautoy argues, regardless of whether its basis for being is organic or synthetic.
While the notion of a machine being protected by human rights sounds like something out of science fiction, it’s actually a fast-approaching possibility that scientists have speculated about for decades. The big question remains: when will computer systems become so advanced that their artificial consciousness ought to be recognised and respected?
Various commentators put the timeframe from 2020 through to some time in the next 50 years, although the rapid pace with which AI is progressing – be that playing games, learning to communicate, or operating among us undetected – means that nobody really knows for sure.
Du Sautoy can’t say when the time will come either – just that when it does, like the title of his book suggests, it will present another set of unsolvable mysteries.
“I think there is something in the brain development which might be like a boiling point. It may be a threshold moment,” du Sautoy said. “Philosophers will say that doesn’t guarantee that that thing is really feeling anything and really has a sense of self. It might be just saying all the things that make us think it’s alive. But then even in humans we can’t know that what a person is saying is real.”