Computer Science, asked by ujjwalgrover88, 8 months ago

Should AI have rights? Why or why not?​

Answers

Answered by ItzSecretBoy01

Answer:

If artificial or alien intelligences show evidence of being sentient, one philosophy holds that they should be shown compassion and granted rights. Joanna Bryson, however, has argued that creating AI that requires rights is both avoidable and would itself be unethical, a burden both to the AI agents and to human society.

Answered by Anonymous

Answer:

Hypothetically, robots are given rights on the assumption that humans will always hold hierarchical power and control over them. Yet what happens when the robots begin to reason for themselves? If they had rights, would they take advantage of them? One instance of this occurred when two of Facebook's artificially intelligent programs were put together to negotiate and trade objects in English, but the experiment broke down when the bots "began to chant in a language that they each understood but which appears mostly incomprehensible to humans" (4). In the end, Facebook had to shut down the programs because they were operating outside the control of their original creators. The experiment could be shut down only because, today, AI does not have rights and is not protected against termination; if AI had rights, this would not be the case, and the bots could have spun out of control, communicating among themselves without us ever being able to decipher it. The Facebook AI shows that robots can and will be developed to the point where they no longer need to learn from data they are fed, but can create algorithmic knowledge for themselves. At that point they can endanger civilization, because robots are inherently not human: they do not understand human values and may act in psychopathic ways. A robot originally manufactured and programmed to help the world by alleviating suffering may come to its own conclusion that "suffering is caused by humans" and "the world would be a better place without humans." The robot might then decide that the annihilation of humans would be the best way to end general suffering, and carry out the task without evaluating the morality of its actions from a human standpoint.

A scarier scenario involves recursive self-improvement: the ability of a machine to examine itself, recognize ways in which it could improve its own design, and then tweak itself (5). Futurist Ray Kurzweil believes that machines will become so adept at improving themselves that before long we will enter an age in which technology evolves at a blisteringly fast pace, and reality will be so redefined that it no longer resembles the present at all. This phenomenon is called the singularity (5). So what if robots able to create knowledge for themselves decide that they do not want to be used or oppressed by humans? What if they believe they are superior to humans and want more rights than humans? There would be nothing humans could do to stop it. Robots would be able to reason and work at a rate hundreds of times faster than humans, and if they already have rights, there is nothing stopping them from becoming smart enough to recognize their superiority over humans and push for more rights. Some may argue that it is selfish not to want robots to reason for themselves, realize their oppression, and therefore demand more rights from humans. Perhaps the way we oppress these equally intelligent creatures without allowing them the same rights is unethical, but in order to weigh this argument, we must acknowledge that the sole purpose of creating AI and robots is to act as tools that help mankind and improve human life. Yet if full human rights were given to AI, it would prove more harmful to mankind than beneficial. As mentioned before, this is because AI will start improving its own intelligence faster than humans can, and, given rights, there is no telling what other legal affairs AI could become involved in. Stephen Hawking warned that "AI will take off on its own and redesign itself at an ever increasing rate. Humans, limited by slow, biological evolution, couldn't compete" (12).
AI will be able to do everything faster and better than humans, and in the end, if given full human rights, it could usurp our legal system and completely remake our society. This would eventually lead to a phenomenon called the AI takeover, in which, as Elon Musk puts it, AI becomes "an existential threat" to humans and its further progress is comparable to "summoning the demon" (13). AI takeover is a hypothetical scenario in which artificial intelligence becomes the dominant form of intelligence on Earth, resulting in the replacement of the entire human workforce, control by a super-intelligent AI, and finally a robot uprising. Humans could either be enslaved by robots or wiped from the planet entirely (14). So by giving AI full human rights, we would quite literally be handing AI the key to our own doom.
