Computer Science, asked by RohitGupta2871998, 11 months ago

Is Artificial Intelligence "good" or "bad"? Give your opinion with an explanation.

Answers

Answered by Cloud9Krish

Answer:


At this point, AI is the result of data-driven learning – it has no conscience and cannot explain its reasoning. There is no implicit good or bad to AI; it simply returns results derived entirely from its training. The goodness or badness of AI will therefore depend on how well we train it and, perhaps most importantly, how well we test it.

There have been several failures of AI that raise concern – the self-learning chatbot that developed racist and sexist traits after only a few days of exposure to the public, and the résumé-screening AI that filtered out younger women on the basis that they had gaps in their employment history (from raising children).

Many implementations of new technology have been tempered by establishing controls and practices that make the technology safe. Historically this has happened through both thoughtful design and responses to disasters. We can learn from the way technology domains like aircraft, nuclear power and medical devices have evolved.

We have yet to create the frameworks and practices that would let us test and govern AI implementations – particularly where our own safety is involved. There is no doubt that the current engineering focus considers safety, but we simply don't know enough about how the risks will emerge. Even where AI has no safety impact, it may still create ethical and moral dilemmas by perpetuating unsustainable behaviours and points of view.

Human behaviour is extremely varied – even among those who don't break the law, there is a broad range of behaviour and attitudes that society tolerates. One of the biggest challenges we face with AI is deciding what counts as acceptable AI behaviour. In most cases there will be a single AI behaviour, and we will then need to agree on what that behaviour should be from an ethical perspective. Some autonomous cars already allow drivers to select a profile based on their own preferences (essentially balancing self-preservation against harm to others) – should we even allow such choices to be made?

Humans are also implicitly challenging – graffiti, although generally harmless and an outlet for self-expression, is still illegal. The same intent can be seen in some hacking activities where there is no malicious damage, yet they are clear expressions of a particular point of view. Criminal activities range from intentional damage (with potential safety impacts) to greed and power (sometimes wielded by entire countries). When we consider this in the context of an AI-enabled future, we have to expect that human maliciousness and ingenuity will find ways to subvert the intent of the AI, possibly resulting in societal or individual harm.

I am a strong advocate of embracing technology – it is in our nature to explore and innovate – yet we need to avoid adopting technology 'at any cost' in order to protect our people, our society, and our world.

RohitGupta2871998: I know this is not your answer; you copied it from the "Reuters" website.
Answered by AmazingSyed15

The first myth concerns the timeline: how long will it take until machines greatly supersede human-level intelligence? A common misconception is that we know the answer with great certainty.

One popular myth is that we know we’ll get superhuman AI this century. In fact, history is full of technological over-hyping. Where are those fusion power plants and flying cars we were promised we’d have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field. For example, John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College […] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.

We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

On the other hand, a popular counter-myth is that we know we won’t get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can’t say with great confidence that the probability is zero this century, given the dismal track record of such techno-skeptic predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 — less than 24 hours before Szilard’s invention of the nuclear chain reaction — that nuclear energy was “moonshine.”

Similarly, Astronomer Royal Richard Woolley called interplanetary travel "utter bilge" in 1956. The most extreme form of this myth is that superhuman AI will never arrive because it's physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that no law of physics prevents us from building even more intelligent quark blobs.


RohitGupta2871998: I know this is not your answer; you copied it from books.google.co.in