Computer Science, asked by SamriddhiDhyani, 7 months ago

Imagine an AI-controlled nuclear missile centre. The centre has been designed to deploy its weapons automatically in case of need. What are the ethical problems that can emerge from this system?

Answers

Answered by kalasaravanan2011

The aim of the ICRC’s expert meeting was to gain a better understanding of the issues raised by autonomous weapon systems and to share perspectives among government representatives, independent experts and the ICRC. The meeting brought together representatives from 21 States and 13 independent experts. Some of the key points made by speakers and participants at the meeting are provided below, although they do not necessarily reflect a convergence of views.

There is no internationally agreed definition of autonomous weapon systems. For the purposes of the meeting, ‘autonomous weapon systems’ were defined as weapons that can independently select and attack targets, i.e. with autonomy in the ‘critical functions’ of acquiring, tracking, selecting and attacking targets.
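To make this definition concrete, here is a minimal, purely illustrative Python sketch of the four ‘critical functions’ chained into an engagement loop. Every name in it is hypothetical, invented for this answer; the point is only to show where a human decision can sit between selecting and attacking.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Target:
    kind: str        # e.g. "vehicle", "personnel"
    position: tuple  # coordinates reported by the sensors

def acquire(sensor_feed: List[Target]) -> List[Target]:
    """Acquire: detect candidate targets in the sensor data."""
    return sensor_feed

def track(candidates: List[Target]) -> List[Target]:
    """Track: keep a lock on the detected candidates."""
    return candidates

def select(tracked: List[Target]) -> Optional[Target]:
    """Select: pick a target according to programmed criteria."""
    return tracked[0] if tracked else None

def attack(target: Target) -> None:
    """Attack: engage the selected target."""
    print(f"Engaging {target.kind} at {target.position}")

def operator_approves(target: Target) -> bool:
    """Stand-in for a human operator's judgement call."""
    return False  # a cautious operator withholds approval here

def engagement_loop(sensor_feed: List[Target], human_in_the_loop: bool) -> None:
    # The four 'critical functions' chained together.
    target = select(track(acquire(sensor_feed)))
    if target is None:
        return
    # With human_in_the_loop=False the system goes straight from
    # selection to attack: autonomy in the critical functions.
    if human_in_the_loop and not operator_approves(target):
        print("Operator vetoed the engagement; holding fire.")
        return
    attack(target)

feed = [Target(kind="vehicle", position=(3, 4))]
engagement_loop(feed, human_in_the_loop=True)   # human can veto
engagement_loop(feed, human_in_the_loop=False)  # attacks on its own
```

In the meeting’s terms, the system is ‘autonomous’ precisely when the gate between select() and attack() is absent.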

There has been rapid progress in civilian robotics in the past decade, but existing autonomous robotic systems have some key limitations: they are not capable of the complex decision-making and reasoning performed by humans; they have little capacity to perceive their environment or to adapt to unexpected changes; and they are therefore incapable of operating outside simple environments. Increased autonomy in robotic systems will be accompanied by greater unpredictability in the way they operate.

Military interest in increasing the autonomy of weapon systems is driven by the potential for greater military capability while reducing risks to the armed forces of the user, as well as reduced operating costs, personnel requirements, and reliance on communications links. However, the current limitations of civilian autonomous systems apply equally to military applications, including weapon systems.

Weapon systems with significant autonomy in the critical functions of selecting and attacking targets are already in use. Today these weapons tend to be highly constrained in the tasks they carry out (e.g. defensive rather than offensive operations), in the types of targets they can attack (e.g. vehicles and objects rather than personnel) and in the contexts in which they are used (e.g. simple, static, predictable environments rather than complex, dynamic, unpredictable environments). Closer examination of these existing weapon systems may provide insights into what level of autonomy would be considered acceptable and what level of human control would be considered appropriate.
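Those three kinds of constraint (task, target type, context) can be pictured as a rule table the software consults before any engagement. The sketch below is illustrative only; the keys and values are invented for this answer and do not come from any real weapon system.

```python
# A hypothetical constraint profile mirroring the limits described
# above: defensive tasks only, non-personnel targets, simple settings.
CONSTRAINTS = {
    "allowed_roles": {"defensive"},
    "allowed_target_kinds": {"vehicle", "object"},
    "allowed_environments": {"simple_static"},
}

def engagement_permitted(role: str, target_kind: str, environment: str) -> bool:
    """Return True only when every programmed constraint is satisfied."""
    return (role in CONSTRAINTS["allowed_roles"]
            and target_kind in CONSTRAINTS["allowed_target_kinds"]
            and environment in CONSTRAINTS["allowed_environments"])

# An offensive strike on personnel in a dynamic environment is refused;
# a defensive engagement of a vehicle in a static setting is allowed.
print(engagement_permitted("offensive", "personnel", "complex_dynamic"))  # False
print(engagement_permitted("defensive", "vehicle", "simple_static"))      # True
```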

Autonomous weapon systems that are highly sophisticated and programmed to independently determine their own actions, make complex decisions and adapt to their environment (referred to by some as “fully autonomous weapon systems” with “artificial intelligence”) do not yet exist. While there are different views on whether future technology might one day achieve such high levels of autonomy, it is notable that today machines are very good at quantitative analysis, repetitive actions and sorting data, whereas humans outperform machines in qualitative judgement and reasoning.

Answered by spraygod74

Answer:

Several ethical problems can emerge from such a system:

1. It could be hacked, and the weapons could then be turned against the very people they were meant to protect.
2. If it misjudges a situation because of a technical fault, it may attack someone by mistake, with no human in the loop to stop it (a minimal safeguard against this is sketched below).
3. It would also be a very costly project to build and maintain.
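Point 2 above is, at bottom, about the absence of a human check between a sensor reading and a launch. A minimal, purely hypothetical Python sketch of the classic safeguard, a ‘two-person rule’ (every name here is invented for illustration, not taken from any real command-and-control system), shows how independent human confirmation blocks a false alarm:

```python
def human_confirms(officer: str, judgement: bool) -> bool:
    # Stand-in for an independent human review of the alert; a real
    # console would gather this from the officer, not a parameter.
    print(f"{officer} reviews the data: {'confirm' if judgement else 'abort'}")
    return judgement

def launch_decision(sensor_alert: bool, officer_a: bool, officer_b: bool) -> bool:
    if not sensor_alert:
        return False
    # Two-person rule: both officers must agree independently, so a
    # single technical misjudgement cannot trigger a launch by itself.
    return (human_confirms("Officer A", officer_a)
            and human_confirms("Officer B", officer_b))

# A false sensor alarm is stopped because one human disagrees.
print("LAUNCH" if launch_decision(sensor_alert=True,
                                  officer_a=True, officer_b=False) else "HOLD")
```

The ethical problem with a fully automatic centre is exactly that launch_decision collapses to sensor_alert alone, removing both human vetoes.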
