Which of the following are examples of bias in an AI system?
Facial recognition systems performing well for individuals of all skin tones.
Image recognition systems associating images of kitchens, shops, and laundry with women rather than men.
Customers not being aware that they are interacting with a chatbot on a company website.
AI systems in call centers providing context sensitive assistance to staff.
Answers
The answer to your question is option (b): image recognition systems associating images of kitchens, shops, and laundry with women rather than men.
AI programs are taught to identify images, videos, and other kinds of media files on the basis of certain parameters, such as colour, race, gender, and so on. In the case of ImageNet Roulette, the system identified photos of light-skinned men very easily from its database, but its error rate went up to around 35% when it was asked to identify dark-skinned women. A system trained on such skewed data is likewise likely to associate images of kitchens, shops, and laundry with women rather than men.
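A minimal sketch of how this kind of bias is usually surfaced: compute the error rate separately for each demographic group instead of one overall number. The records below are made up purely for illustration and are not from any real system.

```python
from collections import defaultdict

# (group, predicted_label, true_label) -- hypothetical evaluation records
records = [
    ("light-skinned man",  "person", "person"),
    ("light-skinned man",  "person", "person"),
    ("dark-skinned woman", "person", "person"),
    ("dark-skinned woman", "object", "person"),  # misclassification
    ("dark-skinned woman", "object", "person"),  # misclassification
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

# Report the per-group error rate; a large gap between groups
# (0% vs. 67% in this toy data) is the signature of a biased system.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%}")
```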
The other options have been ruled out for the following reasons:
- Option (a) describes facial recognition systems performing well for individuals of all skin tones, which is the absence of bias rather than an example of it.
- Option (c) refers to customers not knowing whether they are interacting with a chatbot on a company website, which is in fact the case most of the time. Many company websites use chatbots to handle the basic requirements of customers and users. This is a question of transparency rather than bias, so it too is irrelevant to the question.
- Option (d), on the other hand, states that AI systems in call centres provide "context-sensitive assistance" to staff. Providing context-related assistance simply helps staff serve callers easily and seamlessly; it does not describe bias, so it does not answer the question either.
Answer:
Bias in an AI system
Explanation:
The amazing thing about AI is just how un-biased it is on its own. If it had personhood and opinions, it might stand up to those who feed it examples dripping with prejudice. Instead, ML/AI algorithms are simply tools for continuing the patterns you show them: show them biased patterns and that is what they will echo. The bias does not come from the ML/AI algorithms themselves; it comes from the people and the data behind them.
(a) Facial recognition systems performing well for individuals of all skin tones.
(b) Image recognition systems associating images of kitchens, shops, and laundry with women rather than men.
(c) Customers not being aware that they are interacting with a chatbot on a company website.
(d) AI systems in call centers providing context sensitive assistance to staff.
From the above options, we can say that option (b), image recognition systems associating images of kitchens, shops, and laundry with women rather than men, is the example of bias in an AI system, because the system is echoing prejudiced patterns present in the data it was trained on.
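To see how this "pattern-echoing" works in practice, here is a toy sketch with made-up (scene, gender) pairs standing in for biased training captions; the function and data are purely illustrative, not any real system's API.

```python
from collections import Counter

# Hypothetical (scene, gender) pairs harvested from skewed training captions:
# kitchens and laundry appear far more often with women than with men.
training_pairs = [
    ("kitchen", "woman"), ("kitchen", "woman"), ("kitchen", "woman"),
    ("kitchen", "man"),
    ("laundry", "woman"), ("laundry", "woman"),
    ("office", "man"), ("office", "man"), ("office", "woman"),
]

def most_likely_gender(scene: str) -> str:
    """Predict the gender label most often paired with this scene in training."""
    counts = Counter(g for s, g in training_pairs if s == scene)
    return counts.most_common(1)[0][0]

print(most_likely_gender("kitchen"))  # -> "woman", simply echoing the skewed data
```

The prediction is not the algorithm "having an opinion"; it is the skew in the training pairs being reproduced, which is exactly what option (b) describes.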