Artificial Intelligence is transforming every sector: it can predict a patient's disease, forecast where crime will be reported next, and what not! Moreover, experts are trying to make AI creative as well, so that it can plan for unexpected consequences. We cannot deny that many decisions in our lives require forecasts, and AI is often better at forecasting than its human counterparts. Yet despite acknowledging this, why do we still lack confidence in the machines and technologies we have developed? A majority of the population thinks it is better to consult an expert than any intelligent machine or chatbot, even though the experts may be less accurate than the machines! We cannot just dream of a world where AI is helping us survive without trusting it, can we? Therefore, if we want to reap these benefits, we have to find a way to make people believe in AI. But why do we doubt it in the first place?
What Marked the Origin of These Trust Issues?
This is a significant question, because if we had never started to doubt AI, it would already surround us by now! It all started with IBM's attempt to promote Watson Oncology, a system designed to help doctors treat cancer patients. The AI promised to deliver reliable recommendations on the treatment of 12 types of cancer, which together account for roughly 80% of the world's cancer patients. But it failed! The reason: when doctors interacted with Watson Oncology to find solutions to their problems, they trusted it only when its suggestions coincided with their own.
This led to two things: when Watson agreed with the doctors, it merely confirmed that their own method was accurate; and when it contradicted them, they concluded Watson wasn't competent enough! Moreover, when asked why its suggested method was better than the ones the doctors followed, Watson could not come up with satisfactory answers. The reason was that its algorithm was too complex to be understood by humans. So even when it tried to explain and prove its point, the doctors could not follow. Eventually, more doubts started to arise! Not long after, IBM Watson's premier medical partner, MD Anderson Cancer Center, dropped the programme and returned to its own methods.
AI, though still new, makes decisions using complex systems of analysis that identify hidden patterns in large amounts of data, which is a good thing. But even when such a system explains itself, it is hard to understand how it deduced its answer. Many professionals have claimed they doubt whether AI is actually working because they cannot see it work. Quite difficult to believe! Other instances have contributed to people's doubts: Google's algorithm labelling Black people as gorillas, Microsoft's chatbot Tay turning racist within a day, and Tesla cars in Autopilot mode causing fatal accidents. For now, we can conclude that AI isn't perfect, because even the people who've designed and coded it aren't!
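The explainability gap described above can be made concrete with a toy sketch. The function below is purely illustrative (the name, features and thresholds are all made up, not from any real medical system): it shows the kind of behaviour doctors wanted from Watson, a recommendation that arrives together with the rule that produced it, which opaque models struggle to provide.

```python
# Hypothetical sketch of a "self-explaining" recommender.
# The thresholds and features here are invented for illustration only.
def recommend_treatment(tumor_size_cm: float, marker_level: float) -> tuple[str, str]:
    """Return a recommendation together with the rule that produced it."""
    if tumor_size_cm > 3.0:
        return "surgery", f"tumor size {tumor_size_cm} cm exceeds 3.0 cm threshold"
    if marker_level > 5.0:
        return "chemotherapy", f"marker level {marker_level} exceeds 5.0 threshold"
    return "monitoring", "no threshold exceeded"

decision, reason = recommend_treatment(4.2, 1.0)
print(decision, "-", reason)  # surgery - tumor size 4.2 cm exceeds 3.0 cm threshold
```

A rule list this simple can always justify itself; the trade-off is that real diagnostic models gain accuracy precisely by becoming too complex for such one-line explanations, which is the tension at the heart of the trust problem.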
Is There Any Way Out Of This Distrust?
Distrust towards AI could be one of the biggest dividing forces in society. Therefore, if AI is to live up to its full potential, we ought to find a way to get people to trust it, especially when it produces recommendations that differ from what we are used to. Fortunately, we already have some ideas about how to improve trust in AI, so there may be light at the end of the tunnel. Let's discuss these in detail:
Experience: One solution is to use automation apps more in everyday situations. It has been found that previous experiences (pleasant enough to remember) can improve people's attitudes towards a technology. This is especially important for the masses, who are neither well-versed in the technicalities nor possess a sophisticated understanding of the technology. We only come to trust a technology after using it frequently! For instance, we trust the Internet because we've used it enough; earlier, it too was doubted!
By Introducing Transparency: Another reason anything is doubted is that people don't know much about it. Once they learn about it, they are not as fearful or doubtful as before! So we have to make sure that companies release their transparency reports frequently. Companies such as Google, Airbnb and Twitter are already doing this! Similar practices can lead to a better understanding of how the algorithms make their decisions.
Taking Control: Experts have suggested that creating a collaborative decision-making process will contribute towards building trust. It will also help the machines learn for themselves. Contrary to popular belief, involving people more in the AI decision-making process actually improves both trust and transparency.
These are a few things that could help machines gain more trust. Few people apart from professionals are interested in learning how AI works, but even the slightest idea of how things work will make people more welcoming towards AI. What are your thoughts on this?