From diagnosing diseases to categorising huskies, artificial intelligence has countless uses, but mistrust in the technology and its outputs will persist until people, the "end users", can fully understand its processes, says a US-based scientist.
Overcoming the "lack of transparency" in the way AI processes information -- popularly called the "black box problem" -- is crucial for people to develop trust in the technology, said Sambit Bhattacharya, who teaches computer science at Fayetteville State University.
Citing another example, Bhattacharya said, "If you show an image classification algorithm a cat image, the cat comes with a background. So the algorithm could be saying it is a cat based on what it sees in the background that it relates to a cat."
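The behaviour Bhattacharya describes can be probed with occlusion sensitivity, one common technique for peering into the "black box": mask part of the input and watch how the model's score changes. The sketch below is purely illustrative -- the toy classifier, its weights, and the 4x4 "image" are invented for this example and deliberately built to lean on the background, mimicking the flaw in the quote.

```python
def toy_cat_classifier(image):
    """Return a 'cat' score for a 4x4 grayscale image (list of lists).
    Deliberately flawed: it weighs the background (outer ring of pixels)
    far more heavily than the subject in the centre."""
    border = [image[r][c] for r in range(4) for c in range(4)
              if r in (0, 3) or c in (0, 3)]
    center = [image[r][c] for r in (1, 2) for c in (1, 2)]
    # 80% of the score comes from the background, 20% from the subject.
    return 0.8 * sum(border) / len(border) + 0.2 * sum(center) / len(center)

def occlusion_drop(image, region):
    """How much the score falls when the given (row, col) cells are zeroed."""
    base = toy_cat_classifier(image)
    masked = [row[:] for row in image]
    for r, c in region:
        masked[r][c] = 0.0
    return base - toy_cat_classifier(masked)

# Bright "grass" background around a darker "cat" in the centre.
cat_on_grass = [[0.9, 0.9, 0.9, 0.9],
                [0.9, 0.5, 0.5, 0.9],
                [0.9, 0.5, 0.5, 0.9],
                [0.9, 0.9, 0.9, 0.9]]
background = [(r, c) for r in range(4) for c in range(4)
              if r in (0, 3) or c in (0, 3)]
subject = [(r, c) for r in (1, 2) for c in (1, 2)]

# Masking the background hurts the score far more than masking the cat,
# revealing that the model is "seeing" the background, not the animal.
print(occlusion_drop(cat_on_grass, background) >
      occlusion_drop(cat_on_grass, subject))  # → True
```

In a real system the same idea is applied to deep networks with tools such as saliency maps, which is how researchers discovered classifiers keying on snow or grass rather than the animal itself.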