Why has the Pope launched an 'AI Bible'?

The Vatican's efforts to unite faith and technology can offer the right kind of seriousness and support for building a more inclusive and ethical future with AI

File photo of Pope Francis. Photo: Shutterstock
Sandeep Goyal
5 min read Last Updated : Jul 07 2023 | 11:01 PM IST
Pope Francis, of all people, has joined the dialogue surrounding artificial intelligence’s (AI’s) societal impact. The Pope’s involvement in AI ethics is the result of a collaboration between the Vatican and Santa Clara University’s Markkula Center for Applied Ethics. Together they have created the Institute for Technology, Ethics, and Culture (ITEC), a body that will mediate ethical concerns in the realm of technology. This unexpected development is making waves around the world: It signals the Vatican’s determination to be an influential voice in the rapidly evolving field of AI. Already, a handbook, Ethics in the Age of Disruptive Technologies: An Operational Roadmap, has been released.

The handbook advocates the integration of ethical values rooted in principles that both the technology, and the organisations that develop it, must ingrain from the very outset. It stresses that the “Common Good of Humanity and the Environment” must drive all corporate actions. This grand principle is segmented into seven key guidelines, such as respect for human dignity and rights, and the promotion of transparency and explainability. The guidelines are further broken down into 46 concrete “actionable steps”, each accompanied by detailed definitions, illustrative examples, and “actionable strategies”. Not bad for a church-driven technology document, no?

The intersection of the Vatican, a venerable, centuries-old institution representing religion and spirituality, and AI, a recent technological marvel, might seem somewhat incongruous to many. However, the Vatican’s initiative marks the fulfilment of a longstanding interest in technology within the church. The Vatican has always felt that it enjoys a unique position of authority (not just moral), and that its influence can muster the right kind of seriousness and support to deliberate on crucial matters regarding the future of technology and its development.

The guidelines in the handbook are well thought through. For example, under the guideline respect for human dignity and rights, the handbook emphasises the cardinal importance of privacy and confidentiality. It stresses the need for commitment to “not collect more data than necessary” and advises that “collected data should be stored in a manner that optimises the protection of privacy and confidentiality.” Furthermore, it champions specific protective measures for sensitive personal, medical and financial data, focusing on the responsibilities companies have to users, beyond just the fulfilment of legal requirements.

The ethical challenges of AI applications are increasingly evident and subject to scrutiny. The use of various AI technologies can lead to unintended but harmful consequences, such as privacy intrusion; discrimination based on gender, race/ethnicity, sexual orientation, or gender identity; and opaque decision-making, among other issues. Addressing existing ethical challenges and building responsible, fair AI innovations before they get deployed has never been more important for humankind.

Going forward, first and foremost, AI development needs to eliminate bias in data. Researchers quote an interesting example to illustrate this. The ImageNet database has far more white faces than non-white faces. When experts train AI algorithms to recognise facial features using a database that doesn’t include the right balance of faces, the algorithm won’t work quite as well on non-white faces, creating a built-in bias that can have a huge societal impact.

One of the most significant issues centres around the control and morality of AI. Take drone technology. If a drone can fire a rocket and kill someone, there needs to be human intervention in the decision-making process before the missile is deployed. Often there isn’t, because the drone has been allowed to have a mind of its own. The problem is that AI is increasingly having to make split-second decisions. In high-frequency trading, over 90 per cent of all financial trades are now driven by “algorithmic intelligence”, so there is no chance to put a human being in control of each decision. The same is true for autonomous cars. The car needs to react immediately if a child runs out on the road, but AI lacks human guidance (or reflex, or instinct) regarding the danger to human life at that critical juncture, and that is the control mechanism the AI needs.

Ownership is another interesting dimension of AI. The question being asked today is, when an AI writes a new piece of music, who owns it? Who has the intellectual property rights for it? And, should AI potentially get paid for it?

Artificially intelligent bots are becoming better and better at modelling human conversation and relationships. In 2014, a chatbot named Eugene Goostman was credited with passing a Turing test. In that challenge, human judges used text input to chat with an unknown entity, then guessed whether they had been chatting with a human or a machine. Eugene Goostman convinced about a third of the judges, enough to clear the contest’s threshold, that they had been talking to a human being. AI impersonating humans? Or becoming human? It is becoming a daily reality.

No wonder the Pope is worried. 
The writer is MD of Rediffusion 
