
Gen AI needs global governance

As the internet transcends national borders, country-specific regulations are going to be ineffective

Prosenjit Datta
5 min read Last Updated : Oct 31 2023 | 9:07 PM IST
Generative Artificial Intelligence (Generative AI) is too powerful to be allowed to grow unfettered. That is a point on which almost everyone agrees. Geoffrey Hinton, often dubbed the Godfather of deep learning, and Sam Altman, the head of OpenAI, both concur on this matter. Mr Hinton has warned that Generative AI could pose an existential threat to humankind, while Mr Altman has urged US lawmakers to set guardrails for AI developers so that the technology cannot cause significant harm to the world.

Meanwhile, the world’s lawmakers are still grappling with the best way to regulate the powerful technology. Many of them are trying to toss the ball right back into the court of the Generative AI creators. US President Joe Biden has just issued an executive order that aims to ensure that “America leads the way in seizing the promise and managing the risks of artificial intelligence.” Among its goals is to establish new standards for AI safety and security, while protecting the privacy of Americans.

The European Union (EU) has been trying to pass a comprehensive set of AI regulations for a while. While it has made progress, it is unlikely to pass these laws by the end of calendar 2023. The latest news suggests that EU lawmakers have not been able to agree on how to regulate foundational models, with some members pushing for guidelines that would require foundational-model developers to assess potential risks during testing and release, and to monitor the models post-release.

The country that has moved the fastest is China, which has already passed a law with a specific provision for Generative AI technology. It tries to balance control with enough freedom for private players to progress.

India has traditionally been late in passing laws in the digital realm — the Digital Personal Data Protection (DPDP) Act, passed recently, took several years, and it is not really an Act that can regulate AI research in general, let alone Generative AI.

Lawmakers across the globe place a lot of faith in “Responsible and Ethical AI” — something that Big Tech is expected to follow. Broadly, the idea seems to be that companies should put in safeguards themselves to ensure that rogue AI models or those with the potential to cause great harm are not developed and unleashed.

The issue with the policies being contemplated is that most lawmakers have not yet fully grasped what can truly regulate and control Generative AI. They place a great deal of faith in Big Tech to self-regulate Generative AI models. They are ignoring the fast-accelerating open-source movement in Generative AI, which provides developers with tools — many of them entirely free — to pick up and start building their own models.

Lawmakers in different countries are still trying to use old-style privacy and copyright laws to deal with Generative AI models without understanding what actually makes or mars these models.

Most lawmakers are still trying to draft regulations that apply only within their borders for a technology that is global and should be governed across borders by a common set of laws. The only leader who seems to understand that individual country laws will not suffice is Prime Minister Rishi Sunak of the UK. He has been trying hard to build a global alliance that can develop a consensus on the issue. The UK will be hosting the first-ever International AI Safety Summit in November this year.

What all global leaders need to understand is that no Generative AI regulation will work unless they start with the basics, which is data collection. At the core of any Generative AI program is the foundational model — a deep learning algorithm that has been pre-trained on data scraped from the internet. These foundational models need fresh data inputs constantly; otherwise, their outputs begin to go stale.

To keep their models working properly, Generative AI companies build web crawlers or data scrapers — essentially computer programs that crawl through websites and extract their data. Search engines have used web crawlers for many years, with the Google crawler being particularly effective. But scrapers for Generative AI serve a different purpose: they gather data primarily to train the models.
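The mechanics are simple enough to sketch. A scraper fetches a page's HTML, discards the markup and embedded scripts, and keeps the visible text as raw training material. The minimal Python illustration below is a toy built on the standard library alone, with a hard-coded page standing in for a live fetch — an assumption for illustration, not any company's actual pipeline:

```python
from html.parser import HTMLParser

class TextScraper(HTMLParser):
    """Toy scraper: strips tags and collects the visible text,
    the kind of raw material a training corpus is built from."""
    SKIP = {"script", "style"}  # non-visible content to discard

    def __init__(self):
        super().__init__()
        self._skip_depth = 0  # >0 while inside <script>/<style>
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only non-empty text that is not inside a skipped tag
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

# Hypothetical page content in place of a network request
page = """<html><head><script>var x = 1;</script></head>
<body><h1>AI policy update</h1><p>Lawmakers debate new rules.</p></body></html>"""

scraper = TextScraper()
scraper.feed(page)
corpus = " ".join(scraper.chunks)
print(corpus)  # "AI policy update Lawmakers debate new rules."
```

A production scraper would add URL fetching, link-following, deduplication and politeness rules (such as honouring robots.txt), but the core act — turning someone's published page into training text — is exactly this simple, which is why it sits largely beyond the reach of current law.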

There are no real laws regulating data scraping. Some laws, like copyright and privacy laws, seek to put some restrictions on very specific types of data being gathered. But, by and large, they were designed for a different era and older technologies, and few web scraper programs pay much heed to these. Moreover, they do not even address the vast amount of personal data available on social media platforms. 

The real issue is that it is the data collected by the web scraper or crawler that needs to be regulated, because it is this data that is used to train the foundational models. Equally important, effective regulation requires all countries to be on the same page and to build a harmonised set of rules governing all Generative AI models and their crawlers and scrapers.

Lawmakers who are hoping that “responsible and ethical AI” practices of Big Tech will be at the forefront of safety measures are being naïve. The onus of regulation lies with governments, not corporations.

The writer is former editor of Businessworld and Business Today, and founder of Prosaic View, an editorial consultancy




Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper

