The next year will see this extraordinary and impactful field alter the fabric of our lives - but it may not see any comprehensive regulatory approach taken by expert regulators
If there is one universal truth in the art of regulation, it is that technological advance inevitably moves faster than regulators can respond. The first driving licences were required more than two decades after the automobile became a common sight on American streets, for example. Regulators have learned to live with this fact: they will always be behind technical innovators, struggling to play catch-up.
The problem we will have to deal with in 2024 is that the gap between the speed of innovation and the speed of regulatory action is growing ever wider where it counts most. For years now this gap has widened, sufficiently so that some unscrupulous entrepreneurs have built entire business models around “regulatory arbitrage”: they identify profitable areas that nobody has thought to regulate, and then turn their companies into faits accomplis in sectors that regulators will hesitate to rule out of existence altogether.
But with the rapid spread of cheap and accessible generative artificial intelligence, in which text, speech, and images can be produced on demand by algorithms trained on vast reserves of data, regulators face an unprecedented challenge. Normally, we at least understand how a technology operates well enough to regulate it intelligently; with most generative AI models, however, nobody really knows why they answer one way and not another. Meanwhile, the societal, economic, and political impacts of AI might well dwarf those of any other technological development since the transistor itself, and they will ripple through the entire world within years, if not months. The impact is already being felt in political campaigns, which are retooling their approach to deal with AI-generated fake news. Levels of trust even in real photographs have fallen sharply, as an Israeli photographer found this week when a photo from one of his staged shoots went viral amid accusations that it was AI-generated propaganda.
One problem, essentially, is that Silicon Valley has a remarkably cavalier approach to cooperating with regulators. Its leaders operate with thinly veiled contempt for those who need apparently basic concepts explained to them. They also generally share the assumption that regulating technological development is in any case an exercise in futility; the fact that the People’s Republic of China appears to be succeeding in doing exactly that does not seem to have occurred to them. Many policymakers, meanwhile, view leadership in AI development, unlike some other innovative fields, as crucial to national security and are loath to take any steps that would reduce the competitiveness of their national companies in the field.
But this attitude will lead to two different sub-optimal regulatory outcomes. It will cause the first steps in comprehensive regulation of AI to be taken by those with minimal stakes in its development; and it will lead to judicial pronouncements that force-fit AI into regulatory structures developed for very different technologies.
In just the past couple of weeks, both these trends have become increasingly visible. After 37 hours of concentrated discussions, regulators in Brussels agreed on a European Union-wide approach to regulating AI. The details of the agreement naturally matter, but the overall attitude was more restrictive than many had hoped. One of the harshest critics of the law was French President Emmanuel Macron, who made the perceptive point that the EU was behind in AI development, and that the European Commission’s bias was therefore towards protecting consumers of the technology rather than promoting its production: this might turn the EU’s lag into a permanent disadvantage. From the point of view of the rest of the world, an alternative gloomy scenario is that, as with the EU’s data privacy legislation, Brussels’ AI approach becomes a de facto global regulation that reduces user access and inhibits user experience far beyond Europe’s borders.
This week, meanwhile, the New York Times sued OpenAI (the producer of ChatGPT) and Microsoft (a major investor in OpenAI) for infringing the newspaper’s copyright in the development of its large language models. Essentially, the newspaper argues that AI models trained on the NYT’s for-profit product can replicate and substitute for its articles, fatally undermining its business model. Under copyright law in the United States (developed, remember, for the era of print) this may not count as acceptable “fair use”. There is a good chance, then, that OpenAI will have to scale back its use of high-quality, for-profit sources. The company has already started coming to side agreements with other publishers, such as Axel Springer in Germany, which could be viewed as tacit recognition that it does not have the strongest of cases. (Similar negotiations with the NYT broke down earlier this year, which OpenAI might have cause to regret.) From a broader point of view, however, judges will be left writing the restrictions on AI use and training that should instead emerge from a more comprehensive and engaged regulatory process.
The next year will see this extraordinary and impactful field alter the fabric of our lives — but it may not see any comprehensive regulatory approach taken by expert regulators in countries, such as the US, where this innovation is actually happening. This is the worst of all possible worlds from a regulatory perspective.