AI has to remain under human control, says Microsoft's Brad Smith

In a video interview, Brad Smith, vice-chair and president of Microsoft, talks about the concerns around AI, the need to regulate it and the opportunity it brings

Brad Smith, vice-chair and president, Microsoft
Shivani Shinde
6 min read Last Updated : Aug 28 2023 | 7:42 AM IST
Brad Smith, vice-chair and president of Microsoft, is in India for the G20 summit. In a video interview with Shivani Shinde, he talks about the concerns around AI, the need to regulate it and the opportunity it brings. Edited excerpts:

What is your take on India's data protection law?

It's a positive development. There are a few features that I think are interesting and important. One, there is a recognition that there are data processors and data fiduciaries. I frankly think it's better to think about a data fiduciary than what most countries do. India did a good job of providing that kind of legal protection, while enabling data to move around, and cross borders. I know that people will now focus on the regulations, and they probably won't feel that they have a complete answer until the regulations are adopted. I think people here should feel encouraged by it.

Does AI give you sleepless nights, or do you see it as an opportunity?

I believe the opportunity to do good for the world with AI is probably far greater than any other technology in our lifetime, and that's exciting. I'll say that it’s an enormous responsibility to develop it wisely and well. We have to get it right. If we fail to do so, every generation that comes after us will pay a price for our mistakes.

What are some of the challenges around AI?

There are two concerns that I hear about. First, AI won't remain properly under human control. That it will exercise its own consciousness, speak its own mind and take its own steps in ways that will be of concern to humanity. The good news from my perspective is that we are dedicated to the principle that AI has to be a tool that serves people, which remains under human control.

The second concern is that tech companies are moving too fast. We are making products available to the public too soon. As the year has gone by, people have realised that, in fact, we are trying to take a measured pace, we gate the number of people who can use a product. We don't just let everybody on the planet use it from day one. We do need some real-world experience to understand how people might want to use the product.

India will begin its process around AI regulation soon. What are your recommendations?

First, build on the laws that exist already. Don't assume that there's no law that applies. Especially at the application level, where people worry about the sensitive uses of AI, there are well-established laws in place to protect consumers and prohibit unlawful discrimination. It's not going to be permissible to do things that are unlawful simply by using a computer to make decisions. That's a good thing to recognise. It means that there's a lesser need for wholesale development of new laws. It means it's critically important to help judges, lawyers, agencies and companies develop AI skills and expertise.

Second, we'll see in some of the most powerful frontier models the need for a new set of AI laws and regulations. I think India rightly will put priority around innovation, which makes sense, given the strengths of its economy and the role of the technology sector.

It will be especially important for India to collaborate with other countries. There's an initiative to bring together the G-7 plus India and Indonesia to think about a voluntary code of conduct before trying to adopt laws. I believe it will be important. It will serve India well. India has an important role to play in the creation of that kind of effort.

How soon do you see regulations around AI emerging?

We may not be able to write every rule and law in 2023, but the conversations around the world are progressing quickly and we should assume that over the next 12 to 18 months, it will become increasingly possible to adopt a set of laws. I don't think we should feel that we need to be able to address every conceivable issue surrounding AI. For example, one of the things that we've said is that if a company, a government or a public agency is going to use AI to control critical infrastructure, it needs to put in place ‘safety brakes’.

If AI is going to control critical infrastructure, there should be the ability to slow it down or turn it off, so as to ensure public safety. That can be applied at the model layer or at the application layer or at the data centre layer or at multiple layers. It is an example of a reasonably discrete issue where governments can come together and in a relatively short period, fashion something like a licensing requirement with deployment obligations, post deployment monitoring obligations, and safety brake obligations.

You also had discussions with the US Congress on AI. What are their concerns?

If we talk about policymakers in the US, there are two characteristics. I don't think they're overwhelmingly optimistic, and neither are they overwhelmingly concerned. People are balanced, and they're looking at both sides of the equation. Policymakers, especially in the US Congress, are focused on wanting to learn before they act, and wanting to learn and act together rather than turn it into a partisan or polarised topic. That's not particularly typical of Congress, so I am encouraged by all of that. You'll see in the Senate, in particular, from next month, a real effort at bringing people together to do more of that.

Globally, governments are getting protectionist. After the pandemic, they want to correct and control global supply chains, and now want to build a consensus on AI. Your thoughts.

I think the opportunity is for like-minded countries to work together. Especially for the world's democracies, there's a common set of values. There's an interest in promoting trade and investment. For all the challenges, a lot of this is playing in India's favour. It has successfully sustained a level of trust with most of the world and even in a divided world, that's an enormous asset. I sometimes look at India's role as a country and Microsoft's role as a company, and see some similarities. We both need to be the trusted suppliers of technology the world wants to use, and feels confident relying upon. We both have succeeded far more than we have failed. We have recognised how important it is to sustain the kind of trust we have built. I think that's fundamentally the formula for the future. Yes, the world is more geopolitically complicated than it was a decade ago. But success requires that one develop the capability to navigate it. As a company, that's what we've focused on. When I come to India, I see an entire nation doing the same thing.
