
Dealing with deepfakes requires vigilance, investment: Yatharth Saraf

The director of machine learning at social media platforms ShareChat and Moj says consumers and companies will adapt to such misinformation

Yatharth Saraf, director of machine learning at homegrown social media platforms ShareChat and Moj
Peerzada Abrar | Bengaluru
Last Updated: Dec 24, 2023 | 8:15 PM IST
Deepfakes, realistic yet fabricated videos and audio created by artificial intelligence (AI) algorithms, have become a worry. Such misinformation is increasing because technology has made it much easier to produce, Yatharth Saraf, director of machine learning at homegrown social media platforms ShareChat and Moj, told Peerzada Abrar in a video interview. Edited excerpts:

What are the AI technology bets that ShareChat is making?

The modern way users consume content on social media is that they don't necessarily create a well-curated graph, where they say they want to see content from a particular set of people. They rely more on the platform to decide what they're interested in, and then they expect the platform to give them relevant and meaningful content that matches those interests. However, users have different needs based on the languages they speak and the places they live in. If one has to do this at scale, you need AI to build a recommendation engine that can personalise content to users' needs. Those needs also keep evolving and changing depending on context and time. The system has to be smart enough to understand what you're looking for at that particular moment and then, out of the millions of pieces of content being created, pick the three or four most relevant items and show them to the user. It's like finding a needle in a haystack, and the only way to do it is by building a large-scale recommendation engine based on AI.

The other area we are focusing on is advertising, which is the source of revenue for our platform. Making ads relevant and targeting them at the right users is a well-understood playbook across the industry, but you have to use AI at scale to do it.
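As a rough illustration of the ranking step Saraf describes, picking a handful of items out of millions for one user, here is a minimal, hypothetical sketch; the Candidate fields, engagement scores, and language filter are invented for illustration, not ShareChat's actual system:

```python
# Hypothetical sketch: rank candidate posts for one user and return the
# few most relevant items to show right now. All fields and scores here
# are illustrative assumptions, not a real recommendation system.
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    language: str
    predicted_engagement: float  # score from an assumed upstream ML model

def recommend(candidates: list[Candidate],
              user_languages: set[str], k: int = 4) -> list[Candidate]:
    # Keep only languages the user understands, rank by predicted
    # engagement, and surface the top k items.
    eligible = [c for c in candidates if c.language in user_languages]
    eligible.sort(key=lambda c: c.predicted_engagement, reverse=True)
    return eligible[:k]

feed = recommend(
    [Candidate("p1", "hi", 0.91),
     Candidate("p2", "ta", 0.88),
     Candidate("p3", "hi", 0.42)],
    user_languages={"hi"},
)
print([c.post_id for c in feed])  # ['p1', 'p3']
```

A production system would add candidate generation, feature stores, and learned ranking models; the sketch only shows the final filter-and-rank shape of the problem.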

Another area of focus is community standards. We want to make sure the content being created and shown to users meets the standards we have set. To ensure that, you have to use AI that can directly flag content that violates those standards. We have human agents who review it for content moderation, but there is so much data that they can't do the whole thing manually. A lot of content also goes viral quickly, so you will not be able to review everything as fast as needed. You have to use AI models to classify whether content is acceptable, and then humans may double-check before allowing it to go through. That is another big area where we apply AI. The scale is large for us: we have two apps, ShareChat, which has 180 million users, and Moj, which has 160 million. Any time someone opens the apps, we're trying to make sure they see the most relevant and delightful piece of content they want at that moment. The only way you can do that for millions of users is through AI. For us, AI is one of the key levers driving the business forward.
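A minimal sketch of the triage flow Saraf outlines, a classifier scoring content while humans double-check the uncertain cases, could look like the following; the thresholds and routing rules are assumptions made up for illustration:

```python
# Hypothetical triage: a moderation model scores each post for policy
# violation; clear violations are blocked automatically, borderline
# cases go to human reviewers, and the rest is allowed through.
BLOCK_THRESHOLD = 0.95   # assumed: confident violation, remove outright
REVIEW_THRESHOLD = 0.60  # assumed: uncertain, queue for a human agent

def triage(violation_score: float) -> str:
    """Route a post based on the classifier's violation probability."""
    if violation_score >= BLOCK_THRESHOLD:
        return "block"
    if violation_score >= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

for score in (0.99, 0.72, 0.10):
    print(f"score={score:.2f} -> {triage(score)}")
```

The two thresholds capture the division of labour described above: machines handle the clear-cut volume, and scarce human attention goes to the ambiguous middle.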

What kind of investment are you making in the area of AI?

The AI models that we use for content moderation help keep people safe. A lot of the technology understands content through computer vision models or multimodal models (text, images, audio, and video). These areas are evolving, and we are working to make sure we use the latest advancements in content understanding, ads and recommendations. The biggest part of this is deploying it at scale. Our best bet is to invest heavily in our AI teams and make sure those models keep getting better. There is a large investment in AI within the company, and a strong pool of talent has been assembled both in India and outside it. We have hired home-grown talent as well as people from other major technology companies abroad. There is also cross-pollination among the teams situated across the globe. For a social media company like ours to thrive, we have realised that we need to stay close to cutting-edge technology in these areas and apply it at scale. A large chunk of our engineering teams is focused on building the AI solutions we use in our products.

You mentioned responsible AI concerns. What are these?

AI models are trained on data that humans have generated in some form or another. However, there are biases in many datasets and in human-generated data generally. If you are not cognizant of this and do not monitor it properly, the models can amplify the biases that exist in their datasets. For example, before joining ShareChat, I worked at Meta on speech recognition and audio and video understanding. Say you have a model that understands people who speak English. The model may be trained on data in a particular, dominant accent, and it may do a good job of understanding people who speak with that accent. But when people speak with a slightly different accent, the model may start making errors and frustrate users. At Meta, we evaluated our English speech-recognition model on all the different accents that exist across the world. If the model did badly on a certain subset of users, we made extra efforts to get data in that accent to improve the model. It is important to make sure you are not adversely hurting some subpopulation of users too much, even if the average performance is okay.
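The per-accent evaluation Saraf describes can be sketched as computing an error metric for each subgroup separately rather than one global average; this hypothetical example uses word error rate (WER), with the data, accent labels, and flagging threshold invented for illustration:

```python
# Hypothetical per-subgroup evaluation: compute word error rate (WER)
# per accent and flag accents where the model underperforms, instead of
# relying on a single global average that can hide subgroup failures.
from collections import defaultdict

def wer(ref: str, hyp: str) -> float:
    # Word error rate via edit distance over whitespace tokens.
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + sub)
    return d[len(r)][len(h)] / max(len(r), 1)

def wer_by_accent(samples):
    # samples: (accent, reference transcript, model hypothesis) triples
    buckets = defaultdict(list)
    for accent, ref, hyp in samples:
        buckets[accent].append(wer(ref, hyp))
    return {a: sum(v) / len(v) for a, v in buckets.items()}

scores = wer_by_accent([
    ("accent_a", "turn the lights off", "turn the lights off"),
    ("accent_b", "turn the lights off", "turn the light of"),
])
for accent, score in sorted(scores.items()):
    flag = "  <- collect more training data" if score > 0.15 else ""
    print(f"{accent}: WER={score:.2f}{flag}")
```

The flagged accents are the subpopulations Saraf mentions: groups whose experience is poor even when the overall average looks acceptable.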

What is the role of social media firms in addressing deepfakes?

I don't see it necessarily getting out of control, but I do think it requires vigilance and investment on the part of the platforms, and education on the user side. Take the example of social media platform X (formerly Twitter): anyone in the world can broadcast without an editor checking what they are saying. Things like misinformation increase because it becomes much easier to do. But over time, I think both consumers and platforms adapt. You learn what you can trust, and the mechanisms that you, as a platform, need to put in place to help users trust the content they see. We need to figure out how we deal with deepfakes, and we are only at the beginning of dealing with them.

How concerned are you about the problem of deepfakes?

It is a concern, and we are at an early phase of it. The models and technology have become so good and so easily accessible that anyone can create malicious content using deepfake models. As the quality of AI models improves, distinguishing real content from fake becomes harder, and the same models can be misused to create more authentic-looking deepfakes. But there is also a lot of work happening in the industry to address the problem, so that one can differentiate between authentic and fake content.

On the content moderation side, we keep ourselves updated with the latest trends in the deepfakes space, and we look at solutions that can be deployed to combat this in a quick and agile way. AI itself can help solve the problem it has created: you can build models to differentiate between fake and authentic content, then flag that content and take it down, or label information that is not authentic and was generated by a machine. We are also working on such technologies.
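A very rough sketch of the flag-then-act step described here, assuming some upstream deepfake-detection model that emits a probability, might look like this; the detector, thresholds, and actions are all illustrative assumptions:

```python
# Hypothetical handling of an upload given a score from an assumed
# deepfake-detection model: confident fakes are taken down, borderline
# cases are published with a machine-generated label, the rest pass.
def handle_upload(video_id: str, fake_score: float) -> str:
    if fake_score >= 0.95:
        return f"{video_id}: removed (flagged as deepfake)"
    if fake_score >= 0.70:
        return f"{video_id}: published with 'AI-generated' label"
    return f"{video_id}: published"

for vid, score in (("v1", 0.98), ("v2", 0.80), ("v3", 0.05)):
    print(handle_upload(vid, score))
```

This mirrors the two options Saraf lists: take the content down, or label it as machine-generated so users can judge it accordingly.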
