
Responsible content

New EU law can improve social media

Business Standard Editorial Comment, New Delhi
Last Updated: Apr 25, 2022 | 10:38 PM IST
The European Union (EU) has approved landmark legislation giving social media users in the world’s largest economic bloc protection against hate speech, disinformation, and other harmful content. The EU’s Digital Services Act (DSA) makes social media businesses more responsible for content disseminated and amplified on their platforms. It specifies fines of up to 6 per cent of annual global revenues, or outright bans, for non-compliance. The DSA is yet to be formally passed by the EU Parliament and the 27 member-states. It will not come into force until at least 15 months after its passage, or until January 1, 2024, whichever is later. The DSA empowers governments to ask platforms to take down content that may be deemed illegal. Such content includes material promoting terrorism, child sexual abuse, hate speech, and commercial scams. Social media platforms like TikTok, Facebook, and Twitter would have to create tools for users to flag such content in “easy, effective ways”. Marketplaces like Amazon would have to create tools to allow users to flag illegal products. Platforms may review flagged content before deciding whether to delete it, and must carry out annual reviews and risk assessments of the content on their services.

Importantly, the DSA also bans advertisements targeted at minors, as well as advertisements targeted specifically on the basis of gender, ethnicity, or sexual orientation. It also bans some of the deceptive techniques used to nudge people into online commitments without their realising that they are opting in, such as being signed up by default for online services. This law affects the ecosystem in multiple ways. Currently, most platforms have algorithms designed to maximise user engagement, regardless of the quality of content. If someone is interested in conspiracy theories or hate speech, the algorithm will flood that individual’s timeline with such content. Companies will now have to devise better content moderation and may need to prevent their algorithms from being gamed to disseminate disinformation or hate speech. The need to be more selective about advertising could affect revenues. The advertising provisions of the DSA will have to be monitored on the ground: there are products (toys, games, movies) intended for minors, for instance, and others (such as feminine hygiene products) that are gender-specific.

Every social media platform has more than its fair share of disinformation and hate speech. There is also ample evidence, including leaked internal studies, indicating that platforms are aware they have been used to influence elections, foment racist and casteist violence, and spread all sorts of poisonous lies and disinformation. So long as revenue models depend on engagement, no major social media platform (the Act places more stringent conditions on large platforms with over 45 million active users) has an incentive to deal with toxic content. This law, with its substantial penalties, could force platforms to review their business models. The EU includes many nations that rank very high on the Democracy Index, with strong safeguards on free speech. The DSA could, therefore, be a model for similar legislation in the US, Canada, and other democracies outside the EU. But the EU also includes nations where far-right authoritarian political movements are in power. The law could be misused by such regimes, which may impose their own bad-faith definitions of hate speech or disinformation. It is a good initiative, but it remains to be seen how it works in practice.


Topics: Business Standard Editorial Comment, European Union, Social Media
