
Social media conundrum

Moderating content will affect revenue

Business Standard Editorial Comment Mumbai
Last Updated: Jan 11, 2024 | 10:11 PM IST
After facing pressure from regulatory authorities and civil society around the world, Meta has said it will start hiding harmful content from teenagers using Instagram and Facebook. The move comes after more than 33 states in the United States accused the social-media company of making its platforms addictive and harmful to teenagers’ mental health, citing evidence from internal studies released by whistle-blowers. The European Commission had also asked Meta for information on how it plans to protect children from harmful content online. The social-media giant says it intends to remove posts that centre on suicide, self-harm, or eating disorders from the feeds of teenagers, and will rigorously excise such content even if the users follow adults who post on these topics. The move is evidently prompted by apprehension that regulators could lay down restrictive rules preventing Meta from signing up teenagers or engaging with them.

However, this initiative will be difficult to enforce or oversee, for several reasons. The platforms rely largely on self-certification for age verification, although underage users are in theory supposed to be protected. Facebook and Instagram require users to be at least 13 years old to sign up, with parental consent needed for minors to open an account. Such users are supposed to be shielded from “crude indecent language, frequent coarse language, explicit sexual dialogue and/or activity, or graphic violence and/or shocking images”. In practice, the age-verification restrictions are fairly easy to bypass, and tech-savvy youngsters quite often sign up as adults, even though this violates the terms of service and can get accounts shut down. Instagram imposes further restrictions: users under 19 cannot be privately messaged by anybody they do not follow, and their content cannot be tagged, mentioned, or used in remixes by anybody they do not follow. There are also sensitive-content controls for minors, and minor accounts are not listed among “accounts suggested for you (to follow)”.

There are loopholes within those restrictions, quite apart from the fact that it is easy enough for a youngster to sign up as an adult. Besides any content created by adults, minors can themselves create disturbing content. Instagram, in particular, has a poor reputation for issues related to self-image and for the bullying of minors by their peers. Moderating content to ensure it is “vanilla” is a titanic task: it is entirely possible for a disturbing post to remain viral for a substantial period before it is flagged. An army of moderators, along with a much improved reporting system, would be required to make this credible, and that has costs. The biggest issue for a social-media platform is perhaps the conflict between engagement and content moderation. Internal studies cited by whistle-blowers made it clear that the company pushed for higher user engagement even though it was aware that minors spending inordinate amounts of time on Facebook and Instagram had suffered harm. It will be tough for social-media platforms to rebuild their business models to reduce access to harmful content without reducing engagement, and lower engagement means a negative financial impact. It is hard to see how the platforms can resolve this conundrum without being driven to do so by the threat of strict regulatory oversight.

