Meta announced a series of updates to its content moderation policies on Tuesday, significantly changing how content is managed across its platforms. The changes include the elimination of professional fact-checking in the United States, adjustments to its automated moderation systems, and revisions to its hateful conduct policy, according to CNN.
Revised hateful conduct policy
Meta’s updated hateful conduct policy introduces new allowances for previously prohibited content:
• Gender-based content: Users may now refer to “women as household objects or property” and describe “transgender or non-binary individuals as ‘it.’” Both prohibitions have been removed from the policy.
• Mental health allegations: The policy permits “allegations of mental illness or abnormality” when tied to gender or sexual orientation. Meta framed this as part of ongoing political and religious discussions about transgenderism and homosexuality.
• Protected groups: Meta has removed its prohibition on content denying the existence of “protected” groups, allowing users to question whether certain groups exist or should exist.
• Profession-based content: The policy now permits arguments favouring gender-based restrictions in professions like law enforcement, military service, and teaching.
The changes are effective immediately. Meta clarified that while restrictions have been relaxed, it will continue enforcing rules against slurs, incitement of violence, and targeted harassment, particularly for protected groups based on race, ethnicity, and religion.
Fact-checking network disbanded
Meta announced it is disbanding its US-based professional fact-checking network, replacing it with a user-driven “community notes” system. This model allows users to add context to posts and aligns with Meta’s goal of promoting free expression while reducing over-enforcement.
Automated systems previously tasked with scanning for violations will now focus exclusively on severe issues, such as child exploitation and terrorism. This shift aims to reduce “over-censorship” of posts that do not actually violate the platform’s rules.
Acknowledging risks, CEO Mark Zuckerberg said, “We’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
Concerns over misinformation
Meta’s decision to disband its professional fact-checking network has raised concerns among disinformation researchers and online content experts. Critics argue that reliance on user-generated notes may lack accountability and rigour, potentially increasing the spread of harmful content and viral false claims.
Meta emphasised it would continue taking action against harmful misinformation when necessary but provided limited specifics on enforcement under the new system.