In a step to curb misleading information on its platforms, social media giant Meta on Wednesday announced a new policy that will require advertisers to disclose digital alterations in advertisements starting in 2024.
According to the policy, advertisers will have to disclose whenever a social issue, electoral, or political advertisement contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered to depict a real person as saying or doing something they did not say or do.
Pictures and videos depicting a realistic-looking person who does not exist or a realistic-looking event that did not happen, as well as altered footage of a real event, will also have to be disclosed on the platform.
“We’re announcing a new policy to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI,” read the official announcement by Meta.
The policy also requires disclosure of altered media depicting a realistic event that allegedly occurred but that is not a true image, video, or audio recording of the event.
Once the advertiser discloses in the ad flow that the content is digitally created or altered, Meta will add this information to the ad. The information will also appear in the platform's Ad Library.
Meta has also specified that advertisers running these ads do not need to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad. “This may include image size adjusting, cropping an image, colour correction, or image sharpening, unless such changes are consequential or material to the claim, assertion, or issue raised in the ad,” says the Meta website.
If advertisers do not make the disclosures required by the new policy, their ads will be rejected, the company says. Repeated failure to disclose may result in penalties against the advertiser.
“As always, we remove content that violates our policies whether it was created by AI or a person. Our independent fact-checking partners review and rate viral misinformation and we do not allow an ad to run if it’s rated as False, Altered, Partly False, or Missing Context,” read the official announcement.
The development comes amid worldwide criticism of social media platforms over the growing spread of misinformation and fake news created with AI tools.
In India, the Ministry of Electronics and Information Technology (MeitY) recently issued an advisory to all social media platforms reminding them of their legal obligation to identify and remove misinformation promptly. The advisory came in response to a deepfake video of actor Rashmika Mandanna going viral online. Several Indian celebrities called for a dedicated policy and legal action over the issue.