Twitter on Wednesday said it will soon begin labelling tweets containing "manipulated media" that aims to mislead people, and would take steps including removal of tweets if such content could harm public safety or lead to voter suppression.
Other measures include showing a warning to users before they share or like tweets that contain "synthetic and manipulated media" and reducing the visibility of such tweets on its platform.
The crackdown on synthetic and manipulated media comes at a time of widespread global concern over altered and forged content on social media, including deepfake videos, and its potentially catastrophic implications.
"If we believe that media shared in a tweet have been significantly and deceptively altered or fabricated, we will provide additional context on the tweet. This means we may apply a label to the tweet, show a warning to people before they retweet or like the tweet...," the microblogging platform said in its latest blog post.
Other actions under the updated rules include reducing visibility of the tweet on Twitter and preventing it from being recommended, and providing extra explanations or clarifications, as available, such as a landing page with more context.
"In most cases, we will take all of the above actions on tweets we label. Our teams will start labeling Tweets with this type of media on March 5, 2020," Twitter said.
To determine whether the media has been significantly and deceptively altered or fabricated, Twitter said it will consider whether the content has been substantially edited in a manner that fundamentally alters its composition and sequence.
It will also take stock of any visual or auditory information -- such as new video frames, overdubbed audio, or modified subtitles -- that has been added or removed.
Another consideration will be whether media depicting a real person has been fabricated or simulated.
Twitter said it will also consider whether the context in which the media is shared could lead to misunderstanding or suggests a deliberate intent to deceive people about the content.
It will also take note of the context provided alongside media, including the text of the accompanying tweet and metadata associated with the media.
Tweets that share manipulated media are subject to removal under this policy if they are likely to cause harm, Twitter said.
It said the definition of 'harm' could include threats to the physical safety of a person or group, risk of mass violence or widespread civil unrest, threats to the privacy or ability of a person or group to freely express themselves or participate in civic events, and voter suppression or intimidation.
Deepfake technology can be misused to manipulate videos to show people saying things they never said.
It gained attention in 2018, when actor and director Jordan Peele created a doctored video that made it appear as if former US President Barack Obama was making derogatory remarks about the current US President Donald Trump.
Experts warn that if deepfake content makes its way to unsuspecting users on social media, the ramifications could be damaging and dangerous.
The latest announcement by the microblogging platform came three months after it sought public views on ways to effectively clamp down on manipulated or deceptively altered media.
In its statement on Wednesday, Twitter noted that it had previously announced plans to seek input from around the globe on how to address synthetic and manipulated media.
Twitter said it had received over 6,500 responses from people around the world, and consulted with civil society and academic experts on the draft rules.
"Overall, people recognize the threat that misleading altered media poses and want Twitter to do something about it," it said.
Globally, over 70 percent of people who use Twitter said that "taking no action" on misleading altered media would be unacceptable.
Respondents, it said, were nearly unanimous in their support for Twitter providing additional information or context on tweets that have altered or misleading media.
Twitter said 9 out of 10 individuals said placing warning labels next to significantly altered content would be acceptable.