Business Standard

Artificial intelligence muddies battleground in unexpected way: Report

Researchers have found relatively few AI fakes. The mere possibility that AI content could be circulating is leading people to dismiss genuine images, video and audio as inauthentic

NYT
It was a gruesome image that shot rapidly around the internet: a charred body, described as a deceased child, that was apparently photographed in the opening days of the conflict between Israel and Hamas. Some observers on social media quickly dismissed it as an “AI-generated fake” — created using artificial intelligence tools that can produce photorealistic images with a few clicks.

Several AI specialists have since concluded that the technology was probably not involved. By then, however, doubts about the image’s veracity had already spread widely.

Since Hamas’ terror attack on October 7, disinformation watchdogs have feared that fakes created by AI tools, including the realistic renderings known as deepfakes, would confuse the public and bolster propaganda efforts. So far, they have been correct in their prediction that the technology would loom large over the war, but not exactly for the reason they expected.

Disinformation researchers have found relatively few AI fakes, and even fewer that are convincing. Yet the mere possibility that AI content could be circulating is leading people to dismiss genuine images, video and audio as inauthentic.

On forums and social media platforms like X, Truth Social, Telegram and Reddit, people have accused political figures, media outlets and other users of brazenly trying to manipulate public opinion by creating AI content, even when the content is almost certainly genuine.

“Even by the fog of war standards that we are used to, this conflict is particularly messy,” said Hany Farid, a computer science professor at the University of California, Berkeley, and an expert in digital forensics, AI and misinformation. “The specter of deepfakes is much, much more significant now. It doesn’t take tens of thousands; it just takes a few, and then you poison the well, and everything becomes suspect.”

Amid highly emotional discussions about Gaza, many happening on social media platforms that have struggled to shield users against graphic and inaccurate content, trust continues to fray. And now experts say that malicious agents are taking advantage of AI’s availability to dismiss authentic content as fake — a concept known as the liar’s dividend.

That misdirection has been bolstered during the war partly by the presence of some content that genuinely was created artificially.

A post on X with 1.8 million views claimed to show soccer fans in a stadium in Madrid holding an enormous Palestinian flag; users noted that the distorted bodies in the image were a telltale sign of AI generation. A Hamas-linked account on X shared an image that was meant to show a tent encampment for displaced Israelis but pictured a flag with two blue stars instead of the single star featured on the actual Israeli flag. The post was later removed. Users on Truth Social and a Hamas-linked Telegram channel shared pictures of Prime Minister Benjamin Netanyahu of Israel synthetically rendered to appear covered in blood.

Far more attention was paid to footage that bore no signs of AI tampering yet drew suspicion anyway, such as video of the director of a bombed hospital in Gaza giving a news conference, which some called “AI-generated” even though it was filmed from different vantage points by multiple sources.

Other examples have been harder to categorise. The Israeli military released a recording of what it described as a wiretapped conversation between two Hamas members, but which some listeners said was spoofed audio. (The New York Times, the BBC and CNN reported that they had yet to verify the conversation.)

In an attempt to discern truth from AI, some social media users turned to detection tools, which claim to spot digital manipulation but have proved to be far from reliable. A test by the Times found that image detectors had a spotty track record, sometimes misdiagnosing pictures that were obvious AI creations or labelling real photos as inauthentic.

In the first few days of the war, Netanyahu shared a series of images on X, claiming they were “horrifying photos of babies murdered and burned” by Hamas. When conservative commentator Ben Shapiro amplified one of the images on X, he was repeatedly accused of spreading AI-generated content.

“People will believe anything that confirms their beliefs or makes them emotional,” said Alex Mahadevan, the director of the Poynter media literacy programme MediaWise.


©2023 The New York Times News Service

First Published: Oct 29 2023 | 11:04 PM IST
