By Davey Alba and Daniel Zuidijk
Posts about the attack in Israel have led to confusion, misinformation and conflict on Elon Musk’s X, formerly known as Twitter, exposing how his acquisition and policy changes have transformed the social media site into an unreliable resource during a time of crisis, researchers said.
Hours after Hamas gunmen from the Gaza Strip surged into Israel, carrying out the most significant attack on the country in decades, unverified photos and videos of missile air strikes, buildings and homes being destroyed and other posts depicting military violence — in Israel and Gaza — swirled on the platform. Many of the posts repurposed old images of armed conflict, passing them off as new, and were pushed by anonymous accounts that carried blue checkmarks — signaling that they had purchased verification under X’s “premium” subscription service, formerly known as Twitter Blue. Other accounts posted military footage that actually originated from video games. And a handful of viral falsehoods were pushed by far-right pundits on the platform, a common tactic for increasing engagement.
Mike Rothschild, a conspiracy theory researcher who has studied viral falsehoods on social media, said that news of the attack on Israel was “the first real test of Elon Musk’s version of Twitter, and it failed spectacularly.”
X, under Musk’s ownership since October 2022, has made changes to its content safety policies, with the consequences now glaringly apparent in this moment of geopolitical crisis, researchers said. Over the past year, the company loosened its platform’s rules, cut trust-and-safety employees after previously saying it would expand the team, reinstated once-banned accounts and allowed people to pay for a checkmark on the social network. Though falsehoods about the Israeli-Palestinian conflict have spread on social media platforms across the internet, the researchers said the effect on X stood out as false posts became unavoidable.
“It’s now almost impossible to tell what’s a fact, what’s a rumor, what’s a conspiracy theory, and what’s trolling,” Rothschild said. “Musk’s changes haven’t just made X useless during a time of crisis. They’ve made it actively worse.”
An X representative couldn’t be reached for comment. An X Corp. account said Monday that there have been more than 50 million posts about the attack since it happened, and that “a cross-company leadership group has assessed this moment as a crisis requiring the highest level of response.” At the same time, “X believes that, while difficult, it's in the public's interest to understand what's happening in real time.” The company suggested that users change their settings to control what media they see, and pointed to an option to turn off visibility for posts with sensitive media.
Earlier on Monday, X’s safety account posted another message suggesting the Community Notes feature will help users understand what they’re seeing. “When critical moments happen, people on X share their perspective in real time,” the company said in the post. “@CommunityNotes is a way for people on X to add context to posts, helping others understand more about what they are seeing. We add new contributors regularly and just added more today.”
Imran Ahmed, chief executive officer of the Center for Countering Digital Hate, a nonprofit, said that X’s statement showed the platform was pushing the burden for a solution onto its users. “We keep telling people that it’s their job to wade through an ever-growing wave of misinformation that is increasingly indistinguishable from reality,” said Ahmed, whose group is being sued by X Corp. after publishing research in July showing a rise in hate speech on the social network.
But the platforms have a responsibility to create a safe environment for their users, including mitigating the risk of their tools becoming a threat to the public “by amplifying misinformation and hate, and distorting the lens through which so many people see the world,” especially in times of crisis, Ahmed added.
As news of the Israeli-Palestinian conflict began to emerge Saturday, a far-right political commentator published a post on X that claimed to show video evidence of Palestinian militants going door to door and killing Israeli citizens. “Imagine if this was happening in your neighborhood, to your family,” said the commentator, Ian Miles Cheong, who has frequently interacted with Musk on X.
Over three days, the short video gained nearly 50 million likes, shares and comments; it was viewed 12.7 million times on X. Later, a “community note” was attached to the post, noting that the video showed Israeli law enforcement — not members of the Palestinian militant group Hamas. But it wasn’t clear how far the misleading post spread before the correction, and the post remains live on the platform.
Ian Miles Cheong didn’t respond to a request for comment.
A few hours later, a paid X account with an anonymous handle weighed in with a false post. “And there it is…” the account said. “The US is sending $8B worth of military aid to Israel.” The post included a screenshot of what appeared to be a statement from the White House authorizing the aid to Israel.
But no such statement has ever appeared on the US government’s website. The dateline and details in the screenshot were manipulated, copying a White House statement from July that announced financial aid for Ukraine, according to an independent misinformation researcher who posted a fact-check online. A community note was also added to the post on X, but the false claim was repeated in at least 1,400 other posts on the platform, not all of them with a label appended, according to research compiled by NewsGuard, a group that documents viral online posts as part of its work to assess the quality of websites and news outlets.
Altogether, the posts received more than 604,100 views on the platform, NewsGuard said. The claim was also repeated in several posts on ByteDance Ltd.’s TikTok, where it spread unchecked, collecting at least 17,600 views, according to a Bloomberg review of the platform. It also spread on Telegram channels and QAnon forum posts, according to Bloomberg’s review.
Around the same time, an account purporting to represent the Taliban posted on X, claiming without evidence that the group was asking the governments of Iran, Iraq and Jordan for passage to join up with Hamas. The unsubstantiated claim collected 2.5 million views on X and spread widely on Meta Platforms Inc.’s Facebook through an article published by The Gateway Pundit, a far-right website that often spreads conspiracy theories.
On Facebook, The Gateway Pundit’s article was shared 1,600 times, reaching as many as 440,000 people on the social network, according to CrowdTangle, a Meta-owned social media analysis tool. But Michael Kugelman, director of the South Asia Institute at the Wilson Center, a nonpartisan think tank, said there was no reason to believe the claim from the account was true.
The Taliban “have never staged any operations outside Afghanistan,” said Kugelman, who has studied Afghanistan and the Taliban since 2007. “Their ideology and operational strategies have always focused on Afghanistan, and Afghanistan alone.” He also pointed out that previous posts made by the account were uncharacteristically critical of Qatar, which the Taliban would never be.
“Finally, if we suspend our disbelief and imagine that the Taliban really were preparing to send their fighters to Gaza, then they would not announce this publicly,” Kugelman added. “Broadcasting your covert plans to the world makes no sense.”
Gateway Pundit, Meta and TikTok didn’t respond to requests for comment.
The falsehood that Ukraine sold weapons to Hamas also spread on X, despite reports that the Pentagon found no evidence that Ukraine aid was being diverted away from the country. Joey Mannarino, a far-right podcast host who is verified on X, collected the most likes and reposts of the claim on the platform, according to NewsGuard’s research. His post stating that Hamas had claimed Ukraine sold the group weapons reached nearly 4,000 likes and shares on X, and it collected nearly 7 million views on the platform.
Mannarino quickly followed up with a post saying, “For the record, we don’t know if this is true or not.” Jack Brewster, NewsGuard’s enterprise editor, said such posts are a common tactic for misinformers “to escape the culpability of being proved wrong.” Social media users who spread the falsehood, meanwhile, get to “escape doing the hard work that journalists do of verifying viral content before they report something as true,” according to Brewster.
Mannarino didn’t respond to a request for comment.
Ahmed, the CCDH executive, said the risk posed by the viral falsehoods that remain online is not merely that people get an inaccurate picture of the conflict, but that further violence occurs as a result of the lies being spread online. “Lies underpin the hate,” Ahmed said. “They act reflexively both to create hatred, and to reinforce it.”
“The real-world consequences of these lies are violence on the streets, innocents being hurt and potentially, lives lost,” he added, “because some of these images and videos are designed to invoke the most extreme reactions possible.”