Social Media

EU probes Musk's X for 'illegal content' related to Israel-Hamas war

The goal is to assess whether the platform complies with strict new regulations aimed at ensuring online user safety.

Elon Musk, who owns X/Twitter, Tesla and SpaceX.
Michel Euler / AP

The European Commission on Thursday made a formal, legally binding request for information from Elon Musk's social media platform X over its handling of hate speech, misinformation and violent terrorist content related to the Israel-Hamas war.

It is the first step in what could become the EU's inaugural investigation under the Digital Services Act, in this case to determine if the site formerly known as Twitter is in compliance with the tough new rules meant to keep users safe online and stop the spread of harmful content.

San Francisco-based X has until Wednesday to respond to questions related to how its crisis response protocol is functioning. Responses to other questions must be received by Oct. 31. The commission said its next steps, which could include the opening of formal proceedings and penalties, would be determined by X's replies.

Representatives for X did not immediately respond to a message seeking comment. The company's CEO, Linda Yaccarino, said earlier that the site has removed hundreds of Hamas-linked accounts and taken down or labeled tens of thousands of pieces of content since the militant group's attack on Israel. One social media expert called the actions "a drop in the bucket."

Yaccarino on Thursday outlined steps taken by X to combat illegal content flourishing on the platform. She was responding to an earlier letter from a top European Union official requesting information on how X is complying with the EU's new digital rules during the Israel-Hamas war. That earlier letter essentially served as a warning and was not legally binding; the latest request is.

"X is proportionately and effectively assessing and addressing identified fake and manipulated content during this constantly evolving and shifting crisis," Yaccarino said in a letter to European Commissioner Thierry Breton, the 27-nation bloc's digital enforcer.

But some say the efforts are not nearly enough to tackle the problem.

"While these actions are better than nothing, it is not enough to curtail the misinformation problem on X," said Kolina Koltai, a researcher at the investigative collective Bellingcat who previously worked at Twitter on Community Notes.

"There is an overwhelming amount of misinformation on the platform," Koltai said. "From what we have seen, the moderation efforts from X are only addressing a drop in the bucket."

Since the war erupted, photos and videos of the carnage have flooded social media, including haunting footage of Hamas fighters taking terrified Israelis hostage, alongside posts from users pushing false claims and misrepresenting videos from other events.

The conflict is one of the first major tests for the EU's groundbreaking digital rules, which took effect in August. Breton fired off a similar letter Thursday to TikTok, telling CEO Shou Zi Chew that he has a "particular obligation" to protect child and teen users from "violent content depicting hostage taking and other graphic videos" reportedly making the rounds on the video sharing app.

For X, changes that Musk has made to the platform since he bought it last year mean accounts that subscribe to X's blue-check service can get paid if their posts go viral, creating a financial incentive to post whatever gets the most reaction. Plus, X's workforce — including its content moderation team — has been gutted.

Those changes are running up against the EU's Digital Services Act, which forces social media companies to step up policing of their platforms for illegal content, such as terrorist material or illegal hate speech, under threat of hefty fines.

"There is no place on X for terrorist organizations or violent extremist groups and we continue to remove such accounts in real time, including proactive efforts," Yaccarino wrote in the letter posted to X.

X has taken action to "remove or label tens of thousands of pieces of content," Yaccarino said, pointing out that there are 700 unique Community Notes — a feature that allows users to add their own fact-checks to posts — "related to the attacks and unfolding events."

The platform has been "responding promptly" and in a "diligent and objective manner" to takedown requests from law enforcement agencies from around the world, including more than 80 from EU member states, Yaccarino said.

Koltai, the researcher and former Twitter employee, said Community Notes are not an "end-all solution to curtailing misinfo" and that there are gaps that the feature just can't fill yet.

"There are still many videos and photos on X that don't have notes that are unmoderated, and continue to spread misleading claims," she said.

Since Musk acquired Twitter and renamed it, social-media watchers say the platform has become not just unreliable but an active promoter of falsehoods, while a study commissioned by the EU found it is the worst-performing platform for online disinformation.

Rivals such as TikTok, YouTube and Facebook also are coping with a flood of unsubstantiated rumors and falsehoods about the Middle Eastern conflict, playing the typical whack-a-mole that erupts each time a news event captures world attention.

Breton, the EU official, urged TikTok's leader to step up its efforts at tackling disinformation and illegal content and respond within 24 hours. The company did not reply immediately to an email seeking comment.

Breton's warning letters have also gone to Mark Zuckerberg, CEO of Facebook and Instagram parent Meta.