CCDH Accuses X (Twitter) Of Not Moderating Hate Speech
Out of 200 posts about the Israel-Hamas war, the CCDH says 98 per cent remained live on X seven days after they were reported to moderators.
Elon Musk-owned social media platform X (formerly Twitter) is reportedly struggling to curb hate speech amidst the Israel-Hamas war.
According to a report by the Center for Countering Digital Hate (CCDH), X is failing to moderate posts that violate the platform's own community rules regarding hate speech, Islamophobia, anti-Semitism and misinformation.
On October 31, researchers at the CCDH found and reported to X's moderators 200 hateful posts about the Israel-Hamas war that violated the platform's own rules. It is worth noting that X previously denied EU allegations that it allows illegal content about the Israel-Hamas war on the platform.
However, the researchers found that 98 per cent of the posts had not been removed from the platform even after moderators were given seven days to process the reports.
The researchers claim the reported posts, which were collected from 101 separate X accounts, incited violence against Jews, Palestinians or Muslims and promoted bigotry.
Notably, X suspended just one of the accounts in response to the reports, while the posts that remained live had garnered a combined 24,043,693 views by the time the report was published.
Separately, a recently shared X post purporting to show three girls beating 10 Muslims in France has been making the rounds online. While some commenters claim the video is fake, it had gained over 9,000 views at the time of writing.
X criticised for failing to moderate hate speech
In an earlier report, the CCDH accused X of failing to remove posts it had flagged for extreme hate speech. Unsurprisingly, X denied the report's conclusions in a post on the platform.
In fact, X went on to file a lawsuit against the CCDH, accusing it of unlawfully scraping X's data to create "flawed" studies about the platform.
Interestingly, 43 of the 101 X accounts highlighted in the study were verified. Users who pay £8 monthly for X Premium benefit from algorithmic boosts that improve the visibility of their X posts.
A recent study by NewsGuard found that verified users are behind the vast majority of viral misinformation about the Israel-Hamas war on X.
X's head of business operations, Joe Benarroch, told The Verge that the company has urged X users to read a new blog post outlining the "proactive measures" the platform has taken to keep users safe during the ongoing Israel-Hamas war.
In the blog post, X indicated that it removed 3,000 accounts that were linked to violent entities in the region. Aside from this, it took action against more than 325,000 pieces of content that did not comply with its terms of service.
X did not reveal how long it took to remove these posts and accounts after they were reported. Musk recently made a slight change to creator monetisation as well. Now, X will disable revenue sharing on posts with misinformation.