
Online hate surges after Hamas attacks Israel. Why everyone is blaming social media.

2024-12-19 10:46:59

Allison Josephs got a bone-chilling threat in 2012 from someone who identified herself as Becky. “Hitler should have finished his good work,” the message read. The writer said she had her Louisville Slugger ready for the “next chance we get.”

The frequency of hate-filled social media posts targeting Jews has only increased since then, said Josephs, a mother of four who runs the nonprofit Jew in the City. 

So she said she was not surprised by the wave of online hate that immediately followed the deadly attacks in Israel, as people celebrated Hamas’ acts of terror and stoked fears of more violence to come.

“From a social media perspective, it’s already been so bad, it’s kind of hard for it to get worse,” Josephs said.

But it is getting worse. Groups who study online hate speech say it has spiked in recent days – not just for Jewish communities but also for Palestinians, who have faced increasing online hatred. And representatives of both communities agree on one thing: U.S.-based social media companies are still not doing anywhere near enough to rid their platforms of hate against targeted groups.

“When it comes to serious crises like these where you're going to see the worst of the worst types of violent content, gore, incitement to violence, it's really incumbent on social media companies to coordinate across platforms,” said Daniel Kelley, director of strategy and operations at the Anti-Defamation League’s Center for Tech and Society, “in order to make sure that the platforms play the minimal amount of role possible in inflaming further violence.”

The escalation in harsh rhetoric comes at a time when online hate speech was already increasing, experts said. In the weeks before the conflict, the ADL was locked in a bitter dispute with X, formerly Twitter, over Elon Musk’s alleged promotion of extremists and hands-off approach to content moderation. Meanwhile, 7amleh, an Arab civil rights organization, has been meeting with X for months, trying to persuade the company to quell hate speech against Palestinians.

And it’s not just X. Representatives of civil rights groups said they face a constant struggle working with powerful social media platforms to combat hate speech at the best of times. As the conflict in the Middle East escalates, they said, these companies really need to step up their game.


“Social media platforms are putting the burden of monitoring hate speech, incitement and violent speech on civil society organizations with limited resources and this is not OK,” said Mona Shtaya, a nonresident fellow at the Tahrir Institute for Middle East Policy. “We are talking about tech giants that are making billions of dollars annually, who cannot invest more money and resources in their platforms to protect oppressed communities, especially in times of crisis.”

Hate speech spikes on social media following attack

Several organizations monitoring online hate speech agree there has been a significant increase in such posts in the four days since Hamas attacked Israel. 

Matthew Williams, founder and chief scientist of the online harms monitoring platform nisien.ai, said the platform has been identifying trends across social media since the conflict broke out.

"The Hamas terror attack and the subsequent declaration of war by Israel coincided with significant spikes in both forms of hate speech, indicating their roles as 'trigger events,'" Williams said. “Nisien.ai has identified an increase in both antisemitic and anti-Muslim rhetoric, as might have been anticipated.” 

Debunked conspiracy theories, racist posts against Jewish people and Palestinians, and hate-filled calls for violence flooded X and other social media platforms over the weekend, according to the groups monitoring them. The ADL noted several examples of such posts in a report released this weekend.

On X, a popular antisemitic influencer posted photographs of the World Trade Center attacks on 9/11 referencing the completely disproven conspiracy theory that the attack was connected to Israel: “It’s hard to have much sympathy for the Israeli regime when they helped perpetrate this attack on my country.” The post had more than 3,800 likes as of Tuesday afternoon.

The ADL counted 347 messages on Telegram from extremists calling for violence against Jews, Israelis and Zionists in the first 18 hours of Saturday, up approximately 488% from the day before. 

7amleh, The Arab Center for Social Media Advancement, a nonprofit that partners with Meta, Facebook’s parent company, has also been monitoring hate speech against Palestinians. The organization’s researchers manually found 260 examples of hate speech across all platforms in the last few days. 7amleh’s automated analysis identified 4,305 posts on X related to the invasion and ensuing conflict that contained violent or hateful speech.

Noting the “surge of antisemitism, hate, disinformation, misinformation, and propaganda spread on social media and messaging platforms,” the ADL released a guide for social media platforms over the weekend. It details seven steps the organization wants social media companies to take immediately, including increasing human and automated trust and safety resources and promoting trusted and reliable news sources.

The ADL’s report also noted that domestic extremists, white supremacists and conspiracy theorists have flocked online to cheer on the attack on Israel. Much of that hate unfolded on Telegram, the go-to messaging and social media platform for extremists. But hateful posts have also been shared on X and on Gab, another fringe platform popular with far-right extremists. 

Social media platforms offer mixed responses to hate speech

Social media companies say they are closely monitoring the situation. 

Meta said it has Hebrew and Arabic speakers working around the clock to respond in real time. TikTok said it has also ramped up resources to help curb violent, hateful, or misleading content and boosted moderation resources in Hebrew and Arabic.

YouTube owner Google said that during major world events like the Hamas attacks on Israel, it prioritizes news and information from authoritative sources and removes harmful content. Content produced by terrorist organizations, including footage showing hostages, is not permitted on YouTube, and that extends to content produced by Hamas, the company said.

“Hate speech targeting the Jewish, Palestinian or any other religious or ethnic communities is not allowed on YouTube,” the company said in a statement. “This applies to all forms of content, including videos, livestreams and comments, and our policies are enforced across languages and locales.”

Adding to the challenge in monitoring antisemitic and anti-Muslim rhetoric is that “some of this hate is coded, with some posters avoiding recognized racial and religious slurs, and instead constructing their posts using negative stereotypes and tropes,” Williams said.

That makes it more difficult for automated systems and human moderators to catch inappropriate posts. As hateful language evolves and new terms and slogans become popular, moderators must constantly adapt their efforts.

But social media companies like Meta, looking to reduce costs, have eliminated jobs focused on safety issues. They are increasingly leaning on artificial intelligence systems to moderate content and enforce their rules. 

“For the moment we're seeing a lack of investment – we're seeing a lack of ability to independently verify the rate and the amount and the severity of hate,” the ADL’s Kelley said.

X has been widely criticized for its hands-off approach to posts trading in hate speech, misinformation and violence, according to experts monitoring social media.

Posts depicting graphic pictures of murdered civilians and Israeli soldiers, antisemitic hate speech and anti-Palestinian messages are being widely shared on Elon Musk’s platform, those experts said.

X, which gutted its content moderation team under Musk, said late Monday that it was taking the situation seriously as more than 50 million posts about the Hamas attack flooded the platform.

“As the events continue to unfold rapidly, a cross-company leadership group has assessed this moment as a crisis requiring the highest level of response,” the company’s safety account tweeted.

X said its escalations team was “continuing to proactively monitor for antisemitic speech,” removed newly created Hamas-affiliated accounts and took action against “tens of thousands of posts for sharing graphic media, violent speech, and hateful conduct.” It urged people who were concerned about the flood of graphic and misleading content to choose not to see “sensitive media.” X did not respond to a request for comment.

European officials warned X on Tuesday that the company appears to be hosting misinformation about the war in violation of the European Union's content moderation law, CNN reported.

Nadim Nashif, founder and director of 7amleh, said his group has been trying for months to get X to take anti-Palestinian hate speech on the platform seriously. 

Nashif said the company doesn’t do anything to moderate hateful posts written in Hebrew.

The last time 7amleh met with the company, the group offered it a lexicon of commonly used hate speech terms in Hebrew, Nashif said. The X representative asked 7amleh to translate it into English, he said.

X is “horrible at content moderation,” he said. “Specifically, in Hebrew, they have nothing. Basically, you can write, ‘kill them,’ ‘rape them,’ ‘burn them’ – whatever you want. It's like an open stage for extremism.”

X’s response to the flood of hate speech has been to promote “community notes,” a crowdsourced effort that relies on verified volunteers to attach corrective notes to inaccurate posts.

Musk praised the open-source effort to moderate harmful content in a post on Tuesday, writing “Thank you for fighting for truth.”
