Here’s how Twitter and Elon Musk can prevent racist ‘raids’ on the social network

Nearly half of the Twitter accounts that spewed hate speech on the platform as Elon Musk took over have been suspended, according to new research.

In this photo illustration, an image of Elon Musk is displayed on a computer screen and the logo of Twitter on a mobile phone in Ankara, Turkiye, on Oct. 6, 2022.

Muhammed Selim Korkutata | Anadolu Agency | Getty Images

When Tesla and SpaceX CEO Elon Musk took over at Twitter, showing up at headquarters on Oct. 27, 2022, online trolls and bigots raided the social network, polluting it with a deluge of racist epithets and other hate speech.

But a new study from the non-profit Network Contagion Research Institute (NCRI) and Rutgers finds that Twitter’s safety team responded better to that “raid” than the company did to a similar event in April 2022.

According to NCRI’s CEO Adam Sohn, a raid is when bad actors online engage in coordinated activity to try to disrupt social media platforms, usually to harm marginalized people or specific targets.

GamerGate, probably the most infamous raid, took place around 2014, when 4chan trolls in the video game community lobbed misogynistic attacks at women in the industry. They specifically targeted one woman, a critic who had spoken out about sexist tropes in games. The campaign was waged across myriad social platforms, including Twitter and Reddit, and manifested in real-world rape and death threats and a bomb scare targeting the critic.

Conspiracy-driven communities online are also known to use raid tactics.

Some people engage in so-called “inauthentic” activity on social networks just to see if they can get away with it (“for the lulz”).

NCRI analyst Alex Goldenberg says that while Twitter’s action in response to the hate speech last week was effective, the company could have forecast and prevented it, too.

Hours before the deluge of hate speech, he said, “We assessed that this particular online troll campaign was being driven by coordinated, inauthentic activity that originated specifically on 4Chan. There, we detected a surge in mentions of the n-slur in tandem with mentions of Twitter.”

NCRI uses sophisticated machine learning software and systems to monitor huge amounts of social network content, and to track rising hatred and threats against marginalized groups online, including Black, Jewish, Hindu and Muslim people.
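NCRI hasn’t published its pipeline, but the basic ingredient it describes, counting how often a tracked term appears alongside mentions of a target platform within a time window, can be sketched in a few lines of Python. Everything below (the post source, the placeholder term lists, the hourly bucketing) is an illustrative assumption, not NCRI’s actual tooling:

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical sketch of the co-mention tracking described above.
# `posts` is assumed to be an iterable of (unix_timestamp, text) pairs pulled
# from whatever platform is being monitored; the term lists are placeholders.
TARGET_TERMS = {"slur_placeholder"}   # tracked terms (redacted here)
CONTEXT_TERMS = {"twitter", "raid"}   # co-occurring context terms

def hourly_comention_counts(posts):
    """Count posts per hour that mention a target term together with a context term."""
    counts = Counter()
    for timestamp, text in posts:
        tokens = set(text.lower().split())
        if tokens & TARGET_TERMS and tokens & CONTEXT_TERMS:
            hour = datetime.fromtimestamp(timestamp, tz=timezone.utc).replace(
                minute=0, second=0, microsecond=0
            )
            counts[hour] += 1
    return counts
```

A time series like this is the raw material for the kind of surge warning NCRI says it issued hours before the raid.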

It makes research tools available and publishes reports, safety recommendations and warnings, sometimes delivering them directly to social networks, about where threats are rising and may be likely to spill over into the physical world. According to Sohn, NCRI’s hope is to use this information to prevent real-world harm from those online efforts.

NCRI was previously able to forecast an uptick in violence against Asian Americans as the Covid pandemic emerged, and identify an imminent threat from an anti-government group (the Boogaloo Boys) against law enforcement personnel. It also warned of the rise of communities encouraging self-harm, primarily cutting, on Twitter.

What NCRI found this time

The NCRI found that in the 12 hours after Musk arrived at Twitter headquarters, the use of an anti-Black epithet (the n-word) on the social network increased nearly 500% from the previous average. NCRI published this quick study the next morning as Musk’s deal officially closed.

For the new study, NCRI dug back into the historical data. The firm found that when Musk first disclosed that he had agreed to buy Twitter for $54.20 per share, back in April 2022, a similar raid had occurred.

Comparing the two events, NCRI found that Twitter did a better job stopping the raid this time.

“While nearly half of the accounts recently disseminating the n-slur have been suspended, less than 10% of accounts had been suspended in the previous raid, suggesting this is a historical problem predating the purchase with historically uneven enforcement,” the researchers wrote.

Despite Twitter’s forceful response to the hate speech, some damage had already been done.

Several advertisers have paused spending on Twitter until they can get a better indication of how Musk will deliver on his promise to keep it “warm and welcoming” and prevent it from becoming a “free-for-all hellscape.”

Among those who have quit Twitter for now are Shonda Rhimes, creator of “Grey’s Anatomy,” “Bridgerton” and other hit TV shows; Grammy-winning singer-songwriter Sara Bareilles; and actor and “This Is Us” producer Ken Olin.

Others are waiting to see where Musk and his teams take the product but say they may leave depending on the results.

Basketball icon LeBron James expressed his concern about the rise in racist tweets, and Musk replied to him on Twitter with a link to a thread from the social network’s current head of safety, Yoel Roth. The longtime Twitter executive said his teams had taken steps to quash accounts that were responsible for a huge portion of the attacks.

NCRI’s analysis confirms that the steps Roth and the safety team took were effective.

In the future, NCRI would like to see greater use of “automated anomaly detection,” technology commonly used in cybersecurity to monitor network performance or to detect when somebody may be trying to hack into a company’s systems, says NCRI’s lead intelligence analyst Alex Goldenberg.

Anomaly detection applied to social media would have let Twitter take preventive action as soon as the planned raid was first detected.

Goldenberg and Sohn compare this technology to a smoke detector or carbon-monoxide detector for social problems brewing online.
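Neither NCRI nor Twitter has described a specific implementation, but a minimal version of the idea is a rolling baseline with an alert threshold: compare the current hour’s count of raid-related co-mentions against the recent average and flag large deviations. The window size, threshold and data format below are illustrative assumptions, not a description of any platform’s real system:

```python
import statistics

def detect_surge(hourly_counts, baseline_window=72, z_threshold=4.0):
    """Flag hours where the co-mention count spikes far above its recent baseline.

    `hourly_counts` is a list of counts ordered by hour (for example, the output
    of the earlier co-mention sketch); returns indices of hours that look anomalous.
    """
    alerts = []
    for i in range(baseline_window, len(hourly_counts)):
        baseline = hourly_counts[i - baseline_window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # avoid dividing by zero on flat baselines
        z_score = (hourly_counts[i] - mean) / stdev
        if z_score >= z_threshold:
            alerts.append(i)  # the "smoke alarm": escalate this hour for human review
    return alerts
```

In practice, such a detector is only a trigger for human review; the analogy to a smoke alarm holds in that it signals something may be burning, not what to do about it.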

While Musk has billed himself as a free speech absolutist, his track record of defending others’ rights is mixed. More recently, he has acknowledged a need to balance free speech ideals with trust and safety on Twitter.

One thing he has not promised to do publicly is take better care with his own tweets.

Musk has a history of posting unfounded conspiracy theories, comments and jokes that have been widely interpreted as sexist, anti-LGBTQ, racist or antisemitic. Memorably, he has posted Hitler memes to his widely followed Twitter account.

Just after he took over Twitter, Musk shared an unfounded, anti-LGBTQ conspiracy theory about a home invasion and assault on Paul Pelosi, husband of House Speaker Nancy Pelosi. Musk later deleted the tweet without explanation.

He currently has 113.7 million followers listed on the platform, a number that’s growing rapidly.