In late 2022, Elon Musk spent $44 billion to buy the social media platform Twitter. By July 2023, he had changed the platform's name to “X”, fired more than 3,000 employees and reinstated the once-banned Donald Trump (before Trump was elected to a second term).

Something else happened to the platform. According to a new study from the University of California, Berkeley, the number of bot and bot-like fake accounts did not decline as Musk had promised, while the rate of hate speech skyrocketed.

This kind of rhetoric, along with bot accounts, carries real costs. Previous research has linked online hate speech to real-world hate crimes. Other research has connected bot and bot-like accounts that promote misinformation to increased scams, election interference and hindered public health campaigns.

The question lingered: would these negative trends continue under Musk's ownership? The latest UC Berkeley analysis shows that the spike in hate speech that began just before Musk purchased the platform did not stop. In fact, the study reports that it continued through May 2023. Not only was there an increase in homophobic, transphobic and racist slurs, but the average number of “likes” on hateful posts rose 70 percent.

These findings run counter to the claims from X that exposure to hate speech decreased after Musk purchased the platform.

The researchers note that they can't trace a direct cause-and-effect relationship between Musk's ownership of X and their findings, since information on specific internal changes at the platform is limited. They do put forth one likely explanation for the increase in most types of bots, however: reduced moderation, probably the result of the large cuts to Twitter's workforce following Musk's purchase.

If the workers who normally remove bot accounts resigned from Twitter or were laid off, fraudulent bot accounts would have had free rein. The observed increase in fraudulent accounts is consistent with this explanation: a smaller workforce after the sale left fewer moderators to police the platform.

The study's authors expressed concern about the safety of online platforms and called for increased moderation on X, as well as further research into activity across all social media platforms.

Meanwhile, what can you do to help limit hate speech on social media?

  • Report hateful content directly to the platform
  • Actively challenge harmful rhetoric by engaging with factual information
  • Support organizations working to counter online extremism
  • Advocate for stricter platform policies and enforcement mechanisms, and press X to improve its hate speech detection algorithms and the transparency of its moderation practices
  • Educate yourself and others about the dangers of hate speech and how to identify it

The study is published in PLOS ONE.