Against a backdrop of rising violence against religious minorities around the world, Twitter today said that it would update its hateful conduct rules to include dehumanizing speech against religious groups.
“After months of conversations and feedback from the public, external experts and our own teams, we’re expanding our rules against hateful conduct to include language that dehumanizes others on the basis of religion,” the company wrote on its Twitter Safety blog.
The company said tweets that target specific religious groups with dehumanizing language will be removed as violations of its rules.
Previously posted tweets containing the offending language will also need to be removed, but they will not count toward the suspension of a user’s account, since they were made before Twitter implemented and communicated the policy.
Around the world, religious minorities have been attacked in hate crimes that some organizations believe to be inspired (at least in part) by hate speech on social media. Whether it’s white supremacists responsible for the murders of Jewish congregants in Pittsburgh or Muslim worshippers in Christchurch, New Zealand, or attacks by Islamic militants like ISIS, which left more than 100 people dead in Sri Lanka on Easter Sunday, social media has played a key role in disseminating hate speech and radicalizing untold numbers of users.
In the U.S. alone, the Anti-Defamation League found that 37% of Americans had experienced severe online hate and harassment in 2018. According to that survey, roughly 35% of Muslims and 16% of Jews were harassed online because of their religious affiliation. The ADL also reported that 28% of Twitter users had experienced harassment.
Twitter said it started with religious groups after receiving more than 8,000 responses from people located in more than 30 countries around the world.
When modifying its rules, Twitter said it focused on narrowing the category’s scope, restricting it to religious groups rather than political groups, hate groups or other non-marginalized groups targeted with this kind of language. Twitter also said it developed a longer, more in-depth training process for its review teams to ensure they were well informed when evaluating reports.
“It’s good that Twitter is seeking public comment as they’re developing their policy decisions and seeking input from external experts on hate, but hate and harassment on Twitter is a serious, longstanding problem,” wrote a spokesperson with the Anti-Defamation League in an email. “The fact that language dehumanizing others on the basis of religion only now violates Twitter’s rules shows how far they have to go to truly combat hate. We have urged Twitter to track and release the results of this and other policy changes to be transparent about the efficacy of their efforts.”