CST Blog

New Facebook guidelines for removing hate content

17 March 2015

Facebook has published a new set of rules explaining to users what kind of content is not allowed on the site. These include tougher restrictions on hate content and a ban on the use of Facebook by organisations that promote hate.

These new Community Standards are welcome, but, as with all social media sites, their effectiveness depends on how they are interpreted and enforced by Facebook. We encourage users to report hate content according to the rules set out in these new guidelines. (You can learn how to report content to Facebook here.)

The Community Standards cover a range of areas and can be read in full here. The following sections are of most relevance to CST's work:

Hate Speech

Facebook removes hate speech, which includes content that directly attacks people based on their:

  • Race
  • Ethnicity
  • National origin
  • Religious affiliation
  • Sexual orientation
  • Sex, gender, or gender identity
  • Serious disabilities or diseases

Organizations and people dedicated to promoting hatred against these protected groups are not allowed a presence on Facebook. As with all of our standards, we rely on our community to report this content to us.

People can use Facebook to challenge ideas, institutions, and practices. Such discussion can promote debate and greater understanding. Sometimes people share content containing someone else's hate speech for the purpose of raising awareness or educating others about that hate speech. When this is the case, we expect people to clearly indicate their purpose, which helps us better understand why they shared that content.

We allow humor, satire, or social commentary related to these topics, and we believe that when people use their authentic identity, they are more responsible when they share this kind of commentary. For that reason, we ask that Page owners associate their name and Facebook Profile with any content that is insensitive, even if that content does not violate our policies. As always, we urge people to be conscious of their audience when sharing this type of content.

Direct Threats: How we help people who feel threatened by others on Facebook.

We carefully review reports of threatening language to identify serious threats of harm to public and personal safety. We remove credible threats of physical harm to individuals. We also remove specific threats of theft, vandalism, or other financial harm.

We may consider things like a person's physical location or public visibility in determining whether a threat is credible. We may assume credibility of any threats to people living in violent and unstable regions.

Bullying and Harassment: How we respond to bullying and harassment.

We don’t tolerate bullying or harassment. We allow you to speak freely on matters and people of public interest, but remove content that appears to purposefully target private individuals with the intention of degrading or shaming them. This content includes, but is not limited to:

  • Pages that identify and shame private individuals
  • Images altered to degrade private individuals
  • Photos or videos of physical bullying posted to shame the victim
  • Sharing personal information to blackmail or harass people
  • Repeatedly targeting other people with unwanted friend requests or messages

We define private individuals as people who have neither gained news attention nor the interest of the public by way of their actions or public profession.

Attacks on Public Figures: What protection public figures receive on Facebook.

We permit open and critical discussion of people who are featured in the news or have a large public audience based on their profession or chosen activities. We remove credible threats to public figures, as well as hate speech directed at them – just as we do for private individuals.

Dangerous Organizations: What types of organizations we prohibit on Facebook.

We don’t allow any organizations that are engaged in the following to have a presence on Facebook:

  • Terrorist activity
  • Organized criminal activity

We also remove content that expresses support for groups that are involved in the violent or criminal behavior mentioned above. Supporting or praising leaders of those same organizations, or condoning their violent activities, is not allowed.

We welcome broad discussion and social commentary on these general subjects, but ask that people show sensitivity towards victims of violence and discrimination.

Violence and Graphic Content

Facebook has long been a place where people share their experiences and raise awareness about important issues. Sometimes, those experiences and issues involve violence and graphic images of public interest or concern, such as human rights abuses or acts of terrorism. In many instances, when people share this type of content, they are condemning it or raising awareness about it. We remove graphic images when they are shared for sadistic pleasure or to celebrate or glorify violence.

When people share anything on Facebook, we expect that they will share it responsibly, including carefully choosing who will see that content. We also ask that people warn their audience about what they are about to see if it includes graphic violence.

These guidelines are significantly stronger and clearer than Facebook's previous Community Standards. They are in keeping with Facebook's endorsement of Best Practices in Combating Cyberhate, a statement of principles that CST helped to create. In particular, the inclusion of a rule that "Organizations and people dedicated to promoting hatred against these protected groups are not allowed a presence on Facebook" is welcome.

However, Facebook previously refused to remove content that clearly breached its own guidelines, which frustrated many users. We hope the new Community Standards signal a greater willingness to tackle the presence of hate content on Facebook and the use of the platform to promote hatred and bigotry.
