Policy details


Policy Rationale

We aim to prevent potential offline violence that may be related to content on our platforms. While we understand that people commonly express disdain or disagreement by threatening or calling for violence in casual, non-serious ways, we remove language that incites or facilitates violence, as well as credible threats to public or personal safety. This includes violent speech targeting a person or group of people on the basis of their protected characteristic(s) or immigration status. Context matters, so we consider factors such as whether the content condemns or raises awareness of violent threats, whether a non-credible threat is directed at terrorists or other violent actors (e.g., "Terrorists deserve to be killed"), and the public visibility and vulnerability of the target. We remove content, disable accounts, and work with law enforcement when we believe there is a genuine risk of physical harm or a direct threat to public safety.

We Remove:

We remove threats of violence against various targets. Threats of violence are statements or visuals representing an intention, aspiration, or call for violence against a target. Threats can take several forms, including statements of intent, calls to action, advocacy, aspirational statements, and conditional statements.

We do not prohibit threats when they are shared in an awareness-raising or condemning context, when less severe threats are made in the context of contact sports, or when threats are directed at certain violent actors, such as terrorist groups.

Universal protections for everyone
Everyone is protected from the following threats:

  • Threats of violence that could lead to death (or other forms of high-severity violence)
  • Threats of violence that could lead to serious injury (mid-severity violence). Against public figures and groups not defined by protected characteristics, we remove such threats only when credible; against all other targets (including groups defined by protected characteristics), we remove them regardless of credibility
  • Admissions to high-severity or mid-severity violence (in written or verbal form, or visually depicted by the perpetrator or an associate), except when shared in a context of redemption, self-defense, contact sports (mid-severity or less), or when committed by law enforcement, military or state security personnel
  • Threats or depictions of kidnappings or abductions, unless it is clear that the content is being shared by a victim or their family as a plea for help, or shared for informational, condemnation or awareness-raising purposes

Additional protections for private adults, all children, high-risk persons, and persons or groups targeted on the basis of their protected characteristic(s):
In addition to the universal protections for everyone, all private adults (when self-reported), children, high-risk persons, and persons or groups of people targeted on the basis of their protected characteristic(s) are protected from threats of low-severity violence. These tiers are summarized in the sketch below.
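
Read together, the tiers above form a decision matrix over threat severity, target type, and (for some targets) credibility. The following Python sketch is an illustrative, unofficial summary of that matrix; every identifier (Severity, Target, should_remove, and the flags) is our own shorthand for the prose above, not part of any actual enforcement system.

```python
from enum import Enum, auto

class Severity(Enum):
    HIGH = auto()  # could lead to death or other high-severity violence
    MID = auto()   # could lead to serious injury
    LOW = auto()   # low-severity violence

class Target(Enum):
    PUBLIC_FIGURE = auto()
    NON_PC_GROUP = auto()         # group not defined by protected characteristics
    PRIVATE_ADULT = auto()
    CHILD = auto()
    HIGH_RISK_PERSON = auto()
    PC_PERSON_OR_GROUP = auto()   # targeted on the basis of protected characteristic(s)

def should_remove(severity: Severity, target: Target,
                  credible: bool = False, self_reported: bool = False) -> bool:
    """Illustrative reading of the severity tiers above; not an official rule set."""
    if severity is Severity.HIGH:
        # High-severity threats are removed against every target.
        return True
    if severity is Severity.MID:
        # Credibility matters only for public figures and non-PC groups;
        # all other targets are protected regardless of credibility.
        if target in (Target.PUBLIC_FIGURE, Target.NON_PC_GROUP):
            return credible
        return True
    # Low-severity threats: only the additionally protected tiers are covered,
    # and private adults only when self-reported.
    if target is Target.PRIVATE_ADULT:
        return self_reported
    return target in (Target.CHILD, Target.HIGH_RISK_PERSON,
                      Target.PC_PERSON_OR_GROUP)

# A credible mid-severity threat against a public figure is removed;
# the same threat against a child is removed regardless of credibility.
assert should_remove(Severity.MID, Target.PUBLIC_FIGURE, credible=True)
assert should_remove(Severity.MID, Target.CHILD, credible=False)
```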

Other Violence
In addition to all of the protections listed above, we remove the following:

  • Content that asks for, offers, or admits to offering services of high-severity violence (for example, hitmen, mercenaries, assassins, female genital mutilation) or advocates for the use of these services
  • Instructions on how to make or use weapons where there is language explicitly stating the goal of seriously injuring or killing people, or imagery that shows or simulates the end result, unless in the context of recreational self-defense, training by a country’s military, commercial video games, or news coverage (posted by a news Page or with a news logo).
  • Instructions on how to make or use explosives, unless with context that the content is for a non-violent purpose (for example, part of commercial video games, clear scientific/educational purpose, fireworks, or specifically for fishing)
  • Threats to take up weapons or to bring weapons to a location or forcibly enter a location (including but not limited to places of worship, educational facilities, polling places or locations used to count votes or administer an election), or locations where there are temporary signals of a heightened risk of violence.
  • Threats of violence related to voting, voter registration, or the administration or outcome of an election, even if there is no target.

For the following Community Standards, we require additional information and/or context to enforce:

We Remove:

  • Threats against law enforcement officers or election officials, regardless of their public-figure status or the credibility of the threat.
  • Coded statements where the method of violence is not clearly articulated, but the threat is veiled or implicit, as shown by the combination of a threat signal and a contextual signal from the lists below (illustrated in the sketch after this list):
    • Threat: a coded statement that does one of the following:
      • Is shared in a retaliatory context (e.g., expresses a desire to engage in violence against others in response to a grievance or threat that may be real, perceived or anticipated)
      • Refers to historical or fictional incidents of violence (e.g., threatens others by invoking known incidents of violence from history or fiction)
      • Acts as a threatening call to action (e.g., invites or encourages others to carry out violent acts or to join in carrying them out)
      • Indicates knowledge of, or shares, sensitive information that could expose others to violence (e.g., makes note of or implies awareness of personal information that might make a threat of violence more credible, such as a person's residential address, place of employment or education, daily commute routes or current location)
    • Context: at least one of the following:
      • Local context or expertise confirms that the statement in question could lead to imminent violence.
      • The target of the content or an authorized representative reports the content to us.
  • Implicit threats to bring armaments to locations, including but not limited to places of worship, educational facilities, polling places or locations used to count votes or administer an election (or encouraging others to do the same), or locations where there are temporary signals of a heightened risk of violence.
  • Claims or speculation about election-related corruption, irregularities, or bias when combined with a signal that the content is threatening violence (e.g., threats to take up or bring a weapon, visual depictions of a weapon, references to arson, theft, or vandalism), including:
    • Targeting individual(s)
    • Targeting a specific location (state or smaller)
    • Where the target is not explicit
  • References to election-related gatherings or events when combined with a signal that the content is threatening violence (e.g., threats to take up or bring a weapon, visual depictions of a weapon, references to arson, theft, or vandalism).
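
In effect, a veiled or coded threat becomes actionable only when two independent kinds of evidence co-occur: at least one threat signal and at least one contextual signal. The Python sketch below is a hypothetical illustration of that two-signal rule; the signal names and the function are our own labels for the lists above, not an actual Meta API.

```python
# Hypothetical illustration of the two-signal rule for veiled threats.
# The signal names mirror the lists above; identifiers are ours, not Meta's.
THREAT_SIGNALS = {
    "retaliatory_context",
    "historical_or_fictional_violence_reference",
    "threatening_call_to_action",
    "exposes_sensitive_personal_information",
}

CONTEXT_SIGNALS = {
    "local_expertise_confirms_imminent_violence",
    "reported_by_target_or_authorized_representative",
}

def veiled_threat_actionable(signals: set[str]) -> bool:
    """Actionable only when a threat signal AND a contextual signal co-occur."""
    return bool(signals & THREAT_SIGNALS) and bool(signals & CONTEXT_SIGNALS)

# A retaliatory coded statement alone is not enough...
assert not veiled_threat_actionable({"retaliatory_context"})
# ...but it becomes actionable once the target reports it.
assert veiled_threat_actionable(
    {"retaliatory_context", "reported_by_target_or_authorized_representative"}
)
```
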
User experiences

See some examples of what enforcement looks like for people on Facebook, such as what it looks like to report something you don’t think should be on Facebook, to be told you’ve violated our Community Standards, and to see a warning screen over certain content.

Note: We’re always improving, so what you see here may be slightly outdated compared to what we currently use.

Reporting

1. Universal entry point
We have an option to report, whether it’s on a post, a comment, a story, a message or something else.

2. Get started
We help people report things that they don’t think should be on our platform.

3. Select a problem
We ask people to tell us more about what’s wrong. This helps us send the report to the right place.

4. Report submitted
After these steps, we submit the report. We also lay out what people should expect next.

Post-report communication

1. Update via notifications
After we’ve reviewed the report, we’ll send the reporting user a notification.

2. More detail in the Support Inbox
We’ll share more details about our review decision in the Support Inbox. We’ll notify people that this information is there and send them a link to it.

3. Appeal option
If people think we got the decision wrong, they can request another review.

4. Post-appeal communication
We’ll send a final response after we’ve re-reviewed the content, again to the Support Inbox.

Takedown experience

1. Immediate notification
When someone posts something that doesn’t follow our rules, we’ll tell them.

2. Additional context
We’ll also address common misperceptions and explain why we made the decision to enforce.

3. Policy explanation
We’ll give people easy-to-understand explanations about the relevant rule.

4. Option for review
If people disagree with the decision, they can ask for another review and provide more information.

5. Final decision
We set expectations about what will happen after the review has been submitted.

Warning screens

1. Warning screens in context
We cover certain content in News Feed and other surfaces, so people can choose whether to see it.

2. More information
In this example, we explain why we’ve covered the photo, with additional context from independent fact-checkers.

Enforcement

We have the same policies around the world, for everyone on Facebook.

Review teams

Our global team of over 15,000 reviewers works every day to keep people on Facebook safe.

Stakeholder engagement

Outside experts, academics, NGOs and policymakers help inform the Facebook Community Standards.

Get help with violence and incitement

Learn what you can do if you see something on Facebook that goes against our Community Standards.