Policy details

Change log

Nov 18, 2020 (current version)
Aug 27, 2020
Jun 22, 2020
May 28, 2020
Apr 30, 2020
Feb 27, 2020
Dec 16, 2019
Nov 30, 2019
Jul 30, 2019
Jul 1, 2019
Apr 26, 2019
Mar 20, 2019
Nov 30, 2018
Jul 27, 2018
Jul 27, 2018
May 25, 2018

Policy Rationale

We remove content that glorifies violence or celebrates the suffering or humiliation of others because it may create an environment that discourages participation. We allow graphic content (with some limitations) to help people raise awareness about these issues.

We know that people value the ability to discuss important issues like human rights abuses or acts of terrorism. We also know that people have different sensitivities with regard to graphic and violent content. For that reason, we add a warning label to especially graphic or violent content so that it is not available to people under the age of 18 and so people are aware of the graphic or violent nature before they click to see it.
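
To make the warning label and age limit described above concrete, here is a minimal illustrative sketch (not Facebook’s actual enforcement code; the function name and inputs are hypothetical) of how such a gate could behave:

    # Illustrative sketch only. "is_graphic" would come from content review;
    # per the policy above, graphic content is not available to people under 18
    # and is shown to adults only behind a click-through warning screen.
    def viewing_experience(is_graphic: bool, viewer_age: int) -> str:
        if not is_graphic:
            return "show"             # unrestricted
        if viewer_age < 18:
            return "unavailable"      # not available to people under 18
        return "warning_screen"       # adults click through before viewing

    viewing_experience(True, 16)      # -> "unavailable"
    viewing_experience(True, 30)      # -> "warning_screen"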

Do not post:

Imagery of people

Videos of people or dead bodies in non-medical settings if they depict:

  • Dismemberment.
  • Visible internal organs; partially decomposed bodies.
  • Charred or burning people unless in the context of cremation or self-immolation when that action is a form of political speech or newsworthy.
  • Victims of cannibalism.
  • Throat-slitting.

Live streams of capital punishment of a person

For the following content, we include a warning screen so that people are aware the content may be disturbing. We also limit the ability to view the content to adults, ages 18 and older:

Imagery of people

Videos of people or dead bodies in a medical setting if they depict:

  • Dismemberment.
  • Visible internal organs; partially decomposed bodies.
  • Charred or burning people, including cremation or self-immolation when that action is a form of political speech or newsworthy.
  • Victims of cannibalism.
  • Throat-slitting.

Photos of wounded or dead people if they show:

  • Dismemberment.
  • Visible internal organs; partially decomposed bodies.
  • Charred or burning people.
  • Victims of cannibalism.
  • Throat-slitting.

Imagery that shows the violent death of a person or people by accident or murder

Imagery that shows capital punishment of a person

Imagery that shows acts of torture committed against a person or people

Imagery of non-medical foreign objects (such as metal objects, knives, nails) involuntarily inserted or stuck into people causing grievous injury

Imagery of animals

The following content involving animals:

  • Videos depicting humans killing animals if there is no explicit manufacturing, hunting, food consumption, processing or preparation context.
  • Imagery of animal-to-animal fights, when there are visible innards or dismemberment of non-regenerating body parts, unless in the wild.
  • Imagery of humans committing acts of torture or abuse against live animals.
  • Imagery of animals showing wounds or cuts that render visible innards or dismemberment, if there is no explicit manufacturing, hunting, taxidermy, medical treatment, rescue, or food consumption, preparation or processing context, or if the animal is already skinned or has its outer layer fully removed.

For the following content, we include a label so that people are aware the content may be sensitive:

Imagery of non-medical foreign objects voluntarily inserted into people through skin in religious or cultural context

Imagery of visible innards in a birthing context

Imagery of fetuses and newborn babies that show:

  • Dismemberment.
  • Visible innards.
  • An abortion or abandonment context.

Imagery of newborn babies in an abandonment context

Imagery of animals in a ritual slaughter context showing dismemberment, or visible innards, or charring or burning

For the following Community Standards, we require additional information and/or context to enforce:

We remove:

  • Videos and photos that show the violent death of someone when a family member requests their removal.

User experiences

See some examples of what enforcement looks like for people on Facebook, such as what it looks like to report something you don’t think should be on Facebook, to be told you’ve violated our Community Standards, and to see a warning screen over certain content.

Note: We’re always improving, so what you see here may be slightly outdated compared to what we currently use.

Data
Prevalence

Percentage of times people saw violating content

Content actioned

Number of pieces of violating content we took action on

Proactive rate

Percentage of violating content we found before people reported it

Appealed content

Number of pieces of content people appealed after we took action on it

Restored content

Number of pieces of content we restored after we originally took action on it

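The prevalence and proactive rate figures above are simple percentages over view and enforcement counts. As a rough illustration only (the numbers and function names below are made up, not drawn from Facebook’s reporting pipeline):

    # Illustrative sketch with made-up counts; not Facebook's reporting pipeline.
    def prevalence(violating_views: int, total_views: int) -> float:
        # Percentage of times people saw violating content
        return 100.0 * violating_views / total_views

    def proactive_rate(found_before_report: int, total_actioned: int) -> float:
        # Percentage of actioned content found before people reported it
        return 100.0 * found_before_report / total_actioned

    print(f"{prevalence(3_000, 1_000_000):.2f}%")   # 0.30%
    print(f"{proactive_rate(990, 1_000):.1f}%")     # 99.0%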

Reporting
1
Universal entry point

We have an option to report, whether it’s on a post, a comment, a story, a message or something else.

2
Get started

We help people report things that they don’t think should be on our platform.

3
Select a problem

We ask people to tell us more about what’s wrong. This helps us send the report to the right place.

4
Report submitted

After these steps, we submit the report. We also lay out what people should expect next.

Post-report communication
1
Update via notifications

After we’ve reviewed the report, we’ll send the reporting user a notification.

2
More detail in the Support Inbox

We’ll share more details about our review decision in the Support Inbox. We’ll notify people that this information is there and send them a link to it.

3
Appeal option

If people think we got the decision wrong, they can request another review.

4
Post-appeal communication

We’ll send a final response after we’ve re-reviewed the content, again to the Support Inbox.

Takedown experience
1
Immediate notification

When someone posts something that violates our Community Standards, we’ll tell them.

2
Additional context

We’ll also address common misperceptions around enforcement.

3
Explain the policy

We’ll give people easy-to-understand explanations of why their content was removed.

4
Ask for input

After we’ve established the context for our decision and explained our policy, we’ll ask people what they'd like to do next, including letting us know if they think we made a mistake.

5
Tell us more

If people disagree with the decision, we’ll ask them to tell us more.

6
Set expectations

Here, we set expectations on what will happen next.

Warning screens
1
Warning screens in context

We cover certain content in News Feed and other surfaces, so people can choose whether to see it.

2
More information

In this example, we explain why we’ve covered the photo, with additional context from independent fact-checkers.

Enforcement

We have the same policies around the world, for everyone on Facebook.

Review teams

Our global team of over 15,000 reviewers works every day to keep people on Facebook safe.

Stakeholder engagement

Outside experts, academics, NGOs and policymakers help inform the Facebook Community Standards.