Policy details

Change log

Feb 29, 2024
Nov 29, 2023
Oct 17, 2023
May 25, 2023
Dec 22, 2022
Nov 23, 2022
Sep 29, 2022
Nov 18, 2020
Oct 28, 2020
Aug 27, 2020
Jun 22, 2020
May 28, 2020
Apr 30, 2020
Feb 27, 2020
Dec 16, 2019
Nov 30, 2019
Jul 30, 2019
Jul 1, 2019
Apr 26, 2019
Mar 20, 2019
Nov 30, 2018
Jul 27, 2018
May 25, 2018

Policy Rationale

To protect users from disturbing imagery, we remove content that is particularly violent or graphic, such as videos depicting dismemberment, visible innards or charred bodies. We also remove content that contains sadistic remarks towards imagery depicting the suffering of humans and animals.

In the context of discussions about important issues such as human rights abuses, armed conflicts or acts of terrorism, we allow graphic content (with some limitations) to help people to condemn and raise awareness about these situations.

We know that people have different sensitivities to graphic and violent imagery. For that reason, we add a warning label to some graphic or violent imagery so that people are aware it may be sensitive before they click through. We also restrict the ability of users under 18 to view such content.

Do not post:

Imagery of people

Videos of people or dead bodies in non-medical settings if they depict:

  • Dismemberment.
  • Visible internal organs; partially decomposed bodies.
  • Charred or burning people, unless in the context of cremation, or of self-immolation when that act is a form of political speech or is newsworthy.
  • Victims of cannibalism.
  • Throat-slitting.

Live streams of capital punishment of a person

Sadistic Remarks

  • Sadistic remarks towards imagery that is put behind a warning screen under this policy (a screen advising people that the content may be disturbing), unless there is a self-defense context or medical setting.
  • Sadistic remarks towards the following content, which includes a label so that people are aware it may be sensitive:
    • Imagery of one or more persons subjected to violence and/or humiliating acts by one or more uniformed personnel performing a police function.
    • Imagery of fetuses and babies outside of the womb that are deceased.
  • Explicit sadistic remarks towards the suffering of animals depicted in the imagery.
  • Offering or soliciting imagery that is deleted or put behind a warning screen under this policy, when accompanied by sadistic remarks.

For the following content, we include a warning screen so that people are aware the content may be disturbing. We also limit the ability to view the content to adults, ages 18 and older:

Imagery of people

Videos of people or dead bodies in a medical setting if they depict:

  • Dismemberment.
  • Visible internal organs; partially decomposed bodies.
  • Charred or burning people, including in the context of cremation, or of self-immolation when that act is a form of political speech or is newsworthy.
  • Victims of cannibalism.
  • Throat-slitting.

Photos of wounded or dead people if they show:

  • Dismemberment.
  • Visible internal organs; partially decomposed bodies.
  • Charred or burning people.
  • Victims of cannibalism.
  • Throat-slitting.

Imagery depicting a person’s violent death (including their moment of death or the aftermath) or a person experiencing a life-threatening event (such as being struck by a car, falling from a great height, or experiencing other possibly fatal physical injury).

Imagery that shows capital punishment of a person

Imagery that shows acts of torture committed against a person or people

Imagery of non-medical foreign objects (such as metal objects, knives, nails) inserted or stuck into a person causing grievous injury

Imagery of animals

The following content involving animals:

  • Videos depicting live animals, or animals going from live to dead, experiencing dismemberment, visible innards, charring, being boiled alive or burning.
  • Photos depicting live animals experiencing dismemberment, visible innards, charring, being boiled alive or burning.
  • Imagery of animal-to-animal fights, when there are visible innards or dismemberment of non-regenerating body parts, unless in the wild.

For the following content, we include a label so that people are aware the content may be sensitive:

Imagery of non-medical foreign objects inserted into a person through their skin in a religious or cultural context

Imagery of visible innards in a birthing context

Imagery depicting one or more persons subjected to violence and/or humiliating acts by one or more uniformed personnel doing a police function

Imagery of fetuses and babies outside of the womb that are deceased, unless another person is present in the image.

Imagery of fetuses and babies outside the womb in an abandonment context

Videos of animals going from live to dead that do not show dismemberment, visible innards, charring or burning

Imagery of humans committing acts of torture or abuse against live animals

For the following Community Standards, we require additional information and/or context to enforce:

We remove:

  • Videos and photos that show the violent death of someone when a family member requests its removal.
  • Videos depicting the violent death of a person where the death is not visible in the video but the audio is fully or partially captured, and the death is confirmed by a law enforcement record, death certificate, Trusted Partner report or media report.
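
Taken together, the rules above form a tiered enforcement model: the most graphic content is removed, disturbing-but-allowed content is covered with a warning screen and limited to adults, and sensitive-context content receives a label. The Python sketch below only illustrates that tiering; the EnforcementTier type, its values and the tier_for helper are hypothetical names, not part of Facebook's actual systems.

    from enum import Enum, auto

    # Hypothetical sketch of the tiered policy described above;
    # not Facebook's actual enforcement code.
    class EnforcementTier(Enum):
        REMOVE = auto()             # "Do not post" content
        WARNING_SCREEN = auto()     # covered; viewing limited to ages 18+
        SENSITIVITY_LABEL = auto()  # labeled as potentially sensitive
        NO_ACTION = auto()

    def tier_for(prohibited: bool, disturbing: bool, sensitive: bool) -> EnforcementTier:
        """Map a (hypothetical) classification of a post to a tier.

        prohibited -- matches a "Do not post" rule (e.g. sadistic remarks)
        disturbing -- graphic but allowed (e.g. photos of wounded people)
        sensitive  -- sensitive context (e.g. innards in a birthing context)
        """
        if prohibited:
            return EnforcementTier.REMOVE
        if disturbing:
            return EnforcementTier.WARNING_SCREEN
        if sensitive:
            return EnforcementTier.SENSITIVITY_LABEL
        return EnforcementTier.NO_ACTION

The ordering of the checks mirrors the policy: removal takes precedence over a warning screen, which takes precedence over a label.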

User experiences

See some examples of what enforcement looks like for people on Facebook: what it looks like to report something you don’t think should be on Facebook, to be told you’ve violated our Community Standards, and to see a warning screen over certain content.

Note: We’re always improving, so what you see here may be slightly outdated compared to what we currently use.

Data
Prevalence

Percentage of times people saw violating content

Content actioned

Number of pieces of violating content we took action on

Proactive rate

Percentage of violating content we found before people reported it

Appealed content

Number of pieces of content people appealed after we took action on it

Restored content

Number of pieces of content we restored after we originally took action on it
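
These metrics are simple ratios over view and action counts. As a minimal sketch (the function names and the numbers in the example are illustrative, not taken from Facebook's reports):

    # Illustrative only; assumes hypothetical counts.
    def prevalence(violating_views: int, total_views: int) -> float:
        """Percentage of content views that were of violating content."""
        return 100.0 * violating_views / total_views

    def proactive_rate(found_before_report: int, content_actioned: int) -> float:
        """Percentage of actioned content found before people reported it."""
        return 100.0 * found_before_report / content_actioned

    print(f"{proactive_rate(990, 1_000):.1f}%")  # -> 99.0%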

Reporting
1. Universal entry point

We have an option to report, whether it’s on a post, a comment, a story, a message or something else.

2. Get started

We help people report things that they don’t think should be on our platform.

3. Select a problem

We ask people to tell us more about what’s wrong. This helps us send the report to the right place.

4. Report submitted

After these steps, we submit the report. We also lay out what people should expect next.

Post-report communication
1. Update via notifications

After we’ve reviewed the report, we’ll send the reporting user a notification.

2. More detail in the Support Inbox

We’ll share more details about our review decision in the Support Inbox. We’ll notify people that this information is there and send them a link to it.

3. Appeal option

If people think we got the decision wrong, they can request another review.

4. Post-appeal communication

We’ll send a final response after we’ve re-reviewed the content, again to the Support Inbox.

Takedown experience
1. Immediate notification

When someone posts something that doesn’t follow our rules, we’ll tell them.

2. Additional context

We’ll also address common misperceptions and explain why we made the decision to enforce.

3. Policy explanation

We’ll give people easy-to-understand explanations about the relevant rule.

4. Option for review

If people disagree with the decision, they can ask for another review and provide more information.

5. Final decision

We set expectations about what will happen after the review has been submitted.

Warning screens
1. Warning screens in context

We cover certain content in News Feed and other surfaces, so people can choose whether to see it.

2. More information

In this example, we explain why we’ve covered the photo, with added context from independent fact-checkers.

Enforcement

We have the same policies around the world, for everyone on Facebook.

Review teams

Our global team of over 15,000 reviewers works every day to keep people on Facebook safe.

Stakeholder engagement

Outside experts, academics, NGOs and policymakers help inform the Facebook Community Standards.

Get help with violent and graphic content

Learn what you can do if you see something on Facebook that goes against our Community Standards.