Policy details

Change log

Oct 13, 2021
Aug 26, 2021
May 4, 2021
Apr 2, 2021
Jan 28, 2021
Nov 18, 2020
Jul 30, 2020
May 28, 2020
Feb 27, 2020
Jan 30, 2020
Sep 29, 2019
Jul 30, 2019
Jul 1, 2019
Apr 26, 2019

Policy Rationale

Bullying and harassment happen in many places and come in many different forms, from making threats and releasing personally identifiable information to sending threatening messages and making unwanted malicious contact. We do not tolerate this kind of behavior because it prevents people from feeling safe and respected on Facebook.

We distinguish between public figures and private individuals because we want to allow discussion, which often includes critical commentary of people who are featured in the news or who have a large public audience. For public figures, we remove attacks that are severe as well as certain attacks where the public figure is directly tagged in the post or comment.

For private individuals, our protection goes further: We remove content that's meant to degrade or shame, including, for example, claims about someone's sexual personal activity. We recognize that bullying and harassment can have more of an emotional impact on minors, which is why our policies provide heightened protection for users between the ages of 13 and 18.

Context and intent matter, and we allow people to post and share if it is clear that something was shared in order to condemn or draw attention to bullying and harassment. In certain instances, we require self-reporting because it helps us understand that the person targeted feels bullied or harassed. In addition to reporting such behavior and content, we encourage people to use tools available on Facebook to help protect against it.

We also have a Bullying Prevention Hub, which is a resource for teens, parents, and educators seeking support for issues related to bullying and other conflicts. It offers step-by-step guidance, including information on how to start important conversations about bullying. Learn more about what we are doing to protect people from bullying and harassment here.

Note: This policy does not apply to individuals who are part of designated organizations under the Dangerous Organizations and Individuals policy or individuals who died prior to 1900.

Do not:

Tier 1: Target anyone maliciously by:

  • Repeatedly contacting someone in a manner that is:
    • Unwanted, or
    • Sexually harassing, or
    • Directed at a large number of individuals with no prior solicitation.
  • Attacking someone based on their status as a victim of sexual assault, sexual exploitation, sexual harassment, or domestic abuse.
  • Calling for self-injury or suicide of a specific person, or group of people.
  • Attacking someone through derogatory terms related to sexual activity (for example: whore, slut).
  • Posting content about a violent tragedy, or victims of violent tragedies, that includes claims that the tragedy did not occur.
  • Posting content about victims or survivors of violent tragedies or terrorist attacks by name or by image, with claims that they are:
    • Acting/pretending to be a victim of an event.
    • Otherwise paid or employed to mislead people about their role in the event.
  • Threatening to release an individual's private phone number, residential address or email address.
  • Making statements of intent to engage in sexual activity, or advocating for others to engage in sexual activity.
  • Making severe sexualized commentary.
  • Sharing derogatory sexualized photoshopped imagery or drawings.
  • Calling for, or making statements of intent to engage in, bullying and/or harassment.
  • Posting content that further degrades or expresses disgust toward individuals who are depicted in the process of, or right after, menstruating, urinating, vomiting, or defecating.
  • Creating Pages or Groups that are dedicated to attacking individual(s) by:
    • Calling for death, or to contract or develop a medical condition.
    • Making statements of intent or advocacy to engage in sexual activity.
    • Making claims that the individual has or may have a sexually transmitted disease.
    • Sexualizing another adult.
  • Sending messages that contain the following attacks when aimed at an individual or group of individuals in the thread:
    • Attacks referenced in Tier 1, 2 and 4 of this policy.
    • Targeted cursing.
    • Calls for death, serious disease, disability, epidemic disease or physical harm.

Tier 2: Target private individuals, limited scope public figures (for example, individuals whose primary fame is limited to their activism, journalism, or those who become famous through involuntary means) or public figures who are minors with:

  • Calls for death, or to contract or develop a medical condition.
  • Female-gendered cursing terms when used in a derogatory way.
  • Claims about sexual activity or sexually transmitted diseases except in the context of criminal allegations against adults about non-consensual sexual touching.
  • Pages or Groups created to attack through:
    • Targeted cursing.
    • Negative physical descriptions.
    • Claims about religious identity or blasphemy.
    • Expressions of contempt or disgust.
    • Female-gendered cursing terms when used in a derogatory way.

Tier 3: Target public figures by purposefully exposing them to:

  • For adults and minors:
    • Calls for death, or to contract or develop a medical condition.
    • Claims about sexually transmitted diseases.
    • Female-gendered cursing terms when used in a derogatory way.
    • Content that praises, celebrates or mocks their death or medical condition.
    • Attacks through negative physical descriptions.
  • For minors:
    • Comparisons to animals or insects that are culturally perceived as intellectually or physically inferior or to an inanimate object (“cow,” “monkey,” “potato”).
    • Content manipulated to highlight, circle or otherwise negatively draw attention to specific physical characteristics (nose, ear and so on).

Tier 4: Target private individuals or limited scope public figures with:

  • Comparisons to animals or insects that are culturally perceived as intellectually or physically inferior or to an inanimate object (“cow,” “monkey,” “potato”).
  • Content manipulated to highlight, circle or otherwise negatively draw attention to specific physical characteristics (nose, ear and so on).
  • Attacks through negative physical descriptions.
  • Content that ranks individuals on physical appearance or personality.
  • Content sexualizing another adult.
  • Content that further degrades individuals who are depicted being physically bullied except in self-defense and fight-sport contexts.
  • Content that praises, celebrates, or mocks their death or serious physical injury.
  • In addition to the above, attacks through Pages or Groups:
    • Negative character or ability claims.
    • First-person voice bullying, only if the content targets more than one private individual.

Tier 5: Target private adults (who must self-report) or any private minors or involuntary minor public figures with:

  • Targeted cursing.
  • Claims about romantic involvement, sexual orientation or gender identity.
  • Coordination, advocacy or promotion of exclusion.
  • Negative character or ability claims, except in the context of criminal allegations and business reviews against adults. We allow criminal allegations so that people can draw attention to personal experiences or offline events. In cases in which criminal allegations pose offline harm to the named individual, however, we may remove them.
  • Expressions of contempt or disgust, except in the context of criminal allegations against adults.

Tier 6: Target private individuals who are minors with:

  • Allegations about criminal or illegal behavior.
  • Videos of physical bullying shared in a non-condemning context.

Tier 7: Target private individuals (who must self-report) with:

  • First-person voice bullying.
  • Unwanted manipulated imagery.
  • Comparison to other public, fictional or private individuals on the basis of physical appearance.
  • Claims about religious identity or blasphemy.
  • Comparisons to animals or insects that are not culturally perceived as intellectually or physically inferior (“tiger,” “lion”).
  • Neutral or positive physical descriptions.
  • Non-negative character or ability claims.
  • Any bullying or harassment violation, when shared in an endearing context.
  • Attacks through derogatory terms related to a lack of sexual activity.
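As a rough mental model, the tier structure above maps classes of targets to the tiers that protect them. The sketch below is purely illustrative: the class tags and the `TIER_PROTECTIONS` table are our own paraphrase of the policy text, not a Facebook data structure or API.

```python
# Illustrative only: the class tags below paraphrase the tier
# descriptions in this policy; they are not Facebook identifiers.
TIER_PROTECTIONS = {
    1: {"everyone"},
    2: {"private", "limited_scope_public", "minor_public_figure"},
    3: {"public_figure"},
    4: {"private", "limited_scope_public"},
    5: {"private_adult_self_report", "private_minor",
        "involuntary_minor_public_figure"},
    6: {"private_minor"},
    7: {"private_self_report"},
}

def tiers_protecting(target_class: str) -> list[int]:
    """Return the tiers whose protections cover the given target class."""
    return [tier for tier, classes in sorted(TIER_PROTECTIONS.items())
            if target_class in classes]
```

For example, `tiers_protecting("private_minor")` returns `[5, 6]`, reflecting that Tiers 5 and 6 both name private minors among their targets.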

We add a cover to this content so people can choose whether to see it:

Videos of physical bullying against minors shared in a condemning context

For the following Community Standards, we require additional information and/or context to enforce:

Do not:

  • Post content that targets private individuals through unwanted Pages, Groups and Events. We remove this content when it is reported by the victim or an authorized representative of the victim.
  • Create accounts to contact someone who has blocked you.
  • Post attacks that use derogatory female-gendered cursing terms. We remove this content when the victim or an authorized representative of the victim informs us of the content, even if the victim has not reported it directly.
  • Post content that would otherwise require the victim to report the content or an indicator that the poster is directly targeting the victim (for example: the victim is tagged in the post or comment). We will remove this content if we have confirmation from the victim or an authorized representative of the victim that the content is unwanted.
  • Post content praising, celebrating or mocking anyone's death. We also remove content targeting a deceased individual that we would normally require the victim to report.
  • Post content calling for or stating an intent to engage in behavior that would qualify as bullying and harassment under our policies. We will remove this content when we have confirmation from the victim or an authorized representative of the victim that the content is unwanted.
  • Post content sexualizing a public figure. We will remove this content when we have confirmation from the victim or an authorized representative of the victim that the content is unwanted.
  • Repeatedly contact someone to sexually harass them. We will remove this content when we have confirmation from the victim or an authorized representative of the victim that the content is unwanted.
  • Engage in mass harassment against individuals that targets them based on their decision to take or not take the COVID-19 vaccine with:
    • Statements of mental or moral inferiority based on their decision, or
    • Statements that advocate for or allege a negative outcome as a result of their decision, except for widely proven and/or accepted COVID-19 symptoms or vaccine side effects.
  • Remove directed mass harassment, when:
    • Targeting, via any surface, ‘individuals at heightened risk of offline harm’, defined as:
      • Human rights defenders
      • Minors
      • Victims of violent events/tragedies
      • Opposition figures in at-risk countries during election periods
      • Government dissidents who have been targeted based on their dissident status
      • Ethnic and religious minorities in conflict zones
      • Members of a designated and recognizable at-risk group
    • Targeting any individual via personal surfaces, such as inbox or profiles, with:
      • Content that violates the bullying and harassment policies for private individuals or,
      • Objectionable content that is based on a protected characteristic
  • Disable accounts engaged in mass harassment as part of either:
    • State or state-affiliated networks targeting any individual via any surface.
    • Adversarial networks targeting any individual via any surface with:
      • Content that violates the bullying and harassment policies for private individuals or,
      • Content that targets them based on a protected characteristic, or,
      • Content or behavior otherwise deemed to be objectionable in local context
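Several of the rules above are removed outright, while others are gated on confirmation from the victim (or an authorized representative) that the content is unwanted. The sketch below is an illustrative paraphrase of that distinction, not Facebook's actual enforcement code; the violation labels are hypothetical.

```python
# Illustrative decision sketch. The labels are our own shorthand for
# rules in this policy that require victim confirmation before removal.
REQUIRES_VICTIM_CONFIRMATION = {
    "unwanted_pages_groups_events",
    "female_gendered_cursing_attack",
    "self_report_tier_violation",       # e.g., Tier 5/7 against private adults
    "sexualizing_public_figure",
    "repeated_sexual_harassment_contact",
}

def should_remove(violation: str, victim_confirmed: bool) -> bool:
    """Remove immediately, unless the rule is gated on victim confirmation."""
    if violation in REQUIRES_VICTIM_CONFIRMATION:
        return victim_confirmed
    return True
```

Under this sketch, a confirmation-gated violation is left up until the victim (or their representative) tells us the content is unwanted, while all other violations are removed as soon as they are identified.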

User experiences

See some examples of what enforcement looks like for people on Facebook: reporting something you don’t think should be on Facebook, being told you’ve violated our Community Standards, and seeing a warning screen over certain content.

Note: We’re always improving, so what you see here may be slightly outdated compared to what we currently use.

Data
Prevalence

Percentage of times people saw violating content

Content actioned

Number of pieces of violating content we took action on

Proactive rate

Percentage of violating content we found before people reported it

Appealed content

Number of pieces of content people appealed after we took action on it

Restored content

Number of pieces of content we restored after we originally took action on it
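The metric definitions above imply simple ratios over enforcement counts. A hedged sketch with hypothetical numbers (the function names and inputs are our own illustration, not Meta's reporting pipeline):

```python
def proactive_rate(found_before_report: int, total_actioned: int) -> float:
    """Percentage of actioned violating content found before people reported it."""
    return 100.0 * found_before_report / total_actioned

def restored_share(restored: int, actioned: int) -> float:
    """Share of actioned content later restored, e.g., after an appeal."""
    return 100.0 * restored / actioned

# Hypothetical quarter: 85 of 100 actioned pieces found proactively,
# 3 of 200 actioned pieces restored after appeal.
print(proactive_rate(85, 100))   # 85.0
print(restored_share(3, 200))    # 1.5
```

Prevalence is defined differently (a percentage of content *views*, not of actioned pieces), so it cannot be derived from these counts alone.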

Reporting
1
Universal entry point

We have an option to report, whether it’s on a post, a comment, a story, a message or something else.

2
Get started

We help people report things that they don’t think should be on our platform.

3
Select a problem

We ask people to tell us more about what’s wrong. This helps us send the report to the right place.

4
Report submitted

After these steps, we submit the report. We also lay out what people should expect next.

Post-report communication
1
Update via notifications

After we’ve reviewed the report, we’ll send the reporting user a notification.

2
More detail in the Support Inbox

We’ll share more details about our review decision in the Support Inbox. We’ll notify people that this information is there and send them a link to it.

3
Appeal option

If people think we got the decision wrong, they can request another review.

4
Post-appeal communication

We’ll send a final response after we’ve re-reviewed the content, again to the Support Inbox.

Takedown experience
1
Immediate notification

When someone posts something that violates our Community Standards, we’ll tell them.

2
Additional context

We’ll also address common misperceptions around enforcement.

3
Explain the policy

We’ll give people easy-to-understand explanations about why their content was removed.

4
Ask for input

After we’ve established the context for our decision and explained our policy, we’ll ask people what they'd like to do next, including letting us know if they think we made a mistake.

5
Tell us more

If people disagree with the decision, we’ll ask them to tell us more.

6
Set expectations

Here, we set expectations on what will happen next.

Warning screens
1
Warning screens in context

We cover certain content in News Feed and other surfaces, so people can choose whether to see it.

2
More information

In this example, we explain why we’ve covered the photo, with additional context from independent fact-checkers.

Enforcement

We have the same policies around the world, for everyone on Facebook.

Review teams

Our global team of over 15,000 reviewers works every day to keep people on Facebook safe.

Stakeholder engagement

Outside experts, academics, NGOs and policymakers help inform the Facebook Community Standards.