Policy details

Change log

29 Jun 2023
30 Mar 2023
27 Jan 2023
29 Sep 2022
24 Dec 2021
13 Oct 2021
26 Aug 2021
4 May 2021
2 Apr 2021
29 Jan 2021
19 Nov 2020
30 Jul 2020
28 May 2020
28 Feb 2020
31 Jan 2020
17 Dec 2019
29 Sep 2019
30 Jul 2019
1 Jul 2019
26 Apr 2019

Policy rationale

Bullying and harassment happen in many places and come in many different forms, from making threats and releasing personally identifiable information to sending threatening messages and making unwanted malicious contact. We do not tolerate this kind of behaviour because it prevents people from feeling safe and respected on Facebook.

We distinguish between public figures and private individuals because we want to allow discussion, which often includes critical commentary of people who are featured in the news or who have a large public audience. For public figures, we remove attacks that are severe, as well as certain attacks where the public figure is directly tagged in the post or comment. We define public figures as state- and national-level government officials, political candidates for those offices, people with over one million fans or followers on social media and people who receive substantial news coverage.

For private individuals, our protection goes further: We remove content that's meant to degrade or shame, including, for example, claims about someone's personal sexual activity. We recognise that bullying and harassment can have more of an emotional impact on minors, which is why our policies provide heightened protection for users between the ages of 13 and 18.

Context and intent matter, and we allow people to post and share if it is clear that something was shared in order to condemn or draw attention to bullying and harassment. In certain instances, we require self-reporting because it helps us understand that the person targeted feels bullied or harassed. In addition to reporting such behaviour and content, we encourage people to use tools available on Facebook to help protect against it.

We also have a Bullying Prevention Hub, which is a resource for teenagers, parents and educators seeking support for issues related to bullying and other conflicts. It offers step-by-step guidance, including information on how to start important conversations about bullying. Learn more about what we are doing to protect people from bullying and harassment here.

Note: This policy does not apply to individuals who are part of designated organisations under the Dangerous Organisations and Individuals Policy or individuals who died prior to 1900.

Tier 1: Universal protections for everyone:

  • Everyone is protected from:
    • Repeated contact that is:
      • Unwanted or
      • Sexually harassing or
      • Directed at a large number of individuals with no prior solicitation.
    • Calls for self-injury or suicide of a specific person, or group of individuals.
    • Attacks based on their experience of sexual assault, sexual exploitation, sexual harassment or domestic abuse.
    • Statements of intent to engage in a sexual activity or advocating to engage in a sexual activity.
    • Severe sexualised commentary.
    • Derogatory sexualised photoshop or drawings.
    • Attacks through derogatory terms related to sexual activity (for example: whore, slut).
    • Claims that a violent tragedy did not occur.
    • Claims that individuals are lying about being a victim of a violent tragedy or terrorist attack, including claims that they are:
      • Acting or pretending to be a victim of a specific event, or
      • Paid or employed to mislead people about their role in the event.
    • Threats to release an individual's private phone number, residential address, email address or medical records (as defined in the Privacy Violations policy).
    • Calls for, or statements of intent to engage in, bullying and/or harassment.
    • Content that degrades or expresses disgust towards individuals who are depicted in the process of, or straight after, menstruating, urinating, vomiting or defecating.
  • Everyone is protected from the following, but for adult public figures, they must be purposefully exposed to:
    • Calls for death and statements in favour of contracting or developing a medical condition.
    • Celebration or mocking of death or medical condition.
    • Claims about sexually transmitted infections.
    • Derogatory terms related to female-gendered cursing.
    • Statements of inferiority about physical appearance.

Tier 2: Additional protections for all minors, private adults and limited scope public figures (for example, individuals whose primary fame is limited to their activism or journalism, or those who become famous through involuntary means):

  • In addition to the universal protections for everyone, all minors (private individuals and public figures), private adults and limited scope public figures are protected from:
    • Claims about sexual activity, except in the context of criminal allegations against adults (non-consensual sexual touching).
    • Content sexualising another adult (sexualisation of minors is covered in the Child Sexual Exploitation, Abuse and Nudity policy).
  • All minors (private individuals and public figures), private adults and limited scope public figures are protected from the following, but for minor public figures, they must be purposefully exposed to:
    • Dehumanising comparisons (in written or visual form) to or about:
      • Animals and insects, including subhuman creatures, that are culturally perceived as inferior.
      • Bacteria, viruses, microbes and diseases.
      • Inanimate objects, including rubbish, filth, faeces.
    • Content manipulated to highlight, circle or otherwise negatively draw attention to specific physical characteristics (nose, ear and so on).
    • Content that ranks them based on physical appearance or character traits.
    • Content that degrades individuals who are depicted being physically bullied (except in self-defence and fight-sport contexts).

Tier 3: Additional protections for private minors, private adults and minor involuntary public figures:

  • In addition to all of the protections listed above, all private minors, private adults (who must self-report) and minor involuntary public figures are protected from:
    • Targeted cursing.
    • Claims about romantic involvement, sexual orientation or gender identity.
    • Calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting exclusion.
    • Negative character or ability claims, except in the context of criminal allegations and business reviews against adults.
    • Expressions of contempt or disgust, or content rejecting the existence of an individual, except in the context of criminal allegations against adults.
  • When self-reported, private minors, private adults and minor involuntary public figures are protected from the following:
    • First-person voice bullying.
    • Unwanted manipulated imagery.
    • Comparison to other public, fictional or private individuals on the basis of physical appearance.
    • Claims about religious identity or blasphemy.
    • Comparisons to animals or insects that are not culturally perceived as intellectually or physically inferior (for example: "tiger", "lion").
    • Neutral or positive physical descriptions.
    • Non-negative character or ability claims.
    • Attacks through derogatory terms related to a lack of sexual activity.

Tier 4: Additional protections for private minors only:

  • Minors get the most protection under our policy. In addition to all of the protections listed above, private minors are also protected from:
    • Allegations about criminal or illegal behaviour.
    • Videos of physical bullying against minors, shared in a non-condemning context.

Tier 5: Bullying and harassment through Pages, groups, events and messages

  • The protections of Tiers 1 to 4 are also enforced on Pages, groups, events and messages.
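
The tiers compose cumulatively: each tier adds protections on top of every tier above it, and Tier 5 applies the same protections across Pages, groups, events and messages. The Python sketch below is a purely illustrative way to picture that composition; the tier labels and example entries are paraphrased from the lists above, and the structure is an assumption, not a description of how Meta's enforcement systems are implemented.

    # Purely illustrative: a toy model of the cumulative tier structure described above.
    # Tier labels and example entries are paraphrased from this page; nothing here
    # reflects Meta's actual enforcement implementation.
    TIER_PROTECTIONS = {
        "Tier 1: everyone": [
            "unwanted repeated contact",
            "calls for self-injury or suicide of a specific person",
            "claims that a violent tragedy did not occur",
        ],
        "Tier 2: minors, private adults, limited scope public figures": [
            "claims about sexual activity",
            "dehumanising comparisons",
        ],
        "Tier 3: private minors, private adults, minor involuntary public figures": [
            "targeted cursing",
            "negative character or ability claims",
        ],
        "Tier 4: private minors": [
            "allegations about criminal or illegal behaviour",
        ],
    }

    def protections_at(tier: str) -> list[str]:
        """Protections are cumulative: a tier includes everything from the tiers above it."""
        tiers = list(TIER_PROTECTIONS)
        included = tiers[: tiers.index(tier) + 1]
        return [p for t in included for p in TIER_PROTECTIONS[t]]

    # A private minor (Tier 4) is covered by every Tier 1-3 protection as well.
    print(protections_at("Tier 4: private minors"))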

We add a cover to this content so that people can choose whether to see it:

  • Videos of physical bullying against minors, shared in a condemning context.

For the following Community Standards, we require additional information and/or context to enforce:

Do not:

  • Post content that targets private individuals through unwanted Pages, groups and events. We remove this content when it is reported by the victim or an authorised representative of the victim.
  • Create accounts to contact someone who has blocked you.
  • Post attacks that use derogatory terms related to female-gendered cursing. We remove this content when the victim or an authorised representative of the victim informs us of the content, even if the victim has not reported it directly.
  • Post content that would otherwise require the victim to report the content, or where there is an indicator that the poster is directly targeting the victim (for example, the victim is tagged in the post or comment). We will remove this content if we have confirmation from the victim or an authorised representative of the victim that the content is unwanted.
  • Post content praising, celebrating or mocking anyone's death. We also remove content targeting a deceased individual that we would normally require the victim to report.
  • Post content calling for or stating an intent to engage in behaviour that would qualify as bullying and harassment under our policies. We will remove this content when we have confirmation from the victim or an authorised representative of the victim that the content is unwanted.
  • Post content sexualising a public figure. We will remove this content when we have confirmation from the victim or an authorised representative of the victim that the content is unwanted.
  • Repeatedly contact someone to sexually harass them. We will remove this content when we have confirmation from the victim or an authorised representative of the victim that the content is unwanted.
  • Engage in mass harassment against individuals that targets them based on their decision to take or not take the COVID-19 vaccine with:
    • Statements of mental or moral inferiority based on their decision, or
    • Statements that advocate for or allege a negative outcome as a result of their decision, except for widely proven and/or accepted COVID-19 symptoms or vaccine side effects.

We remove directed mass harassment, when:

  • Targeting, via any surface, "individuals at heightened risk of offline harm", defined as:
    • Human rights defenders
    • Minors
    • Victims of violent events/tragedies
    • Opposition figures in at-risk countries during election periods
    • Election officials
    • Government dissidents who have been targeted based on their dissident status
    • Ethnic and religious minorities in conflict zones
    • Members of a designated and recognisable at-risk group
  • Targeting any individual via personal surfaces, such as inbox or profiles, with:
    • Content that violates the bullying and harassment policies for private individuals, or
    • Objectionable content that is based on a protected characteristic.

We disable accounts engaged in mass harassment as part of either:

  • State or state-affiliated networks targeting any individual via any surface, or
  • Adversarial networks targeting any individual via any surface with:
    • Content that violates the bullying and harassment policies for private individuals, or
    • Content that targets them based on a protected characteristic, or
    • Content or behaviour otherwise deemed to be objectionable in local context.

User experiences

See some examples of what enforcement looks like for people on Facebook, such as what it looks like to report something that you don't think should be on Facebook, to be told that you've violated our Community Standards, and to see a warning screen over certain content.

Note: We're always improving, so what you see here may be slightly outdated compared to what we currently use.

Data
Prevalence

Percentage of times that people saw violating content

Content actioned

Number of pieces of violating content that we took action on

Proactive rate

Percentage of violating content that we found before people reported it

Appealed content

Number of pieces of content that people appealed after we took action on it

Restored content

Number of pieces of content that we restored after we originally took action on it
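
These measures are simple ratios over counts for a reporting period. As a purely hypothetical illustration (the counts below are invented, and this page does not publish the underlying formulas), the proactive rate and the shares of actioned content that were appealed or restored could be computed as in this Python sketch:

    # Hypothetical counts for one reporting period; invented for illustration only.
    content_actioned = 8_400_000   # pieces of violating content we took action on
    found_proactively = 8_200_000  # pieces we found before anyone reported them
    appealed = 350_000             # pieces that people appealed after we took action
    restored = 120_000             # pieces we restored after the original action

    # Proactive rate: share of actioned content found before people reported it.
    proactive_rate = found_proactively / content_actioned

    # Shares of actioned content that were appealed or later restored.
    appeal_rate = appealed / content_actioned
    restore_rate = restored / content_actioned

    print(f"Proactive rate: {proactive_rate:.1%}")  # ~97.6%
    print(f"Appeal rate: {appeal_rate:.1%}")        # ~4.2%
    print(f"Restore rate: {restore_rate:.1%}")      # ~1.4%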


Reporting
1. Universal entry point

We have an option to report, whether it's on a post, a comment, a story, a message or something else.

2. Getting started

We help people report things that they don't think should be on our platform.

3. Select a problem

We ask people to tell us more about what's wrong. This helps us send the report to the right place.

4. Report submitted

After these steps, we submit the report. We also lay out what people should expect next.

Post-report communication
1. Update via notifications

After we've reviewed the report, we'll send the reporting user a notification.

2. More detail in the Support Inbox

We'll share more details about our review decision in the Support Inbox. We'll notify people that this information is there and send them a link to it.

3. Appeal option

If people think we made the wrong decision, they can request another review.

4. Post-appeal communication

We'll send a final response after we've re-reviewed the content, again to the Support Inbox.

Takedown experience
1. Immediate notification

When someone posts something that doesn't follow our rules, we'll tell them.

2. Additional context

We'll also address common misperceptions and explain why we made the decision to enforce.

3. Policy explanation

We'll give people easy-to-understand explanations about the relevant rule.

4. Option for review

If people disagree with the decision, they can ask for another review and provide more information.

5. Final decision

We set expectations about what will happen after the review has been submitted.

Warning screens
1. Warning screens in context

We cover certain content in News Feed and other surfaces, so people can choose whether to see it.

2. More information

In this example, we explain why we've covered the photo, using additional context from independent fact-checkers.

Enforcement

We have the same policies around the world, for everyone on Facebook.

Review teams

Our global team of over 15,000 reviewers works every day to keep people on Facebook safe.

Stakeholder engagement

Outside experts, academics, NGOs and policymakers help inform the Facebook Community Standards.

Get help with bullying and harassment

Learn what you can do if you see something on Facebook that goes against our Community Standards.