How we help prevent interference, empower people to vote and more.
How we work with independent fact-checkers, and more, to identify and take action on misinformation.
How we assess content for newsworthiness.
How we reduce problematic content in News Feed.
Quarterly report on how well we're doing at enforcing our policies on the Facebook app and Instagram.
Report on how well we're helping people protect their intellectual property.
Report on government requests for people's data.
Report on when we restrict content that's reported to us as violating local law.
Report on intentional internet disruptions that limit people's ability to access the internet.
Quarterly report on what people see on Facebook, including the content that receives the widest distribution during the quarter.
We aim to prevent potential offline harm that may be related to content on Facebook. While we understand that people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways, we remove language that incites or facilitates serious violence. We remove content, disable accounts and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety. We also try to consider the language and context in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety. In determining whether a threat is credible, we may also consider additional information like a person's public visibility and the risks to their physical safety.
In some cases, we see aspirational or conditional threats directed at terrorists and other violent actors (e.g. "Terrorists deserve to be killed"), and we deem those non-credible, absent specific evidence to the contrary.
Do not post:
Threats that could lead to death (and other forms of high-severity violence) targeting people or places, where threat is defined as any of the following:
Content that asks for or offers services to kill others (for example, hitmen, mercenaries or assassins), or advocates the use of a hitman, mercenary or assassin against a target
Admissions, statements of intent or advocacy, calls to action, or aspirational or conditional statements to kidnap a target
Content that depicts abductions or kidnappings, unless it is clear that the content is being shared by a victim or their family as a plea for help, or shared for informational, condemnation or awareness-raising purposes
Threats of high-severity violence using digitally produced or altered imagery to target living people with armaments, methods of violence or dismemberment
Threats that could lead to serious injury (mid-severity violence) towards private individuals, unnamed specified persons, minor public figures, high-risk persons or high-risk groups, where threat is defined as any of the following:
Threats that could lead to physical harm (or other forms of lower-severity violence) towards private individuals (self-reporting required) or minor public figures, where threat is defined as any of the following:
Any content created for the express purpose of outing an individual as a member of a designated and recognisable at-risk group
Instructions on how to make or use weapons if there is evidence of a goal to seriously injure or kill people, through:
Providing instructions on how to make or use explosives, unless there is clear context that the content is for a non-violent purpose (for example, part of commercial video games, a clear scientific or educational purpose, fireworks or specifically for fishing)
Any content containing statements of intent, calls to action, or conditional or aspirational statements advocating violence due to voting, voter registration or the administration or outcome of an election
Statements of intent or advocacy, calls to action, or aspirational or conditional statements to bring weapons to locations, including but not limited to places of worship, educational facilities, polling places or locations used to count votes or administer an election (or encouraging others to do the same)
See some examples of what enforcement looks like for people on Facebook: reporting something that you don’t think should be on Facebook, being told that you’ve violated our Community Standards, and seeing a warning screen over certain content.
Note: We’re always improving, so what you see here may be slightly outdated compared to what we currently use.
We offer an option to report content, whether it’s a post, a comment, a story, a message or something else.
We help people report things that they don’t think should be on our platform.
We ask people to tell us more about what’s wrong. This helps us send the report to the right place.
After these steps, we submit the report. We also lay out what people should expect next.
After we’ve reviewed the report, we’ll send the reporting user a notification.
We’ll share more details about our review decision in the Support Inbox. We’ll notify people that this information is there and send them a link to it.
If people think we got the decision wrong, they can request another review.
We’ll send a final response after we’ve re-reviewed the content, again to the Support Inbox.
When someone posts something that violates our Community Standards, we’ll tell them.
We’ll also address common misperceptions around enforcement.
We’ll give people easy-to-understand explanations of why their content was removed.
After we’ve established the context for our decision and explained our policy, we’ll ask people what they'd like to do next, including letting us know if they think we made a mistake.
If people disagree with the decision, we’ll ask them to tell us more.
Here, we set expectations on what will happen next.
We cover certain content with a warning screen in News Feed and on other surfaces, so that people can choose whether to see it.
In this example, we explain why we’ve covered the photo, drawing on additional context from independent fact-checkers.