Misinformation is different from other types of speech addressed in our Community Standards because there is no way to articulate a comprehensive list of what is prohibited. With graphic violence or hate speech, for instance, our policies specify the speech that we prohibit, and even people who disagree with those policies can follow them. With misinformation, however, we cannot draw such a line. The world is changing constantly, and what is true one minute may not be true the next minute. People also have different levels of information about the world around them, and may believe something is true when it is not. A policy that simply prohibits "misinformation" would not provide useful notice to the people who use our services and would be unenforceable, as we don't have perfect access to information.
Instead, our policies articulate different categories of misinformation and try to provide clear guidance about how we treat that speech when we see it. For each category, our approach reflects our attempt to balance our values of expression, safety, dignity, authenticity and privacy.
We remove misinformation where it is likely to directly contribute to the risk of imminent physical harm. We also remove content that is likely to directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media. In determining what constitutes misinformation in these categories, we partner with independent experts who possess knowledge and expertise to assess the truth of the content and whether it is likely to directly contribute to the risk of imminent harm. This includes, for instance, partnering with human rights organisations with a presence on the ground in a country to determine the truth of a rumour about civil conflict, and partnering with health organisations during the global COVID-19 pandemic.
For all other misinformation, we focus on reducing its prevalence or creating an environment that fosters a productive dialogue. We know that people often use misinformation in harmless ways, such as to exaggerate a point ("This team has the worst record in the history of the sport!") or in humour or satire ("My husband just won Husband of the Year"). They may also share their experience through stories that contain inaccuracies. In some cases, people share deeply held personal opinions that others consider false, or share information that they believe to be true but others consider incomplete or misleading.
Recognising how common such speech is, we focus on slowing the spread of hoaxes and viral misinformation, and directing users to authoritative information. As part of that effort, we partner with third-party fact-checking organisations to review and rate the accuracy of the most viral content on our platforms (see here to learn more about how our fact-checking programme works). We also provide resources to increase media and digital literacy so people can decide what to read, trust and share themselves.
Finally, we prohibit content and behaviour in other areas that often overlap with the spread of misinformation. For example, our Community Standards prohibit fake accounts, fraud and coordinated inauthentic behaviour.
As online and offline environments change and evolve, we will continue to evolve these policies. Pages, groups, profiles and Instagram accounts that repeatedly share the misinformation listed below may, in addition to having their content removed, receive decreased distribution, limitations on their ability to advertise or be removed from our platforms. Additional information on what happens when Facebook removes content can be found here.
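The escalating penalties described above can be pictured as a simple strike ladder. The sketch below is purely illustrative: the strike thresholds and penalty names are assumptions made for this example, not Meta's published enforcement mechanics.

```python
# Illustrative only: a hypothetical strike-based penalty ladder for accounts
# that repeatedly share removed misinformation. The thresholds and penalty
# names are assumptions, not Meta's actual system.

def penalties_for_strikes(strikes: int) -> list[str]:
    """Return the penalties applied at a given (hypothetical) strike count."""
    penalties = ["remove_content"]                 # the violating post is always removed
    if strikes >= 2:
        penalties.append("reduce_distribution")    # demote the account's content
    if strikes >= 4:
        penalties.append("restrict_advertising")   # limit the ability to advertise
    if strikes >= 6:
        penalties.append("remove_account")         # remove from the platform
    return penalties

for s in (1, 3, 5, 7):
    print(s, penalties_for_strikes(s))
```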
We remove the following types of misinformation:
I. Physical harm or violence
We remove misinformation or unverifiable rumours that expert partners have determined are likely to directly contribute to a risk of imminent violence or physical harm to people. We define misinformation as content with a claim that is determined to be false by an authoritative third party. We define an unverifiable rumour as a claim whose source expert partners confirm is extremely hard or impossible to trace, for which authoritative sources are absent, where there is not enough specificity for the claim to be debunked, or where the claim is too implausible or too irrational to be believed.
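To make the two definitions concrete, the sketch below encodes them as a simple classifier. Every field name and boolean signal here is a hypothetical stand-in for judgments that, per the policy, come from authoritative third parties and expert partners.

```python
# Illustrative sketch of the two definitions above. All fields are hypothetical
# signals that, in practice, would come from expert partners and fact-checkers.
from dataclasses import dataclass

@dataclass
class Claim:
    rated_false_by_authoritative_third_party: bool  # definition of misinformation
    source_untraceable: bool        # extremely hard or impossible to trace
    no_authoritative_sources: bool  # authoritative sources are absent
    too_vague_to_debunk: bool       # not enough specificity to be debunked
    too_implausible: bool           # too implausible or irrational to be believed

def classify(claim: Claim) -> str:
    if claim.rated_false_by_authoritative_third_party:
        return "misinformation"
    if (claim.source_untraceable or claim.no_authoritative_sources
            or claim.too_vague_to_debunk or claim.too_implausible):
        return "unverifiable rumour"
    return "neither"
```

The disjunction in classify mirrors the "or" chain in the rumour definition: any one of the four conditions is enough.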
We know that sometimes misinformation that might appear benign could, in a specific context, contribute to a risk of offline harm, including threats of violence that heighten the risk of death, serious injury or other physical harm. We work with a global network of non-governmental organisations (NGOs), not-for-profit organisations, humanitarian organisations and international organisations that have expertise in these local dynamics.
II. Harmful health misinformation
We consult with leading health organisations to identify health misinformation likely to directly contribute to imminent harm to public health and safety. The harmful health misinformation that we remove includes the following:
III. Voter or census interference
In an effort to promote election and census integrity, we remove misinformation that is likely to directly contribute to a risk of interference with people's ability to participate in those processes. This includes the following:
We have additional policies intended to cover calls for violence, the promotion of illegal participation and calls for coordinated interference in elections, which are represented in other sections of our Community Standards.
IV. Manipulated media
Media can be edited in a variety of ways. In many cases, these changes are benign, such as content being cropped or shortened for artistic reasons or music being added. In other cases, the manipulation is not apparent and could mislead, particularly in the case of video content. We remove such misleading manipulated media because it can go viral quickly and because experts advise that false beliefs about manipulated media often cannot be corrected through further discourse.
We remove videos under this policy if specific criteria are met: (1) the video has been edited or synthesised, beyond adjustments for clarity or quality, in ways that are not apparent to an average person and would likely mislead an average person to believe a subject of the video said words that they did not say; and (2) the video is the product of artificial intelligence or machine learning, including deep learning techniques (e.g. a technical deepfake), that merges, combines, replaces and/or superimposes content onto a video, creating a video that appears authentic.
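Note that the two criteria are conjunctive: a video must satisfy both (1) and (2) to be removed under this policy. A minimal sketch of that logic, with hypothetical boolean inputs standing in for human and automated review signals:

```python
# Both criteria below must hold for removal under the manipulated-media policy.
# The boolean inputs are hypothetical stand-ins for review signals.

def remove_under_manipulated_media_policy(
    misleadingly_edited: bool,   # (1) edited/synthesised beyond clarity or quality
                                 #     adjustments, in a way not apparent to an
                                 #     average person, suggesting the subject said
                                 #     words they did not say
    ai_generated_merge: bool,    # (2) produced by AI/ML (e.g. a deepfake) that
                                 #     merges, replaces or superimposes content so
                                 #     the video appears authentic
) -> bool:
    return misleadingly_edited and ai_generated_merge
```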
See some examples of what enforcement looks like for people on Facebook: reporting something that you don't think should be on Facebook, being told that you've violated our Community Standards and seeing a warning screen over certain content.
Note: We're always improving, so what you see here may be slightly outdated compared to what we currently use.
We have an option to report, whether it's on a post, a comment, a story, a message or something else.
We help people report things that they don't think should be on our platform.
We ask people to tell us more about what's wrong. This helps us send the report to the right place.
After these steps, we submit the report. We also lay out what people should expect next.
After we've reviewed the report, we'll send the reporting user a notification.
We'll share more details about our review decision in the Support Inbox. We'll notify people that this information is there and send them a link to it.
If people think we made the wrong decision, they can request another review.
We'll send a final response after we've re-reviewed the content, again to the Support Inbox.
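Taken together, the reporting and appeal steps above form a simple linear flow. The sketch below models it as a small state machine; the state and event names are assumptions made for illustration, not an internal Facebook API.

```python
# Illustrative state machine for the reporting and appeal flow described above.
# State and event names are assumptions for this sketch, not an internal API.

TRANSITIONS = {
    ("report_started", "details_provided"): "report_submitted",
    ("report_submitted", "review_completed"): "decision_notified",    # Support Inbox
    ("decision_notified", "re_review_requested"): "under_re_review",  # user disagrees
    ("under_re_review", "re_review_completed"): "final_response",     # Support Inbox
}

def next_state(state: str, event: str) -> str:
    return TRANSITIONS.get((state, event), state)  # ignore unknown events

state = "report_started"
for event in ("details_provided", "review_completed",
              "re_review_requested", "re_review_completed"):
    state = next_state(state, event)
    print(event, "->", state)
```

Running the sketch walks a report from submission through re-review to the final Support Inbox response.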
When someone posts something that violates our Community Standards, we'll tell them.
We'll also address common misperceptions around enforcement.
We'll give people easy-to-understand explanations about why their content was removed.
After we've established the context for our decision and explained our policy, we'll ask people what they'd like to do next, including letting us know if they think we made a mistake.
If people disagree with the decision, we'll ask them to tell us more.
Here, we set expectations on what will happen next.
Our global team of over 15,000 reviewers works every day to keep people on Facebook safe.
Outside experts, academics, NGOs and policymakers help inform the Facebook Community Standards.
Learn what you can do if you see something on Facebook that goes against our Community Standards.