What: Content that has been debunked as "False," "Altered," or "Partly False" by non-partisan, third-party fact-checking organizations that partner with Meta and have been certified by the International Fact-Checking Network (IFCN).
We also undertake various efforts to inform our community about misinformation. We show additional information from third-party fact-checkers on the reduced content and display a clear label warning people that the content has been rated "False," "Altered," or "Partly False." We generally label, but do not limit distribution of, content that fact-checkers rate as "Missing Context." We send notifications to people and Page admins if they shared content that is later rated, and we notify Group admins about rated content shared in their Groups. When people try to share content that has been rated, we show them a pop-up with the fact-checkers' debunking articles.
The focus of this fact-checking program is identifying and addressing viral misinformation, particularly clear hoaxes that have no basis in fact. Opinion content, as well as speech from politicians, is not eligible to be fact-checked. However, content presented as opinion but based on underlying false information may still be eligible for a rating.
Fact-checking partners prioritize provably false claims, especially those that are trending or are timely and important to the average person. Content predicted to be misinformation is sent to third-party fact-checkers and may temporarily be shown lower in News Feed before it is reviewed.
In addition to the above, we remove misinformation and unverifiable rumors that violate our Community Standards by contributing to a risk of imminent violence or physical harm. We may temporarily reduce the distribution of misinformation that may violate our Community Standards while we work to investigate the possible harm it could cause.
Why: We’re committed to reducing the spread of misinformation, particularly clear hoaxes that have no basis in fact.