How we help prevent interference, empower people to vote and more.
How we work with independent fact-checkers, and more, to identify and take action on misinformation.
How we assess content for newsworthiness.
How we reduce problematic content in News Feed.
Quarterly report on how well we're doing at enforcing our policies on the Facebook app and Instagram.
Report on how well we're helping people protect their intellectual property.
Report on government requests for people's data.
Report on when we restrict content that's reported to us as violating local law.
Report on intentional internet restrictions that limit people's ability to access the internet.
Quarterly report on what people see on Facebook, including the content that receives the widest distribution during the quarter.
Although fact-checkers are independent from Meta and certified through the non-partisan International Fact-Checking Network (IFCN), we work with them to address false information on Meta technologies. While fact-checkers focus on the legitimacy and accuracy of information, we focus on taking action by informing people when content has been rated false. Here’s how it works.
Our technology can detect posts that are likely to be misleading based on various signals, including how people are responding and how fast the content is spreading. People on Facebook and Instagram can also flag a piece of content for fact-checkers to take a closer look. Other signals that help us identify false information include:
Comments on posts that express disbelief.
Machine learning models that continuously improve our ability to predict false information.
Fact-checkers identifying content to review on their own.
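To illustrate, the signals above could be combined into a single review-priority score. The following is a minimal sketch under stated assumptions: the signal names, weights, and threshold are all hypothetical and do not reflect Meta's actual system.

```python
# Hypothetical sketch: combining review signals into a priority score.
# All names, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class PostSignals:
    user_reports: int          # people who flagged the post for review
    disbelief_comments: int    # comments expressing disbelief
    shares_per_hour: float     # how fast the content is spreading
    model_score: float         # ML-predicted likelihood of false info (0-1)


def review_priority(s: PostSignals) -> float:
    """Weighted combination of signals; higher means review sooner.

    Each raw count is capped at 1.0 so no single signal dominates.
    """
    return (
        0.5 * s.model_score
        + 0.2 * min(s.user_reports / 100, 1.0)
        + 0.2 * min(s.disbelief_comments / 50, 1.0)
        + 0.1 * min(s.shares_per_hour / 1000, 1.0)
    )


def should_queue_for_review(s: PostSignals, threshold: float = 0.4) -> bool:
    """Decide whether a post enters the fact-checker review queue."""
    return review_priority(s) >= threshold
```

A fast-spreading post with many reports and a high model score would score near 1.0 and be queued, while a post with weak signals would fall below the threshold.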
All fact-checkers in the IFCN also have access to CrowdTangle, a tool from Meta that helps them identify posts that may contain misinformation by providing insights into how public content is performing on social media. Publishers, journalists, researchers and academics can also use CrowdTangle to follow, analyze and report on public content on social media.
Fact-checkers review a piece of content and rate its accuracy. This happens independently of Meta and may include calling sources, consulting public data, authenticating images and videos, and more.
When content has been rated by fact-checkers, we add a notice to it so people can read additional context. We also notify people before they try to share this content or if they shared it in the past.
Once a fact-checker rates a piece of content as False, Partly False or Altered, it appears lower in News Feed on Facebook. On Instagram, it gets filtered out of Explore and is featured less prominently in feed and stories. This significantly reduces the number of people who see it. We also reject ads with content that has been rated by fact-checkers.
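The enforcement described above can be sketched as a mapping from a fact-check rating to per-surface actions. This is a minimal illustration, assuming hypothetical action names; it is not Meta's actual enforcement logic.

```python
# Hypothetical sketch of how a fact-check rating might translate into
# distribution actions. Rating labels match the article; the action
# names and structure are illustrative assumptions.
DEMOTED_RATINGS = {"False", "Partly False", "Altered"}


def enforcement_actions(rating: str, surface: str) -> list[str]:
    """Return the actions applied to a rated post on a given surface."""
    if rating not in DEMOTED_RATINGS:
        return []
    # Actions applied everywhere once content is rated.
    actions = ["add_context_notice", "warn_before_share", "reject_in_ads"]
    if surface == "facebook":
        actions.append("rank_lower_in_news_feed")
    elif surface == "instagram":
        actions += ["filter_from_explore",
                    "reduce_feed_and_stories_prominence"]
    return actions
```

For example, a post rated "False" on Facebook would get the context notice, the pre-share warning, the ads rejection, and the News Feed demotion, while an unrated post gets no actions.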
Pages, Groups, websites and Instagram accounts that repeatedly share content rated False or Altered face restrictions for a set period of time. These include removal from the recommendations we show people, reduced distribution, loss of the ability to monetize and advertise, and loss of the ability to register as a news Page.
Content ratings fact-checkers use