How fact-checking works

Updated March 9, 2022

Fact-checkers are independent from Meta and certified through the non-partisan International Fact-Checking Network (IFCN). We work with them to address misinformation on Facebook and Instagram. While fact-checkers focus on the legitimacy and accuracy of information, we focus on taking action by informing people when content has been rated. Here’s how it works.

Identifying misinformation

In many countries, our technology can detect posts that are likely to be misinformation based on various signals, including how people are responding and how fast the content is spreading. It also considers whether people on Facebook and Instagram have flagged a piece of content as “false news” and whether comments on a post express disbelief. Fact-checkers also identify content to review on their own.

Content predicted to be misinformation may be temporarily shown lower in Feed before it is reviewed.
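The detection step described above can be pictured as a scoring pipeline. The following is a minimal sketch, assuming a weighted combination of the signals named in the text; the signal names, weights, and threshold are illustrative assumptions, not Meta's actual model.

```python
# Hypothetical sketch: combine engagement signals into a misinformation
# likelihood score and temporarily demote posts above a threshold until
# a fact-checker reviews them. Weights and threshold are invented.
from dataclasses import dataclass

@dataclass
class PostSignals:
    share_velocity: float    # how fast the content is spreading, normalized to [0, 1]
    false_news_flags: int    # user reports of "false news"
    disbelief_comments: int  # comments expressing disbelief

def misinformation_score(s: PostSignals) -> float:
    """Weighted combination of signals; higher means more likely misinformation."""
    score = 0.5 * s.share_velocity
    score += 0.3 * min(s.false_news_flags / 10, 1.0)
    score += 0.2 * min(s.disbelief_comments / 20, 1.0)
    return score

def should_demote_pending_review(s: PostSignals, threshold: float = 0.6) -> bool:
    """Posts predicted to be misinformation are shown lower in Feed before review."""
    return misinformation_score(s) >= threshold
```

The key design point the article implies is that demotion here is provisional: it happens before any human rating, purely on predicted likelihood.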

Reviewing content

Fact-checkers will review a piece of content and rate its accuracy. This process occurs independently from Meta and may include calling sources, consulting public data, authenticating images and videos and more.

The ratings fact-checkers can use are False, Altered, Partly False, Missing Context, Satire, and True. These ratings are fully defined here.

The actions we take based on these ratings are described below. Content rated False or Altered is the most inaccurate and therefore triggers our most aggressive actions, with lesser actions for Partly False and Missing Context. Content rated Satire or True won’t have labels or restrictions.
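The tiered policy above amounts to a lookup from rating to actions. Here is a minimal sketch of that mapping as stated in this article; the table structure and field names are hypothetical, not a Meta API.

```python
# Hypothetical rating-to-action table, transcribed from the policy described
# in the surrounding text. Tier names ("strong", "light", "dramatic", "reduced")
# are illustrative labels for the article's wording.
ACTIONS = {
    "False":           {"label": "strong", "demotion": "dramatic", "ads_rejected": True},
    "Altered":         {"label": "strong", "demotion": "dramatic", "ads_rejected": True},
    "Partly False":    {"label": "light",  "demotion": "reduced",  "ads_rejected": True},
    "Missing Context": {"label": "light",  "demotion": None,       "ads_rejected": True},
    "Satire":          {"label": None,     "demotion": None,       "ads_rejected": False},
    "True":            {"label": None,     "demotion": None,       "ads_rejected": False},
}

def actions_for(rating: str) -> dict:
    """Look up the enforcement tier for a fact-checker rating."""
    return ACTIONS[rating]
```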

Clearly labeling misinformation and informing people about it

When content has been rated by fact-checkers, we add a notice to it so people can read additional context. We apply our strongest warning labels for content rated False or Altered, and lighter labels for Partly False and Missing Context. Content rated Satire or True won’t be labeled, but a fact-check article will be appended to the post on Facebook. We also notify people before they try to share this content or if they shared it in the past.

  • We use our technology to detect content that is the same or almost exactly the same as content rated by fact-checkers, and add notices to that content as well.

  • We generally do not add notices to content that makes a similar claim rated by fact-checkers if the content is not identical. This is because small differences in how a claim is phrased can change whether it is true or false.
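The distinction drawn in the bullets above, identical or near-identical copies versus merely similar claims, can be sketched with simple normalized hashing. This is an illustration only: Meta's actual matching technology is not public, and the normalization here (lowercasing, whitespace collapsing) is an assumed stand-in for "almost exactly the same."

```python
# Hypothetical sketch: match only content that is identical up to trivial
# formatting differences to a fact-checked post. Reworded claims fall through,
# since small phrasing changes can change whether a claim is true or false.
import hashlib
import re

def fingerprint(text: str) -> str:
    """Hash of case- and whitespace-normalized text (assumed normalization)."""
    normalized = re.sub(r"\s+", " ", text.strip().lower())
    return hashlib.sha256(normalized.encode()).hexdigest()

# Fingerprints of posts already rated by fact-checkers (example claim).
rated_fingerprints = {fingerprint("The moon is made of cheese!")}

def matches_rated_content(text: str) -> bool:
    """True only for exact or near-exact copies, not similar claims."""
    return fingerprint(text) in rated_fingerprints
```

A reworded version of the claim produces a different fingerprint and gets no notice, which mirrors the policy's caution about similar-but-not-identical claims.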

Ensuring fewer people see misinformation

Once a fact-checker rates a piece of content as False, Altered, or Partly False, or we detect it as near-identical to rated content, it appears lower in Feed on Facebook. We dramatically reduce the distribution of False and Altered posts, and reduce the distribution of Partly False posts to a lesser extent. On Instagram, this content is filtered out of Explore and featured less prominently in feed and stories. This significantly reduces the number of people who see it. For Missing Context, we focus on surfacing more information from fact-checkers.

We also reject ads with content that has been rated by fact-checkers as False, Altered, Partly False, or Missing Context and we do not recommend this content.

Taking action against repeat offenders

Pages, Groups, Profiles, websites, and Instagram accounts that repeatedly share content rated False or Altered will be put under restrictions for a given time period. This includes removing them from the recommendations we show people, reducing their distribution, removing their ability to monetize and advertise, and removing their ability to register as a news Page.
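The repeat-offender policy above behaves like a strike counter with a time-boxed penalty. The sketch below illustrates that shape under loudly stated assumptions: the strike threshold, the 90-day period, and the counting rule are all invented, since the article does not disclose them.

```python
# Hypothetical sketch of a repeat-offender strike system. The threshold (3)
# and restriction period (90 days) are illustrative assumptions only.
from datetime import datetime, timedelta

STRIKE_THRESHOLD = 3
RESTRICTION_PERIOD = timedelta(days=90)

class Account:
    """A Page, Group, Profile, website, or Instagram account."""

    def __init__(self) -> None:
        self.strikes = 0
        self.restricted_until: datetime | None = None

    def record_rated_share(self, rating: str, now: datetime) -> None:
        """Only shares rated False or Altered count toward restrictions."""
        if rating in ("False", "Altered"):
            self.strikes += 1
            if self.strikes >= STRIKE_THRESHOLD:
                self.restricted_until = now + RESTRICTION_PERIOD

    def is_restricted(self, now: datetime) -> bool:
        """While restricted: no recommendations, reduced distribution,
        no monetization or ads, no news-Page registration."""
        return self.restricted_until is not None and now < self.restricted_until
```

Note that in this sketch a Partly False share adds no strike, matching the article's statement that only False and Altered ratings drive repeat-offender penalties.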

Related:

  • Content fact-checkers prioritize

  • Content ratings fact-checkers use

  • Penalties for sharing fact-checked content