DEC 6, 2022
On 21 October 2021, the Oversight Board published its transparency report. In that report, the board noted that the explanation we provided of our cross-check system was lacking. We take the board's feedback seriously and are now sharing additional information about cross-check. In its report, the board also announced that it had accepted our request for recommendations on how we can continue to improve our cross-check system. We look forward to the board's recommendations.
Overview of cross-check
Facebook and Instagram users create billions of pieces of content each day. Moderating content at this scale presents challenges, including trade-offs between important values and goals. We seek to quickly review potentially violating content and remove it if it violates our policies. But we must balance this goal against the risk of "false positives" (erroneous removal of non-violating content) to protect users' voice. (Here, we refer to the "removal" of content, which we are using to describe integrity actions more generally. These can also include, for example, the use of warning screens or removal of Pages.)
To balance these considerations, Meta implemented the cross-check system to identify content that presents a greater risk of false positives and provide additional levels of review to mitigate that risk. Cross-check provides additional levels of review for certain content that our internal systems flag as violating (via automation or human review), with the goal of preventing or minimising the highest-risk false-positive moderation errors that might otherwise occur due to various factors such as the need to understand nuance or context. (Here, we refer to "content" that is reviewed through our cross-check system. We also use cross-check to review other actions such as removing a Page or profile.) While cross-check provides additional levels of review, reviewers apply the same Community Standards that apply to all other content on Facebook. (Cross-check also applies to Instagram. Where we reference "Community Standards" in this web page, it is meant to include the Instagram Community Guidelines as well.)
The cross-check system plays a crucial function in helping to protect human rights. For instance, the cross-check system includes entities and posts from journalists reporting from conflict zones and community leaders raising awareness of instances of hate or violence. Cross-check reviews take into account the context that is helpful to action this content correctly. Cross-check reviews may also apply to civic entities, where users have a heightened interest in seeing what their leaders are saying.
In addition, cross-check serves an important role in managing Meta's relationships with many of our business partners. Incorrectly removing content posted by a Page or profile with a large following, for instance, can result in negative experiences for both Meta's business partners and the significant number of users who follow them. We also apply cross-check to some very large groups, where an error can affect hundreds of thousands or millions of users. Cross-check does not exempt Meta's business partners or groups from our content policies, but it does sometimes provide additional levels of review to ensure that those policies are applied accurately.
Facebook and Instagram users post billions of pieces of content each day. Even with thousands of dedicated reviewers around the world, it is not possible to manually review every piece of content that potentially violates our Community Standards. The vast majority of violating content that we remove is proactively detected by our technology before anyone reports it. When someone posts on Facebook or Instagram, our technology checks to see if the content may violate the Community Standards. In many cases, identification is a simple matter. The post either clearly violates our policies or it doesn't. But in other cases, the content is escalated to a human reviewer for further evaluation.
Our primary review systems use technology to prioritise high-severity content, including "viral" content that spreads quickly. When the systems flag content for escalation, our reviewers make difficult and often nuanced judgement calls about whether content should remain on the platform. While we always aim to make the right decisions, we recognise that false positives do occur: some content is slated for removal as violating Meta's policies when it actually does not violate them. Meta has therefore invested in mistake prevention, adding further review to catch and mitigate false positives. Cross-check is one of these mistake-prevention strategies.
Cross-check is a system used to help ensure that enforcement decisions are made accurately and with additional levels of human review. If during cross-check, a reviewer confirms that content violates our Community Standards, we enforce those policies and address the violating content accordingly. Depending on the complexity of the content, we may apply multiple levels of review, including, in rare instances, review by leadership. If the final reviewer determines that the content at issue does not violate our Community Standards, the reviewer can "overturn" the initial action and leave the content on the platform.
Historical cross-check practices
We first implemented the system, now known as cross-check, in 2013. The details of the system have evolved over the years and, where possible, we have provided dates and date ranges explaining when these changes occurred.
To determine what content or entities received cross-check review, our teams identified and compiled lists of users or entities perceived to have higher associated risk with false-positive actions against them. "False-positive risk" refers to the risk of incorrect enforcement against content or entities that do not actually violate our Community Standards. To determine which users and entities were associated with a higher false-positive risk, our teams applied a variety of criteria, including the type of user or entity (e.g. an elected official, journalist, significant business partner, human rights organisation), the number of followers and the subject matter of the entity. (Entity is a general term for where content could originate or appear, such as a user account, Page or group.)
When users or entities identified on those lists posted content or took actions that our systems flagged as potentially violating our policies, we would add the content or entity to a queue for cross-check review.
Beginning in 2020, we made changes so that most content in the queue was prioritised using a risk framework, which assigned a level of false-positive risk that could result if Meta incorrectly removed that content. This risk framework generally relied on three factors: (1) the sensitivity of the entity, (2) the severity of the alleged violation and (3) the severity of the potential enforcement action.
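The three-factor risk framework described above can be illustrated as a simple scoring function. This is a hypothetical sketch for illustration only: the weights, the 0.0–1.0 scale and the function itself are assumptions, not Meta's actual implementation.

```python
# Hypothetical sketch of a three-factor false-positive risk score.
# Weights, scale and function name are illustrative assumptions,
# not Meta's actual implementation.

def false_positive_risk(entity_sensitivity: float,
                        violation_severity: float,
                        enforcement_severity: float) -> float:
    """Combine the three factors (each scored 0.0-1.0) into a single
    score used to prioritise content for cross-check review."""
    for factor in (entity_sensitivity, violation_severity, enforcement_severity):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("factors must be in [0.0, 1.0]")
    # A simple weighted sum; a production system would likely use
    # learned weights rather than fixed ones.
    return (0.4 * entity_sensitivity
            + 0.3 * violation_severity
            + 0.3 * enforcement_severity)

# A sensitive entity facing a severe enforcement action scores higher
# than an ordinary account facing a mild one, so it is reviewed first.
assert false_positive_risk(0.9, 0.8, 0.9) > false_positive_risk(0.2, 0.3, 0.1)
```

Content with a higher score would be queued for additional review ahead of lower-risk content.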
Current cross-check practices
As with all of our policies and processes, we continually look for ways to improve, and we are constantly making changes. Earlier this year, we identified additional opportunities to improve the cross-check system. One structural change we made is that the cross-check system now comprises two components: "General secondary review" and "Early response (ER) secondary review". For ER secondary review, we will continue to use the list-based approach described above for a percentage of certain users and entities. For general secondary review, we are in the process of ensuring that content from all users and entities on Facebook and Instagram is eligible for cross-check review based on a dynamic prioritisation system called the "cross-check ranker".
General secondary review involves contract reviewers and people from our markets team who perform a secondary review of content and entities that may violate our policies before an enforcement action is taken. This review does not rely solely on the identity of a user or entity to determine what content receives cross-check review. The cross-check ranker ranks content based on false-positive risk using criteria such as topic sensitivity (how trending/sensitive the topic is), enforcement severity (the severity of the potential enforcement action), false-positive probability, predicted reach and entity sensitivity (based largely on the compiled lists described above). The cross-check ranker is already used for the majority of cross-check reviews today.
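The five ranking criteria listed above can be sketched as a prioritisation queue. This is an illustrative sketch only: the equal weighting, the data structure and all names are assumptions based on this page's description, not the cross-check ranker's real design.

```python
# Illustrative sketch of a "cross-check ranker" style prioritisation
# queue. Criteria names follow this page; the equal weighting and all
# code structure are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class ContentItem:
    content_id: str
    topic_sensitivity: float         # how trending/sensitive the topic is
    enforcement_severity: float      # severity of the potential action
    false_positive_probability: float
    predicted_reach: float           # normalised 0.0-1.0
    entity_sensitivity: float        # based largely on compiled entity lists

    def priority(self) -> float:
        # Equal weighting of the five criteria, purely for illustration.
        return (self.topic_sensitivity + self.enforcement_severity
                + self.false_positive_probability + self.predicted_reach
                + self.entity_sensitivity) / 5.0

def rank_for_review(items: list[ContentItem]) -> list[ContentItem]:
    """Return flagged items ordered highest-priority first for review."""
    return sorted(items, key=lambda item: item.priority(), reverse=True)
```

Under a scheme like this, a high-reach post on a sensitive topic would reach human reviewers ahead of a low-risk post, regardless of whether its author appears on any list.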
ER secondary review is similar to the legacy cross-check system. To determine which content or entities receive ER secondary review, we continue to maintain lists of users and entities whose enforcements receive additional cross-check review if flagged as potentially violating the Community Standards. We have, however, added controls to the process of compiling and revising these lists. Prior to September 2020, most employees had the ability to add a user or entity to the cross-check lists. After September 2020, while any employee can request that a user or entity be added to the cross-check lists, only a designated group of employees has the authority to make additions. We are also considering annual audits of cross-check lists, exploring time limits and periodic reverification requirements for inclusion, and improving our governance structure with additional analysis and controls to define the list of users and entities eligible for this review.
In recent months, Meta has reviewed an average of several thousand cross-checked jobs per day, with a large majority completed in general secondary review. (Relative to the millions of pieces of content flagged and actioned daily for violating our Community Standards, this is a small proportion.) ER secondary review now makes up the minority of these daily reviews. We anticipate that, through the end of 2021 and into 2022, a growing share of cross-check review jobs will come from general secondary review prioritisation.
If a piece of content is from an individual or entity included in ER secondary review, it is typically first reviewed by the markets team. The early response team then reviews it to confirm whether the content is violating. In general, if the markets team finds that the content does not violate our policies, the early response team will not review it. If a piece of content is from an individual or entity prioritised by the cross-check ranker, contractors or the markets team typically review it, unless there is additional early response team capacity for the review. As with legacy cross-check, high-complexity issues may receive additional review, including, in rare instances, review by leadership. If the final review finds that the content violates our Community Standards, we'll remove it. If our reviews find that it does not, we'll leave it up.
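The review routing described above can be summarised as a small decision flow. This is a simplification based solely on this page: the stage names and flag names are hypothetical, and the real process involves human judgement at each stage rather than fixed rules.

```python
# Simplified sketch of the cross-check review routing described on this
# page. Stage names and parameters are hypothetical labels, not internal
# systems; real routing involves human judgement at each stage.

def route_review(on_er_list: bool,
                 markets_found_violating: bool = False,
                 er_capacity_available: bool = False,
                 high_complexity: bool = False) -> list[str]:
    """Return the sequence of review stages for a flagged piece of content."""
    stages: list[str] = []
    if on_er_list:
        # ER-listed entities are typically reviewed by the markets team first.
        stages.append("markets_team")
        # The early response team generally reviews only if the markets
        # team found the content violating.
        if markets_found_violating:
            stages.append("early_response_team")
    elif er_capacity_available:
        # Ranker-prioritised content may go to the early response team
        # when there is additional capacity.
        stages.append("early_response_team")
    else:
        stages.append("contractor_or_markets_team")
    if high_complexity:
        # In rare instances, high-complexity issues escalate to leadership.
        stages.append("leadership")
    return stages
```

For example, content from an ER-listed entity that the markets team found non-violating would stop after the first stage, while ranker-prioritised content would normally go to contractors or the markets team.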
As of 16 October 2021, approximately 660,000 users and entities have actions that require some form of ER secondary review based on inclusion on the lists described above. This number regularly changes as we add or remove users and entities to the lists described above based on evolving criteria for inclusion. Examples of users and entities eligible for ER secondary review include, but are not limited to:
Entities related to escalation responses or high-risk events. Currently, there is an informal process in place where teams preparing for a high-risk event identify entities at high risk of over-enforcement. For instance, if a user's controversial content is going viral (e.g. live video of police violence), we may identify that user for ER secondary review to prevent erroneous removal.
Entities included for legal compliance purposes. We use ER secondary review in certain instances to comply with legal or regulatory requirements.
High-visibility public figures and publishers. We identify entities for ER secondary review because over-enforcement may result in a negative experience for a large segment of users.
Marginalised populations. We identify human rights defenders, political dissidents and others who we believe may be targeted by state-sponsored or other adversarial harassment, brigading or mass reporting in order to protect against these attacks.
Civic entities. We follow objective criteria and the expertise of our in-region policy teams to identify politicians, government officials, institutions, organisations, advocacy groups and civic influencers. We include these entities for ER secondary review in order to prevent mistakes that would limit non-violating political speech and inadvertently affect discussion of civic topics such as elections, public policy and social issues. We aim to ensure parity across a country's civic entities – for example, if we include a national cabinet ministry in ER secondary review, we would include all ministries in that country's government in ER secondary review.
We are currently reviewing how to improve the criteria for identifying entities who should receive ER secondary review. For instance, we are exploring evolving our criteria in areas such as the number of followers, the number of previous false-positive enforcements and legal/regulatory requirements, as well as important political/societal issues.
Although we have made significant improvements to the cross-check system, we are still exploring ways to ensure that it appropriately balances our goal of removing content that violates our Community Standards against minimising the enforcement mistakes that have the greatest effect.