JUN 12, 2023
Today, the Oversight Board selected a case referred by Meta regarding a post by the Tigray Communications Affairs Bureau Facebook page calling for violence in Ethiopia. The post outlines losses suffered by Ethiopia's Federal National Defense Forces and calls for their troops to surrender to the Tigray People’s Liberation Front (TPLF) and turn against the Ethiopian Prime Minister or face death. The page states that it is the official page of the communications bureau for a regional state in Ethiopia.
Upon initial review, after the post was reported by users and flagged by our own automated systems, we found the content to be non-violating and left it up. However, when the content was flagged again by our crisis response team, further review determined that the post did in fact violate Meta's policy on Violence and Incitement, and it was removed.
Meta removes content that "incites or facilitates serious violence." In this instance, however, doing so also requires removing official government speech that could be considered newsworthy.
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when they issue it.
We welcome the Oversight Board’s decision today on this case. The board upheld Meta's decision to remove the content so we have taken no further action related to this case or the content.
After reviewing the recommendations the board provided alongside its decision, we will update this post.
In line with the Board’s recommendation in the “Former President Trump’s Suspension,” as reiterated in the “Sudan Graphic Video,” Meta should publish information on its Crisis Policy Protocol. The Board will consider this recommendation implemented when information on the Crisis Policy Protocol is available in the Transparency Center, within six months of this decision being published, as a separate policy in the Transparency Center in addition to the Public Policy Forum slide deck.
Our commitment: We recently published information on our Crisis Policy Protocol, an overview of the crisis response policies intended to balance consistent global responses with the flexibility to adapt to varied, quickly-changing conditions. Because this information describes a codified framework rather than a policy, we believe that including it on a separate Transparency Center page designated for standalone policies could create confusion about its purpose and application.
Considerations: We recently published the presentation from the Crisis Policy Protocol Policy Forum, held in January 2022 in response to a board recommendation, to our Transparency Center. The presentation provides insights into policy efforts to promote more timely, systematic and proportionate responses to crises. This presentation also includes examples of the types of situations we may consider to be crises that merit evaluation under the protocol. More recently, we briefed board members and staff on implementation of the framework, how it will allow us to designate crisis categories for situations that may require unique policy responses, and integration of the protocol with ongoing preparedness efforts for countries at risk.
The Crisis Policy Protocol codifies a principled, calibrated, and sustainable approach to crises from a policy perspective. Under the protocol, our response to a crisis event may involve a range of policy measures drawn from different sections of our Community Standards. Between the details we share regarding the protocol in our Transparency Center and the Community Standards, users can find a significant amount of information regarding how we evaluate potential crisis situations and deploy policy levers to address them. We also highlighted the protocol and our process for developing it in the Q2 2022 Quarterly Update on the Oversight Board, publicly accessible in the Transparency Center.
While we have shared information about the Crisis Policy Protocol in the Transparency Center, we do not intend to add it to the site as a separate policy because it is not a policy per se, but a protocol that guides more effective deployment of the range of available policy levers. For example, the goal of publishing our Community Standards in the Transparency Center is to provide clear detail on what can or cannot be shared on Facebook and Instagram. Including the Crisis Policy Protocol as a standalone policy here would not align with this purpose. While we will have no further updates with respect to publishing the Crisis Policy Protocol as a stand-alone section of the Transparency Center for the reasons shared here, we will continue to assess opportunities for additional transparency on this framework and how we apply it.
To improve enforcement of its content policies during periods of armed conflict, Meta should assess the feasibility of establishing a sustained internal mechanism that provides the expertise, capacity and coordination required to review and respond to content effectively for the duration of a conflict. The Board will consider this recommendation implemented when Meta provides an overview of the feasibility of a sustained internal mechanism to the Board.
Our commitment: We plan to implement process improvements that will strengthen our identification of risks on our platforms and our coordination of mitigation efforts during sustained conflicts. This includes assessing the feasibility of a new crisis coordination team to provide dedicated Operations oversight of all tactical execution of support efforts before, during, and after crises.
Considerations: We continue to improve and standardize our response mechanisms and guiding protocols for high-risk events, including periods of armed conflict. We currently deploy Integrity Product Operations Centers (IPOCs) to anticipate and respond to high-risk and crisis events. IPOCs bring together subject matter experts from across the company – including threat intelligence, data science, software engineering, research, global operations, policy, civil rights and other legal teams – for real-time monitoring of events so we can quickly identify and address any emerging trends or potential abuse. IPOCs are generally established for a defined time period but can be extended or shortened based on situational needs. Their duration varies based on our assessment of multiple factors, including risk level and our ability to address issues with existing regional functions.
IPOCs are not intended to be a long-term solution for extended periods of armed conflict, but concluding an IPOC does not mean that we believe the issues associated with the crisis are solved or that special interventions are no longer necessary. Ongoing conflict situations often require continuous monitoring and different types of interventions and/or crisis response over time. Shifting from the IPOC to a more steady-state response indicates that we have identified and built the support infrastructure needed to address the key issues for a region. For example, an IPOC command center allows us to gather the training data necessary for classifiers to understand the regional and situational nuances required to carry out effective at-scale content moderation in and after the crisis context.
Our work developing and implementing the Crisis Policy Protocol has codified our policy-specific responses to crises, ensuring they are applied systematically, and it complements our existing company-wide crisis response efforts. To address longer-term conflict situations, some regional teams are implementing more sustained conflict response mechanisms through expanded product and enforcement interventions. These efforts include interventions targeting how reshared content is distributed, as well as continued optimization of automated tools in smaller regions with more limited language support. In areas with limited language support, we also continue to improve our detection and enforcement against abusive behaviors, such as addressing inauthentic accounts, repeat offenders and obfuscation tactics by bad actors.
We are working on implementing global standards for crisis response to ensure consistency and sustained support for high-risk regions. Ongoing improvement efforts include tools that standardize data collection and analysis and better predict the spread of harmful content. In line with the board’s recommendation, we are also assessing the feasibility of establishing a new dedicated crisis coordination team to complement our Risk Management and Strategic Response Policy teams with dedicated Operations oversight of tactical support execution before, during, and after crises. We will provide further updates on our progress in a future Quarterly Update.