Oversight Board Selects a PAO on the Removal of COVID-19 Misinformation

UPDATED

JUN 20, 2023

2022-002-FB-PAO

Today, the Oversight Board announced that it has selected a policy advisory opinion (PAO) request referred by Meta, which asks the board to review how we handle health misinformation related to the COVID-19 pandemic.

Under our policy on Misinformation, as laid out in our Community Standards, Meta removes misinformation during public health emergencies when public health authorities conclude that the information is false and likely to directly contribute to the risk of imminent physical harm.

Since COVID-19 was declared a Public Health Emergency of International Concern (PHEIC) in January 2020, we have applied this policy to COVID-19 content that public health experts have determined is likely to directly contribute to imminent physical harm. More on our COVID-19 policies can be found in our Help Center.

These policies were extraordinary measures implemented to keep pace with the challenges posed by the pandemic. The state of the pandemic is constantly changing, and Meta recognizes that the COVID-19 information ecosystem has evolved since we created our COVID-19 misinformation policy.

As a result, Meta is seeking the board’s insight on whether certain COVID-19 misinformation still satisfies our standard for removing harmful health misinformation, or whether Meta should address that information through alternative enforcement options in the future.

Options available to Meta include:

  • Continuing to remove certain COVID-19 misinformation that directly contributes to a risk of imminent physical harm.

  • Temporary emergency reduction measures where Meta would cease removing COVID-19 misinformation and instead reduce the distribution of those claims.

  • Third-party fact-checking where, instead of removing COVID-19 misinformation, Meta would defer to independent third-party fact-checkers to identify and rate the falsity of those claims.

  • Labels, whereby Meta would temporarily affix labels underneath this content on our platforms that direct users to authoritative information.

Once the board has finished deliberating on the PAO, we will consider and publicly respond to its recommendations within 60 days, and will update this post accordingly. Please visit the board’s website for the recommendations when they are published.

Case decision

We welcome the Oversight Board’s decision today on this policy advisory opinion referral.

After conducting a review of the recommendations provided by the board, we will publicly respond to its recommendations within 60 days and will update this post accordingly.

Recommendations

Recommendation 1 (No Further Action)

Given the World Health Organization’s declaration that COVID-19 constitutes a global health emergency and Meta’s insistence on a global approach, Meta should continue its existing approach of globally removing false content about COVID-19 that is “likely to directly contribute to the risk of imminent physical harm.” At the same time, it should begin a transparent and inclusive process for robust and periodic reassessment of each of the 80 claims subject to removal to ensure that:

(1) Each of the specific claims about COVID-19 that is subject to removal is false and “likely to directly contribute to the risk of imminent physical harm”; and

(2) Meta’s human rights commitments are properly implemented (e.g. the legality and necessity principles).

Based on this process of reassessment, Meta should determine whether any claims are no longer false or no longer “likely to directly contribute to the risk of imminent physical harm.” Should Meta find that any claims are no longer false or no longer “likely to directly contribute to the risk of imminent physical harm,” such claims should no longer be subject to removal under this policy.

Measure of Implementation: The Board will consider this recommendation implemented when Meta announces a reassessment process and announces any changes to the 80 claims on the Help Center page.

Our commitment: Under our current approach to COVID-19 misinformation, we “remove misinformation during public health emergencies when public health authorities conclude that the information is false and likely to directly contribute to the risk of imminent physical harm.” In light of the World Health Organization’s May 5th decision to lift the Public Health Emergency of International Concern (PHEIC) status for COVID-19, we reassessed our existing global approach to removing COVID-19 misinformation at scale on the platform. Given that recommendation #1 was premised on the continued existence of the PHEIC, we will not proceed with reviewing each individual claim that was removed under our prior COVID-19 policy and will take no further action on recommendation #1 or its subparts. Instead, we will explain the additional work we are undertaking in our response to recommendation #4, which concerns how Meta will address COVID-19 misinformation after the WHO lifted the PHEIC.

Considerations: In our request for a Policy Advisory Opinion, we noted that we remove harmful health misinformation specific to COVID-19 if the following criteria are met: (1) there is a public health emergency; (2) leading global health organizations or local health authorities tell us a particular claim is false; and (3) those organizations or authorities tell us the claim can directly contribute to the risk of imminent physical harm (these criteria are also explicitly stated in our public-facing Community Standards). When the Oversight Board’s Policy Advisory Opinion on COVID-19 was released, the World Health Organization’s Public Health Emergency of International Concern (PHEIC) was still in effect. Just two weeks later, the WHO announced that it had formally lifted this designation and no longer considers the pandemic a global emergency. Given the WHO’s recategorization of COVID-19, the criteria we set forth for the at-scale removal of COVID-19 misinformation are no longer met, prompting a reassessment of our approach to COVID-19 misinformation. The new approach we are adopting, however, will continue to take these three criteria into account. We describe this approach in more detail in our response to recommendation #4.

Recommendation 1A (No Further Action)

The company must put a process in place, as soon as feasible, to consider a broader set of perspectives in evaluating whether the removal of each claim is required by the exigencies of the situation. The experts and organizations consulted should include public health experts, immunologists, virologists, infectious disease researchers, misinformation and disinformation researchers, tech policy experts, human rights organizations, fact-checkers, and freedom of expression experts.

Measure of Implementation: The Board will consider this recommendation implemented when Meta publishes information about its processes for consultation with a diverse set of experts on its policy on Misinformation about health during public health emergencies, as well as information about the impact of those conversations on its policy.

Our commitment: Given the World Health Organization’s May 5th decision to lift COVID-19’s status as a Public Health Emergency of International Concern (PHEIC), the criteria we set forth for the at-scale removal of COVID-19 misinformation are no longer met, prompting a reassessment of our approach to COVID-19 misinformation on our platform. We are currently reassessing our COVID-19 misinformation policy to best balance the values of safety and voice on our platform. We have described our reassessment process in detail in our response to recommendation #4 and will have no further updates on this recommendation.

Considerations: For a full list of considerations, please see our response to recommendation #4, below.

Recommendation 1B (No Further Action)

Meta should establish the timing for this review (e.g. every three or six months) and make this public to ensure notice and input.

Measure of Implementation: The Board will consider this recommendation implemented when Meta publishes the minutes of its review meeting publicly, in a similar fashion to its publication of its public policy forum minutes in its Transparency Center.

Our commitment: Given the World Health Organization’s May 5th decision to lift COVID-19’s status as a Public Health Emergency of International Concern (PHEIC), the criteria we set forth for the at-scale removal of COVID-19 misinformation are no longer met, prompting a reassessment of our approach to COVID-19 misinformation on our platform. We are currently reassessing our COVID-19 misinformation policy to best balance the values of safety and voice on our platform. We have described our reassessment process in detail in our response to recommendation #4 and will have no further updates on this recommendation.

Considerations: For a full list of considerations, please see our response to recommendation #4, below.

Recommendation 1C (No Further Action)

Meta should articulate a clear process for regular review, including means for interested individuals and organizations to challenge an assessment of a specific claim (e.g., by providing a link on the Help Center page for public comments, and virtual consultations).

Measure of Implementation: The Board will consider this recommendation implemented when Meta creates a mechanism for public feedback and shares information on the impact of that feedback on its internal processes with the Board.

Our commitment: Given the World Health Organization’s May 5th decision to lift COVID-19’s status as a Public Health Emergency of International Concern (PHEIC), the criteria we set forth for the at-scale removal of COVID-19 misinformation are no longer met, prompting a reassessment of our approach to COVID-19 misinformation on our platform. We are currently reassessing our COVID-19 misinformation policy to best balance the values of safety and voice on our platform. This approach aligns with our current processes for incorporating external feedback into our policy development process and allows us to regularly gather consolidated feedback from representative sources, rather than creating a more resource-intensive process to gather unlimited external feedback.

We have described our reassessment process in detail in our response to recommendation #4 and, while we will have no further updates on this recommendation, we will report on our progress towards addressing the spirit of this recommendation in future updates to recommendation #4.

Considerations: For a full list of considerations, please see our response to recommendation #4, below.

Recommendation 1D (No Further Action)

Meta’s review of the claims should include the latest research on the spread and impact of such online health misinformation. This should include internal research on the relative effectiveness of various measures available to Meta, including removals, fact-checking, demotions, and neutral labels. The company should consider the status of the pandemic in all regions in which it operates, especially those in which its platforms constitute a primary source of information and where there are less digitally literate communities, weaker civic spaces, a lack of reliable sources of information, and fragile health care systems. Meta should also evaluate the effectiveness of its enforcement of these claims. Meta should gather, if it does not already possess, information about which claims have systemically resulted in under- and over-enforcement problems. This information should inform whether a claim should continue to be removed or should be addressed through other measures.

Measure of Implementation: The Board will consider this recommendation implemented when Meta shares data on its policy enforcement review and publishes this information publicly.

Our commitment: Given the World Health Organization’s May 5th decision to lift COVID-19’s status as a Public Health Emergency of International Concern (PHEIC), the criteria we set forth for the at-scale removal of COVID-19 misinformation are no longer met, prompting a reassessment of our approach to COVID-19 misinformation on our platform. We are currently reassessing our COVID-19 misinformation policy to best balance the values of safety and voice on our platform. We have described our reassessment process in detail in our response to recommendation #4 and, while we will have no further updates on this recommendation, we will report on our progress towards addressing the spirit of this recommendation in future updates to recommendation #4.

Considerations: Over the course of the pandemic, we collaborated with government officials and public health experts across the globe to improve our measurement systems – both informing our enforcement efforts and allowing for more reliable reporting. Prior to the WHO’s decision to lift COVID-19’s PHEIC status, we also conducted a series of internal research efforts examining the spread and impact of COVID-19 misinformation on our platform under our previous policy.

For a full list of current assessment considerations, please see our response to recommendation #4, below.

Recommendation 1E (No Further Action)

In order to provide transparency on the types of experts consulted, their input, the internal and external research considered and how the information impacted the outcome of the analysis, Meta should provide to the Board a summary of the basis for its decision on each claim. The summary should specifically include the basis for the company’s decision for continuing to remove a claim. Meta should also disclose what role, if any, government personnel or entities played in its decision-making. If the company decides to cease removing a specific claim, the company should explain the basis of that decision (including: (a) what input led the company to determine that the claim is no longer false; (b) what input, from what source, led the company to determine the claim no longer directly contributes to the risk of imminent physical harm, and whether that assessment holds in countries with lowest vaccination rates and under-resourced public health infrastructure; (c) did the company determine that its enforcement system led to over-enforcement on the specific claim; (d) did the company determine that the claim is no longer prevalent on the platform.)

Measure of Implementation: The Board will consider this recommendation implemented when Meta shares the assessment of its policy evaluation process. This information should align with the reasons listed publicly in the Help Center post for any changes made to the policy, as outlined in the first paragraph of this recommendation.

Our commitment: Given the World Health Organization’s May 5th decision to lift COVID-19’s status as a Public Health Emergency of International Concern (PHEIC), the criteria we set forth for the at-scale removal of COVID-19 misinformation are no longer met, prompting a reassessment of our approach to COVID-19 misinformation on our platform. We are currently reassessing our COVID-19 misinformation policy to best balance the values of safety and voice on our platform. Once we complete this assessment, we will update our Help Center to reflect any changes made to the policy as a result.

We have described our reassessment process in detail in our response to recommendation #4 and, while we will have no further updates on this recommendation, we will report on our progress towards addressing the spirit of this recommendation in future updates to recommendation #4.

Considerations: For a full list of considerations, please see our response to recommendation #4.

Recommendation 2 (Work Meta Already Does)

Meta should immediately provide a clear explanation of the reasons why each category of removable claims is “likely to directly contribute to the risk of imminent physical harm.”

Measure of Implementation: The Board will consider this recommendation implemented when Meta amends the Help Center page to provide this explanation.

Our commitment: We believe that our COVID-19 Help Center sufficiently explains how each category of claims could directly contribute to the risk of physical harm and how removing those claims from our platforms helps to prevent that potential harm. If our COVID-19 misinformation policies change in light of the lifting of COVID-19’s PHEIC status, we will follow our standard procedure of updating the Help Center language accordingly.

Considerations: During the PHEIC, we collaborated with external experts to identify categories of content that could directly contribute to the risk of imminent physical harm, along with rationales for why they might. This external input was key as we developed the policy and serves as the basis for the language on our Help Center page today. Examples of such harm include increasing the likelihood of exposure to or transmission of the virus, or adversely affecting the public health system’s ability to cope with the pandemic. For each category of claims, we also provide an explanation of the category’s potential harm and how removing the claims in that category helps to prevent it. For example, in our description of the “Guaranteed Cures or Prevention Methods for COVID-19” category, we explain that false information of this nature can directly contribute to the risk of imminent physical harm because “we have heard from public health authorities that if people thought there was a guaranteed cure or prevention for COVID-19, they might take incorrect safety measures, ignore appropriate health guidance, or even attempt harmful self-medication.” In light of this, we believe that the current language on the Help Center provides sufficient explanation for people who use our platforms. As the pandemic and our approach to our COVID-19 misinformation policies evolve, we will continue to follow our standard process of updating the language in our Help Center accordingly, but will have no further updates on this recommendation.

Recommendation 3 (Implementing in Full)

Meta should clarify its Misinformation about health during public health emergencies policy by explaining that the requirement that information be “false” refers to false information according to the best available evidence at the time the policy was most recently re-evaluated.

Measure of Implementation: The Board will consider this recommendation implemented when Meta clarifies the policy in the relevant Help Center page.

Our commitment: We will include language in our Help Center explaining that our “approach to content that is ‘false’” refers to content that is considered false according to the best available evidence at the time the policy was most recently re-evaluated.

Considerations: On the COVID-19 Help Center that has been live during the PHEIC, we note that we remove content that is considered by experts to be both false and likely to contribute to imminent physical harm. Across our Community Standards, we are constantly evaluating our policies based on new inputs and information, and expect this to remain the case for our misinformation policies.

Recommendation 4 (Implementing in Full)

Meta should immediately initiate a risk assessment process to identify the necessary and proportionate measures that it should take, consistent with this policy decision and the other recommendations made in this policy advisory opinion, when the WHO lifts the global health emergency for COVID-19, but other local public health authorities continue to designate COVID-19 as a public health emergency. This process should aim to adopt measures addressing harmful misinformation likely to contribute to significant and imminent real-life harm, without compromising the general right to freedom of expression globally. The risk assessment should include:

  • A robust evaluation of the design decisions and various policy and implementation alternatives;

  • Their respective impacts on freedom of expression, the right to health and to life and other human rights; and

  • A feasibility assessment of a localized enforcement approach.

Measure of Implementation: The Board will consider this recommendation implemented when Meta publicly communicates its plans for how it will conduct the risk assessment and describes the assessment process for detecting and mitigating risks and updates the Help Center page with this information.

Our commitment: In response to this recommendation and the lifting of the World Health Organization’s Public Health Emergency of International Concern (PHEIC) designation in May, we will continue enforcing our COVID-19 misinformation policy in countries that still consider COVID-19 a public health emergency when we are made aware of content that violates this policy. As part of this process, we are consulting with internal and external experts to understand the current status of COVID-19 across the world and are using these inputs to establish a more localized enforcement approach. We will share more details about this change in future Quarterly Updates.

Considerations: Upon receiving the board’s COVID-19 Policy Advisory Opinion, we immediately began a re-evaluation of our existing COVID-19 Misinformation policy in line with the board’s recommendations. This included discussions with internal teams and external experts to assess possible policy shifts in the event that the World Health Organization lifted COVID-19’s PHEIC designation. The Oversight Board’s PAO was published shortly after media reported that the World Health Organization’s International Health Regulations (IHR) Emergency Committee would hold a meeting in May to assess whether to lift the PHEIC, and thus we also factored this into those discussions.

As mentioned in our response to recommendation #1 and its subparts, following the May 5 decision by the World Health Organization to lift the PHEIC, we consulted with internal teams and external public health experts to assess the feasibility of a more localized and limited approach to removing content under our COVID-19 Misinformation policy. This included analyzing which claims we still see on our platform and consulting with external experts to identify countries in which COVID-19 is still considered a public health emergency.

At the beginning of the pandemic, when there was far less information about COVID-19 and effective treatment methods, we relied on guidance from external global health organizations and local health authorities to identify claims that were considered false and likely to contribute to a risk of imminent physical harm. In these unprecedented circumstances, we assessed that we needed to take a stronger approach to content removal. These experts helped us identify categories of claims that could increase risk of imminent physical harm – including content that could increase the likelihood of exposure or transmission of the virus and content that might adversely impact the public health system’s ability to address the pandemic. We also worked with health experts to keep pace with the evolving nature of the pandemic and regularly updated our policies as new facts and trends emerged. To provide transparency around our approach to COVID-19 content removal, we published detailed information about each of these claims in our Help Center. We also determined that the most effective way to enforce our COVID-19 related policies was to take a globally consistent, at-scale enforcement approach. This was because the global nature of the disease paired with the risk of offline harm meant that a scaled approach was both the most efficient and the most effective mechanism to fulfill our human rights and safety responsibilities.

As we move out of the PHEIC, however, we recognize that the public health emergency status of the pandemic varies greatly across different regions of the world. As such, we are consulting with internal teams and external global public health experts to identify countries in which COVID-19 is still considered a public health emergency and, in those countries, will apply an escalation-only enforcement approach to COVID-19 misinformation claims that we are still seeing on our platforms. The goal of this approach is to allow us to continue removing harmful health misinformation in places that still have a declared emergency (as per our misinformation policies about health during public health emergencies), while allowing for conversation about COVID-19 on our platforms more generally. A country-by-country approach to content moderation at scale is not feasible given operational constraints; however, in light of the new circumstances and the Board’s guidance, we are able to enforce global policies when we are made aware of violating content in countries with ongoing public health emergencies.

More broadly, our existing policy for misinformation and harm includes the removal of certain claims that fall under harmful health misinformation. In order to ensure that our approach to misinformation is proportionate, we only remove content that is likely to lead to imminent physical harm to public health and safety. To satisfy the principle of legality articulated in Article 19 of the International Covenant on Civil and Political Rights, we provide detailed information about the types of harmful health misinformation we may remove in our Community Standards. For example, we may remove content which advocates for harmful “miracle” cures that may result in serious injury or death and for which authoritative experts have not identified any legitimate health use. In the case of a claim such as “black salve cures cancer,” our Harmful Health Misinformation policy states that we will remove the claim both in reference to COVID-19 and to all diseases more broadly. For other types of misinformation, we rely on review and ratings from independent third-party fact-checkers. More on our approach to misinformation, including harmful health misinformation, can be found in our Community Standards.

Recommendation 5 (No Further Action)

Meta should translate internal implementation guidelines into the working languages of the company’s platforms.

Measure of Implementation: The Board will consider this recommendation implemented when Meta translates its internal implementation guidelines and updates the Board in this regard.

Our commitment: Consistent with our responses to recommendation #1 in the Discussing the Situation in Myanmar While Using Profanity case and recommendation #1 in the Post Containing Derogatory Words in Arabic case, we will take no further action on this recommendation to ensure consistency and accuracy in our global enforcement.

Considerations: Our Community Standards are currently available in 80 translations, a number that has increased substantially in the past two years and continues to expand. These standards are regularly updated when there are policy changes or when we clarify definitions, often as a result of Oversight Board recommendations. These translations require substantial time and care to make sure they accurately and precisely convey our policies to the billions of people who use our platforms.

In addition to our public-facing Community Standards, our global team of human reviewers is provided with detailed implementation guidance in English, reinforced with supplementary lists of context-specific terms and phrases for their region. While our content reviewers come from around the world and speak many languages, they are all required to be fluent in English. Working from identical guidelines helps facilitate a more objective application of our policies at scale. As explained in our response to Post Containing Derogatory Words in Arabic recommendation #1, maintaining our internal review guidance in English is important for maintaining global enforcement consistency. Because this guidance rapidly evolves (it is routinely updated with new clarifications and definitions), relying on translations could lead to irregular lags and inconsistent interpretations. Our COVID-19 policy guidance is no exception, particularly during the period in which these policies were first introduced in tandem with rapidly evolving new information about the pandemic. We will be taking no further action on this recommendation and will have no further updates.

Recommendation 6 (No Further Action)

User appeals of a fact-check label should be reviewed by a different fact-checker than the one who made the first assessment. To ensure fairness and promote access to a remedy for users who have their content fact-checked, Meta should amend its process to ensure that a different fact-checker, one that has not already made an assessment on the given claim, can evaluate the decision to impose a label.

Measure of Implementation: The Board will consider this recommendation implemented when Meta provides a mechanism to users to appeal to a different fact-checker, and when it updates its fact-checking policies with this new appeals mechanism.

Our commitment: We will take no further action on this recommendation, as it is not aligned with the existing structure and purpose of our third-party fact-checking (3PFC) system.

Considerations: Our third-party fact-checking partners operate independently from Meta and play a critical role in our approach to potential misinformation on Facebook and Instagram. They are certified through the non-partisan International Fact-Checking Network and are based around the world. These fact-checkers review a piece of content that may contain misinformation and rate its accuracy. The ratings they can apply are “False”, “Altered”, “Partly False”, “Missing Context”, “Satire”, or “True”. Notably, fact-checkers do not apply a label; rather, Meta applies a label to the content based on the rating. If a user corrects the rated content or believes the rating is inaccurate, they can appeal the fact-check rating. We believe this provides a meaningful system of checks and balances for the ratings issued by external fact-checkers, ensuring their ratings are not final and that redress is available where necessary.

Our fact-checking program and tools are not set up to allow a fact-checker different from the original fact-checker to evaluate an appeal. However, if two fact-checkers rate the same piece of content, we apply the label and enforcement associated with the less stringent rating. Given the limited number of fact-checkers, the varying sizes and language capabilities of the different fact-checking organizations we partner with, and the amount of content in their review queues, we believe that fact-checkers should be able to prioritize reviewing potentially viral misinformation rather than reviewing other fact-checkers’ ratings. This will allow fact-checkers to optimize for reviewing and rating the accuracy of the most viral content on our platforms rather than spending their limited time and resources processing appeals. For these reasons, we will have no further updates on this recommendation.
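To make the rating rules described above concrete, the following is a minimal illustrative sketch of the “less stringent rating” logic. It is an assumption-laden sketch, not our production system: the rating values come from this post, but the stringency ordering and all names are hypothetical.

```python
# Illustrative sketch only: the rating values come from this post, but the
# stringency ordering and all names here are assumptions for explanation,
# not a reflection of Meta's production systems.

# Assumed ordering of fact-check ratings from least to most stringent.
STRINGENCY = ["True", "Satire", "Missing Context", "Partly False", "Altered", "False"]

def resolve_ratings(ratings):
    """If two or more fact-checkers rate the same piece of content, the label
    and enforcement follow the least stringent rating, per the policy above."""
    return min(ratings, key=STRINGENCY.index)

# Example: one fact-checker rates a post "False" and another "Partly False";
# the applied label follows the less stringent "Partly False" rating.
assert resolve_ratings(["False", "Partly False"]) == "Partly False"
```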

Recommendation 7 (Implementing in Part)

Meta should allow profiles (not only pages and groups) that have content labeled by third-party fact-checkers enforcing Meta’s misinformation policy to appeal the label to another fact-checker through the in-product appeals feature.

Measure of Implementation: The Board will consider this recommendation implemented when Meta rolls out the appeal feature to profiles in all markets and demonstrates that users are able to appeal fact-check ratings through enforcement data.

Our commitment: Prior to December 2022, individual users (profiles) were able to appeal a rating applied by a third-party fact-checker by contacting the fact-checker directly via email. In December 2022, after requesting a Policy Advisory Opinion from the board on our COVID-19 misinformation policies, we streamlined the process by allowing people who use our platforms to appeal a third-party fact-checker rating directly through Facebook and Instagram. However, as noted in our response to recommendation #6 above, we will not change our existing fact-checking program to allow users to appeal to a different third-party fact-checker than the one who initially provided a rating. This in-product appeals feature has been rolled out to profiles in all markets, and we now consider this recommendation complete.

Considerations: As mentioned in our response to recommendation #6, third-party fact-checkers do not apply a label; rather, they submit a rating and Meta subsequently applies a label to the content based on that rating. If a user corrects the rated content or believes the rating is inaccurate, they can appeal the fact-check rating. We understand the importance of this recommendation for maintaining fairness in the content moderation process and have globally launched a feature allowing profiles that have content rated by third-party fact-checkers to appeal the rating directly through our apps.

Currently, users are given two options for appealing content rated by fact-checkers: they can either issue a correction on the rated content or dispute the fact-check rating altogether. To dispute a rating, individual profiles, pages, and group admins in all markets can appeal the rating on the post itself by clearly indicating why the original rating is inaccurate and including a link to a source that supports their explanation.

Our goal continues to be providing all our users a fair and efficient appeals mechanism. As part of our long-term initiative to increase transparency around enforcement data, we will aim to publish more information regarding the number of appeals on our Transparency Center. This long term work will be implemented in tandem with our responses to recommendation #6 in the Breast Cancer Symptoms and Nudity case and recommendation #3 in the Punjabi Concern Over the RSS in India case.

Recommendation 8 (Implementing in Full)

Meta should increase its investments in digital literacy programs across the world, prioritizing countries with low media freedom indicators (e.g. Freedom of the Press score by Freedom House) and high social media penetration. These investments should include tailored literacy training.

Measure of Implementation: The Board will consider this recommendation implemented when Meta publishes an article on its increased investments, specifying the amount invested, the nature of the programs and the countries impacted, and information it has about the impacts of such programs.

Our commitment: We will continue to roll out our digital literacy and online safety programs globally. Our programs are tailored to regional objectives and developed in partnership with regional stakeholders. We will continue to assess training needs and provide public updates on the launch and impact of new initiatives.

Considerations: We have run multiple initiatives geared toward improving digital literacy across the world, including media literacy training for university students, voter education programs, and civic education campaigns to raise awareness of misinformation. Our media literacy programs have reached over 750 million people in 70 countries worldwide. Our most recent off-platform media literacy programs to address misinformation have taken place in Brazil, Turkey, Spain, the United States, France, and India. Many of our programs focus on vulnerable populations. For example, we have partnered with organizations across the world such as COMELEC for the Philippines elections, Poynter to teach older adults in Brazil, Spain, and Turkey how to better identify misinformation online, NewsMobile to deliver digital literacy workshops targeting COVID-19 misinformation in India, and Digify Africa to deliver digital literacy programs across Sub-Saharan Africa.

Our commitment to increasing digital literacy and user education across the world remains a matter of continuous investment as our teams consider the impact of our changing digital environment on our users. We remain accountable for delivering on our commitment through platforms such as our Meta for Media portal, which details our available resources and latest investments toward digital literacy and user support. We continue to provide deeper context on specific interventions focused on digital literacy, as seen in our 2022 highlights on Meta’s investments in media literacy.

Our goal is to increase access to credible information as a critical and proactive measure to stop the spread of mis- and disinformation. We supplement this effort with reactive yet critical interventions, such as fact-checking, to ensure that users remain protected while we strengthen our user knowledge base. Additionally, we want to elevate our external partners’ voices in advocating for media literacy and increased access to credible information by undertaking these initiatives in collaboration with various partners to the furthest extent possible.

We continue to increase the geographic scope of our investment as we mature our internal capacity and train our tools to detect the prevalence of abuse across different regions. In India, where the government has made digital skills a core part of its “techade” strategy, we collaborated with the Central Board of Secondary Education (CBSE) to introduce Digital Citizenship and AR-VR Skills to the curriculum for children in grades 6–8 across 25,000 schools. More than 300,000 teachers and a million students in India have also been introduced to training in cyber safety, cyber security, digital citizenship, and immersive technologies. Additionally, we will soon roll out a training initiative in India aimed at providing digital tools to women in marginalized communities and creating safe knowledge spaces. We will continue to provide updates on this and any such initiatives via our reporting platforms and in collaboration with media partners.

We will continue to work across our teams to consolidate our efforts towards digital literacy programs and provide an update on our progress in future Quarterly Updates.

Recommendation 9 (Implementing in Part)

For single accounts and networks of Meta entities that repeatedly violate the misinformation policy, Meta should conduct or share existing research on the effects of its newly publicized penalty system, including any data about how this system is designed to prevent these violations. This research should include analysis of accounts amplifying or coordinating health misinformation campaigns. The assessment should evaluate the effectiveness of the demonetization penalties that Meta currently uses in addressing the financial motivations/benefits of sharing harmful and false or misleading information.

Measure of Implementation: The Board will consider this recommendation implemented when Meta shares the outcome of this research with the Board and reports a summary of the results on the Transparency Center.

Our commitment: In February 2023, we updated our Transparency Center to outline changes to our penalty system for violations of Facebook’s Community Standards and Instagram’s Community Guidelines. In addition to this system, we also have a penalty system for users who repeatedly post fact-checked content which includes a few key distinctions from the broader system. We will continue assessing the effectiveness of our enforcement and removal of misinformation, including evaluating penalties for repeat violators of the Community Standards. We are continuing to explore ways that we can share relevant updates and research related to the penalty system with the Oversight Board.

Considerations: Due in part to Oversight Board recommendations in previous decisions, we recently shared updates in our Transparency Center regarding changes to our penalty system for violating content. This included sharing more details about how Facebook users will need to accrue seven strikes before account-level restrictions are applied to their account. Distinct from the announced changes to the penalty system associated with content removal, we also penalize users who repeatedly post fact-checked content on our services. While we don't share specific thresholds publicly, a user who repeatedly posts misinformation (either content that violates our COVID-19 or vaccine misinformation policies and is removed, or content that is fact-checked as False or Altered by our third-party fact-checking partners) will face reduced distribution for a period of time and may have their content demonetized.
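The sketch below illustrates the two penalty tracks described above. It is a simplified, assumption-laden illustration: the seven-strike figure is stated publicly, but the misinformation threshold is intentionally left as a placeholder because we do not share it, and all names are hypothetical.

```python
# Simplified illustration of the two penalty tracks described above.
# The seven-strike figure is public; the misinformation threshold is a
# deliberate placeholder (it is not shared publicly), and all names here
# are hypothetical, not Meta's actual implementation.

ACCOUNT_RESTRICTION_STRIKES = 7  # public: account-level restrictions after 7 strikes

def account_restricted(strikes: int) -> bool:
    """Community Standards track: account-level restrictions apply once a
    user accrues seven strikes."""
    return strikes >= ACCOUNT_RESTRICTION_STRIKES

def misinfo_penalties(repeat_count: int, threshold: int) -> list:
    """Misinformation track: repeat offenders face reduced distribution for
    a period of time and possible demonetization. The real threshold is not
    public; callers must supply their own assumption."""
    if repeat_count >= threshold:
        return ["reduced_distribution", "possible_demonetization"]
    return []
```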

As part of our approach to misinformation, we regularly assess the effectiveness of our penalty system in the misinformation space and make adjustments accordingly. We are currently evaluating the best way to further transparency around the results of these assessments and will provide an update in a future Quarterly Update.

Recommendation 10 (Assessing Feasibility)

Meta should commission a human rights impact assessment of how Meta’s newsfeed, recommendation algorithms, and other design features amplify harmful health misinformation and its impacts. This assessment should provide information on the key factors in the feed-ranking algorithm that contribute to the amplification of harmful health misinformation, what types of misinformation can be amplified by Meta’s algorithms, and which groups are most susceptible to this type of misinformation (and whether they are particularly targeted by Meta’s design choices). This assessment should also make public any prior research Meta has conducted that evaluates the effects of its algorithms and design choices in amplifying health misinformation.

Measure of Implementation: The Board will consider this recommendation implemented when Meta publishes the human rights impact assessment, which contains such analysis.

Our commitment: We will assess the feasibility of conducting a human rights impact assessment or other due diligence. Given the anticipated changes to our enforcement of COVID-19 claims, we expect this feasibility assessment to be a multipart endeavor that will take some time to complete.

Considerations: We are still in the early phases of assessing the feasibility of this complex recommendation and are currently working with a number of internal teams, including our Human Rights, Policy, and Research teams, among others. We are also weighing the salient human rights risks and our Human Rights program priorities in the upcoming year. We will provide additional updates on the status of this recommendation in future Quarterly Updates.

Recommendation 11 (Implementing in Part)

Meta should add a change log to the Help Center page providing the complete list of claims subject to removal under the company’s misinformation about health during public health emergencies policy.

Measure of Implementation: The Board will consider this recommendation implemented when a change log is added to the Help Center page.

Our commitment: As noted in our response to recommendation #4, we are currently reassessing the claims we remove under our harmful health misinformation policy, and will update the Help Center to reflect any changes. We will then consider ways to keep the Help Center updated, while still sharing information about claims that were enforced globally prior to the WHO lifting COVID-19’s PHEIC status.

Considerations: Our teams are working on an updated approach to the enforcement of our COVID-19 misinformation policy in regions or countries that remain under some form of public health emergency. This policy will remain available in our Help Center and reflect any updates to our commitments to recommendations #2 and #3. Our Help Center will remain the centralized point of reference on how we enforce our COVID-19 misinformation policy.

As we make changes to our enforcement approach, we will consider ways to keep the page up to date while still sharing information about claims that were previously enforced but no longer are now that the PHEIC has been lifted. Once finalized, we will archive these claims and share our progress in future Quarterly Updates.

Recommendation 12 (Implementing in Part - Long term)

Meta should provide quarterly enforcement data on misinformation in the Quarterly Enforcement Report, broken down by type of misinformation (i.e., physical harm or violence, harmful health misinformation, voter or census interference, or manipulated media) and by country and language. This data should include information on the number of appeals and the number of pieces of content restored.

Measure of Implementation: The Board will consider this recommendation implemented when Meta starts including enforcement data on the Misinformation policy in the company’s enforcement reports.

Our commitment: We already publicly share enforcement data on misinformation in alignment with our regulatory compliance efforts. We’ll continue to share this data in this manner and work towards publishing these updates in our Transparency Center.

Considerations: Meta supports more detailed and transparent enforcement data on misinformation and we are glad to be a leader in this effort by sharing information publicly on the EU Disinformation Code website. Sharing this information increases the public’s understanding of misinformation on our platforms.

Despite the complexities and significant effort required, we will continue to share updates in the future and are exploring publishing this information on our Transparency Center as well. This effort will inform our long-term priorities. We will continue to provide updates on our progress in our future Quarterly Updates.

Recommendation 13 (Assessing Feasibility)

Meta should create a section in its Community Standards Enforcement Report to report on state-actor requests to review content for violations of the policy on Misinformation about health during public health emergencies. The report should include details on the number of review and removal requests by country and government agency, and the number of rejections and approvals by Meta.

Measure of Implementation: The Board will consider this recommendation implemented when Meta publishes, in a separate section of its Community Standards Enforcement Report, information on requests from state actors that led to removals for this type of policy violation.

Our commitment: We currently report on content restrictions based on local law across violation categories, including misinformation. In alignment with our commitment to increase transparency and strengthen accountability, we will continue to assess the possibility of more granular disclosure.

Considerations: We recognize the need for transparency and accountability in our actions and already report on content restrictions based on local law. We seek to comply with various legal and privacy requirements across jurisdictions, which all vary considerably. At this time, our teams are not prioritizing this level of granularity in our transparency reporting given the complexity and time requirements of the effort. We are also mindful of the data collection and verification challenges involved in implementing this recommendation. Despite these challenges, we continue to aim to improve transparency and are assessing feasible and viable ways to log and share this data. We will continue to provide updates on our progress in our future Quarterly Updates.

Recommendation 14 (Implementing in Part)

Meta should ensure existing research tools, such as CrowdTangle and Facebook Open Research and Transparency (FORT), continue to be made available to researchers.

Measure of Implementation: The Board will consider this recommendation implemented when Meta publicly states its commitment to sharing data through these tools to researchers.

Our commitment: We will continue evolving our research tooling solutions, optimizing for user value and platform efficiency. We will prioritize transparency in the update and rollout of research tools as we build them based on insights from our ongoing engagement with external researchers.

Considerations: We understand that research tools such as CrowdTangle and Researcher Platform (operated by the Public Interest Products team, formerly known as “Facebook Open Research and Transparency” or “FORT”) offer significant support to the research community, and that they help bring transparency to Meta’s policy decisions while improving societal understanding of complex issues.

We’ve been looking at all of the different products we offer to help researchers understand the impact of our platforms and are discussing ways that we can make these tools even more valuable for them. Innovation will strengthen our commitment to transparency and accountability, and we expect our research tools to continue to evolve over time. That is why we constantly work to understand what new product features and data might best support researchers’ work.

Recommendation 15 (Implementing in Full)

Meta should institute a pathway for external researchers to gain access to non-public data to independently study the effects of policy interventions related to the removal and reduced distribution of COVID-19 misinformation, while ensuring these pathways protect the right to privacy of Meta’s users and the human rights of people on and off the platform. This data should include metrics not previously made available, including the rate of recidivism around COVID-19 misinformation interventions.

Measure of Implementation: The Board will consider this recommendation implemented when Meta makes these datasets available to external researchers and confirms this with the Board.

Our commitment: We are creating additional pathways for external researchers to gain access to data for independent research. We will ensure that this pathway is aligned with our global regulatory requirements and obligations.

Considerations: We are aligned with the principle of facilitating independent research by providing external researchers with access to data, while protecting our users’ privacy and human rights. Our goal is to coordinate our data-sharing efforts with the requirements of Article 40 of the Digital Services Act (DSA).

There are many existing use cases where we have shared data to facilitate research. We are building upon our current data-sharing models by improving existing products and creating new ones that provide secure and efficient pathways for researchers to access data. With a privacy-first mindset, we implement appropriate privacy safeguards to ensure that our data-sharing processes and procedures comply with relevant global privacy regimes and data protection measures. Additionally, we are carefully considering the factors that determine how data is organized within a system. Once we have launched these new data-sharing models and products, we will assess how best to make additional data available to external researchers. We will share an update on our progress with the board and the public in a future Quarterly Update.

Recommendation 16 (No Further Action)

Meta should publish the findings of its research on neutral and fact-checking labels that it shared with the Board during the COVID-19 policy advisory opinion process.

Measure of Implementation: The Board will consider this recommendation implemented when Meta publishes this research publicly in its Transparency Center.

Our commitment: We have shared the raw findings of this research with the board and allowed the study’s conclusions to be disclosed publicly as part of the Oversight Board’s decision associated with this case. We will not publicly disclose the specific findings of this research given data sensitivities; however, we will continue to share relevant research insights, in alignment with our commitments to transparency and accountability, within our ongoing reporting.

Considerations: Meta has already shared the findings of this research and has allowed the public disclosure of the study’s conclusions within the Oversight Board’s public decision. As shared in prior discussions, this kind of data is sensitive, relates to user testing at a specific moment in time, and represents only part of our work regarding COVID-19 misinformation. In conjunction with recommendation #10, Meta will assess what information, if any, can be shared to meet the spirit of this recommendation.

We understand the value of sharing research findings. However, in the case of the research on neutral and fact-checking labels we apply, we believe our sharing with the board and their disclosures in the PAO provide sufficient transparency around these findings.

While we will have no further updates on this recommendation, we remain committed to transparency and will continue to share relevant and appropriate research findings with the public.

Recommendation 17 (Implementing in Full)

Meta should ensure equitable data access to researchers around the world. While researchers in Europe will have an avenue to apply for data access through the Digital Services Act (DSA), Meta should ensure it does not over-index on researchers from Global North research universities. Research on prevalence of COVID-19 misinformation and the impact of Meta’s policies will shape general understanding of, and future responses to, harmful health misinformation and future emergencies. If that research is disproportionately focused on the Global North, the response will be too.

Measure of Implementation: The Board will consider this recommendation implemented when Meta publicly shares its plan to provide researchers around the world with data access similar to that provided to EU countries under the DSA.

Our commitment: In alignment with our commitment to recommendation #15 of this case, we will create a research pathway for external researchers to gain access to specific data and information. We aim to ensure that access to this tool is scaled to a global audience.

Considerations: Meta recognizes the importance of fair data access for researchers around the world. While the DSA focuses on access for EU researchers, we will also aim to enable access for a global audience. At the same time, it is important to note that Meta does not have control over which researchers are approved for data access under the DSA.

Meta is planning on providing data access to global researchers via novel public access data products. This process is in development, and we hope to provide more substantial updates in the future. We are committed to ensuring fair data access and fostering a diverse and global research community that can help us better understand and address our challenges. We will share more information on our progress on this commitment with the board and with the public in an upcoming Quarterly Update.

Recommendation 18 (Implementing in Full)

Meta should evaluate the impact of the cross-check Early Response Secondary Review (ERSR) system on the effectiveness of its enforcement of the Misinformation policy and ensure that Recommendations 16 and 17 in the Board’s policy advisory opinion on Meta’s cross-check program apply to entities that post content violating the Misinformation about health during a public health emergency policy.

Measure of Implementation: The Board will consider this recommendation implemented when Meta shares its findings with the Board and publicizes it.

Our commitment: We are committed to regularly evaluating the impact of the cross-check Early Response Secondary Review (ERSR) program on misinformation-related violations on the platform. We will continue to collaborate across our teams to communicate the effectiveness of the ERSR program’s enforcement on misinformation.

Considerations: In our responses to recommendations #16 and #17 in the Oversight Board’s PAO on Meta’s Cross-Check Policies, we shared that we are currently conducting foundational work to understand more advanced metrics before establishing robust Service-Level Agreements (SLAs) for review decisions across our mistake-prevention systems. We also communicated that our internal operational teams have collaborated to eliminate our backlogs in cross-check reviews since our March 6, 2023 response, and we are continuing to explore further options to best protect the users of our platform from harm while content that has been flagged for certain violations is pending cross-check review.

All of these commitments to the board apply to qualifying categories of misinformation that violate the Community Standards. As our teams work to determine the new approach to COVID-19 misinformation in light of the WHO’s lifting of the PHEIC, we will continue to enforce our misinformation policies accordingly. Any changes we adopt to our policy enforcement approach for all users will affect our enforcement of content escalated via our cross-check queues. We will continue to evaluate the impact of the cross-check system on enforcement of our Misinformation policy in alignment with our commitments to the board’s cross-check PAO recommendations. We will also continue evolving our approach to enforcement of our Misinformation policy accordingly and communicate our progress to the board in future Quarterly Updates.