Case on a Brazilian state-level health entity’s post about COVID lockdowns

UPDATED

JUN 12, 2023

2021-008-FB-FBR

Today, the Oversight Board selected a case referred by Facebook regarding a post on the Facebook page of a state-level medical entity in Brazil containing a picture of a written notice about reducing the spread of COVID-19. The notice claims that lockdowns are ineffective and violate fundamental rights guaranteed by the Constitution, and it takes a quote from a World Health Organization (WHO) doctor out of context to argue that the WHO condemns lockdowns.

Facebook left this content up in line with our policy on misinformation and harm with regard to COVID-19, as laid out in our Help Center and Community Standards. Facebook removes misinformation when public health authorities conclude that the information is false and likely to contribute to imminent violence or physical harm. While the WHO and other health experts have advised Facebook to remove claims advocating against specific health practices, such as social distancing, they have not advised Facebook to remove claims advocating against lockdowns.

Facebook referred this case to the board because we found it significant and difficult: the content does not violate Facebook’s policies, but some people could still read it as advocacy for taking only certain safety measures during the pandemic.

We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision once it is issued.

Case decision

We welcome the Oversight Board’s decision today on this case. The board decided to uphold Facebook’s decision, so we have taken no further action related to this case or the content.

After reviewing the recommendations the board provided alongside its decision, we will update this post.

Recommendations

On September 17, 2021, Facebook responded to the board’s recommendations for this case. We are fully implementing 1 recommendation, and are taking no further action on the other 2 because they describe work that Facebook is already doing.

Recommendation 1 (work Facebook already does)

Facebook should conduct a proportionality analysis to identify a range of less intrusive measures than removing the content, including labeling content, introducing friction to posts to prevent interactions or sharing, and downranking. All these enforcement measures should be clearly communicated to all users, and subject to appeal.

Our commitment: Our global expert stakeholder consultations have made it clear that in the context of a health emergency, certain types of health misinformation do lead to imminent physical harm. Because of this, we remove content from our platform that is likely to lead to harm.

That said, for content containing COVID-19 misinformation where a potential for physical harm is identified but is not imminent, we’ll continue to use measures less intrusive than removal. We’ll also continue working with fact-checkers to assess potential misinformation and to reduce the distribution of false content.

Considerations: In our response to 2020-006-FB-FBR-7, we explained that we remove certain content from the platform because our global stakeholder consultations have made it clear that imminent physical harm can result from misinformation concerning a health emergency like the COVID-19 pandemic. For example, we know from our work with the World Health Organization and other public health authorities that if people think there is a cure for COVID-19, they are less likely to follow safe health practices, like social distancing or mask-wearing. Exponential viral replication rates mean one person’s behavior can transmit the virus to thousands of others within a few days.
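To make the compounding concrete, here is a purely illustrative calculation; the reproduction number and generation time are hypothetical values chosen for arithmetic clarity, not epidemiological estimates.

```python
# Purely illustrative arithmetic: parameters are hypothetical.
def cumulative_infections(r: float, generations: int) -> int:
    """Total people infected after `generations` rounds of spread,
    starting from one index case, if each case infects `r` others."""
    total, current = 1.0, 1.0
    for _ in range(generations):
        current *= r
        total += current
    return round(total)

# With a hypothetical r = 4 and a 3-day generation time:
print(cumulative_infections(r=4, generations=4))  # 341 after ~12 days
print(cumulative_infections(r=4, generations=6))  # 5461 after ~18 days
```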

Proportionality is part of our existing strategy to fight health misinformation. When content spreading COVID-19 misinformation does not reach that threshold of imminent physical harm, our responses are less intrusive than removal. But when content does reach the threshold of imminent harm, we remove it from our platform.

Accordingly, our approach to combating misinformation relating to the COVID-19 pandemic involves a range of proportional measures. First, we label content we believe is related to COVID-19 and COVID-19 vaccines, as well as specific sub-topics such as vaccine safety. In these labels, we provide links to authoritative external health resources, such as the World Health Organization.

In addition, we work with our network of independent third-party fact-checking partners to quickly assess content that may contain misinformation. We take all of the following enforcement actions when a fact-checker rates a piece of content as false (a simplified sketch of this flow appears after the list):

  • We apply a warning label to the content that includes a link to the fact-checker’s article debunking the misinformation.

  • We reduce the content’s distribution so that fewer people see it.

  • We notify anybody who previously shared the content, or who tries to share it going forward, that the information is false.

  • We use automation and human review to scale the impact of these fact-checkers by detecting identical or nearly identical pieces of content, applying labels to them, and reducing their distribution.
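To illustrate how these four actions compose, here is a minimal sketch in Python. Every identifier here (Post, enforce_false_rating, the 0.2 downranking factor, the stubbed detection and notification helpers) is hypothetical and for illustration only; none of it reflects Meta’s actual internal systems.

```python
# Hypothetical sketch of the enforcement flow listed above.
# All names and values are illustrative, not Meta's internal APIs.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    text: str
    labels: list = field(default_factory=list)
    distribution_multiplier: float = 1.0  # 1.0 = normal reach

def notify_sharers(post: Post) -> None:
    """Stub: notify past sharers (and warn future sharers) that the
    information in this post has been rated false."""
    print(f"notifying sharers of {post.post_id}")

def find_near_duplicates(post: Post, corpus: list) -> list:
    """Stub: stand-in for automated detection of identical or
    nearly identical pieces of content."""
    return [p for p in corpus if p is not post and p.text == post.text]

def enforce_false_rating(post: Post, debunk_url: str, corpus: list) -> None:
    """Apply the enforcement actions from the list above to a post
    rated false, and fan them out to its near-duplicates."""
    for p in [post] + find_near_duplicates(post, corpus):
        # 1. Warning label linking to the fact-checker's article.
        p.labels.append(f"False information. See {debunk_url}")
        # 2. Reduced distribution so fewer people see it.
        p.distribution_multiplier = 0.2  # illustrative factor
        # 3. Notification to past and future sharers.
        notify_sharers(p)
```

The fan-out to near-duplicates in the loop is the scaling step the last bullet describes: one fact-check propagates its label and downranking to every copy of the content that automation can match.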

In our Transparency Center, we provide an explanation of our approach to misinformation so users can understand our strategy. In addition, we are continuing to explore how we can provide users with more information when we take actions on their content.

In most cases, users can indicate to us when they disagree with a content decision on Facebook, including when we remove content under our COVID-19 misinformation policies. Additionally, when content that a user created receives a fact-check rating, the user can appeal it to the fact-checker directly. We are exploring additional ways to improve the appeals process for both content removals and fact-checked content.

Next steps: We will have no further updates on this recommendation.

Recommendation 2 (work Facebook already does)

Given the context of the COVID-19 pandemic, Meta should make technical arrangements to prioritize fact-checking of potential health misinformation shared by public authorities which comes to the company’s attention, taking into consideration the local context.

Our commitment: Pieces of content containing potential COVID-19 misinformation are already prioritized in the tool that our independent fact-checking partners use to decide which content to assess. To preserve the independence of our fact-checkers, however, we do not require them to review specific pieces of content.

Considerations: We partner with more than 80 independent fact-checking organizations covering more than 60 languages in more than 100 countries. Fact-checkers review a piece of content and rate its accuracy; this process occurs independently of Meta. Because the volume of content that fact-checkers could rate is large, they choose what to review according to a set of priorities. As we describe in the Help Center, fact-checking partners prioritize provably false claims, especially those that are timely or trending and important to the average person, which includes COVID-19 misinformation. However, we do not require our fact-checkers to rate specific pieces of content, as doing so would undermine their independence.

Fact-checkers use a customized tool to identify potential content to review. Content in this queue may be ranked according to factors like reach and subject matter — including whether a piece of content is health-related. The tool also allows fact-checkers to filter and sort content by topic, platform (e.g. Facebook or Instagram) or format (e.g. images or videos). Additionally, during high profile events or breaking news, we use keyword detection to group related content in one place, making it easy for fact-checkers to find. We use this feature to group content about COVID-19.
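As a rough illustration of how such a ranked, filterable queue might work, here is a hypothetical sketch; the field names, priority weights, and keyword list below are our own assumptions, not the actual tool’s design.

```python
# Hypothetical sketch of a fact-checking review queue: items ranked by
# reach with a boost for health topics, filterable by platform/format,
# plus keyword grouping for breaking-news events. Illustrative only.
from dataclasses import dataclass

@dataclass
class QueueItem:
    content_id: str
    reach: int        # e.g., estimated number of views
    topic: str        # e.g., "health"
    platform: str     # "facebook" or "instagram"
    fmt: str          # "image", "video", ...
    text: str

COVID_KEYWORDS = ("covid", "coronavirus", "lockdown", "vaccine")

def priority(item: QueueItem) -> float:
    """Rank by reach, boosting health-related content (made-up weight)."""
    return item.reach * (2.0 if item.topic == "health" else 1.0)

def build_queue(items, platform=None, fmt=None):
    """Filter by platform and/or format, then sort highest priority first."""
    selected = [i for i in items
                if (platform is None or i.platform == platform)
                and (fmt is None or i.fmt == fmt)]
    return sorted(selected, key=priority, reverse=True)

def covid_group(items):
    """Keyword detection that groups related content in one place."""
    return [i for i in items
            if any(k in i.text.lower() for k in COVID_KEYWORDS)]
```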

Through these features, and the strategic priority we’ve placed on COVID-19, any content eligible for fact-checking that contains potential COVID-19 misinformation, whether shared by public health authorities or others, is already prioritized.

In addition to these processes, we have also made several investments to provide additional support for fact-checkers and to connect them to the resources they need to better address health misinformation. For example:

  • We launched a $1 million emergency grant program to support fact-checkers tackling COVID-19, in partnership with the International Fact-Checking Network (IFCN).

  • We launched a year-long health fellowship program with 10 fact-checking organizations participating in our fact-checking program, helping them bring on new team members and strengthen how they fact-check health misinformation. Participating organizations are based in Africa, Asia, Europe, India, Latin America and the Middle East.

  • We recently launched a new partnership with the Digital Health Lab at Meedan, a global technology nonprofit, to support fact-checkers in fighting health misinformation online. Meedan’s Digital Health Lab will facilitate a series of virtual training sessions between its team of doctors, scientists, and health experts and Meta’s third-party fact-checking partners.

Next steps: We will have no further updates on this recommendation.

Recommendation 3 (implementing fully)

Facebook should provide more transparency within the False News Community Standard regarding when content is eligible for fact-checking, including whether public institutions' accounts are subject to fact-checking.

Our commitment: We will update the False News section of the Community Standards with more information about how the fact-checking program works.

Considerations: In our Transparency Center, we provide information about how our fact-checking program works, including a description of what is prioritized and what is ineligible for fact-checking. A more detailed description is available in our Publisher Help Center. As a result of this recommendation, we will update the False News section of the Community Standards with links to both resources.

As we explain in the Publisher Help Center:

“[O]pinion and speech from politicians is not eligible to be fact-checked...In evaluating when this applies, we ask our fact-checking partners to look at politicians at every level. We define a “politician” as candidates running for office, current office holders — and, by extension, many of their cabinet appointees — along with political parties and their leaders. In some cases, we ask fact-checkers to use their expertise and judgment to determine whether an individual is a politician, like in the case of a part-time elected official.”

Whether a “public institution” is associated with a “politician” depends on local context. We rely on the expertise of our independent fact-checkers to make this determination.
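Read together, the quoted definition and the local-context caveat amount to a simple eligibility gate with edge cases deferred to fact-checker judgment. A minimal sketch, with hypothetical role names mirroring the quoted categories:

```python
# Hypothetical illustration of the quoted eligibility rule. The role
# names mirror the Publisher Help Center definition; the code is ours.
POLITICIAN_ROLES = {
    "candidate",
    "current_office_holder",
    "cabinet_appointee",
    "political_party",
    "party_leader",
}

def eligible_for_fact_checking(author_role: str) -> bool:
    """Opinion and speech from politicians is not eligible to be
    fact-checked. Ambiguous roles (e.g., a part-time elected official)
    are left to fact-checker judgment rather than decided here."""
    return author_role not in POLITICIAN_ROLES

print(eligible_for_fact_checking("state_medical_entity"))  # True
print(eligible_for_fact_checking("candidate"))             # False
```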

Next steps: We will update the False News section of the Community Standards with these changes by the end of this month.