Oversight Board selects a case related to an Al Jazeera post on tensions between Israel and Palestine

UPDATED

JUN 12, 2023

2021-009-FB-UA

Today, the Oversight Board selected a case appealed by a Facebook user regarding a shared post from a verified Al Jazeera page. The original post includes quotes that are attributed to a spokesperson for the Al-Qassam Brigades, the military wing of Hamas. The user did not add anything substantive when they shared the post.

Upon initial review, Facebook took down this content for violating our policy on dangerous individuals and organizations, as laid out in the Facebook Community Standards. However, upon further review, we determined that we had removed this content in error and reinstated it.

We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision once it is issued.

Case decision

We welcome the Oversight Board’s decision today on this case. Facebook previously reinstated this content because it did not violate our policies and had been removed in error, so no further action will be taken on this content.

Once we have reviewed the recommendations the board provided alongside its decision, we will update this post.

Recommendations

On October 14, 2021, Facebook responded to the board’s recommendations for this case. We are implementing one recommendation fully, implementing one in part, and assessing the feasibility of the other two.

Recommendation 1 (assessing feasibility)

Add criteria and illustrative examples to its Dangerous Individuals and Organizations policy to increase understanding of the exceptions for neutral discussion, condemnation and news reporting.

Our commitment: We will add examples and language to our existing internal policy guidance for Dangerous Individuals and Organizations to help clarify how we enforce the exceptions for neutral discussion, condemnation, and news reporting in this policy area. We are also exploring ways of providing users with clearer guidance on non-violating content.

Considerations: In response to recommendation 2020-005-FB-UA-2 from the Nazi Quote case, we published clearer definitions of “praise,” “support,” and “representation” in the Dangerous Individuals and Organizations section of the Community Standards, and included examples of how we apply these terms as part of our content moderation.

Providing examples of allowed content in our internal policy guidance helps our reviewers apply our policies more consistently. As a result of recommendation 2021-006-IG-UA-5 from the case regarding the support of Abdullah Öcalan, we are working on providing additional detail in our internal policy guidance describing what “support” means, including examples of content that is allowed. In response to this recommendation, we will likewise update our internal policy guidance with additional detail and examples of non-violating content that references dangerous individuals and organizations to report on, condemn, or neutrally discuss them. As highlighted in our Community Standards, we design our policies to allow room for this discussion, but we do require clear indication of users’ intent.

Several of the board’s recommendations, including this one, have asked us to publish examples of non-violating content. In response, we are also exploring how to provide users with more clarity without compromising the effectiveness of our enforcement.

Next steps: We will add clarifying examples and language to the Dangerous Individuals and Organizations internal policy guidance in the first half of 2022. We will continue to explore the tradeoffs of publishing examples of non-violating content without compromising the effectiveness of our enforcement, and will share our progress in a future quarterly update.

Recommendation 2 (assessing feasibility)

Ensure swift translation of updates to the Community Standards into all available languages.

Our commitment: We are exploring ways we may be able to shorten the time it takes to publish updates to the Community Standards in the 47 languages in which it is currently available.

Considerations: The translation teams at Facebook are responsible for ensuring that user-facing material appears accurately, and in a timely manner, in the languages our users speak. These materials include, for example, Newsroom posts, the Community Standards, and in-product user experiences.

Today, we publish the Community Standards in 47 languages. We first make updates in English and then begin a process of translating these changes into other languages. Currently, we group translation requests and send them monthly to teams that translate the Community Standards into non-English languages. We do this to streamline our translation requests.

Typically, these specialized teams complete Community Standards translation requests in a week. This process can take longer, though, depending on the length and complexity of the update. While an update to the Community Standards is pending translation, we include a disclaimer at the top of the Community Standards in that language explaining that the most current version of the Community Standards is the English version, and it should be used to understand current policies.

Currently, it takes approximately four to six weeks from the time we publish a Community Standards update in English to when we complete the translation process for all languages; because the translation work itself typically takes about a week, most of that interval reflects the monthly batching cadence. We are exploring how we can shorten this timeline.

Next steps: We plan to explore potential changes to the translation processes before the end of this year, and will share our progress in a future quarterly update.

Recommendation 3 (implementing fully)

Engage an independent entity not associated with either side of the Israeli-Palestinian conflict to conduct a thorough examination to determine whether Facebook’s content moderation in Arabic and Hebrew, including its use of automation, has been applied without bias. The report and its conclusions should be made public.

Our commitment: We have asked BSR (Business for Social Responsibility) to perform human rights due diligence in line with this recommendation. The due diligence is framed using the criteria of the United Nations Guiding Principles on Business and Human Rights (UNGPs) to examine all salient human rights issues and to determine priorities for action. In accordance with the UNGPs, we will publish the insights and actions of this due diligence in the first quarter of 2022.

Considerations: Facebook is committed to giving people a voice. It is critical to this mission that we apply our Community Standards equally. Earlier this year, we launched a Corporate Human Rights Policy.

We have partnered with BSR, a non-profit organization with expertise in business and human rights, to conduct human rights due diligence of Facebook’s impacts during the intensified violence in Israel and Palestine in May and June. BSR will examine relevant internal Facebook sources and engage with affected stakeholders. We will implement the board’s recommendation in our due diligence, defining and prioritizing all salient human rights issues according to the guidance of the UN Guiding Principles on Business and Human Rights.

Consistent with our human rights policy and the UNGPs, we will publicly communicate the insights and actions of the due diligence in the first quarter of 2022, so that Facebook’s approach can be effectively evaluated.

Next steps: The due diligence process has already begun, and we will publish its insights and actions in the first quarter of 2022.

Recommendation 4 (implementing in part)

Formalize a transparent process on how it receives and responds to all government requests for content removal, and ensure that they are included in transparency reporting. The transparency reporting should distinguish government requests that led to removals for violations of the Community Standards from requests that led to removal or geo-blocking for violating local law, in addition to requests that led to no action.

Our commitment: In our response to recommendation 2021-006-IG-UA-11 from the case regarding the support of Abdullah Öcalan, we committed to significantly increasing transparency around government requests for content removals. In addition to the information we already provide on government requests where we restrict content in particular jurisdictions based on an alleged violation of local law, we will also provide information on government requests that led to content being removed for violations of the Community Standards, as well as on requests where we took no action.

We are committed to providing this information for all government requests where we are able to clearly identify the reporter as acting in an official capacity on behalf of a government or law enforcement agency.

Considerations: As noted in our responses to recommendations 2021-006-IG-UA-9 and 2021-006-IG-UA-11 from the case regarding the support of Abdullah Öcalan, we have a robust process for reviewing government requests for content removal or restriction, in line with the commitments we have made as a member of the Global Network Initiative and under our Corporate Human Rights Policy.

When we receive a government request to remove or restrict content, we first review it against the Community Standards. If we determine that the content violates our policies, we remove it. If the content does not violate our policies, we conduct a legal review to confirm whether the report is valid, along with human rights due diligence, and may restrict access to the content in the jurisdiction where it has been reported as unlawful. In these cases, we notify the impacted user that their content was restricted in response to a legal request, except where we are legally prohibited from doing so. We describe our process for reviewing government requests in detail in our Transparency Center, and publish a report detailing these instances.

In cases where we believe that reports are not legally valid, are overly broad, or are inconsistent with international human rights standards, we may request clarification or take no action. In all cases, we consider the impact our decisions will have on the availability of other speech via our products.
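To make the order of these checks concrete, here is a minimal sketch of the review sequence described above, written in Python. Every name in it (GovernmentRequest, violates_community_standards, and so on) is a hypothetical illustration of the steps in this post, not an actual Facebook system; the placeholder functions simply stand in for the real reviews.

# Illustrative only: a hypothetical sketch of the government-request review
# sequence described above. Placeholder functions stand in for the real reviews.
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    REMOVED_GLOBALLY = auto()  # content violated the Community Standards
    GEO_RESTRICTED = auto()    # legally valid local request; restricted in that jurisdiction only
    NO_ACTION = auto()         # invalid, overly broad, or inconsistent with human rights standards

@dataclass
class GovernmentRequest:
    content_id: str
    jurisdiction: str

def violates_community_standards(content_id: str) -> bool:
    return False  # placeholder: the Community Standards review comes first

def passes_legal_and_human_rights_review(request: GovernmentRequest) -> bool:
    return False  # placeholder: legal validity check plus human rights due diligence

def review_government_request(request: GovernmentRequest) -> Outcome:
    # Step 1: content that violates the Community Standards is removed outright.
    if violates_community_standards(request.content_id):
        return Outcome.REMOVED_GLOBALLY
    # Step 2: non-violating content named in a legally valid request may be
    # restricted only in the reporting jurisdiction, with notice to the user
    # unless notice is legally prohibited.
    if passes_legal_and_human_rights_review(request):
        return Outcome.GEO_RESTRICTED
    # Step 3: otherwise, seek clarification or take no action.
    return Outcome.NO_ACTION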

We are committed to providing transparency, in line with the board’s recommendation, on requests reviewed through this robust process. This includes requests received through our dedicated government escalation channels, through our Law Enforcement Requests Portal, and via court orders.

In some cases, governments, law enforcement agencies, or those acting on their behalf may report content in ways that do not allow us to clearly identify them as such, or identify whether they are acting in an official capacity. For example, a government official or law enforcement officer may use our in-product reporting tools to report content in the same way as any Facebook user. We are not able to distinguish these reports, as they are treated in the same way as any other user report.

Next steps: We are continuing our work to include information on content removed for a violation of our Community Standards in response to a government request, and on requests where no action was taken, in our public Transparency Center. We will track future progress on this recommendation under 2021-006-IG-UA-11.