Oversight Board Selects a Case Regarding a Photo of an Individual Killed in Ukraine During the Russian Invasion with Text Containing an Excerpt from a World War II-Era Poem

UPDATED

JUN 12, 2023

2022-008-FB-UA

Today, the Oversight Board selected a case appealed by a Facebook user regarding a post containing an image of an individual who was killed in the Ukrainian city of Bucha during the Russian invasion of Ukraine, accompanied by text containing an excerpt from a Russian poem. The caption discusses the Russian invasion and references a World War II-era Russian poem that includes a call for violence against the invading Nazi forces.

Upon initial review, Meta removed this content for violating our Hate Speech policy. Upon further review, however, we determined that we had removed the content in error and reinstated it with a warning screen over the image.

We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when they issue it.

Case decision

We welcome the Oversight Board’s decision today on this case. The board overturned Meta’s original decision to remove the content from the platform. Meta had previously restored the content and has complied with the board’s decision by immediately removing the warning screen.

In accordance with the bylaws, we will also initiate a review of identical content with parallel context. If we determine that we have the technical and operational capacity to take action on that content as well, we will do so promptly. For more information, please see our Newsroom post about how we implement the board’s decisions.

After reviewing the recommendations the board provided alongside its decision, we will update this page.

Recommendations

Recommendation 1 (implementing fully)

Meta should add to the public-facing language of its Violence and Incitement Community Standard that the company interprets the policy to allow content containing statements with "neutral reference to a potential outcome of an action or an advisory warning" and content that "condemns or raises awareness of violent threats". The Board expects that this recommendation, if implemented, will require Meta to update the public-facing language of the Violence and Incitement policy to reflect these inclusions.

Our commitment: We will add language to our Community Standards to clarify that our Violence and Incitement policy allows statements condemning, discussing neutrally, warning about or raising awareness of violent threats.

Considerations: Our Community Standards describe what is and isn’t allowed on Facebook and Instagram. Given the breadth of content we allow on our platforms, the Community Standards primarily focus on explaining the rationale behind our policies and specifying what we do not allow or limit access to. To protect voice and ensure users feel safe expressing themselves on our platforms, however, we do our best to clarify the types of content that may require additional context to enforce, as well as instances where content we allow could be confused with content we remove.

Under our Violence and Incitement policy, we remove language that incites or facilitates serious violence. We recognize, however, that people sometimes share content that references or contains violent threats in order to condemn or raise awareness of those threats. We allow this type of content on our platforms. Likewise, we allow people to post content that includes a warning about a potential action, as we believe that users should be empowered to voice concerns of this kind.

In the coming months, we will look for the best way to clarify this in our Community Standards, and will provide the board with an update in an upcoming Quarterly Report.

Recommendation 2 (implementing fully)

Meta should add to the public-facing language of its Violent and Graphic Content Community Standard detail from its internal guidelines about how the company determines whether an image "shows the violent death of a person or people by accident or murder". The Board expects that this recommendation, if implemented, will require Meta to update the public-facing language of the Violent and Graphic Content Community Standard to reflect this inclusion.

Our commitment: We will provide further details in our Community Standard on Violent and Graphic Content to clarify how we determine if content depicts “the violent death of a person or people by accident or murder.”

Considerations: The Violent and Graphic Content section of the Community Standards provides details about how we strike a balance between protecting user voice and protecting users from potentially disturbing imagery.

We generally place a warning screen over imagery that shows the violent death of a person or people by accident or murder to caution users about its graphic and potentially disturbing nature. Often, certain indicators within the imagery, such as the presence of blood or injuries on the victim, support a reasonable conclusion that the person suffered a violent death. Based on the board’s recommendation, we will add further details to our Community Standards clarifying how we identify whether imagery depicts violent death.

Recommendation 3 (no further action)

Meta should assess the feasibility of implementing customisation tools that would allow users over 18 years old to decide whether to see sensitive graphic content with or without warning screens, on both Facebook and Instagram. The Board expects that this recommendation, if implemented, will require Meta to publish the results of a feasibility assessment.

Our commitment: We will continue to explore opportunities for adult users to provide input and shape their experiences based on what they personally feel comfortable encountering online, but we believe warning screens are an important tool that allows users to make their own decisions about what they see in real time.

Considerations: We aim to ensure that everyone feels safe and supported on our platforms. Part of this work includes allowing users to decide whether or not to view sensitive content through the use of warning screens. Warning screens can be applied to eligible content that does not violate our Community Standards but may still be disturbing or sensitive, protecting the underlying expression while allowing individuals to choose whether they want to view the content. Internal safety and integrity research strongly supports their use. Warning screens are one of our online community’s preferred integrity interventions, with user research showing that the vast majority of users agree with the use of this soft action. The intent of the warning screen is not to punish the creator, but to protect viewers and give them control over their online experience.

Warning screens are an important tool for allowing users to better curate their experience on our platforms. They let users decide when and whether they want to engage with potentially harmful content by warning them about its sensitive nature and offering a single click-through option to proceed with viewing it. Although research insights from testing and product behavior showed a decrease in overall engagement with sensitive content, warning screens preserve the option for users to view that content, empowering them to shape their own experience.

Encountering uncovered sensitive content without warning can be unnecessarily distressing to users, and because harmful content can take so many different forms, a single option to remove all warning screens could leave users vulnerable to types of content they had not anticipated or chosen to view. Additionally, those who have provided feedback on warning screens find that this feature strikes a fair balance between protecting users’ experiences and allowing sensitive content to remain on the platform.

We see warning screens as more effective for allowing users to make real-time decisions for themselves than a static, one-off selection to remove all warning screens across our platforms. Tolerance for potentially harmful or borderline content may vary with the environment in which a user views it. In qualitative interviews, for example, US respondents told us that although they tolerate nudity on Instagram and Facebook in some contexts, the same content makes them less comfortable when they are scrolling around family members. Warning screens allow users to decide in real time whether to view content, taking into account factors like their surroundings.

Meta currently plans to allow users to provide feedback on warning screens applied to content that appears in their feed, including registering their disagreement with a warning screen’s application and their preference that similar content not come with a warning screen in the future. This work will fold into our broader efforts to ensure users can share feedback about their overall experience on our platforms. We have also conducted broad integrity research on the feasibility of more personalized warning screens and will update the board on any related product developments.

Additionally, the Oversight Board’s ability to shape our application of warning screens was recently expanded. Now, if the board determines that content should be restored or remain on our platforms, it can also issue a binding judgment about whether that content qualifies for the application or removal of a warning screen, adding another external accountability tool to ensure we apply screens accurately and effectively.