Case on a veiled threat based on religious beliefs

Updated June 12, 2023

2020-007-FB-FBR

On December 3, 2020, the Oversight Board selected a case referred by Meta regarding a post in a group that appears to exist for Muslims in India. The post contains a statement about a sword being taken from its scabbard if people speak against the prophet, and it also references President Emmanuel Macron of France. We deemed this post a veiled threat and removed it for violating our policy on Violence and Incitement, as laid out in the Facebook Community Standards.

Meta referred this case to the board as an example of a challenging decision about statements that may incite violence even when they are not explicit. The case also highlights an important tension we face when addressing religious speech that could be interpreted as a threat of violence.

Case decision

On February 12, 2021, the board overturned Meta's decision in this case. We acted to comply with the board's decision and have reinstated the content.

Recommendations

On March 11, 2021, Meta responded to the board's recommendation for this case. We are committed to taking action on the recommendation.

Recommendation 1 (committed to action)

Provide people with additional information regarding the scope and enforcement of restrictions on veiled threats. This would help people understand what content is allowed in this area. Meta should make its enforcement criteria public. These criteria should take into account the intent and identity of the person, as well as their audience and the wider context.

Our commitment: We commit to adding language to the Violence and Incitement Community Standard to make it clearer when we remove content for containing veiled threats.

Considerations: Facebook removes explicit statements that incite violence under our Violence and Incitement Community Standard. Facebook also removes statements that are not explicit when they act as veiled or implicit threats. The language we will add to our Community Standards will elaborate on the criteria we use in this policy to evaluate whether a statement is a coded attempt to incite violence.

In its enforcement of this policy, Facebook currently does not directly use the identity of the person who shared the content or the content's full audience as criteria for assessing whether speech constitutes a veiled threat, so the added language will not include such criteria. As the board notes, we rely on our trusted partner network to tell us when content is potentially threatening or likely to contribute to imminent violence or physical harm, so it is possible that these partners use such signals in their assessments.

Next steps: We will add the language described above to the Violence and Incitement Community Standard within a few weeks.