Oversight Board Selects Case Regarding A Veiled Threat of Violence Based on Lyrics from a Drill Rap Song

UPDATED

JUN 12, 2023

Today, the Oversight Board selected a case referred by Meta regarding an Instagram post that contained a video and text caption referencing a UK drill rap song.

In it, the rapper refers to a rival gang and a shooting incident in which the group was involved. Meta removed the content because it contained a veiled threat of violence that violated our policy on Violence and Incitement, as laid out in the Instagram Community Guidelines and Facebook Community Standards.

Meta referred this case to the board because we found it significant and difficult: it creates tension between our values of voice and safety.

Meta prohibits threats of violence on our platforms in order to prevent potential offline harm. This includes “veiled threats,” which are coded statements where the method of violence or harm is not explicitly articulated. It can sometimes be difficult to assess whether lyrics are exaggerated claims of aspirational violence or a genuine threat of violence.

We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision once the board issues it.

Case decision

We welcome the Oversight Board’s decision today on this case. The board overturned Meta’s original decision to remove the content from the platform. Meta will act to comply with the board's decision and reinstate the content where possible.

In accordance with the bylaws, we will also initiate a review of identical content with parallel context. If we determine that we have the technical and operational capacity to take action on that content as well, we will do so promptly. For more information, please see our Newsroom post about how we implement the board’s decisions.

After reviewing the recommendations the board issued alongside its decision, we will update this page.

Recommendations

Recommendation 1 (Implementing Fully)

Meta's description of its value of "Voice" should be updated to reflect the importance of artistic and creative expression. The Board will consider this recommendation implemented when Meta's values have been updated.

Our commitment: We will add clarifying language to the value of “Voice” in our Community Standards to reflect that we recognize the importance of protecting artistic and creative expression on our platforms.

Considerations: Protecting voice is a key company value and plays a vital role in informing our Community Standards. We recognize that people come to our platforms to express themselves in different ways, including via images, videos, and text. Art, in particular, has long been a powerful form of political expression, and our policies account for this.

Protecting artistic expression is a priority for us, and we often consult with experts in this space, including academics and artists, when developing and updating policies with potential to impact creative expression. Facebook and Instagram are meant to empower people to share their art and creativity—whether a professional photographer sharing images from their portfolio or a parent sharing a drawing created by their child. As with all forms of expression, however, we also have to assess any potential for harm. As indicated in our Community Standards, safety and dignity are also important considerations in our policies and may sometimes require us to remove or restrict content that has been reasonably determined to have the potential to cause harm.

Based on the board’s recommendation, in the next year we will add clarifying language to our values to reflect that artistic and creative expression are of critical importance to "Voice" on Meta's platforms. We will share updates on our progress in future Quarterly Updates on the Oversight Board.

Recommendation 2 (Implementing Fully)

Meta should clarify that for content to be removed as a "veiled threat" under the Violence and Incitement Community Standard, one primary and one secondary signal are required. The list of signals should be divided between primary and secondary signals, in line with the internal Implementation Standards. This will make Meta's content policy in this area easier to understand, particularly for those reporting content as potentially violating. The Board will consider this recommendation implemented when the language in the Violence and Incitement Community Standard has been updated.

Our commitment: We will include further clarifying information about existing signals to identify veiled threats in our Violence and Incitement policy, and are exploring additional policy development that may result in further updates and clarifications.

Considerations: Our policy on veiled threats is outlined in the Community Standards, and further details can be found in the overview of our 2020 Policy Forum, in which we explored an established framework for assessing veiled threats. We consistently reassess policies, definitions, and frameworks to account for changes in the way people use social media, shifts in norms and uses of language, and other evolving realities. We recognize the importance of making this information more easily accessible and are undertaking policy development to clarify the use of primary and secondary signals in our assessment of veiled threats. We will add further clarifications to our Violence and Incitement policy to provide additional information about Meta’s approach to veiled threats and any changes from our ongoing policy development. We will share progress on these changes in future Quarterly Updates.
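To make the primary/secondary requirement concrete, here is a minimal sketch of the decision rule as the recommendation describes it: content is treated as a veiled threat only when at least one primary and at least one secondary signal are present. The signal names below are hypothetical placeholders, not Meta's internal Implementation Standards.

```python
# Minimal sketch of the signal-based rule described above: a veiled threat
# requires at least one primary and at least one secondary signal.
# Signal names are illustrative placeholders, not Meta's internal taxonomy.

PRIMARY_SIGNALS = {
    "reference_to_past_violence",      # hypothetical primary signal
    "retaliatory_statement",           # hypothetical primary signal
}

SECONDARY_SIGNALS = {
    "target_identified",               # hypothetical secondary signal
    "local_context_confirms_threat",   # e.g., confirmation from a trusted partner
}

def is_veiled_threat(signals: set[str]) -> bool:
    """Return True only when both a primary and a secondary signal are present."""
    has_primary = bool(signals & PRIMARY_SIGNALS)
    has_secondary = bool(signals & SECONDARY_SIGNALS)
    return has_primary and has_secondary

# A post with only a primary signal would not meet the bar under this rule.
print(is_veiled_threat({"reference_to_past_violence"}))                       # False
print(is_veiled_threat({"reference_to_past_violence", "target_identified"}))  # True
```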

Recommendation 3 (Implementing in Part)

Meta should provide users with the opportunity to appeal to the Oversight Board for any decisions made through Meta's internal escalation process, including decisions to remove content and to leave content up. This is necessary to provide the possibility of access to remedy to the Board and to enable the Board to receive appeals for "escalation-only" enforcement decisions. This should also include appeals against removals made for Community Standard violations as a result of "trusted flagger" or government actor reports made outside in-product tools. The Board will consider this implemented when it sees user appeals coming from decisions made on escalation and when Meta shares data with the Board showing that for 100% of eligible escalation decisions, users are receiving reference IDs to initiate appeals.

Our commitment: People in the EU, UK, and India will soon be able to appeal eligible content decisions made on escalation to Meta and to the Oversight Board. For people using our services outside these countries, we plan to develop an alternate pathway that allows them to appeal board-eligible escalation takedown decisions that are not internally appealable, directly to the Oversight Board.

Considerations: We agree with the board that we should expand users' ability to appeal eligible content decisions, including those made on escalation, to the board. As with all levels of review, any content assessed through our internal escalation process is assessed against our Community Standards before we reach a decision. Content decisions made during an internal escalation process are often not appealable because they are made through the same kind of specialized contextual review that is otherwise available on appeal. Allowing users to appeal these decisions could let at-scale reviewers reverse them without the full context and inputs that informed the original escalation decision. There are also some policies for which we do not offer an appeal regardless of the review process, such as some violation types under our Child Sexual Exploitation policy.

We are working to enable people on our platforms in the EU, UK, and India to appeal eligible escalations internally and to the Oversight Board within the year. We will use the experience gained during this launch to estimate the volume of content appeals and the resources that a potential global expansion would require.

In the meantime, for users in the rest of the world, we plan to develop an alternative pathway that allows them to appeal eligible escalation takedown decisions directly to the board. For board-eligible content decisions made on escalation, we will send new messaging allowing users to appeal the decision directly to the board when there is no internal appeal available. We hope to implement this solution by the second half of 2023 and will update the board on our progress directly and in future Quarterly Updates.

Recommendation 4 (Implementing in Part)

Meta should implement and ensure a globally consistent approach to receive requests for content removals (outside in-product reporting tools) from state actors by creating a standardized intake form asking for minimum criteria, for example, the violated policy line, why it has been violated, and a detailed evidential basis for that conclusion, before any such requests are actioned by Meta internally. This contributes to ensuring more organized information collection for transparency reporting purposes. The Board will consider this implemented when Meta discloses the internal guidelines that outline the standardized intake system to the Board and in the Transparency Centre.

Our commitment: We are working to consolidate and standardize the intake of content reports by state actors. This work will be informed and affected by regionally specific compliance, practical, and legal obligations. However, we are committed to adopting a consistent approach to the extent feasible and will continue to provide updates in future Quarterly Updates.

Considerations: Moving towards more consistent intake of requests for content takedowns from state actors will allow us to ensure more standardized assessment of these requests across regions, languages, and populations. It will also help us improve measurement capacity for public reporting. In some countries, the intake methods we make available to state actors must be tailored to specific local circumstances, including local regulatory requirements and prevailing regional customs, making it difficult and impractical for Meta to adopt a globally uniform intake system.

We have begun the rollout of a platform that allows for a more consistent intake approach for incoming requests from some state actors and trusted partners. The platform includes a new specialized contact form for external users to make requests to Meta. This platform is designed to integrate into existing tracking tools and ultimately allow improved measurement, review, and reporting of these types of requests.
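As an illustration of what a standardized intake record might capture, the sketch below defines minimum fields along the lines the board describes (the policy line at issue, the reporter's rationale, and supporting evidence). The field names and validation rule are assumptions for illustration only, not Meta's actual contact form.

```python
# Hypothetical sketch of the minimum fields a standardized state-actor intake
# form might capture, per the board's recommendation. Field names are
# illustrative assumptions, not Meta's actual form or internal systems.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StateActorTakedownRequest:
    requesting_country: str         # country of the requesting agency
    government_agency: str          # name of the requesting agency
    content_url: str                # link to the reported content
    policy_line_cited: str          # the Community Standards line alleged to be violated
    violation_rationale: str        # why the reporter believes the policy is violated
    supporting_evidence: list[str]  # links or documents supporting that conclusion
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def meets_minimum_criteria(self) -> bool:
        """Flag requests missing any of the minimum criteria before they are actioned."""
        return bool(
            self.policy_line_cited.strip()
            and self.violation_rationale.strip()
            and self.supporting_evidence
        )
```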

Typically, Meta does not require users (including state actors) to provide evidence for why they have reported content that they think violates our policies. Depending on the intake mechanism, Meta may request certain additional information from users. In our current view, the reports themselves, along with any information we request, are sufficient for us to review the reported content against our Community Standards and take any appropriate action. We regularly review this procedure and make adjustments as necessary. However, we recognize the importance of ensuring that our processes for responding to government requests, including those made on the basis of Community Standards violations, are as consistent and equitable as possible across all languages and jurisdictions.

Despite the challenges we foresee in adopting a globally uniform intake form for takedown requests by state actors, Meta is aligned with the spirit of the board’s recommendation. We commit to continuing our efforts to centralize and standardize intake channels for requests by state actors and civil society, to the extent possible. We will provide further updates on our progress in a future Quarterly Update.

Recommendation 5 (Assessing Feasibility)

Meta should mark and preserve any accounts and content that were penalised or disabled for posting content that is subject to an open investigation by the Board. This prevents those accounts from being permanently deleted when the Board may wish to request content that is referred for decision or to ensure that its decisions can apply to all identical content with parallel context that may have been wrongfully removed. The Board will consider this implemented when Board decisions are applicable to the aforementioned entities and Meta discloses the number of said entities affected for each Board decision.

Our commitment: We are working to assess the feasibility of a mechanism that would allow us to extend the preservation period for accounts and content subject to open board investigations in selected cases, while upholding our obligations to user data privacy. Given the length of a typical board investigation, we expect this to prevent Meta from fully deleting accounts and content under active board investigation before the case is complete.

Considerations: As outlined in our Terms of Service and Privacy Policy, account data may be preserved in limited scenarios, including where we are legally obligated to do so, where preservation is necessary in relation to a legal claim or litigation, or where retention is necessary to investigate certain violations of our terms or policies. Consistent with our Privacy Policy, we endeavor to limit the amount of information subject to preservation and only preserve account data under exceptional circumstances with the appropriate legal permissions.

We have systems in place to prevent full deletion of disabled accounts and content for discrete periods of time, and our legal and product teams are assessing the feasibility of extending that hold from the point at which a case is selected by the Oversight Board. This complex effort is currently underway but involves technical challenges and regulatory obligations.
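As a rough, hypothetical sketch of the kind of hold extension under assessment: a standard deletion hold is lengthened once the Oversight Board selects a case, so the account or content cannot be fully deleted while the case remains open. The retention windows and function names below are assumptions for illustration only.

```python
# Hypothetical sketch of extending a deletion hold when the Oversight Board
# selects a case. Retention windows and names are illustrative assumptions,
# not Meta's actual systems or legal retention periods.

from datetime import datetime, timedelta, timezone

DEFAULT_HOLD = timedelta(days=30)      # assumed standard hold before full deletion
BOARD_CASE_HOLD = timedelta(days=180)  # assumed extended hold for selected board cases

def earliest_deletion_date(disabled_at: datetime, board_case_open: bool) -> datetime:
    """Return the earliest date the account or content may be fully deleted."""
    hold = BOARD_CASE_HOLD if board_case_open else DEFAULT_HOLD
    return disabled_at + hold

def may_fully_delete(disabled_at: datetime, board_case_open: bool) -> bool:
    """Full deletion is allowed only once the applicable hold has elapsed."""
    return datetime.now(timezone.utc) >= earliest_deletion_date(disabled_at, board_case_open)
```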

We will continue assessing the feasibility of marking and preserving the content and accounts at issue in active board cases while working towards this longer-term commitment, and we will update the board on these efforts in future Quarterly Updates.

Recommendation 6 (Implementing in Part)

Meta should create a section in its Transparency Centre, alongside its "Community Standards Enforcement Report" and "Legal Requests for Content Restrictions Report", to report on state actor requests to review content for Community Standard violations. It should include details on the number of review and removal requests by country and government agency, and the number of rejections by Meta. This is necessary to improve transparency. The Board will consider this implemented when Meta publishes a separate section in its "Community Standards Enforcement Report" on requests from state actors that led to removal for content policy violations.

Our commitment: As Meta has indicated in its responses to recent board recommendations from the Ocalan and Al Jazeera cases and reiterated in our Quarterly Updates, we are committed to increasing transparency around government requests, including government requests containing content that we review and may remove under our Community Standards.

Considerations: We currently publish regular transparency reports on content restrictions made on the basis of local law and government requests for user data. These reports offer the public information on the nature and extent of these requests and the strict policies and processes we have in place to handle them.

As indicated in our prior responses, we share the board’s belief in the value of additional transparency around government requests, including those made to review content under the Community Standards.

Several factors complicate this work.

Meta receives government requests to review or remove content via a range of methods that vary by country and regulator, from physical mail to submissions via a structured online form. While these requests are reviewed through a standardized, global process, we are actively continuing to develop the infrastructure and internal process to accurately and systematically track such requests that are actioned based on our Community Standards rather than local law.

As noted in our responses to prior related recommendations, in some instances, state actors may make requests to Meta in ways that do not allow for their identification: for example, a law enforcement officer may use in-product reporting functionality to report an alleged violation of the Community Standards in the same way any user may. We are not able to identify or track such requests; equally, such requests are not necessarily reviewed under escalation-only policies. Additionally, in some jurisdictions and/or with some requests, Meta may be prohibited by law from transparency regarding certain requests to review content.

Work to facilitate this increased transparency reporting, within these limitations, is underway. Accordingly, we are committing to implement this recommendation in part.

Because this is a large, complex project requiring significant infrastructure and process investments, we do not have a definitive timeline to complete it. We will share updates in a future Quarterly Update.

Recommendation 7 (Assessing Feasibility)

Meta should regularly review the data on its content moderation decisions prompted by state actor content review requests to assess for any systemic biases. Meta should create a formal feedback loop to fix any biases and/or outsized impacts stemming from its decisions on government content takedowns. The Board will consider this recommendation implemented when Meta regularly publishes the general insights derived from these audits and the actions taken to mitigate systemic biases.

Our commitment: We will develop a process to conduct randomized reviews of our intake and response to government requests to ensure that they are accurate, fair, and consistent with Meta’s policies and commitments.

Considerations: We are actively working to develop a process to re-review a randomized sample of government requests on an ongoing basis to ensure that our review of those requests was accurate, fair, and consistent with Meta’s policies and commitments. We envision that this approach may include, among other elements, a review of accuracy of any enforcement actions taken under our Community Standards, a review of any actions taken on the basis of local law, and assurance of consistency with Meta’s human rights commitments as a member of the Global Network Initiative.
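As a rough illustration of the randomized re-review described above, the sketch below draws a simple random sample of request records for audit. The sample rate, record format, and function name are assumptions for illustration, not Meta's actual audit process.

```python
# Rough sketch of drawing a random sample of government requests for
# re-review, as described above. Sample rate and record format are
# illustrative assumptions, not Meta's actual audit process.

import random
from typing import Sequence, TypeVar

T = TypeVar("T")

def audit_sample(requests: Sequence[T], sample_rate: float = 0.05, seed: int | None = None) -> list[T]:
    """Return a simple random sample of requests for accuracy and consistency re-review."""
    if not requests:
        return []
    rng = random.Random(seed)
    k = max(1, round(len(requests) * sample_rate))
    return rng.sample(list(requests), k)

# Example: re-review 5% of a quarter's requests.
quarter_requests = [f"request-{i:04d}" for i in range(1, 201)]
for record in audit_sample(quarter_requests, sample_rate=0.05, seed=7):
    print(record)
```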

We recognize that the ideal form of the board’s recommendation may involve complex, global, and ongoing social science research to assess possible systemic biases across an unspecified range of dimensions that are likely to be unique to each country and context. For many reasons, including the complexities described in response to recommendation #6, we do not believe that such a process is feasible to implement at scale at this time. However, we agree with the board that our goal should be to ensure that our interactions with state actors and content decisions prompted by requests for takedown should be consistent and equitable.

We aim to identify and prevent bias wherever possible, and believe that our development of a system of randomized re-review of our response to government requests will help us to do so.

We will provide further details in a future Quarterly Update as work to define these efforts continues.