AUG 16, 2022
Today, the Oversight Board selected a case appealed by a Facebook user regarding a video showing an edited version of the Disney cartoon "The Pied Piper," posted by a Facebook page that describes itself as a news portal. The video depicts the Croatian city of Knin as being overrun by rats, which are driven out by a piper from the Croatian village of Čavoglave playing a folk song well known in the Western Balkans.
Upon initial review, we found the content to be non-violating and left it up. However, upon further review, we determined that the post did in fact violate our policy on Hate Speech, as laid out in the Facebook Community Standards, and removed it. Meta takes down content that targets a person or group of people based on their race, ethnicity and/or national origin with "dehumanising speech or imagery in the form of comparisons, generalisations or unqualified behavioural statements (in written or visual form) to or about: animals that are culturally perceived as intellectually or physically inferior." In light of the case's historical context, we determined that the content contained a direct attack, comparing Serbs to rats.
We will implement the board's decision once it has finished deliberating, and we will update this post accordingly. Please see the board's website for the decision when they issue it.
We welcome the Oversight Board’s decision today on this case. The board overturned Meta’s original decision to leave the content on the platform. However, as Meta ultimately removed the content, no further action will be taken.
After reviewing the recommendations the board issued alongside its decision, we will update this page.
Meta should clarify the Hate Speech Community Standard and the guidance provided to reviewers, explaining that even implicit references to protected groups are prohibited by the policy when the reference would reasonably be understood. The board will consider this recommendation implemented when Meta updates its Community Standards and the Internal Implementation Standards provided to content reviewers to incorporate this revision.
Our commitment: We will update our Community Standards and internal policy guidance to clarify our approach to implicit hate speech, by which we mean hate speech that requires context to interpret. We will explain that it will only be removed when we can reasonably understand the user’s intent.
Considerations: We believe that people use their voice and connect most freely when they don’t feel attacked on the basis of who they are. For that reason, we don’t allow hate speech on our platforms. However, in order to review the volume of expression that people who use our platforms share every day, we have to apply a high-capacity, high-consistency approach. This is why we only remove content when we can reasonably conclude that it contains a hate speech attack based on the context.
Our reviewers make decisions based on the letter of our hate speech policy without adding their own viewpoints. Our at-scale reviewers may escalate questions for expert review where the application of our policy is ambiguous or requires additional context. Content that may contain implicit hate speech, which often uses ambiguous language or requires additional context to interpret, may qualify for this kind of escalation.
The challenges of addressing implicit hate speech can be illustrated with these examples: One post shows an image of a Neo-Nazi rally with the caption, "wow." A second post shows a meme with two images side-by-side, one showing Adolf Hitler surrounded by German children and the other showing Angela Merkel surrounded by Syrian refugees with a caption that reads, "Germany then vs Germany now." In both cases, the intent behind the content is ambiguous: it could be hate speech or it could be speech that condemns hate. When we are unable to determine whether a person’s speech is hate speech, we leave that expression on our platforms.
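To make this review flow concrete, the following is a minimal sketch in Python of the two-tier decision process described above. The names, signals and structure are illustrative assumptions for this post, not our actual review tooling.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Decision(Enum):
    REMOVE = "remove"
    LEAVE_UP = "leave_up"
    ESCALATE = "escalate"

@dataclass
class ReviewSignals:
    explicit_attack: bool         # violates the letter of the policy on its face
    needs_context: bool           # e.g. an implicit reference to a protected group
    intent_clear: Optional[bool]  # resolved only during expert review

def at_scale_review(signals: ReviewSignals) -> Decision:
    """First-pass review applies the letter of the policy without reviewer viewpoints."""
    if signals.explicit_attack:
        return Decision.REMOVE
    if signals.needs_context:
        # Ambiguous or context-dependent content goes to expert review.
        return Decision.ESCALATE
    return Decision.LEAVE_UP

def expert_review(signals: ReviewSignals) -> Decision:
    """Escalated review removes content only when intent is reasonably clear."""
    if signals.intent_clear:
        return Decision.REMOVE
    # When intent cannot be determined, the expression stays up.
    return Decision.LEAVE_UP
```

As in the examples above, ambiguity in this sketch never defaults to removal: content comes down only when a reviewer can reasonably conclude that it contains a hate speech attack.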
Next steps: We will add additional language to our Community Standards and policy guidance to clarify that our approach to implicit hate speech is to remove it if it is escalated and we can reasonably understand the user’s intent. We will report on our progress in a future Quarterly Update.
In line with Meta's commitment following the "Wampum belt" case (2021-012-FB-UA), the board recommends that Meta notify all users who have reported content when, on subsequent review, it changes its initial determination. Meta should also disclose to the public the results of any experiments assessing the feasibility of introducing this change. The board will consider this recommendation implemented when Meta shares information regarding the relevant experiments and, ultimately, the updated notification with the board and confirms that it is in use in all languages.
Our commitment: We currently inform all people on our platforms when we change, or maintain, a decision on content they have posted or reported. We will soon add more detail to the messages they receive so that they know when that change is related to the Oversight Board’s review process.
Considerations: Meta sends messages to the people on our platforms when we make a decision about content they have reported, posted or appealed. Right now, these messages are largely standardised, although we also notify people who have appealed content to the Oversight Board when the status of that content changes as a result of an Oversight Board decision. If we change our enforcement decision on a piece of content due to the Oversight Board’s review, but prior to the Oversight Board’s decision, people who reported, posted, or appealed that content receive standardised messaging that does not mention the board’s involvement. Consistent with our response to Recommendation #1 in the Depicting Indigenous Artwork and Discussing Residential Schools case (2021-012-FB-UA), we are updating our messaging to indicate that the Oversight Board’s review process helped us determine that our enforcement decision was incorrect. These messages will be sent both to people whose posts are under review and to people who reported or appealed the content. We plan to launch this update by the end of the year. The notices will eventually be translated into all the languages that Meta supports.
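As a rough illustration of this messaging change, the sketch below selects between the standardised notice and the updated, board-aware notice and delivers it to everyone involved with the content. All names and message text here are hypothetical placeholders, not the wording people will actually see.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EnforcementChange:
    content_id: str
    poster_id: str
    reporter_ids: List[str]  # people who reported or appealed the content
    via_board_review: bool   # change prompted by the Oversight Board's review

def build_message(change: EnforcementChange) -> str:
    base = "We have updated our decision on content you posted or reported."
    if change.via_board_review:
        # Added detail: acknowledge that the board's review process
        # helped surface the incorrect enforcement decision.
        return base + (" The Oversight Board's review process helped us "
                       "determine that our original decision was incorrect.")
    return base

def send_notification(user_id: str, message: str) -> None:
    print(f"to {user_id}: {message}")  # stand-in for the real delivery channel

def notify_all(change: EnforcementChange) -> None:
    message = build_message(change)
    # Both the person who posted the content and everyone who
    # reported or appealed it receive the updated message.
    for user_id in [change.poster_id, *change.reporter_ids]:
        send_notification(user_id, message)
```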
Next steps: We will update our messaging to acknowledge when Meta’s enforcement decision changes as a result of the Oversight Board’s review process by the end of the year. These messages will be sent to both people who posted the content under review and those who reported or appealed it. We will share updates on this work in future Quarterly Updates.