How Meta prioritises content for review
JAN 19, 2022
Whether potentially violating content is reported by people or detected by Meta's technology, automation helps us quickly route the content to reviewers who have the right subject matter and language expertise.
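To illustrate the idea, here is a minimal sketch of expertise-based routing. The Report fields, queue names and route_report function are hypothetical assumptions for illustration, not Meta's actual system.

```python
# A minimal sketch of routing reports to reviewers with matching
# language and subject-matter expertise. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Report:
    content_id: str
    language: str  # e.g. "pt" for Portuguese
    topic: str     # e.g. "hate_speech", "spam"

# Hypothetical mapping from (language, topic) to a specialist reviewer queue.
REVIEWER_QUEUES = {
    ("pt", "hate_speech"): "pt_hate_speech_reviewers",
    ("en", "spam"): "en_spam_reviewers",
}

def route_report(report: Report) -> str:
    """Send a report to reviewers with the right language and subject expertise."""
    return REVIEWER_QUEUES.get(
        (report.language, report.topic),
        "generalist_reviewers",  # fallback when no specialist queue exists
    )
```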
We then use technology to rank and prioritise content so that our review teams can focus on the most important cases first. This includes content with the potential for offline harm, such as posts related to terrorism and suicide, and viral violating content that could reach a large audience.
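As an illustration of this kind of ranking, here is a minimal sketch that combines potential harm and potential reach into a single queue score. The severity weights, the virality cap and the priority_score function are assumptions, not the real model.

```python
# A minimal sketch of review prioritisation. Weights and formula are
# hypothetical; Meta's actual ranking system is not public.
SEVERITY_WEIGHTS = {
    "terrorism": 1.0,            # potential offline harm ranks highest
    "suicide_self_injury": 1.0,
    "hate_speech": 0.7,
    "spam": 0.2,
}

def priority_score(topic: str, predicted_views: int) -> float:
    """Rank by potential harm, boosted by potential reach."""
    severity = SEVERITY_WEIGHTS.get(topic, 0.5)
    # Cap raw view counts so virality boosts a case but cannot swamp severity.
    virality = min(predicted_views / 1_000_000, 1.0)
    return severity * (1.0 + virality)

# Reports are then reviewed in descending score order.
reports = [("terrorism", 1_000), ("spam", 5_000_000), ("hate_speech", 300_000)]
queue = sorted(reports, key=lambda r: priority_score(*r), reverse=True)
```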
To make sure that review teams spend their time on the right decisions, we're always improving our technology and processes.
To reduce harm in our community, our technology and human review teams are always working together. Here are some ways that reviewers, in tandem with technology, help strengthen our entire content enforcement system.
When reviewers make a decision about a piece of content, they're simultaneously training and refining our technology to help it identify other pieces of similar content over time. This human-technology feedback loop is vital to keeping our systems current.
When reviewing violating content, review teams manually label the policy guiding their decision, which means that they mark the policy that the content, account or behaviour violates. This important labelling data helps us improve the quality of our artificial intelligence algorithms that proactively search for harmful content.
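To make the feedback loop concrete, here is a minimal sketch of how a reviewer's labelled decision could become a supervised training example for proactive detection models. The ReviewDecision fields and to_training_example function are hypothetical.

```python
# A minimal sketch of the human-technology feedback loop described above.
# Record fields are assumptions; Meta's actual pipeline is not public.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewDecision:
    content_id: str
    violates: bool
    policy_label: Optional[str]  # e.g. "bullying_and_harassment", or None

def to_training_example(decision: ReviewDecision) -> dict:
    """Turn a reviewer's labelled decision into a supervised training example."""
    return {
        "content_id": decision.content_id,
        # The manually labelled policy becomes the target the classifier learns.
        "label": decision.policy_label if decision.violates else "non_violating",
    }

# Batches of these examples would periodically retrain the detection models,
# closing the loop between reviewers and technology.
```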
Our technology does well in two areas in particular: detecting repeated violations and identifying obviously graphic or extreme content. But when it's ambiguous, complex or nuanced whether our policies apply to a piece of content, reviewers tend to make better decisions than technology.
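As a final illustration, here is a minimal sketch of that division of labour: clear-cut cases are handled automatically, while ambiguous ones go to human reviewers. The thresholds and the triage function are hypothetical assumptions, not Meta's actual rules.

```python
# A minimal sketch of splitting work between automation and reviewers,
# assuming a model-produced violation probability. Thresholds are hypothetical.
AUTO_ACTION_THRESHOLD = 0.97  # near-certain violations (e.g. known re-uploads,
                              # obviously graphic content) can be auto-actioned
AUTO_CLEAR_THRESHOLD = 0.03   # near-certain non-violations need no review

def triage(violation_probability: float) -> str:
    """Auto-handle clear-cut cases; send ambiguous ones to human reviewers."""
    if violation_probability >= AUTO_ACTION_THRESHOLD:
        return "remove_automatically"
    if violation_probability <= AUTO_CLEAR_THRESHOLD:
        return "no_action"
    # Ambiguous, nuanced cases are where humans outperform the models.
    return "send_to_human_review"
```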