What do we restrict?
Our terms and policies define what is and isn't allowed on Facebook and Instagram. You can find further information on the restrictions we may impose on your use of Meta services in the Facebook Terms of Service, Instagram Terms of Use, Facebook Community Standards, Instagram Community Guidelines, Advertising Standards, Commerce Policies, and some of our other policies.
If we determine content goes against our terms and policies, we take action on it. For example, we do not allow content that incites violence or criminal behavior (such as content promoting, supporting or praising dangerous organizations), compromises people's safety (such as bullying and harassment, suicide and self-injury, and child or adult sexual exploitation), is objectionable (such as hate speech), is inauthentic (such as spam, misinformation or fake profiles), or violates someone else's intellectual property. More information on these policies is provided in the Facebook Community Standards and Instagram Community Guidelines.
We may also impose restrictions in certain other circumstances. To learn more, see the following:
Intellectual Property: Copyright and Trademark
Facebook Content Monetization Policies and Instagram Content Monetization Policies
Facebook Partner Monetization Policies and Instagram Partner Monetization Policies
Restrictions based on local law
Content Distribution Guidelines
How do we apply our policies?
There are various procedures, measures and tools that we may use to moderate content on our services on the basis of our terms and policies.
We have content enforcement systems in place to take action when we determine something goes against our terms and policies. These form part of a three-part approach (remove, reduce, inform):
Remove: We remove content that goes against our policies as soon as we become aware of it.
Reduce: Some problematic content can create a negative experience for people on Facebook and Instagram. We'll often reduce the distribution of this content, even when it doesn’t quite meet the standard for removal under our policies.
Inform: When content is potentially sensitive or misleading, we sometimes add a warning or share additional information from independent fact-checkers.
To learn more about our enforcement policies, see here.
To implement this approach, Meta uses both technology and human review teams to detect, review and take action on millions of pieces of content (which includes accounts) every day on Facebook and Instagram.
Technology
Technology, including machine learning, is central to our content review process. Our technology proactively detects and removes the vast majority of violating content before anyone reports it. Our technology automates decisions for certain areas where content is highly likely to be violating and can take action on a new piece of content if it matches or comes very close to another piece of violating content. You can find more on how our enforcement technology works here.
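To make the matching idea concrete, here is a minimal sketch of how a new piece of content can be compared against a bank of fingerprints from previously actioned content. This is illustrative only, not Meta's actual system: the hashing scheme, threshold and function names are all assumptions.

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    """Toy SimHash over word tokens: similar texts yield nearby fingerprints."""
    counts = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if counts[i] > 0)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Hypothetical threshold for "matches or comes very close to" known content.
NEAR_MATCH_BITS = 6

def matches_known_violation(content: str, violation_bank: list[int]) -> bool:
    fp = simhash(content)
    return any(hamming(fp, known) <= NEAR_MATCH_BITS for known in violation_bank)

# The bank would be seeded from content already judged violating.
bank = [simhash("example of a post that was previously removed")]
print(matches_known_violation("example of a post that was previously removed", bank))  # True: exact re-upload
print(matches_known_violation("a completely unrelated and benign post", bank))         # False
```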
Technology also helps prioritise review. Whether content is reported by people or detected by Meta’s technology, automation helps us quickly route the content to human reviewers who have the right subject matter and language expertise. We then use technology to rank and prioritise content so our review teams can focus on the most important cases first. You can find more on how technology helps prioritise review here.
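As a simplified illustration of routing and ranking, reported or detected content might be placed in reviewer pools keyed by policy area and language, ordered by a severity and reach score. The signals and weights below are invented for the example; the real prioritisation criteria are not public.

```python
import heapq
from dataclasses import dataclass, field

# Invented severity weights for illustration only.
SEVERITY = {"child_safety": 100, "violence": 80, "hate_speech": 60, "spam": 10}

@dataclass(order=True)
class ReviewItem:
    sort_key: float                     # negative score: heapq pops smallest first
    content_id: str = field(compare=False)

queues: dict[tuple[str, str], list[ReviewItem]] = {}

def route(content_id: str, policy_area: str, language: str, predicted_views: int) -> None:
    """Send the item to the pool with matching expertise, ranked so the
    most severe, widest-reaching cases surface first."""
    score = SEVERITY.get(policy_area, 1) * (1 + predicted_views / 1000)
    pool = queues.setdefault((policy_area, language), [])
    heapq.heappush(pool, ReviewItem(-score, content_id))

def next_case(policy_area: str, language: str) -> str | None:
    pool = queues.get((policy_area, language))
    return heapq.heappop(pool).content_id if pool else None

route("post-1", "violence", "en", predicted_views=50)
route("post-2", "violence", "en", predicted_views=5000)
print(next_case("violence", "en"))  # post-2: same policy area, far wider reach
```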
Human Review Teams
Our human review teams are located across the globe and review potential violations on Facebook and Instagram. They receive in-depth training and often specialise in certain policy areas and regions. When a piece of content requires further review, our technology sends it to a human review team to take a closer look and make the final decision. Our technology learns and improves from each decision. You can find more on how review teams work here.
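The division of labour between automation and reviewers, and the feedback loop from each decision, might be sketched like this. The thresholds, names and data structures are hypothetical, chosen only to show the shape of the flow.

```python
# Hypothetical confidence thresholds on a classifier's violation score.
AUTO_ACTION_AT = 0.95   # highly likely violating: technology acts directly
IGNORE_BELOW = 0.05     # highly likely benign: no action, no review

training_examples: list[tuple[str, bool]] = []  # grows with each human decision

def triage(content: str, violation_score: float) -> str:
    if violation_score >= AUTO_ACTION_AT:
        return "auto_remove"
    if violation_score <= IGNORE_BELOW:
        return "no_action"
    return "human_review"   # uncertain cases get a closer look

def record_human_decision(content: str, is_violating: bool) -> None:
    """Each reviewer decision becomes a labelled example, so the
    automated classifier can learn from and improve on it."""
    training_examples.append((content, is_violating))

print(triage("borderline post", violation_score=0.6))  # human_review
```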
How do we assess reports of content violating local law?
We also have specific procedures and processes in place to assess reports about content that may violate relevant local law.
When governments believe content on Facebook or Instagram goes against local law, they may report it for review. We may also receive court orders to restrict content, as well as reports from non-government entities and members of the public alleging that content is unlawful in a particular country.
We have a robust process for reviewing reports alleging that content on Facebook or Instagram goes against local law.
When we receive a report, we first review it against our policies, such as the Facebook Community Standards or Instagram Community Guidelines. If we determine that the content goes against our policies, we remove it. If it does not, we then conduct a review, in line with our commitments as a member of the Global Network Initiative and our Corporate Human Rights Policy, to confirm whether the report is legally valid.
In cases where we believe a report is not legally valid, is overly broad, or is inconsistent with international human rights standards, we may request clarification or take no action. Where a reporter frequently submits abusive reports alleging that content goes against local law, we may suspend processing of their reports in line with our Misuse Policy here.
Where we act against content on the basis of local law rather than our policies (such as the Facebook Community Standards or Instagram Community Guidelines), we generally restrict access to the content only in the jurisdiction where it is alleged to be unlawful, and in most cases we do not impose any other penalties or feature restrictions. We also notify the affected user.
When we act against Ads or Commerce content (such as Marketplace posts) on the basis of local law, we remove the content globally pursuant to our Advertising Policies and Commerce Policies, respectively.
When content violates intellectual property rights, we remove the content globally pursuant to our Facebook Terms of Service and Instagram Terms of Use.
Where a user has frequently posted illegal content and we have repeatedly restricted access to their content on the basis of local law, we may impose suspensions per our Misuse Policy here.
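The local-law flow described in this section can be condensed into a single decision sketch. Everything below (the class and function names, report fields and outcome labels) is illustrative shorthand for the prose above, not an actual interface.

```python
from dataclasses import dataclass

@dataclass
class Report:
    jurisdiction: str
    legally_valid: bool
    overly_broad: bool

@dataclass
class Content:
    kind: str                # e.g. "post", "ad", "commerce", "ip_claim"
    violates_policies: bool  # against Community Standards / Guidelines?

def handle_local_law_report(content: Content, report: Report) -> str:
    # Step 1: always check our own policies first.
    if content.violates_policies:
        return "remove globally"
    # Step 2: assess the report's legal validity and scope.
    if not report.legally_valid or report.overly_broad:
        return "request clarification or take no action"
    # Ads, commerce and intellectual property cases are removed globally.
    if content.kind in ("ad", "commerce", "ip_claim"):
        return "remove globally"
    # Otherwise, restrict only where the content is alleged to be unlawful,
    # and notify the affected user.
    return f"restrict access in {report.jurisdiction}; notify user"

print(handle_local_law_report(
    Content(kind="post", violates_policies=False),
    Report(jurisdiction="DE", legally_valid=True, overly_broad=False),
))  # restrict access in DE; notify user
```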
If you disagree with a decision Meta has taken relating to content, you can find out about your options to request a review here.