JAN 19, 2022
We invest in artificial intelligence to improve our ability to detect violating content and keep people safe. Whether it’s improving an existing system or introducing a new one, these investments help us automate decisions on content so we can respond faster and reduce mistakes.
Here are some of the investments we’ve made in AI technology to improve how our tools understand content:
We developed a new architecture called Linformer, which analyzes content on Facebook and Instagram in different regions around the world.
We built a new system called Reinforced Integrity Optimizer, which learns from online signals to improve our ability to detect hate speech.
We incorporated language tools called XLM and XLM-R, which help us build classifiers that understand the same concept in multiple languages. This means that when our technology learns in one language, it can improve its performance in others, which is particularly useful for languages that are less common on the internet.
We built a "whole entity understanding" system, which analyzes content to help determine whether it contains hate speech.
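To make the idea of automated content decisions concrete, here is a minimal, purely illustrative sketch of how a classifier might score a post and trigger an action. The features, weights, and threshold below are invented for demonstration; production systems are far more sophisticated.

```python
import math

# Hypothetical features and weights, invented for illustration only.
WEIGHTS = {"slur_count": 2.0, "report_count": 1.5, "account_age_days": -0.01}
BIAS = -3.0

def violation_score(features: dict) -> float:
    """Return a probability-like score that a post violates policy."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to (0, 1)

def decide(features: dict, threshold: float = 0.5) -> str:
    # Automated decision: flag high-scoring content for removal.
    return "remove" if violation_score(features) >= threshold else "keep"

benign = {"slur_count": 0, "report_count": 0, "account_age_days": 900}
flagged = {"slur_count": 3, "report_count": 4, "account_age_days": 2}
```

In practice the "respond faster and reduce mistakes" goal comes from training such scorers on large labeled datasets rather than hand-set weights, and from routing borderline scores to human reviewers instead of acting automatically.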
The challenges of harmful content affect the entire tech industry and society at large. That’s why we open-source our technology to make it available for others to use. We believe being open and collaborative with the AI community will spur research and development, create new ways of detecting and preventing harmful content, and help keep people safe.
Here are some pieces of technology we've open-sourced in recent years, including two industry competitions we led:
XLM-R is a machine learning model that’s trained in one language and then used with other languages without additional training data. With people posting content in more than 160 languages on Meta technologies, XLM-R lets us use one model for many languages, instead of one model per language. This helps us more easily identify hate speech and other violating content across a wide range of languages and launch products in multiple languages at once. We open-sourced our models and code so the research community can improve the performance of their multilingual models.
Goal: To give people the best experience on our platforms, regardless of the language they speak.
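The zero-shot transfer idea behind XLM-R can be sketched in miniature: if words from different languages live in one shared embedding space, a classifier trained only on English labels can score text in another language with no additional training data. The tiny hand-aligned vectors below are invented for illustration; the real model learns such a space from text in over 100 languages.

```python
# Toy shared embedding space: English and Spanish words mapped by hand
# into the same 2-D space (invented values, for illustration only).
EMB = {
    "hate":   (0.90, 0.10), "attack": (0.80, 0.20),
    "friend": (0.10, 0.90), "love":   (0.20, 0.80),
    "odio":   (0.85, 0.15),   # Spanish "hate" — never seen in training
    "amigo":  (0.15, 0.85),   # Spanish "friend" — never seen in training
}

# Labeled training data in English only (1 = violating, 0 = benign).
TRAIN = [("hate", 1), ("attack", 1), ("friend", 0), ("love", 0)]

def centroid(label):
    vecs = [EMB[w] for w, y in TRAIN if y == label]
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

def classify(word):
    """Nearest-centroid classification in the shared space."""
    v = EMB[word]
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(v, c))
    return 1 if dist(centroid(1)) < dist(centroid(0)) else 0
```

Because the Spanish words sit near their English counterparts in the shared space, the English-trained classifier scores them correctly without ever seeing Spanish labels, which is the property that makes one model per many languages viable.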
Linformer is a transformer architecture that analyzes billions of pieces of content on Facebook and Instagram in different regions around the world. Linformer helps detect hate speech and content that incites violence. We published our research and open-sourced the Linformer code so other researchers and engineers could improve their models.
Goal: To create a new AI model that learns from text, images and speech and efficiently detects hate speech, human trafficking, bullying and other forms of harmful content.
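Linformer's core efficiency trick can be shown in a few lines: standard self-attention compares every token with every other token, costing O(n²) in sequence length n, while Linformer projects the keys and values down to a fixed length k, reducing the cost to O(n·k). The dimensions and random weights below are illustrative, not the published configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Q, K, V: (n, d) token matrices; E, F: (k, n) learned projections."""
    d = Q.shape[-1]
    K_proj = E @ K                        # (k, d): compress keys along sequence
    V_proj = F @ V                        # (k, d): compress values along sequence
    scores = Q @ K_proj.T / np.sqrt(d)    # (n, k) attention map, not (n, n)
    return softmax(scores, axis=-1) @ V_proj   # (n, d) output

rng = np.random.default_rng(0)
n, d, k = 128, 64, 16                     # illustrative sizes
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))
out = linformer_attention(Q, K, V, E, F)
```

The (n, k) attention map is what lets a system of this kind scan billions of pieces of content: memory and compute grow linearly with sequence length instead of quadratically.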
We created a competition with Microsoft, the Partnership on AI, and academics from several universities for technology that better detects when AI has been used to alter a video in order to mislead viewers. Our contribution to the Deepfakes Detection Challenge was commissioning a realistic data set, which the industry lacked, to help detect deepfakes.
Goal: To spur the industry to create new ways of detecting and preventing media manipulated with AI from being used to mislead people.
We created a competition with Getty Images and DrivenData to accelerate research on the problem of detecting hate speech that combines images and text. Our contribution to the Hateful Memes Challenge was creating a unique data set of over 10,000 examples so researchers could easily use them in their work.
Goal: To spur the industry to create new approaches and methods for detecting multimodal hate speech.
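A common baseline for the multimodal problem the Hateful Memes Challenge targets is "late fusion": embed the image and the text separately, concatenate the vectors, and score the combination with a single classifier, so the model can catch memes that are benign in either modality alone but harmful together. The sketch below uses invented random vectors and weights purely to show the shape of the approach.

```python
import numpy as np

def fuse_and_score(img_vec, txt_vec, W, b):
    """Concatenate modality embeddings and apply a linear classifier."""
    fused = np.concatenate([img_vec, txt_vec])   # joint image+text representation
    logit = W @ fused + b
    return 1.0 / (1.0 + np.exp(-logit))          # probability-like hatefulness score

rng = np.random.default_rng(1)
img = rng.standard_normal(4)   # stand-in for an image encoder's output
txt = rng.standard_normal(4)   # stand-in for a text encoder's output
W, b = rng.standard_normal(8), 0.0
p = fuse_and_score(img, txt, W, b)
```

Stronger entries in the challenge replaced this simple concatenation with models that attend across modalities, precisely because hateful memes often derive their meaning from the interaction between picture and caption.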