JAN 19, 2022
Sometimes, the meaning of a piece of content is immediately obvious to a person but less clear to technology. To keep people safe, Meta needs to train artificial intelligence on how to detect violating posts.
For example, the following content combines text and images. Two of the images are good-natured; the other two are potentially mean-spirited.
Without proper training, most AI struggles to make these distinctions. It either reads the text and determines the literal meaning of the words, or it looks at the image and determines the general meaning of the photo's subject. People, on the other hand, instinctively pair the text and the image to understand the content.
One way we address this is by training our technology to first look at all the components of a post and only then determine its true meaning. This can go a long way toward helping AI more accurately detect what a person sees when viewing the same post.
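The idea of fusing modalities before classifying can be sketched in a few lines. This is a deliberately simplified illustration, not Meta's actual model: the toy encoders and the `classify` function below are hypothetical stand-ins for large trained neural networks, but the structure (encode each modality, concatenate, then score the combined vector) reflects the approach described above.

```python
import numpy as np

# Seeded generator so the toy image "encoder" is deterministic.
rng = np.random.default_rng(0)

def encode_text(text: str, dim: int = 8) -> np.ndarray:
    """Toy text encoder: hash characters into a fixed-size vector."""
    vec = np.zeros(dim)
    for i, ch in enumerate(text):
        vec[i % dim] += ord(ch)
    return vec / (np.linalg.norm(vec) + 1e-9)

def encode_image(pixels: np.ndarray, dim: int = 8) -> np.ndarray:
    """Toy image encoder: random projection of flattened pixels."""
    proj = rng.standard_normal((dim, pixels.size))
    vec = proj @ pixels.ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)

def classify(text: str, pixels: np.ndarray, weights: np.ndarray) -> float:
    """Fuse both modalities first, then score the combined vector.

    Returns a violation probability in [0, 1]. A text-only or
    image-only model would score each modality in isolation and
    miss meaning that only emerges from the pairing.
    """
    fused = np.concatenate([encode_text(text), encode_image(pixels)])
    return float(1.0 / (1.0 + np.exp(-weights @ fused)))  # sigmoid
```

The key design point is that the classifier's weights operate on the joint representation, so it can learn interactions between what the text says and what the image shows, rather than judging each in isolation.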
We also use a system that guides AI to learn directly from millions of current pieces of content and helps pick training data that reflects our goals. This is different from typical AI systems that rely on fixed data for training. Using this method helps us better protect people from hate speech and content that incites violence.
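One common way to pick training data that reflects a goal, rather than training on a fixed dataset, is uncertainty sampling: prioritize current content the model is least sure about, so labeling effort goes where it improves the model most. The sketch below illustrates that general technique; the function name and strategy are illustrative assumptions, since the post does not describe Meta's selection system at this level of detail.

```python
def select_training_data(posts, score_fn, budget):
    """Pick the posts the current model is least confident about.

    posts:    iterable of content items
    score_fn: maps a post to a violation probability in [0, 1]
    budget:   how many items can be labeled for training

    Illustrative uncertainty-sampling sketch, not Meta's actual system.
    """
    # Uncertainty = closeness of the score to the 0.5 decision boundary.
    ranked = sorted(posts, key=lambda p: abs(score_fn(p) - 0.5))
    return ranked[:budget]
```

Because selection is driven by the live model's scores over current content, the training set shifts as both the model and the content landscape change, which is the contrast the paragraph above draws with fixed training data.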
We still have work to do, but this training will help our technology continue to improve and better understand the true meaning of multimodal content.