JAN 26, 2022
Our commitment to stakeholder engagement means addressing a number of essential questions, such as: How does Meta decide who to engage with? How do we find relevant experts? How do we make sure that vulnerable groups are heard?
There’s no simple formula for responding to these questions. But we’ve developed a structure and methodology for engaging stakeholders, centered around three core principles: inclusiveness, expertise and transparency.
Stakeholder engagement broadens our perspective and creates a more inclusive approach to policymaking.
Stakeholder engagement helps us better understand how our policies impact people and organizations. When we make decisions about what content to remove and what to leave up, we affect how people communicate with each other on Facebook. Not everyone will agree on where we draw the lines. But at a minimum, we need to understand the concerns of those who are impacted by our policies, whether they agree or disagree with them.
It’s particularly important that we hear the voices of stakeholders from marginalized communities. That’s why we reach out to a broad spectrum of stakeholders across the world. It's not enough to ask how our policies affect “people in general.” We need to understand how our policies will impact people who are particularly vulnerable by virtue of laws, cultural practices, poverty or other reasons that prevent them from speaking up for their rights.
The question of our impact plays out in many ways. While our policies are global, they affect people on a very personal level. Our policy development needs to reflect cultural sensitivity and a deep understanding of local context.
Stakeholder engagement gives us a tool to deepen our local knowledge and perspective so we can hear the voices across the policy spectrum we might otherwise miss.
Of course, it’s not always self-evident what “the spectrum” is. In many cases, our policies don't line up neatly with traditional political dichotomies, such as liberal versus conservative or civil libertarian versus control by the State. We talk to others in Meta's Policy and Research organizations and conduct our own research to identify a range of diverse stakeholders.
For example, in considering how our hate speech policy should apply to certain forms of gendered language, we spoke with academic experts, women's and digital rights groups and free speech advocates. Likewise, when considering our policy on adult nudity and sexual activity in art, we listened to family safety organizations, as well as artists and museum curators. In reviewing how our policies should apply to memorialized profiles of deceased people, we connected with professors who study digital legacy as an academic subject, as well as people on Facebook who've been designated as “Legacy Contacts” and who have real-world experience with this product feature.
In our stakeholder mapping, we also seek input from minority groups that have traditionally lacked power throughout the world, such as political dissidents and religious minorities. For example, in reevaluating how our hate speech policy applies to certain behavioral generalizations, we consulted with immigrants' rights groups.
Stakeholder engagement brings expertise to our policy development process.
The Stakeholder Engagement team conducts research to gather input from top subject matter experts for a given policy. This ensures our policy-making process is informed by current theories and analysis, empirical research and an understanding of the latest online trends. The expertise we gather covers issues of language, social identity and geography, all of which bear on our policies in important ways.
Our policies are entwined with many complex social and technological issues, such as hate speech, terrorism, bullying and harassment and threats of violence. Sometimes we're looking for guidance on how safety and voice should be balanced, such as considering what types of speech to allow about “public figures” under our policies. In other cases, we're reaching out to gain specialized knowledge, such as how our policies can draw on international human rights principles or how minority communities may experience certain types of speech.
Sometimes the challenges we face are new, even to the experts we consult with. But by talking with outside experts and incorporating their feedback, we make our policies more thoughtful.
For example, our hate speech policy recognizes three tiers of attacks. Tier 1, the most severe, involves calls to violence or dehumanizing speech against other people based on their race, ethnicity, nationality, gender or other protected characteristic (for example, “Kill the Christians”). Tier 2 attacks consist of statements of inferiority or expressions of contempt or disgust (for example, “Mexicans are lazy”). And Tier 3 covers calls to exclude or segregate (for example, “No women allowed”).
These tiers make our policies more nuanced and precise. On the basis of the tiers, we're able to provide additional protections against the most harmful forms of speech. For instance, we remove Tier 1 hate speech directed against immigrants (for example, “Immigrants are rats”) but permit less intense forms of speech (for example, “Immigrants should stay out of our country”) to leave room for broad political discourse.
As part of our policy development work in this area, we spoke with outside experts: academics, NGOs that study hate speech and groups across the political landscape. This stakeholder engagement helped confirm that the tiers were comprehensive and aligned with patterns of online and offline behavior.
Stakeholder engagement makes our policies and our policy development process more transparent.
We know from talking to hundreds of stakeholders that we can build trust by making sure our policy-development process is open. The more visibility we provide, the more likely our stakeholders are to view our policies as legitimate. Transparency in our stakeholder engagement process helps us build a system of rules and enforcement that people regard as fair.
Engagement also means being open about the challenges of moderating content, as well as explaining the rationale behind our policies and why there may be a need for improvement. In turn, the policies we launch will be better by virtue of having been tested through consultation and a candid exchange of views.