Content actioned

UPDATED

NOV 7, 2023

We measure the number of pieces of content (such as posts, photos, videos or comments) or accounts we take action on for going against our standards. This metric shows the scale of our enforcement activity. Taking action could include removing a piece of content from Facebook or Instagram, placing a warning over photos or videos that may be disturbing to some audiences, or disabling accounts. If we escalate content to law enforcement, we don’t count that escalation as an additional action.

It might be tempting to read our content actioned metric as an indicator of how effectively we find violations or the impact of those violations on our community. However, the volume of content we take action on is only part of the story. It doesn’t reflect how long it took to detect a violation or how many times users saw that violation while it was on Facebook or Instagram.

This metric can go up or down due to external factors that are out of our control. As an example, consider a cyberattack during which spammers share 10 million posts featuring the same malicious URL. After we detect the URL, we remove the 10 million posts. Content actioned would report 10 million pieces of content acted on, an enormous spike. But this number doesn’t necessarily reflect that we got better at acting on spam; it mainly reflects that spammers chose that month to attack Facebook with unsophisticated spam that was easy to detect. Content actioned also doesn’t indicate how much of that spam actually affected users: people might have seen it a few times, or a few hundred or a few thousand times. (That information is captured in prevalence.) After the cyberattack, content actioned might decrease dramatically, even if our detection continues to improve.

A piece of content can be any number of things, including a post, photo, video or comment.

How we count content and actions

How we count individual pieces of content can be complex and has evolved over time. In July 2018, we updated our methodology to clarify how many discrete pieces of content we’ve taken action on for violating our policies. We will continue to mature and improve our methodology as part of our commitment to providing the most accurate and meaningful metrics. Overall, our intention is to provide an accurate representation of the total number of content items we take action on for violating our policies.

There are some differences in how we count content on Facebook versus Instagram.

On Facebook, a post with either no photo or video, or with a single photo or video, counts as one piece of content. That means each of the following, if removed, would be counted as one piece of content actioned: a post with one violating photo; a text-only post that is violating; and a post with text and one photo, where one or both are violating.

When a Facebook post has multiple photos or videos, we count each photo or video as a piece of content. For example, if we remove two violating photos from a Facebook post with four photos, we would count this as two pieces of content actioned: one for each photo removed. If we remove the entire post, then we count the post as well. So for example, if we remove a Facebook post with four photos, we would count this as five pieces of content actioned: one for each photo and one for the post. If we only remove some of the attached photos and videos from a post, we only count those pieces of content.

On Instagram, we remove the whole post if it contains violating content, and we count this as one piece of content actioned, regardless of how many photos or videos there are in the post.
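
To make these counting rules concrete, here is a minimal Python sketch. The function, its parameters and the data model are hypothetical illustrations of the rules described above, not Meta’s actual systems.

```python
# Illustrative sketch of the counting rules described above.
# The function and its parameters are hypothetical, not Meta's actual systems.

def count_content_actioned(platform, num_attachments, attachments_removed, post_removed):
    """Return how many pieces of content actioned one enforcement decision yields.

    platform            -- "facebook" or "instagram"
    num_attachments     -- photos/videos attached to the post
    attachments_removed -- how many of those attachments were removed
    post_removed        -- whether the post itself was removed
    """
    if platform == "instagram":
        # The whole post is removed and counted once, regardless of attachments.
        return 1 if post_removed else 0

    # Facebook
    if num_attachments <= 1:
        # A text-only post or a post with a single photo/video counts as one piece.
        return 1 if (post_removed or attachments_removed) else 0

    # Multiple attachments: each removed photo/video counts, plus the post itself
    # if the entire post is removed.
    return attachments_removed + (1 if post_removed else 0)


# Examples from the text:
assert count_content_actioned("facebook", 4, 2, False) == 2   # two of four photos removed
assert count_content_actioned("facebook", 4, 4, True) == 5    # whole post with four photos removed
assert count_content_actioned("instagram", 4, 4, True) == 1   # Instagram post removed as a whole
```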

At times, a piece of content will be found to violate multiple standards. For measurement purposes, we attribute the action to only one primary violation, typically the violation of the most severe standard. In other cases, we ask the reviewer to decide the primary reason for the violation.
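
The following sketch illustrates this attribution step. The severity ordering and the function are invented for illustration; the actual ranking of standards is not specified in this document.

```python
# Minimal sketch of attributing an action to a single primary violation.
# SEVERITY_ORDER is a hypothetical ranking, most severe first.

SEVERITY_ORDER = ["child_safety", "terrorism", "hate_speech", "bullying", "spam"]

def primary_violation(detected_violations, reviewer_choice=None):
    """Pick the one violation an action is attributed to for measurement."""
    if reviewer_choice is not None:
        # In some cases a reviewer decides the primary reason directly.
        return reviewer_choice
    # Otherwise attribute the action to the most severe violated standard.
    return min(detected_violations, key=SEVERITY_ORDER.index)

print(primary_violation(["spam", "hate_speech"]))  # -> "hate_speech"
```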

How we label violations

Every time we take action on a piece of content, we label the content with the policy it violated. When reviewers look at reports, they first select whether or not the material violates our policies. If it does, they then label it with the violation type.

In the past, we didn’t require our reviewers to label the violations when they made decisions. Instead, we relied on information that users gave us when they submitted reports. In 2017, we upgraded our review process to record more granular information about why reviewers removed a piece of content, which allowed us to establish more accurate metrics. We also updated our detection technology so it labels violations as they’re found, flagged or removed using the same labels as our reviewer decisions.

To count the content acted on for a specific standard violation, we must label the violation each time we take an action.
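
A minimal sketch of how labeled actions roll up into a per-standard count, assuming hypothetical action records with a label field:

```python
# Sketch of tallying content actioned by violation label.
# The records and field names are hypothetical.
from collections import Counter

actions = [
    {"content_id": 1, "label": "spam"},
    {"content_id": 2, "label": "hate_speech"},
    {"content_id": 3, "label": "spam"},
]

# Because every action carries the label of the policy it violated, counting
# content actioned for a given standard reduces to grouping by that label.
content_actioned_by_standard = Counter(a["label"] for a in actions)
print(content_actioned_by_standard["spam"])  # -> 2
```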

Accounts actioned on Facebook for being fake

For fake accounts, we report “accounts actioned” as opposed to “content actioned.” “Accounts actioned” is the number of accounts we disable for being fake.

Caveats

Content actioned and accounts actioned don’t include instances where we block content or accounts from being created in the first place, as we do when we detect spammers attempting to post at high frequency or someone attempting to create a fake account. If we included these blocks, the numbers for fake accounts disabled and spam content removed would increase dramatically, likely by millions a day.

When we enforce on URLs, we remove any current or future content that contains those links. We measure how much content we actioned based on whether a user attempts to display that content on Facebook.
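
A rough sketch of this caveat, assuming a hypothetical blocklist and display hook; the point it illustrates is only that the count accrues at display time rather than when the URL is first blocked.

```python
# Sketch: content actioned for blocked URLs is counted when a user attempts
# to display matching content. Names and structures are illustrative only.

BLOCKED_URLS = {"http://malicious.example.com"}
content_actioned = 0

def on_display_attempt(post_text):
    """Hypothetical hook called each time a user tries to view a piece of content."""
    global content_actioned
    if any(url in post_text for url in BLOCKED_URLS):
        content_actioned += 1   # counted at display time
        return None             # content is not shown
    return post_text

on_display_attempt("check this out: http://malicious.example.com")
print(content_actioned)  # -> 1
```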

How we measure actions on accounts, Pages, Groups and Events

Large volumes of content can live within user accounts, Pages, Groups or events on Facebook. One of these objects as a whole can violate our policies, based on content or behavior within it. We can usually determine that an account, Page, Group or event violates standards without reviewing all the content within it. If we disable an account, Page, Group or event, all the content within it automatically becomes inaccessible to users.

In the metrics included in the Community Standards Enforcement Report, we only count the content in accounts, Pages, Groups or events that we determined to be violating during our reviews of those objects and that we explicitly took action on. We don’t count any content automatically removed upon disabling the account, Page, Group or event that contained that content.
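
A minimal sketch of this rule, with hypothetical data structures, showing that only content explicitly actioned during review counts, not content that becomes inaccessible when the containing object is disabled:

```python
# Sketch of the counting rule for disabled accounts, Pages, Groups or events.
# The data structures are hypothetical.

group = {
    "content": ["post_a", "post_b", "post_c", "post_d"],
    "explicitly_actioned_in_review": ["post_a", "post_b"],  # found violating by reviewers
}

# Disabling the Group makes all four posts inaccessible, but only the two
# pieces of content reviewers explicitly acted on are counted.
content_actioned = len(group["explicitly_actioned_in_review"])
print(content_actioned)  # -> 2
```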

Except for fake accounts on Facebook, this report does not currently include any metrics related to accounts for any violation. It also does not include metrics for Pages, Groups or events we took action on for any violation; it only covers content within those objects.