First Bundled Case About Violence Against Women

UPDATED

SEP 11, 2023

2023-002-IG-UA

Today, the Oversight Board selected a case appealed by an Instagram user regarding a video in Swedish containing a woman’s testimony about her experience in a violent intimate relationship. The caption discusses the nature of gender-based violence inflicted by men upon women, claiming that men physically and mentally abuse women “all the time, every day.”

Upon initial review, Meta took down this content for violating our policy on Hate Speech, as laid out in our Instagram Community Guidelines and Facebook Community Standards. However, upon additional review, we determined that we had removed this content in error, and we reinstated the post after subject matter experts concluded it was a qualified behavioral statement that raises awareness of gender-based violence against women.

We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision once it is issued.

Case decision

We welcome the Oversight Board’s decision today on this case. The board overturned Meta’s original decision to remove the content from the platform. Meta previously reinstated this content, so no further action will be taken on it.

After reviewing the board’s decision and its accompanying recommendations, we will update this page.

Recommendations

Recommendation 1 (Implementing in Part)

To allow users to condemn and raise awareness of gender-based violence, Meta should include the exception for allowing content that condemns or raises awareness of gender-based violence in the public language of the Hate Speech policy. The Board will consider this recommendation implemented when the public-facing language of the Hate Speech Community Standard reflects the proposed change.

Our commitment: Both our Violence and Incitement policy and our Hate Speech policy allow content that is shared in a condemning or awareness-raising context, including content about gender-based violence. We are currently working to clarify both policies and will consider opportunities to articulate this allowance more clearly in our public-facing language.

Considerations: Under both our Violence and Incitement policy and our Hate Speech policy, we allow content that is shared in order to condemn or raise awareness. For example, if someone were to share a video of gender-based violence with a caption condemning the actions depicted, we would allow it unless the video or caption contained additional content that violated our policies. Conversely, if someone were to share content that condemns or raises awareness of gender-based violence but also violates our Hate Speech policy by including a direct attack against people based on their protected characteristics, the content would be removed.
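To illustrate how the allowance and the attack rule described above interact, here is a minimal sketch in Python. It is hypothetical: the function name, the boolean signals, and the three outcomes are invented for illustration and do not represent Meta's enforcement systems, which do not reduce to two flags.

```python
def hate_speech_outcome(condemns_or_raises_awareness: bool,
                        contains_direct_attack: bool) -> str:
    """Hypothetical sketch of the allowance described above: a
    condemning or awareness-raising context permits otherwise
    borderline content, but a direct attack on people based on
    protected characteristics is removed regardless of context."""
    if contains_direct_attack:
        return "remove"
    if condemns_or_raises_awareness:
        return "allow"
    return "review under the default Hate Speech policy"

# Example: an awareness-raising caption with no direct attack is allowed.
print(hate_speech_outcome(condemns_or_raises_awareness=True,
                          contains_direct_attack=False))
```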

We are refining and clarifying our Community Standards as part of a holistic review of the overlaps and differences between our policies for organic content and ads, and we will consider ways to more clearly articulate where we may allow speech that condemns or raises awareness of gender-based violence.

Recommendation 2 (Implementing in Part)

To ensure that content condemning and raising awareness of gender-based violence is not removed in error, Meta should update guidance to its at-scale moderators with specific attention to rules around qualification. This is important because the current guidance makes it virtually impossible for moderators to make the correct decisions even when Meta states that the first post should be allowed on the platform. The Board will consider this recommendation implemented when Meta provides the Board with updated internal guidance that shows what indicators it provides to moderators to grant allowances when considering content that may otherwise be removed under the Hate Speech policy.

Our commitment: As part of our Hate Speech Community Standards, we remove broad generalizations and unqualified behavioral statements when they constitute an attack on a group or groups of people based on protected characteristics. We are pursuing work to add nuance to our internal guidance on behavioral statements, generalizations, and qualified behavioral statements. This includes long-term work to increase alignment in how we approach potentially violating content within our Hate Speech policy area and across policy areas.

Considerations: We are currently scoping work to refine our guidance within our Hate Speech policy around behavioral statements, generalizations, and qualified behavioral statements. We recognize that there may be space to provide additional nuance and context where our policies allow content shared in a condemning or awareness-raising context, and we are exploring ways to make this update. We will provide additional detail on our progress in future Quarterly Updates.

Recommendation 3 (Implementing in Full)

To improve the accuracy of decisions made upon secondary review, Meta should assess how its current review routing protocol impacts accuracy. The Board believes Meta would increase accuracy by sending secondary review jobs to different reviewers than those who previously assessed the content. The Board will consider this implemented when Meta publishes a decision, informed by research on the potential impact on accuracy, on whether to adjust its secondary review routing.

Our commitment: We have ongoing monitoring mechanisms in place that assess how our review routing protocol and enforcement decisions impact accuracy across reviewers. We are continuously working to refine and improve how these systems affect our full set of enforcement metrics, including accuracy.

Considerations: Meta has review protocols in place to ensure that, to the greatest extent feasible, secondary review is allocated to a reviewer other than the one who performed the initial review. As shared in our response to recommendation #31 of the Policy Advisory Opinion (PAO) on Meta’s Cross-Check Policies, we have an internal system called Dynamic Multi Review (DMR) that enables us to review certain content multiple times, by different reviewers, before making a final decision. This ensures that the quality and accuracy of human review are carefully considered upon secondary review, taking into account factors such as virality and potential for harm.

We have a dedicated global operations measurement team that monitors enforcement decisions across all content types. This team monitors the accuracy and quality of our review decisions in order to develop integrity metrics that validate our review processes. We do this through protocols such as audit validation, which ensures that our accuracy metrics can be trusted on an ongoing basis and remain aligned with the source of truth. Our operational measurement teams also engage with scaled reviewers to maintain validation across our metrics, monitor tooling and triaging to report and address malfunctions on an ongoing basis, and generate insights to consistently improve review accuracy.
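As a rough illustration of audit-style accuracy measurement, here is a minimal sketch, assuming a sample of reviewer decisions is re-labeled by auditors to form a source of truth. The function name and data shapes are hypothetical and do not reflect how Meta's measurement teams compute their metrics.

```python
def review_accuracy(reviewer_decisions: list[str],
                    audited_decisions: list[str]) -> float:
    """Fraction of sampled decisions that match the audited source of truth."""
    if len(reviewer_decisions) != len(audited_decisions):
        raise ValueError("samples must be paired one-to-one")
    matches = sum(d == a for d, a in zip(reviewer_decisions, audited_decisions))
    return matches / len(reviewer_decisions)

# Example: 3 of 4 sampled decisions agree with the audit -> 0.75 accuracy.
print(review_accuracy(["remove", "allow", "remove", "allow"],
                      ["remove", "allow", "allow", "allow"]))
```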

In practice, decisions that are escalated to secondary review are processed through channels that ensure at least one reviewer other than the initial assessor is allocated. Oftentimes more than one additional reviewer is allocated, depending on factors including the violation type, associated account tags, and accumulated views. We have also built robust appeals processes, which have recently enabled users to appeal decisions directly to the board, per our response to recommendation #25 of the PAO on Meta’s Cross-Check Policies. Additionally, our appeals processes default to secondary review by reviewers who are different from those who previously assessed the content, unless capacity-constrained exceptions determine otherwise.
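To make the routing behavior described above concrete, here is a minimal sketch, assuming a simple pool of reviewer IDs. All names are hypothetical, and this is not Meta's DMR implementation, which weighs many more signals (virality, potential for harm, account tags) than this toy example.

```python
import random
from dataclasses import dataclass

@dataclass
class Review:
    content_id: str
    reviewer_id: str
    decision: str  # e.g. "remove" or "allow"

def assign_secondary_reviewers(initial: Review,
                               reviewer_pool: list[str],
                               extra_reviews: int = 1) -> list[str]:
    """Route a secondary-review job to reviewers other than the initial
    assessor; fall back to the full pool only when no other reviewer is
    available (a capacity-constrained exception)."""
    eligible = [r for r in reviewer_pool if r != initial.reviewer_id]
    if not eligible:
        eligible = reviewer_pool  # capacity-constrained fallback
    return random.sample(eligible, min(extra_reviews, len(eligible)))

# Example: a high-virality post gets two additional, distinct reviewers.
first = Review("post-123", "reviewer-A", "remove")
print(assign_secondary_reviewers(first,
                                 ["reviewer-A", "reviewer-B", "reviewer-C"],
                                 extra_reviews=2))
```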

We are constantly iterating on our routing protocol and monitoring our accuracy metrics, and we continue to strive to ensure that our secondary review routing systems strengthen our enforcement accuracy. We now consider this recommendation complete and will have no further updates on this work.

Recommendation 4 (Implementing in Part)

To provide greater transparency to users and allow them to understand the consequences of their actions, Meta should update its Transparency Center with information on what penalties are associated with the accumulation of strikes on Instagram. The Board appreciates that Meta has provided additional information about strikes for Facebook users in response to Board recommendations. It believes this should be done for Instagram users as well. The Board will consider this implemented when the Transparency Center contains this information.

Our commitment: We remove content from Instagram if it violates our policies, and we may also disable accounts that repeatedly violate our policies, as noted on our Restricting Accounts page in the Transparency Center. We do not apply the same restrictions (such as read-only feature blocks) on Instagram as we do on Facebook, so the same penalties are not associated with the accumulation of strikes for our users. We will work to represent this information more clearly in our Transparency Center.

Considerations: We provide details about our approach to strikes and penalties in the Transparency Center, highlighting where these strikes and related penalties apply specifically to Facebook. However, aside from account disabling and restrictions on live video, these restrictions do not apply to Instagram for a number of reasons, including differences between the platforms and their distinct features and experiences. Facebook users may also use groups and pages: if a person posts violating content to a page or group that they manage, the strike may also count against that page or group. Instagram does not have these features, so the same restrictions would not apply. Live video, however, is a feature on both Facebook and Instagram; if a user accrues enough strikes on Instagram, we temporarily limit access to that feature, just as if they had accrued the same number of strikes on Facebook.
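As a rough sketch of how a shared feature can carry a shared penalty while platform-specific features do not, consider the following. The threshold, duration, and function names are invented for illustration and are not Meta's actual strike rules.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical values, invented for illustration only.
LIVE_VIDEO_STRIKE_THRESHOLD = 3
LIMIT_DURATION = timedelta(days=30)

def live_video_limit_expiry(platform: str, strikes: int):
    """Live video exists on both Facebook and Instagram, so the same
    strike threshold triggers the same temporary limit on either
    platform; Facebook-only penalties (e.g. strikes counted against
    groups or pages) have no Instagram counterpart and are not modeled."""
    if platform not in ("facebook", "instagram"):
        raise ValueError(f"unknown platform: {platform}")
    if strikes >= LIVE_VIDEO_STRIKE_THRESHOLD:
        return datetime.now(timezone.utc) + LIMIT_DURATION  # when the limit lifts
    return None  # below threshold: no limit

print(live_video_limit_expiry("instagram", 3))  # temporarily limited
print(live_video_limit_expiry("facebook", 2))   # None: below threshold
```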

In our Instagram Help Center, we provide details about how a user can check their Account Status, which allows them to find out whether content they have posted was removed for violating the Community Guidelines and whether that removal may lead to account restrictions and limitations. This includes details on how to appeal content and lets a user see whether there are any features they may temporarily be unable to use as a result of their violations of the Instagram Community Guidelines. For increased accessibility, people can also check their account status and identify any enforcement actions taken against their content via an in-product feature. Ultimately, if a user repeatedly violates our policies on Facebook, Instagram, or Threads, or violates a more severe policy, we will disable the account. In addition to this shared approach across Facebook and Instagram, the Instagram Help Center details other restrictions that may be placed on an account in an effort to limit things like spam or inauthentic activity, including limits on how many messages an account can send or on approving follower requests.

We will incorporate language into our Transparency Center to clarify how penalties apply to Instagram and will share updates on this work in a future Quarterly Update.