Altered Video of President Biden

UPDATED

APR 5, 2024

2023-029-FB-UA

Today, the Oversight Board selected a case appealed by a Facebook user regarding an altered video of President Biden that appears to show him inappropriately touching a young woman’s chest and kissing her on the cheek. We determined the video was edited in superficial ways to remove certain portions, but not with AI or synthetic methods. The altered video is accompanied by a caption calling President Biden a “sick pedophile.” The original version of the video shows President Biden placing an “I Voted” sticker on his granddaughter’s shirt after she indicated where to place it, and then kissing her on the cheek. While this is not, by definition, a deepfake (the product of artificial intelligence or machine learning, including deep learning techniques, that merges, combines, replaces, and/or superimposes content onto a video, creating a video that appears authentic), reviews concluded that the video was edited to remove context and distort what really happened.

Meta determined that the content did not violate our policies on Hate Speech, Bullying and Harassment, or Manipulated Media, as laid out in our Facebook Community Standards, and left the content up.

Under our Hate Speech policy, Meta removes content that targets someone with “comparisons, generalizations, or unqualified behavioral statements” about being a “sexual predator,” but only when the content specifically targets someone on the basis of a “protected characteristic.” We define a protected characteristic as “race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, or serious disease.” In this case, the content did not violate this policy because it did not attack or target President Biden on the basis of a protected characteristic.

Under our Bullying and Harassment policy, we allow criminal allegations against all adults, even if they contain expressions of contempt or disgust. In this case, the reference to President Biden as a “sick pedophile” is an expression of contempt in the context of a criminal allegation against an adult, and is therefore allowed under the policy.

Under our Manipulated Media policy, we remove videos if specific criteria are met, including if the video has been edited or synthesized, beyond adjustments for clarity or quality, in ways that are not apparent to an average person and would likely mislead an average person to believe a subject of the video said words that they did not say. In this case, the video does not depict President Biden saying something he did not say, and it is not the product of artificial intelligence or machine learning that merges, combines, replaces, or superimposes content onto the video; the video was merely edited to remove certain portions.
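To make that removal test concrete, here is a minimal sketch of the stated criteria expressed as boolean logic in Python. The function and parameter names are hypothetical illustrations, not Meta’s internal tooling.

```python
# Minimal sketch of the Manipulated Media removal test described above.
# All names are hypothetical illustrations, not Meta's internal systems.

def should_remove_under_manipulated_media(
    edited_beyond_clarity_or_quality: bool,
    edit_apparent_to_average_person: bool,
    misleads_about_words_said: bool,
) -> bool:
    """Remove only when every stated criterion is met."""
    return (
        edited_beyond_clarity_or_quality
        and not edit_apparent_to_average_person
        and misleads_about_words_said
    )

# The video in this case was edited, but it does not depict President Biden
# saying words he did not say, so it does not meet the removal criteria.
assert should_remove_under_manipulated_media(
    edited_beyond_clarity_or_quality=True,
    edit_apparent_to_average_person=False,
    misleads_about_words_said=False,
) is False
```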

We will implement the Board’s decision once it has finished deliberating, and will update this post accordingly. Please see the Board’s website for the decision once it is issued.

Case decision

We welcome the Oversight Board’s decision today, February 5, 2024, on this case. The Board upheld Meta’s decision to leave the content up.

After reviewing the recommendations provided by the Board, we will update this post with our initial responses to those recommendations.

Recommendations

Recommendation 1 (Implementing in Full)

To address the harms posed by manipulated media, Meta should reconsider the scope of its Manipulated Media policy in three ways to cover: (1) audio and audiovisual content, (2) content showing people doing things they did not do (as well as saying things they did not say), and (3) content regardless of the method of creation or alteration.

The Board will consider this recommendation implemented when the Manipulated Media policy reflects these changes.

Our commitment: In February 2024, we announced that we will start labeling a wider range of video, audio and image content as “Made with AI” when we detect industry-standard indicators or when people disclose that they’re uploading AI-generated video, audio or images. We are also updating our Misinformation Community Standard, as we explain further in our response to Recommendation #3 and in a new Newsroom post. We consider this recommendation implemented in full.

Considerations: We agree with the Board that AI-generated or manipulated audio and audiovisual content is important to capture in our policies and enforcement. We introduced our Manipulated Media policy in 2020, when realistic content made with AI was rare and the overarching concern was about videos. We began a review process in Spring 2023 to evaluate our policies and consider whether new approaches were needed to keep pace with advancements in generative AI. As we outline further below, in addition to our engagement with the Oversight Board in this case, this process included extensive public opinion surveys and close consultations with experts, researchers and other stakeholders.

Now, we are shifting our approach to give people more information about AI-generated content, and we will label AI-generated audio, video, and images that we can detect. As explained in a February Newsroom post, we’ve been working with industry partners on common technical standards for identifying AI content, including video and audio. Our “Made with AI” labels on AI-generated video, audio and images will be based on our detection of industry-shared signals of AI-generated content or on people self-disclosing that they’re uploading AI-generated content. We already add “Imagined with AI” labels to photorealistic images created using our Meta AI feature.
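As one illustration of what an industry-shared signal can look like, the sketch below scans a file’s embedded metadata for IPTC’s published DigitalSourceType value for media generated by a trained algorithm. The decision function and the raw byte scan are simplifying assumptions; a production system would properly parse XMP/IPTC metadata and C2PA manifests rather than searching bytes.

```python
# Hypothetical sketch: decide whether to apply a "Made with AI" label based
# on an embedded industry signal or the uploader's self-disclosure.

# IPTC's published DigitalSourceType term for media generated by a trained
# algorithm; tools that follow the standard embed this URI in XMP metadata.
TRAINED_ALGORITHMIC_MEDIA = (
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def should_label_made_with_ai(media_bytes: bytes, self_disclosed: bool) -> bool:
    # A naive byte scan stands in for real metadata parsing (XMP, C2PA).
    has_industry_signal = TRAINED_ALGORITHMIC_MEDIA in media_bytes
    return has_industry_signal or self_disclosed
```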

We will still remove content if it violates our Community Standards, regardless of whether it was created by AI or a person. Our network of nearly 100 independent fact-checkers will continue to review false and misleading AI-generated content. Users can request a review of fact-checker ratings in the product or by reaching out directly to fact-checkers. Finally, if we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may apply a more prominent label.

This approach is reflected in the Misinformation section of our Community Standards. In line with the Board's recommendation in this decision, and as described further in our response to Recommendation #3, we will keep this content on our platforms so we can add informational labels and context, unless the content otherwise violates our policies. We therefore consider this recommendation implemented in full.

Recommendation 2 (Implementing in Full)

To ensure its Manipulated Media policy pursues a legitimate aim, Meta must clearly define in a single unified policy the harms it aims to prevent beyond preventing users from being misled, such as preventing interference with the right to vote and to take part in the conduct of public affairs.

The Board will consider this recommendation implemented when Meta changes the Manipulated Media policy accordingly.

Our commitment: As discussed in Recommendation #3 below, we agree with the Oversight Board’s recommendation that providing transparency and additional context is now the better way to address manipulated media while avoiding the risk of unnecessarily restricting freedom of speech, so we will keep this content on our platforms and add labels and context. We will update the Misinformation section of our Community Standards to reflect this approach, which, in combination with the updated policy language, fulfills this recommendation.

Considerations: Under all of our Community Standards, we balance Meta’s values of expression, safety, dignity, authenticity, and privacy.

Unlike other policy areas, the Misinformation section distinguishes different categories of misinformation and describes our remove, reduce, and inform approach to content. We remove certain misinformation, such as misinformation that has the potential to contribute to the risk of imminent physical harm or that could directly contribute to interference with the functioning of political processes. For other types of content, we generally partner with third-party fact-checking organizations to review and rate content. As of February 2024, we also include details about our approach to labeling digitally created or altered content: a photorealistic image or video, or realistic-sounding audio, that creates a particularly high risk of materially deceiving the public on a matter of public importance. We believe that by consolidating this into the Misinformation section of the Community Standards, we have defined the policy line under which we approach digitally created or altered content.
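As a schematic of the remove, reduce, and inform approach described above, the sketch below routes a piece of content to one of those outcomes. The categories are paraphrased from this section, and the routing function is a hypothetical illustration, not Meta’s enforcement code.

```python
from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()    # take the content down
    REDUCE = auto()    # demote distribution after a fact-check rating
    INFORM = auto()    # add an informational label and context
    LEAVE_UP = auto()  # no misinformation action applies

def route_misinformation(
    risks_imminent_physical_harm: bool,
    interferes_with_political_processes: bool,
    rated_false_or_altered: bool,
    high_risk_digitally_altered_media: bool,
) -> Action:
    # Hypothetical paraphrase of the policy lines above.
    if risks_imminent_physical_harm or interferes_with_political_processes:
        return Action.REMOVE
    if rated_false_or_altered:
        return Action.REDUCE
    if high_risk_digitally_altered_media:
        return Action.INFORM
    return Action.LEAVE_UP
```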

Recommendation 3 (Implementing in Full)

To ensure the Manipulated Media policy is proportionate, Meta should stop removing manipulated media when no other policy violation is present and instead apply a label indicating the content is significantly altered and may mislead. The label should be attached to the media (such as a label at the bottom of a video) rather than the entire post, and should be applied to all identical instances of that media on the platform.

The Board will consider this recommendation implemented when Meta launches the new labels and provides data on how many times the labels have been applied within the first 90-day period after launch.
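To illustrate the Board’s “all identical instances” requirement, here is a minimal sketch that attaches a label to the media itself via a content fingerprint, so every identical re-upload inherits it. The exact-match fingerprint and in-memory store are assumptions; a production system would also use perceptual hashing to catch re-encoded copies.

```python
import hashlib

# Hypothetical sketch: labels attach to the media, not to a single post.
labeled_media: dict[str, str] = {}  # fingerprint -> label text

def media_fingerprint(media_bytes: bytes) -> str:
    # Exact-duplicate fingerprint; real systems would add perceptual hashes.
    return hashlib.sha256(media_bytes).hexdigest()

def attach_label(media_bytes: bytes, label: str) -> None:
    labeled_media[media_fingerprint(media_bytes)] = label

def label_for_upload(media_bytes: bytes) -> str | None:
    # Any identical instance of the media inherits the attached label.
    return labeled_media.get(media_fingerprint(media_bytes))
```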

Our commitment: Based on the Oversight Board’s recommendation, as well as public opinion surveys in 13 countries and global consultations with academics, civil society organizations and others, we are making changes to the way we handle manipulated media. We will begin labeling a wider range of video, audio and image content as “Made with AI” when we detect industry-standard AI indicators or when people disclose that they’re uploading AI-generated content. We agree with the Oversight Board’s recommendation that providing transparency and additional context is now the better way to address manipulated media while avoiding the risk of unnecessarily restricting freedom of speech, so we will keep this content on our platforms and add labels and context. We will still remove content, regardless of whether it was created by AI or a person, if it violates our Community Standards.

Considerations: As highlighted in the responses above, we agree with the Oversight Board’s argument that our existing approach is too narrow, since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say. Our Manipulated Media policy was written in 2020, when realistic AI-generated content was rare and the overarching concern was about videos. In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content, like audio and photos, and this technology is quickly evolving. As the Board noted, it is equally important to address manipulation that shows a person doing something they didn’t do.

The Board also argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. As we describe in our response to Recommendation #1, we are actively building an approach to label AI-generated video, audio and images when we can detect them.

We agree that providing transparency and additional context is now the better way to address this content. We will keep this content on our platforms so we can add informational labels and context, unless it otherwise violates our policies. For example, we will remove content, regardless of whether it is created by AI or a person, if it violates our policies against voter interference, bullying and harassment, violence and incitement, or any other policy in our Community Standards. We also have a network of nearly 100 independent fact-checkers who will continue to review false and misleading AI-generated content. When fact-checkers rate content as False or Altered, we show it lower in Feed so fewer people see it, and we add an overlay label with additional information. In addition, we reject an ad if it contains debunked content, and since January, advertisers have to disclose, in certain cases, when they digitally create or alter a political or social issue ad.

We plan to start labeling AI-generated content in May 2024, and we’ll stop removing content solely on the basis of our manipulated video policy in July. This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media.

In Spring 2023, we began reevaluating our policies to see if we needed a new approach to keep pace with rapid advances in generative AI technologies and usage. We completed consultations with over 120 stakeholders in 34 countries in every major region of the world. Overall, we heard broad support for labeling AI-generated content and strong support for a more prominent label in high-risk scenarios. Many stakeholders were receptive to the concept of people self-disclosing content as AI-generated.

A majority of stakeholders agreed that removal should be limited to only the highest risk scenarios where content can be tied to harm, since generative AI is becoming a mainstream tool for creative expression. This aligns with the principles behind our Community Standards – that people should be free to express themselves while also remaining safe on our services.

We also conducted public opinion research with more than 23,000 respondents in 13 countries and asked people how social media companies, such as Meta, should approach AI-generated content on their platforms. A large majority (82%) favored warning labels for AI-generated content that depicts people saying things they did not say.

Additionally, the Oversight Board noted that its recommendations were informed by consultations with civil society organizations, academics, intergovernmental organizations and other experts.

Based on feedback from the Oversight Board, experts and the public, we’re taking steps we think are appropriate for platforms like ours. We want to help people know when photorealistic images have been created or edited using AI, so we’ll continue to collaborate with industry peers through forums like the Partnership on AI, remain in dialogue with governments and civil society, and review our approach as technology progresses.

We will share an update on our progress in the next Oversight Board public report.