Beginning in November, Google will mandate that political advertisements prominently disclose when they contain synthetic content, such as images generated by artificial intelligence, the company announced this week.
According to Google’s blog post on Wednesday, political ads featuring synthetic content that “inauthentically represents real or realistic-looking people or events” must include a “clear and conspicuous” disclosure for viewers.
This rule, an addition to the company's political content policy that covers Google and YouTube, will apply to image, video, and audio content.
The update to Google's policy comes as the 2024 U.S. presidential campaign intensifies and as many other countries prepare for significant elections of their own in 2024.
Concurrently, advancements in artificial intelligence technology have made it increasingly easy and affordable to create convincing AI-generated text, audio, and video.
Experts in digital information integrity have raised concerns that these new AI tools could lead to a surge of election misinformation that social media platforms and regulators might struggle to manage.
AI-generated images have already appeared in political ads. In June, a video posted on X by Florida Governor Ron DeSantis’ presidential campaign featured AI-generated images showing former President Donald Trump embracing Dr. Anthony Fauci.
These images, intended to criticize Trump for not dismissing Fauci, were designed to be misleading: they were presented alongside real images of Trump and Fauci, with a text overlay stating, “real life Trump.”
In April, the Republican National Committee released a 30-second ad in response to President Joe Biden’s campaign announcement, using AI-generated images to depict a dystopian future following Biden’s reelection.
The ad included a small disclaimer, "Built entirely with AI imagery," but it was easy for viewers to miss on a first viewing.
Google’s updated policy will require disclosures on ads that use synthetic content in ways that could mislead users.
For example, ads where synthetic content makes it appear as if a person is saying or doing something they did not actually say or do will need to be labeled.
The policy will not apply to synthetic or altered content deemed "inconsequential to the claims made in the ad," such as image resizing, color corrections, or "background edits that do not create realistic depictions of actual events."
In July, Google and other leading AI companies agreed to a set of voluntary commitments proposed by the Biden administration to enhance AI safety.
As part of this agreement, the companies pledged to develop technical measures, such as watermarks, to help users identify AI-generated content.
The Federal Election Commission is also examining how to regulate AI in political advertisements.