Meta to Require Disclosure of Political Ads Made With AI


On Wednesday, Meta announced that it will start requiring political advertisers to disclose when they use altered or digitally created media, such as deepfake videos of candidates.

This move comes as the tech industry prepares for an influx of AI-generated video, images, and audio leading up to the 2024 election.

Meta, the parent company of Facebook and Instagram, stated in a blog post that it will mandate disclosures from advertisers during the ad-buying process if their social issue, electoral, or political ads feature photorealistic images, videos, or realistic-sounding audio that has been digitally created or altered.

Nick Clegg, Meta’s president of global affairs, noted that this policy will take effect globally early next year, just in time for the 2024 presidential primaries and caucuses.

The company explained that if an advertiser fails to make the required disclosure, the ad will be rejected, and repeated non-compliance may lead to penalties. Ads that do include the disclosure will carry a label noting that the content was digitally created or altered.

The policy does not ban altered media outright, acknowledging that AI-generated content is likely to persist.

For instance, in April, the Republican National Committee used AI to produce a 30-second ad imagining a second term for President Joe Biden, and in March, fake AI-generated images of former President Donald Trump being arrested circulated online.


However, Meta insists that synthesized media must be disclosed to prevent misleading viewers.

This new policy is similar to a measure announced by Google in September, which also requires the disclosure of “synthetic” media.

As Google and Meta are the two largest internet advertising companies by total sales, their policies often set industry standards.

Meta has faced criticism over altered videos for years. In 2019, Facebook refused to remove doctored videos of then-House Speaker Nancy Pelosi, D-Calif., prompting Pelosi to accuse the company of dishonesty.

The platform revised its policies the following year to ban or label posts with manipulated media.

With recent advances in generative AI making it easier to create realistic fakes, online platforms, candidates, and voters are facing new challenges.

Meta’s new advertising policy will apply to ads that use deepfake technology to depict a real person saying or doing something they did not. It will also cover ads showing realistic-looking but non-existent people or events, as well as altered footage of real events.

However, if the digital editing is deemed “inconsequential or immaterial” to the issues raised in an ad, the disclosure requirement will not apply.

Mason Williams
Driven by a commitment to integrity and excellence, Mason's writing empowers readers to make informed decisions, face challenges, and seize opportunities in an increasingly complex world. His work serves as a guiding light, illuminating the way forward amidst uncertainty.