Starting next year, Facebook and Instagram will require political ad campaigns to disclose if they contain AI-generated content, such as deepfakes.
The policy is designed to prevent AI-generated political ads from potentially misleading the public. Today’s AI programs can easily create realistic images of presidential candidates, including in compromising or scandalous situations.
For example, AI image generators have already depicted former President Donald Trump being arrested or current US President Joe Biden wearing a colorful swimsuit. Other so-called deepfakes can clone someone’s voice to make it say anything, including First Lady Jill Biden condemning her husband.
Facebook and Instagram parent company Meta already bans political ads that contain false content or violate the platforms’ policies. But political ads that rely on deepfakes and other AI trickery to make a point will be allowed, so long as the advertiser is upfront about the generative AI use.
The new policy applies to ads for electoral campaigns and social issues. Meta plans to place a disclosure on each qualifying political ad. “If we determine that an advertiser doesn’t disclose as required, we will reject the ad and repeated failure to disclose may result in penalties against the advertiser,” the company added.
The only exception is for political ads that use AI software to make minor enhancements, such as color correction or image cropping and sharpening. Those will not require a disclosure.
Meta is joining Google, which enacted a similar policy to tackle the problem back in September. Meta has also told advertisers to avoid using its generative AI ad-creation tools for political purposes, as well as in other sensitive areas, such as housing, employment, and health.
"We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries," the company told PCMag.