After updating its terms around the use of AI in political ads earlier this week, Meta has now clarified its stance, with a new set of rules around the use of generative AI in certain promotions.
As per Meta:
“We’re announcing a new policy to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI. This policy will go into effect in the new year and will be required globally.”
Meta had actually already implemented this policy in part, after various reports of AI-based manipulation within political ads.
But now, it’s making it official, with specific guidelines around what’s not allowed in AI-based promotions, and the disclosures required for them.
Under the new policy, advertisers will be required to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic-sounding audio, that has been digitally created or altered.
In terms of specifics, disclosure will be required:
- If an AI-generated ad depicts a real person as saying or doing something they did not say or do.
- If an AI-generated ad depicts a realistic-looking person that does not exist, or a realistic-looking event that did not happen.
- If an AI ad displays altered footage of a real event.
- If an AI-generated ad depicts a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.
In some ways, these types of disclosures may feel unnecessary, especially given that most AI-generated content looks and sounds pretty clearly fake.
But political campaigners are already using AI-generated depictions to sway voters, with realistic-looking and realistic-sounding replicas of their rivals.
A recent campaign by U.S. Presidential candidate Ron DeSantis, for example, used an AI-generated image of Donald Trump hugging Anthony Fauci, as well as a voice simulation of Trump in another push.
To some, these fakes will be obvious, but if such depictions influence any voters at all, that’s an unfair and misleading approach. And really, AI depictions like this are going to have some influence, even with these new regulations in place.
“Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered. This information will also appear in the Ad Library. If we determine that an advertiser doesn’t disclose as required, we will reject the ad and repeated failure to disclose may result in penalties against the advertiser. We will share additional details about the specific process advertisers will go through during the ad creation process.”
So the risk here is that your ad will be rejected, and you could have your ad account suspended for repeated violations.
But you can already see how political campaigners could use such depictions to sway voters in the final days before they head to the polls.
What if, for example, I came up with a pretty damaging AI video clip of a political rival, and paid to promote it on the last day of the campaign, spreading it in the final hours before the political ad blackout period?
That’s going to have some impact, right? And even if my ad account gets suspended as a result, it could be worth the risk if the clip seeds enough doubt through a realistic-enough depiction and message.
It seems inevitable that this is going to become a bigger problem, and no platform has all the answers on how to address it as yet.
But Meta’s implementing what enforcement rules it can for now.
How effective they’ll be is the next test.