Meta’s New Rules for Digitally Altered Political Ads
Meta has implemented new disclosure requirements for digitally altered political ads, which are now in effect. All advertisers running ads about social issues, elections, and politics must now indicate when an ad “contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered by AI or other methods”. The policy was announced last November in anticipation of an expected flood of generative AI ads during the upcoming U.S. election campaign, with the goal of preventing users from being deceived by digitally created or modified messaging. YouTube and TikTok have implemented similar tags for AI-generated content to give viewers more transparency.
Meta’s advertisers will now see a checkbox when setting up ads in the affected categories, which they must tick if an ad has been digitally altered; failure to comply may result in ad removal or bans. Advertisers can also apply the new AI disclosure tag to previously launched campaigns. These rules matter as AI tools become increasingly capable of producing realistic depictions, including full video clips of incidents that never happened, raising concerns about misleading promotions and their influence on voters. To further mitigate this risk, Meta will impose a political ads blackout in the week before the poll, leaving time to debunk any false claims or clips before people vote. Overall, the new rules aim to promote transparency and curb the spread of deceptive content during political campaigns.