Meta Introduces New Disclosure Rules for AI-Generated Content in Political Ads
Meta, the parent company of Facebook and Instagram, has introduced a new policy governing the use of generative AI in political ads. The policy requires advertisers to disclose when an ad contains digitally created or altered content, such as photorealistic images or video, or realistic-sounding audio. The aim is to curb the spread of misleading information and deepfakes that could influence voters. Violations of the policy may result in ad rejection or suspension of the advertiser's ad account.
The disclosure requirement applies to ads that depict a real person saying or doing something they did not say or do, show a realistic-looking person or event that does not exist, present altered footage of a real event, or portray a realistic event that is not a true recording. Even when AI-generated content is easily recognizable as fake to some viewers, it can still mislead voters if it is not properly disclosed.
Political campaigners have already used AI-generated depictions to sway voters, creating realistic-looking and realistic-sounding replicas of rivals. Meta's new policy aims to counter this practice and bring transparency to political advertising. Advertisers will have to disclose the use of digitally created or altered content during the ad creation process, and the disclosure will appear alongside the ad in Meta's Ad Library.
However, whether these rules will effectively prevent the spread of misleading AI-generated content remains to be seen. As AI technology advances, the issue is likely to become more prominent, and platforms will need to develop further safeguards to address it.