Google announced on Monday that it will require advertisers to disclose election ads that use digitally altered content to depict real or realistic-looking people or events, its latest effort to combat election misinformation.
Under the updated disclosure requirements of the political content policy, advertisers must now select a checkbox in the "altered or synthetic content" section of their campaign settings.
The rapid development of generative AI, which can produce text, images, and video in seconds in response to prompts, has sparked concerns about its potential abuse.
The rise of deepfakes, digitally altered content that misrepresents someone, has further blurred the line between the genuine and the fake.
Google said it will generate in-ad disclosures for feeds and Shorts on mobile phones, as well as for in-stream ads on desktop and television. For other formats, advertisers will need to provide a "prominent disclosure" that is clearly visible to users.
Google said the "acceptable disclosure language" will vary depending on the context of the ad.
In April, during India's general election, fake videos of two Bollywood actors criticizing Prime Minister Narendra Modi went viral online. Both AI-generated videos urged viewers to vote for the opposition Congress party.
Separately, OpenAI, led by Sam Altman, said in May that it had disrupted five covert influence operations that sought to use its AI models for "deceptive activity" across the internet, in an "attempt to manipulate public opinion or influence political outcomes."