Meta Oversight Board Evaluates Management of AI-Generated Celebrity Content
Meta Platforms’ Oversight Board is reviewing the company’s handling of two sexually explicit AI-generated images of celebrities that circulated on Facebook and Instagram. The board, which is funded by Meta but operates independently, aims to scrutinize the company’s policies and enforcement practices concerning pornographic deepfakes created with artificial intelligence.
The board described the images under review but declined to name the celebrity women depicted, citing the need to avoid further harm. The proliferation of AI-generated content, from fabricated images to videos, has made it increasingly difficult to distinguish authentic media from fake, particularly in cases of sexual exploitation, which overwhelmingly targets women and girls.
Earlier this year, an incident on Elon Musk’s social media platform X highlighted how difficult such content is to contain. X temporarily blocked searches for pop star Taylor Swift after struggling to stop the spread of AI-generated explicit images of her.
Calls for legislative intervention to address the production and spread of harmful deepfakes have grown. Some industry figures advocate laws that would criminalize the creation of such content and require tech companies to take proactive measures to curb its circulation.
According to the Oversight Board’s descriptions, one case involves an AI-generated nude image resembling a prominent public figure from India, posted to an Instagram account that features only AI-generated pictures of Indian women. In the other, a photo posted in a Facebook group dedicated to sharing AI creations shows a nude woman resembling an American public figure, with a man groping her breast.
Meta initially removed the image of the American woman for violating its bullying and harassment policy, which prohibits derogatory sexualized content. The image of the Indian woman, however, remained accessible until the Oversight Board selected the case for review, prompting Meta to reverse its decision and take it down.
In response to the ongoing review, Meta has pledged to abide by the decisions of the Oversight Board, signaling its commitment to addressing concerns surrounding the circulation of harmful deepfake content across its platforms.