An oversight board is criticizing Facebook owner Meta’s policies regarding manipulated media as “incoherent” and insufficient to address the flood of online disinformation that already has begun to target elections across the globe this year.
The quasi-independent board on Monday said its review of an altered video of President Joe Biden that spread on Facebook exposed gaps in the policy. The board said Meta should expand the policy to focus not only on videos generated with artificial intelligence, but on media regardless of how it was created. That includes fake audio recordings, which already have convincingly impersonated political candidates in the U.S. and elsewhere.
The company also should clarify the harms it is trying to prevent and should label images, videos and audio clips as manipulated instead of removing the posts altogether, the Meta Oversight Board said.
The board’s feedback reflects the intense scrutiny many tech companies face over their handling of election falsehoods in a year when voters in more than 50 countries will go to the polls. As both generative artificial intelligence deepfakes and lower-quality “cheap fakes” on social media threaten to mislead voters, the platforms are trying to catch up and respond to false posts while protecting users’ rights to free speech.
“As it stands, the policy makes little sense,” Oversight Board co-chair Michael McConnell said of Meta’s policy in a statement on Monday. He said the company should close gaps in the policy while ensuring political speech is “unwaveringly protected.”
Meta said it is reviewing the Oversight Board’s guidance and will respond publicly to the recommendations within 60 days.
Spokesperson Corey Chambliss said while audio deepfakes aren’t mentioned in the company’s manipulated media policy, they are eligible to be fact-checked and will be labeled or down-ranked if fact-checkers rate them as false or altered. The company also takes action against any type of content if it violates Facebook’s Community Standards, he said.
Facebook, which turned 20 this week, remains the most popular social media site for Americans to get their news, according to the Pew Research Center. But other social media sites, among them Meta’s Instagram, WhatsApp and Threads, as well as X, YouTube and TikTok, also are potential hubs where deceptive media can spread and fool voters.
Meta created its oversight board in 2020 to serve as a referee for content on its platforms. Its current recommendations come after it reviewed an altered clip of President Biden and his adult granddaughter that was misleading but didn’t violate the company’s specific policies.
The original footage showed Biden placing an “I Voted” sticker high on his granddaughter’s chest, at her instruction, then kissing her on the cheek. The version that appeared on Facebook was altered to remove the important context, making it seem as if he touched her inappropriately.
The board’s ruling on Monday upheld Meta’s 2023 decision to leave the seven-second clip up on Facebook, since it didn’t violate the company’s existing manipulated media policy. Meta’s current policy says it will remove videos created using artificial intelligence tools that misrepresent someone’s speech.
“Since the video in this post was not altered using AI and it shows President Biden doing something he did not do (not something he didn’t say), it does not violate the existing policy,” the ruling read.
The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy.