Meta Calls for Industry Effort to Label A.I.-Generated Content
Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called a nascent effort to detect artificially generated content “the most urgent task” facing the tech industry today.
On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material that would signal that the content was generated using artificial intelligence.
The standards could allow social media companies to quickly identify content generated with A.I. that has been posted to their platforms and to add a label to that material. If adopted widely, the standards could help identify A.I.-generated content from companies like Google, OpenAI, Microsoft, Adobe, Midjourney and others that offer tools that allow people to quickly and easily create artificial posts.
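The marker-based approach Meta described relies on provenance metadata embedded in media files by the tools that create them, such as the C2PA and IPTC standards. As a rough illustration only, the sketch below scans a file's raw bytes for the IPTC "digital source type" value used to flag generative-A.I. output; the file contents and the simple byte search are assumptions for illustration, and a real detector would parse the structured XMP or C2PA metadata rather than search raw bytes.

```python
# Simplified illustration, not Meta's actual detection pipeline.
# IPTC's digital source type vocabulary includes a value for media
# produced by a generative model; tools that follow the standard
# embed it in the file's metadata.
AI_MARKER = b"trainedAlgorithmicMedia"

def contains_ai_marker(data: bytes) -> bool:
    """Return True if the raw bytes appear to carry the A.I.-provenance marker."""
    return AI_MARKER in data

# A stand-in for an image file with embedded XMP metadata (hypothetical bytes).
fake_image = b"\xff\xd8<xmp>DigitalSourceType=trainedAlgorithmicMedia</xmp>"
print(contains_ai_marker(fake_image))  # True
print(contains_ai_marker(b"ordinary photo bytes"))  # False
```

A platform applying this check at upload time could then attach an "A.I.-generated" label to any post whose media carries the marker, which is the workflow the proposed standards are meant to enable.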
“While this is not a perfect answer, we did not want to let perfect be the enemy of the good,” Mr. Clegg said in an interview.
He added that he hoped this effort would be a rallying cry for companies across the industry to adopt common standards for detecting and signaling artificial content, making it simpler for all of them to recognize such material.
As the United States enters a presidential election year, industry watchers believe that A.I. tools will be widely used to post fake content to misinform voters. Over the past year, people have used A.I. to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general’s office in New Hampshire is also investigating a series of robocalls that appeared to employ an A.I.-generated voice of Mr. Biden urging people not to vote in a recent primary.