How judges, not politicians, could dictate America’s AI rules

It’s becoming increasingly clear that courts, not politicians, will be the first to determine the limits on how AI is developed and used in the US. Last week, the Federal Trade Commission opened an investigation into whether OpenAI violated consumer protection laws by scraping people’s online data to train its popular AI chatbot ChatGPT. Meanwhile,…
The US approach differs from that of other Western countries. While the EU is trying to prevent the worst AI harms proactively, the American approach is more reactive: the US waits for harms to emerge before regulating, says Amir Ghavi, a partner at the law firm Fried Frank. Ghavi is representing Stability AI, the company behind the open-source image-generating AI Stable Diffusion, in three copyright lawsuits. 

“That’s a pro-capitalist stance,” Ghavi says. “It fosters innovation. It gives creators and inventors the freedom to be a bit more bold in imagining new solutions.” 

The class action lawsuits over copyright and privacy could shed more light on how “black box” AI algorithms work and create new ways for artists and authors to be compensated for having their work used in AI models, say Joseph Saveri, the founder of an antitrust and class action law firm, and Matthew Butterick, a lawyer. 

They are leading the suits against GitHub and Microsoft, OpenAI, Stability AI, and Meta. Saveri and Butterick represent comedian Sarah Silverman, part of a group of authors who claim the tech companies trained their language models on the authors' copyrighted books. Generative AI models are trained on vast data sets of images and text scraped from the internet, which inevitably include copyrighted material. Authors, artists, and programmers say tech companies that have scraped their intellectual property without consent or attribution should compensate them. 

“There’s a void where there’s no rule of law yet, and we’re bringing the law where it needs to go,” says Butterick. While the AI technologies at issue in the suits may be new, the legal questions around them are not, and the team is relying on “good old fashioned” copyright law, he adds. 

Butterick and Saveri point to Napster, the peer-to-peer music-sharing service, as an example. Record companies sued Napster for copyright infringement, leading to a landmark case on the fair use of music. 

The Napster settlement cleared the way for companies like Apple and Spotify to strike new license-based deals, says Butterick. The pair hope their lawsuits will likewise clear the way for a licensing framework in which artists, writers, and other copyright holders are paid royalties for having their content used in an AI model, similar to the system the music industry uses for sampled songs. Companies would also have to ask for explicit permission before using copyrighted content in training sets. 

Tech companies have treated publicly available copyrighted data on the internet as subject to "fair use" under US copyright law, which would allow them to use it without asking permission first. Copyright holders disagree. The class actions will likely determine who is right, says Ghavi.