It’s becoming increasingly clear that courts, not politicians, will be the first to determine the limits on how AI is developed and used in the US.
Last week, the Federal Trade Commission opened an investigation into whether OpenAI broke the law by scraping people’s online data to train its chatbot ChatGPT.
Meanwhile, artists and authors are suing companies such as OpenAI, Stability AI, and Meta, alleging that these companies violated copyright law by training AI models on the creators’ work without any recognition or payment.
If these cases prove successful, they could force OpenAI, Meta, Microsoft, and others to fundamentally change the way AI is built, trained, and deployed. Read the full story to learn how.
— Melissa Heikkilä
If you’re interested in the messy world of AI regulation, why not check out the latest issue of The Algorithm, Melissa’s weekly AI newsletter? Sign up to receive it in your inbox every Monday. And for now, read more from us on this topic:
+ A quick guide to the most important AI law you’ve never heard of. The European Union is planning new legislation aimed at curbing the worst harms associated with artificial intelligence. Read this story to find out what you need to know.
+ Let us walk you through all the most (and least) promising efforts to govern AI around the world.