An early guide to policymaking on generative AI
She wanted to know if I had any suggestions, and asked what I thought all the new advances meant for lawmakers. I’ve spent a few days thinking, reading, and chatting with the experts about this, and my answer morphed into this newsletter. So here goes!
Though GPT-4 is the standard-bearer, it’s just one of many high-profile generative AI releases in the past few months: Google, Nvidia, Adobe, and Baidu have all announced their own projects. In short, generative AI is the thing that everyone is talking about. And though the tech is not new, its policy implications are months if not years from being understood.
GPT-4, released by OpenAI last week, is a multimodal large language model that uses deep learning to predict the next words in a sequence of text. It generates remarkably fluent prose, and it can respond to images as well as text prompts. For paying customers, GPT-4 now powers ChatGPT, which has already been incorporated into commercial applications.
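For a sense of what “incorporated into commercial applications” looks like in practice, here is a minimal sketch of how a developer might call GPT-4 through OpenAI’s chat API. It assumes the pre-1.0 `openai` Python package and an API key in the environment; the prompt and variable names are illustrative, not drawn from any product mentioned in this newsletter.

```python
# Minimal sketch: an application asking GPT-4 for text via OpenAI's chat API.
# Assumes the pre-1.0 "openai" Python package and an API key in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # illustrative: key supplied by the developer

response = openai.ChatCompletion.create(
    model="gpt-4",  # the model discussed above; this endpoint takes text prompts
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the policy risks of generative AI in two sentences."},
    ],
)

# The reply is the text the model predicts should come next, given the prompt.
print(response["choices"][0]["message"]["content"])
```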
The newest iteration has made a major splash, and Bill Gates called it “revolutionary” in a letter this week. However, OpenAI has also been criticized for a lack of transparency about how the model was trained and evaluated for bias.
Despite all the excitement, generative AI comes with significant risks. The models are trained on the toxic repository that is the internet, which means they often produce racist and sexist output. They also regularly make things up and state them with convincing confidence. That could be a nightmare from a misinformation standpoint and could make scams more persuasive and prolific.
Generative AI tools also pose potential threats to people’s security and privacy, and they have little regard for copyright law. Companies whose generative AI was trained on other people’s work without permission are already being sued.
Alex Engler, a fellow in governance studies at the Brookings Institution, has considered how policymakers should be thinking about this and sees two main types of risks: harms from malicious use and harms from commercial use. Malicious uses of the technology, like disinformation, automated hate speech, and scamming, “have a lot in common with content moderation,” Engler said in an email to me, “and the best way to tackle these risks is likely platform governance.” (If you want to learn more about this, I’d recommend listening to this week’s Sunday Show from Tech Policy Press, where Justin Hendrix, an editor and a lecturer on tech, media, and democracy, talks with a panel of experts about whether generative AI systems should be regulated similarly to search and recommendation algorithms. Hint: Section 230.)
Policy discussions about generative AI have so far focused on that second category: risks from commercial uses of the technology, such as coding or advertising. To date, the US government has taken small but notable actions, primarily through the Federal Trade Commission (FTC). Last month, the FTC issued a warning urging companies not to make claims about AI capabilities they can’t substantiate, such as overstating what their products can do. This week, on its business blog, it used even stronger language about the risks companies should consider when using generative AI.