How’s AI self-regulation going?
AI nerds may remember that exactly a year ago, on July 21, 2023, Biden was posing with seven top tech executives at the White House. He’d just negotiated a deal in which they agreed to eight voluntary commitments, the most prescriptive rules targeted at the AI sector at that time. A lot can change in a year!
The voluntary commitments were hailed as much-needed guidance for the AI sector, which was building powerful technology with few guardrails. Since then, eight more companies have signed the commitments, and the White House has issued an executive order that expands upon them—for example, with a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security.
US politics is extremely polarized, and the country is unlikely to pass AI regulation anytime soon. So these commitments, along with some existing laws such as antitrust and consumer protection rules, are the best the US has in terms of protecting people from AI harms. To mark the one-year anniversary of the voluntary commitments, I decided to look at what’s happened since. I asked the original seven companies that signed the voluntary commitments to share as much as they could on what they have done to comply with them, cross-checked their responses with a handful of external experts, and tried my best to provide a sense of how much progress has been made. You can read my story here.
Silicon Valley hates being regulated and argues that regulation hinders innovation. Right now, the US is relying on the tech sector’s goodwill to protect consumers from harm, but these companies can change their policies anytime it suits them and face no real consequences. And that’s the problem with nonbinding commitments: they are easy to sign, and just as easy to forget.
That’s not to say they don’t have any value. They can be useful in creating norms around AI development and placing public pressure on companies to do better. In just one year, tech companies have implemented some positive changes, such as AI red-teaming, watermarking, and investment in research on how to make AI systems safe. However, these sorts of commitments are opt-in only, and that means companies can always just opt back out again. Which brings me to the next big question for this field: Where will Biden’s successor take US AI policy?
The debate around AI regulation is unlikely to go away if Donald Trump wins the presidential election in November, says Brandie Nonnecke, the director of the CITRIS Policy Lab at UC Berkeley.
“Sometimes the parties have different concerns about the use of AI. One might be more concerned about workforce effects, and another might be more concerned about bias and discrimination,” says Nonnecke. “It’s clear that it is a bipartisan issue that there need to be some guardrails and oversight of AI development in the United States,” she adds.
Trump is no stranger to AI. While in office, he signed an executive order calling for more investment in AI research and asking the federal government to use more AI, coordinated by a new National AI Initiative Office. He also issued early guidance on responsible AI. If he returns to office, he is reportedly planning to scrap Biden’s executive order and put in place his own AI executive order that reduces AI regulation and sets up a “Manhattan Project” to boost military AI. Meanwhile, Biden keeps calling for Congress to pass binding AI regulations. It’s no surprise, then, that Silicon Valley’s billionaires have backed Trump.