State Legislators, Wary of Deceptive Election Ads, Tighten A.I. Rules
When experts in artificial intelligence recently showed a gathering of state legislators a deepfake image that had been generated by A.I. in early 2022, depicting former presidents Donald J. Trump and Barack Obama playing one-on-one basketball, the crowd chuckled at how rudimentary it was.
Then the panel brought out a fake video that was made just a year later, and the legislators gasped at how realistic it looked.
Alarmed by the increasing sophistication of false or highly misleading political ads generated by artificial intelligence, state lawmakers are scrambling to draft bills to regulate them.
With primary voters about to cast the first ballots in 2024, the issue has become even more pressing for legislators in dozens of states who are returning to work this month.
“States know that there’s going to have to be some regulatory guardrails,” said Tim Storey, president and chief executive of the National Conference of State Legislatures, which convened the A.I. panel at a conference in December. “It’s almost trying to figure out what’s happening in real time.”
The broader goal, legislators said, was to prevent what has already happened elsewhere, especially in some elections overseas. In Slovakia, deepfake voice recordings, falsely purporting to capture the leader of a pro-Western political party buying votes, may have contributed to that party’s narrow loss to a pro-Kremlin party. And last year, Gov. Ron DeSantis of Florida released fake A.I. images of Mr. Trump embracing Dr. Anthony Fauci.