A.I.’s Use in Elections Sets Off a Scramble for Guardrails

Gaps in campaign rules allow politicians to spread images and messaging generated by increasingly powerful artificial intelligence technology.

In Toronto, a candidate in this week’s mayoral election who vows to clear homeless encampments released a set of campaign promises illustrated by artificial intelligence, including fake dystopian images of people camped out on a downtown street and a fabricated image of tents set up in a park.

In New Zealand, a political party posted a realistic-looking rendering on Instagram of fake robbers rampaging through a jewelry shop.

In Chicago, the runner-up in the mayoral vote in April complained that a Twitter account masquerading as a news outlet had used A.I. to clone his voice in a way that suggested he condoned police brutality.

What began a few months ago as a slow drip of fund-raising emails and promotional images composed by A.I. for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world.

Increasingly, political consultants, election researchers and lawmakers say setting up new guardrails, such as legislation reining in synthetically generated ads, should be an urgent priority. Existing defenses, such as social media rules and services that claim to detect A.I. content, have failed to do much to slow the tide.

As the 2024 U.S. presidential race starts to heat up, some of the campaigns are already testing the technology. The Republican National Committee released a video with artificially generated images of doomsday scenarios after President Biden announced his re-election bid, while Gov. Ron DeSantis of Florida posted fake images of former President Donald J. Trump with Dr. Anthony Fauci, the former health official. The Democratic Party experimented with fund-raising messages drafted by artificial intelligence in the spring — and found that they were often more effective at encouraging engagement and donations than copy written entirely by humans.

Some politicians see artificial intelligence as a way to help reduce campaign costs, by using it to create instant responses to debate questions or attack ads, or to analyze data that might otherwise require expensive experts.

At the same time, the technology has the potential to spread disinformation to a wide audience. An unflattering fake video, an email blast full of false narratives churned out by computer or a fabricated image of urban decay can reinforce prejudices and widen the partisan divide by showing voters what they expect to see, experts say.

The technology is already far more powerful than manual manipulation — not perfect, but fast improving and easy to learn. In May, the chief executive of OpenAI, Sam Altman, whose company helped kick off an artificial intelligence boom last year with its popular ChatGPT chatbot, told a Senate subcommittee that he was nervous about election season.

He said the technology’s ability “to manipulate, to persuade, to provide sort of one-on-one interactive disinformation” was “a significant area of concern.”

Representative Yvette D. Clarke, a Democrat from New York, said in a statement last month that the 2024 election cycle “is poised to be the first election where A.I.-generated content is prevalent.” She and other congressional Democrats, including Senator Amy Klobuchar of Minnesota, have introduced legislation that would require political ads that used artificially generated material to carry a disclaimer. A similar bill in Washington State was recently signed into law.

The American Association of Political Consultants recently condemned the use of deepfake content in political campaigns as a violation of its ethics code.

“People are going to be tempted to push the envelope and see where they can take things,” said Larry Huynh, the group’s incoming president. “As with any tool, there can be bad uses and bad actions using them to lie to voters, to mislead voters, to create a belief in something that doesn’t exist.”

The technology’s recent intrusion into politics came as a surprise in Toronto, a city that supports a thriving ecosystem of artificial intelligence research and start-ups. The mayoral election takes place on Monday.

A conservative candidate in the race, Anthony Furey, a former news columnist, recently laid out his platform in a document that was dozens of pages long and filled with synthetically generated content to help make his tough-on-crime case.

A closer look clearly showed that many of the images were not real: One laboratory scene featured scientists who looked like alien blobs. A woman in another rendering wore a pin on her cardigan with illegible lettering; similar markings appeared in an image of caution tape at a construction site. Mr. Furey’s campaign also used a synthetic portrait of a seated woman with two arms crossed and a third arm touching her chin.

The other candidates mined that image for laughs in a debate this month: “We’re actually using real pictures,” said Josh Matlow, who showed a photo of his family and added that “no one in our pictures have three arms.”

Still, the sloppy renderings were used to amplify Mr. Furey’s argument. He gained enough momentum to become one of the most recognizable names in an election with more than 100 candidates. In the same debate, he acknowledged using the technology in his campaign, adding that “we’re going to have a couple of laughs here as we proceed with learning more about A.I.”

Political experts worry that artificial intelligence, when misused, could have a corrosive effect on the democratic process. Misinformation is a constant risk; one of Mr. Furey’s rivals said in a debate that while members of her staff used ChatGPT, they always fact-checked its output.

“If someone can create noise, build uncertainty or develop false narratives, that could be an effective way to sway voters and win the race,” Darrell M. West, a senior fellow for the Brookings Institution, wrote in a report last month. “Since the 2024 presidential election may come down to tens of thousands of voters in a few states, anything that can nudge people in one direction or another could end up being decisive.”

Increasingly sophisticated A.I. content is appearing more frequently on social networks that have been largely unwilling or unable to police it, said Ben Colman, the chief executive of Reality Defender, a company that offers services to detect A.I. The feeble oversight allows unlabeled synthetic content to do “irreversible damage” before it is addressed, he said.

“Explaining to millions of users that the content they already saw and shared was fake, well after the fact, is too little, too late,” Mr. Colman said.

For several days this month, a Twitch livestream ran a nonstop, not-safe-for-work debate between synthetic versions of Mr. Biden and Mr. Trump. Both were clearly identified as simulated “A.I. entities,” but if an organized political campaign created such content and it spread widely without any disclosure, it could easily degrade the value of real material, disinformation experts said.

Politicians could shrug off accountability and claim that authentic footage of compromising actions was not real, a phenomenon known as the liar’s dividend. Ordinary citizens could make their own fakes, while others could entrench themselves more deeply in polarized information bubbles, believing only what sources they chose to believe.

“If people can’t trust their eyes and ears, they may just say, ‘Who knows?’” Josh A. Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology, wrote in an email. “This could foster a move from healthy skepticism that encourages good habits (like lateral reading and searching for reliable sources) to an unhealthy skepticism that it is impossible to know what is true.”