The Case for an A.I. Pause


This essay, by Gabriel Huang, 17, from Tower Hill School in Wilmington, Del., is one of the Top 11 winners of The Learning Network’s 10th Annual Student Editorial Contest, for which we received 12,592 entries.

We are publishing the work of all the winners and runners-up over the next week, and you can find them here as they post.


Artificial intelligence can ace the SAT and code Flappy Bird. It can generate award-winning artwork and talk to you like a person. A.I. is here, and companies are developing their chatbots, software and image generators at hypersonic speeds.

They should hit pause.

Even with a limited sample size, we’ve already seen the massive risks that A.I. poses to society.

Deepfakes, synthetic images and videos that use A.I. to mimic real people, have already been exploited for propaganda. Chatbots and A.I. software have a well-documented history of exhibiting racist and sexist behavior. And widespread job displacement at the hands of A.I. puts us all at risk of economic and political chaos.

In response to these risks, many in the A.I. community have called for increased regulation. But most policymakers simply do not understand artificial intelligence well enough to enact meaningful regulations, and the speed at which A.I. is progressing makes it nearly impossible for them to catch up.

“You’d be surprised how much time I spend explaining to my colleagues that the chief dangers of A.I. will not come from evil robots with red lasers coming out of their eyes,” said Representative Jay Obernolte, one of the few members of the United States Congress with experience in artificial intelligence.

The solution? An A.I. pause. Already, the idea has been endorsed by over 1,100 people involved in the artificial intelligence community — ranging from technology leaders like Elon Musk and Steve Wozniak to leading A.I. researchers like Yoshua Bengio and technology ethicists like Tristan Harris. Under an A.I. pause, no development or research, academic or corporate, could be done to advance artificial intelligence beyond a certain “power level.”

This would give governments and policymakers the time to properly analyze and regulate the artificial intelligence field. They would be able to establish policies that protect groups like truck drivers, whose jobs could be eliminated by A.I.-driven vehicles. They could create baseline standards that crime recognition software must pass before it could be used by local police.

On top of that, an A.I. moratorium could also benefit the companies and research groups developing A.I. models. A pause in the “A.I. arms race” could allow groups like OpenAI to allocate more resources to fixing the flaws in their current A.I. software rather than rushing to develop more advanced models. A pause could ultimately lead to a superior final product.

A.I. could be one of the most beneficial inventions to the human race, but it could also be one of the most societally cataclysmic. By instituting an A.I. pause, we’ll have the opportunity to improve upon and understand A.I. technology, while also ensuring that its development is in harmony with our society.

Works Cited

McNamee, Roger. “There Is Only One Question That Matters with A.I.” Time, 5 Apr. 2023.

Metz, Cade, and Gregory Schmidt. “Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society.’” The New York Times, 29 Mar. 2023.

Piper, Kelsey. “A.I. Experts Are Increasingly Afraid of What They’re Creating.” Vox, 28 Nov. 2022.

Samuel, Sigal. “The Case for Slowing Down AI.” Vox, 20 Mar. 2023.