The Race to Dominate A.I.

We look at the recent history of A.I. and what to expect for the industry.

Just before Thanksgiving, a Silicon Valley giant appeared to implode before our eyes. A boardroom coup at OpenAI, the world’s hottest artificial intelligence company, pushed out its charismatic leader, Sam Altman.

At the time, the ouster — and Altman’s roller-coaster ride to reclaim his job as C.E.O. — seemed sudden. In reality, it was more than a decade in the making. A.I. had been simmering in the tech world, as powerful figures poured money into research and fought with one another over heady questions of humanity, philosophy and power.

This week, with our colleagues Mike Isaac and Nico Grant, we published a series recounting the recent history of A.I. and looking ahead to its future. In today’s newsletter, we explain what we learned.

Powerful tech leaders — including Altman, Elon Musk and the Google co-founder Larry Page — were developing A.I. systems for years before the technology went mainstream. The men bickered over whether it would end up harming the world; some, including Musk, feared that A.I. would turn dystopian science fiction into reality, with computers becoming smart enough to escape human control.

At the heart of these disagreements was a brain-stretching paradox: The men who said they were most worried about A.I. were among the most determined to create it. They justified that ambition by saying that they alone had the morals and skill to prevent A.I. tools from becoming rogue machines that could endanger humanity.

Eventually, these disputes led them to split off and form their own A.I. labs. Each schism created more competition, which pushed the companies to advance A.I. even faster.

The newly formed A.I. labs improved their technology over years. But nothing captured the public’s attention like ChatGPT, OpenAI’s chatbot, which debuted last year. It was an enormous hit, attracting millions of users with its ability to write poetry, summarize research and mimic everyday conversation.

Our reporting found that Altman and OpenAI did not appreciate what they were about to unleash when they released ChatGPT. Internally, the company called the chatbot a “low-key research preview.” Researchers and engineers at OpenAI were instead focused on developing more advanced technology.

ChatGPT’s popularity supercharged the competition at big tech companies like Google and Meta, Facebook’s parent company, which raced to get their own products into the world.

Though the companies were concerned that their A.I. chatbots were inaccurate or biased, they put those worries to the side — at least for the moment. As one Microsoft executive wrote in an internal email, “speed is even more important than ever.” It would be, he added, an “absolutely fatal error in this moment to worry about things that can be fixed later.”

A.I. has since sneaked into daily life, through chatbots and image generators, in the word processing programs you might use at work, and in the seemingly human customer service agents you chat with online to return a purchase. People have already used it to create sophisticated phishing emails, cheat on schoolwork and spread disinformation.

Though OpenAI was founded as a nonprofit, Altman transformed it into a commercial operation that investors now value at more than $80 billion. As Altman raced to advance the technology, some directors on the nonprofit’s board worried he was not being honest with them and felt they could no longer trust him to prioritize safety.

That one person could be so central to the future of A.I. — and perhaps humanity — is a symptom of the lack of meaningful oversight of the industry.

A.I. systems are advancing so rapidly and unpredictably that even on the rare occasions when lawmakers and regulators have tried to tackle them, their proposals have quickly become obsolete, as our colleagues Adam Satariano and Cecilia Kang found. For example, European regulators proposed “future proof” rules in mid-2021 that limited how A.I. could be used in sensitive cases, such as in hiring decisions and law enforcement. But the regulations did not contemplate the advances behind ChatGPT, which was released a year and a half later.

The absence of rules has left a vacuum. The leading A.I. companies have proposed some voluntary guidelines — like using watermarks to help consumers spot A.I.-generated material — but it’s not clear how much they will matter.

European regulators this week are in marathon sessions to write the world’s strictest A.I. regulations, and they will be worth watching. In the meantime, companies continue to push ahead. On Wednesday, Google demonstrated a powerful new A.I. system called Gemini Ultra, even though it hasn’t yet completed its customary safety testing. The company promised the system would be out in the world early next year.

Related: Artists are using A.I. to produce or augment their work. Read about one.

  • Israel accused Hamas of firing rockets from designated humanitarian zones where thousands of Palestinians have sought refuge.

  • Criticism of Harvard, M.I.T. and Penn mounted after congressional testimony from their presidents about antisemitism. (Representative Elise Stefanik, a New York Republican, went viral for her questioning.)

  • A Texas judge ruled that a woman whose fetus has a fatal condition could get an abortion, overriding the state’s strict ban. The Texas attorney general said the woman and hospital staff could still face prosecution.

  • In a lawsuit, survivors of a sex cult accused Sarah Lawrence College of negligence for allowing a predator into their dorm.

  • Catholic nuns with shares in Smith & Wesson are suing the gun company for selling an AR-15-style rifle.

  • Meteorologists expect an odd weekend of weather in the eastern U.S., with unseasonal warmth and heavy rain.

Canada’s new tech law makes the country a test case for a world where Google shares news without deciding which outlets succeed and which fail, Julia Angwin writes.

Universities must resolve a double standard: either punish antisemitism or accept all offensive speech, Bret Stephens writes.

The House hearing on campus antisemitism confirmed people’s worst fears. But watching the whole hearing reveals the trap the university presidents walked into, Michelle Goldberg writes.

Scottish stink: This may be the world’s smelliest cheese.

Modern Love: Divorce taught a lesson — never rely on a man for money.

Lives Lived: Juanita Castro supported her brother Fidel when he led the uprising that toppled Cuba’s dictator in 1959. But she broke with him over his crackdown on dissent and went on to collaborate with the C.I.A. before fleeing Cuba in 1964. She died at 90.

N.F.L.: Bailey Zappe, an unlikely hero, led the Patriots to a 21-18 win over the Steelers.

Basketball: The Pacers and the Lakers will play for the first N.B.A. Cup on Saturday, after Los Angeles walloped New Orleans and Indiana edged Milwaukee in the semifinals.

Golf move: Jon Rahm is joining LIV.

Haute cuisine: Hundreds of Parisians stood in line at dawn Wednesday, awaiting their first bite of a delicacy: a Krispy Kreme doughnut. The pastry chain opened its first restaurant in France, joining a market where American chains like McDonald’s, Starbucks and Popeyes are thriving. “This is all about American pop culture,” said Alexandre Maizoué, the director general of Krispy Kreme France. “They’ve seen all the American series. They like U.S. culture and the American art de vivre.”