Mark Zuckerberg just escalated the war for AI talent against Google and OpenAI with a clever Reels video and an AI team reorg

Mark Zuckerberg is turning up the heat in the already competitive AI market by merging his company's two advanced AI divisions into one group, a move meant to accelerate Meta's development of general-purpose AI chatbots and to win the competition for AI engineering talent.

Appearing in an Instagram Reel on Thursday, the Meta CEO announced the change to the company's AI teams and talked up its various efforts in the field, including big investments in specialized computer chips to help it build and deliver new generative AI models and products. Zuckerberg also said the company had begun training its next-generation Llama 3 large language model.

"It's become clear that the next generation of services requires building full general intelligence," Zuckerberg said. "Building the best AI assistants, AIs for creators, AIs for businesses and more, that needs advances in every area of AI, from reasoning to planning to coding to memory and other cognitive abilities."

The announcements are a clear signal that Zuckerberg wants to position Meta at the forefront of developing consumer-facing AI products. This matters to Meta's investors but, perhaps more importantly, to talented engineers and machine learning researchers, many of whom are increasingly choosing to work for OpenAI or other well-funded AI startups.

To make more rapid progress toward this goal, Meta is combining Fundamental AI Research (FAIR), its advanced AI research division founded over a decade ago, with the GenAI group it established just last year to build generative AI products, such as the celebrity-persona chatbots it launched in September.

The move echoes Alphabet's decision last year to merge its two advanced AI research labs, Google Brain and DeepMind. Although neither Google lab was directly responsible for producing products, the rationale behind the merger was to enable Google to both create more capable AI models and move those models into commercial production far faster than the tech giant had managed in the past. The company was trying to catch up to Microsoft and OpenAI, which had beaten Google to market with a highly capable AI chatbot in the form of ChatGPT and had also made the world's most powerful large language model, GPT-4, available to customers.

Many technologists, including Microsoft founder Bill Gates, predict that the next big platform shift in computing will be to personal AI assistants, which will become people's main interface to the digital world. These sophisticated AI chatbots will act as agents, performing tasks ranging from planning vacations to booking restaurant reservations, as well as composing letters and producing other types of content on demand. Zuckerberg clearly does not want to see Meta get left behind in this platform shift.

It is also clear that much of Zuckerberg's announcement was aimed at the AI researchers and machine learning engineers that Meta wants to recruit and retain. The war for this specialized and high-priced talent is fierce, with Google reportedly doling out seven-figure stock grants to some top AI engineers and researchers to keep them from being hired away by OpenAI. And Zuckerberg told The Verge in an interview published today that his shift to talking about general-purpose intelligence is designed to woo this rarefied talent. "I think that's important to convey because a lot of the best researchers want to work on the more ambitious problems," he told the publication.

Meta has done important AI research over the years, in areas that range from unsupervised learning, where an AI system can learn patterns without labeled data, to AI software that can beat top humans at the complex strategy game Diplomacy. It has made big breakthroughs in machine translation and computer vision algorithms. Meanwhile, Meta's GenAI team developed Llama 2, a large language model that is among the most powerful open-source AI models available. Although not quite as capable as OpenAI's GPT-4 or Google's Gemini models, Llama 2 has become popular with developers looking to build sophisticated chatbots in a less expensive and more customizable way than building on top of OpenAI's or Google's models allows.

But the race for artificial general intelligence, or AGI, a single AI system that can perform most cognitive tasks that a person can, was historically never the company's main concern. Deep learning pioneer Yann LeCun, who founded FAIR and remains Meta's chief AI scientist, thinks AGI may be possible, eventually, but that researchers are a long way from achieving it. As for Meta's product teams, they were mostly interested in AI that was special purpose, not general purpose. It was designed to help the company with specific problems: how to automatically tag your friends in photos, how to recommend posts in your feed, and, critically, how to weed out fake accounts and posts promoting extremism, terrorism, hate speech, self-harm, and misinformation.

At times, some Meta executives were downright dismissive of companies such as OpenAI and DeepMind that had the pursuit of AGI as their goal. Jerome Pesenti, who served as Meta's vice president of AI until the summer of 2022, famously dismissed AGI as "technobabble" that distracted people from more concrete goals, and problems, in machine learning. With today's announcement, Zuckerberg is now saying to machine learning researchers: We don't think it's technobabble anymore, please come and work for us.

That is also clearly why Zuckerberg mentioned the specific number of graphics processing units, or GPUs, the company is purchasing. These specialized chips, which are needed to create the most advanced AI models, are in short supply and serve as a big draw for top researchers and engineers. Meta will have 350,000 of Nvidia's most advanced H100 GPUs online in its datacenters, and 600,000 H100 equivalents in computing power when other GPUs are counted, Zuckerberg said. Meta may use some of Nvidia's older-model GPUs for some applications, and it also uses its own custom-built AI chips.

The Verge cited chip industry analysts as estimating that Meta took delivery of 150,000 Nvidia H100 GPUs last year, a figure that tied Microsoft's H100 deliveries and is three times greater than that of any other Nvidia customer. Google mostly uses custom-built tensor processing units, or TPUs, for its AI models. But both Google and Microsoft must share their H100 chips with their cloud computing customers. Meta, on the other hand, does not sell cloud computing infrastructure to other companies.

In his Reel, Zuckerberg also tried to tie the company's AI efforts to the metaverse, the virtual reality future that Zuckerberg had spent billions trying to reposition the company around but which has been eclipsed by the generative AI craze set off by ChatGPT's release in 2022. The Meta CEO said he believes augmented reality glasses will become a key part of how people interact with future AI assistants, since the glasses will enable the AI to see and hear the same things as the user and offer instant advice or additional helpful information and context. He said that the $300 camera-equipped smart glasses Meta launched in partnership with Ray-Ban last year have been selling well and would get new AI features in the future.

© 2023 Fortune Media IP Limited. All Rights Reserved.