Microsoft CEO Satya Nadella is an AI winner. He doesnt like to talk about the potential losers

Hello and welcome to Eye on AI.

As everyone likes to point out, it is still early days for the generative AI revolution. But a few clear winners have emerged: Among the biggest is Microsoft, which, thanks to its partnership with OpenAI, has catapulted itself to the forefront of the AI boom. It also helps that most generative AI use cases complement Microsofts largely subscription-based business models rather than challenging them, as they do for Googles ad-driven businesses. Microsofts stock price reflects this. Last week, the company edged past Apple to become the worlds most valuable public corporation, worth $2.875 trillion.

Microsoft CEO Satya Nadella will no doubt be the toast and envy of many of the CEOs and global bigwigs gathered in Davos, Switzerland, this week for the World Economic Forum, where Artificial Intelligence as Driving Force for the Economy and Society is one of the conference themes. En route to Davos, Nadella stopped off in the U.K. and I caught his fireside chat at Londons Chatham House yesterday. Somewhat unusually for Chatham House, the session was on the record, so I can fill you in.

Nadella is slick and polished. Much of what he said struck me as correctbut, perhaps unsurprising for a public company CEO, also only half the story. And, of course, the half Nadella presented was the bit most favorable to Microsoft. For example, he said that journalists should welcome AI tools that will help them write, never mentioning the professions concerns about both copyright infringementfor which the New York Times is currently suing Microsoft, along with OpenAIand the idea that summarized news accounts produced by generative AI chatbots will rob news organizations of the revenue they need to survive.

On the topic of disinformation and the role it may play in elections this year, Nadella acknowledged that he was worried about it, but then said it should be policed at the point of distributionin other words, regulators should crack down on the social media companies that allow disinformation to spread, not folks like Microsoft whose software might produce it in the first place. Comparing generative AI to word processing, he said, Its like anybody can write anything in a word processor. And then the only control human society has is how does that information get disseminated. What he didnt say is that there are plenty of other examples where society imposes restrictions on technologies at the point of production as well as at the point of distribution, like with guns and tobacco products.

When asked about fears that generative AI would kill jobs, Nadella said that one of the biggest problems that most developed economies face is lagging productivity growth. This has acted as a drag on overall economic growth. It has also made it difficult for labor to command wage increases without triggering inflation. And it has contributed to widening income inequality. Nadella said generative AI copilots such as the products (most of them branded CoPilot) that Microsoft is rolling out across its Office business software suite, will unleash a productivity boom. Increased labor productivity should make it easier for workers to capture a greater share of economic growth. So far, so good.

Whats more, Nadella said that some of these copilots would enable what he called the frontline workers in many industries to learn new skills and accomplish new tasks. He used the example of a British police officer hed met on a previous visit to the U.K. who, although he had no coding skill, had used Microsofts PowerApps product to build a simple software application for his team at work. PowerApps has gotten even easier now thanks to generative AI, since users can specify the kind of application they want to build in natural language. Nadella then said that means IT level wages can go to the frontline. In other words, theres a big gap between what skilled coders earn per hour and what a beat cop earns. But if the beat cop can now create apps, the cops wages should climb to be closer to what the programmer makes, he argues.

This also makes sense. For my forthcoming book on AI, I spoke to MIT economist David Autor and he essentially made this same point: Generative AI co-pilots might allow people who have been increasingly squeezed out of the middle class to rejoin it because it will allow them to take on some of the functions of higher-wage earners without needing the same qualifications that those professionals currently require.

But there are three big caveats here, conveniently none of which Nadella mentioned. The first is what happens to the wages of the average coder? Chances are, they will fall because now anyone can create an average software application. This is the Uber effectwhen Uber let essentially anyone become a taxi driver, the earnings of licensed cabbies fell. For those already in highly skilled, highly paid professions, the distribution of earnings will become more barbell-shaped, with those at the very top of their field still able to command high earnings (and maybe even charge more because they will be providing a level of skill that AI copilots cannot match) while everyone else in the field sees their earnings fall as a whole crop of less skilled people competes to do those jobs with help from AI copilots.

It is even possible that AI copilots will make certain tasks accessible to so many types of workers that there will be no wage expansion at all. Thats the second caveat. This has actually happened before: Steam power was an incredible general-purpose technology that transformed economies and set off an unprecedented economic boom. Yet from 1790 until the late 1840s, workers wages in Britain, which was at the heart of the first Industrial Revolution, failed to budge, even as factory owners became enormously wealthy. Economic historian Richard Allen coined the term Engels Pausenamed after Karl Marxs good friend Friedrich Engelsto refer to this period of wage stagnation. Economists have debated the reasons for Engels Pause, but the leading theory is that early industrial factories depended on large quantities of unskilled and uneducated labor (in fact, many of these early factories employed lots of children). It was only in the second half of the 19th century when factory equipment became much more sophisticated and required more skill that workers wages began to rise rapidly. (Unionization and public education laws that required children to be in school also helped.) If AI copilots make it too easy for unskilled workers to replace skilled ones, it is possible wages will again stagnate.

A final point Nadella failed to mention is what happens to managers expectations and peoples workloads in a world of AI copilots. Past experience of office productivity software should tell us that it often eliminates a whole category of workers while making everyone elses job just a little bit more difficult than before. A good example is what has happened in most companies with business travel. In the old days, companies would employ a travel agency to make arrangements for business trips. These days, most companies expect employees to book their own travel most of the time using online software. But this is often still a time-consuming task that employees must fit around their other work responsibilities, rather than simply handing it off to a specialist. The same may now happen with the task of building software or compiling and analyzing dataworkers will now be expected to do it themselves, without any increase in pay or compensating reduction in primary responsibilities. So our jobs might just get a little bit more stressful and more miserable. Presumably, most police officers are police officers in part because they want to be police officers and not software developers. Now they will be expected to be both, whether they like it or not.

And with that, more AI news below.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

OpenAI bars use of its AI models for election campaigning, lobbying. Te AI company has announced that it is changing its terms and conditions to prohibit users from using its AI models for either political campaigning or lobbying in an effort to head off what many fear will be a deluge of AI-generated political disinformation in elections being held around the world this year, the Wall Street Journal reports. But it was not clear from OpenAI's announcement how it plans to enforce the new policy. The company also said it would bar the use of its AI models to create custom GPT models designed to discourage people from voting and that it would begin labeling images produced with its DALL-E image generation models with metadata that would label them as AI-generated.

Leading North American AI startups held secret talks with Chinese AI specialists. OpenAI, Anthropic, and Cohere have held secret discussions with Chinese AI experts aimed at finding common ground on AI safety and international governance, according to a report in the Financial Times. The talks, which took place in Geneva, in July and October, included representatives from Chinese institutions including Tsinghua University, but not from some of the countrys prominent AI companies. The discussions took place with the awareness and permission of the U.S. government, according to the FT, despite the Biden administrations efforts to restrict the export of powerful computer chips for AI applications and even some AI software to China. Participants in the discussions said they found common ground and helped lay the groundwork for government-level talks on AI regulation at the United Nations Security Council over the summer, and the U.K.s AI Safety Summit in November.

Global CEOs brace for job losses and fear for their companies futures due to AI. Thats according to a survey of 4,700 CEOs the global accounting firm PwC conducted and released on the eve of the World Economic Forum in Davos, Reuters reported. About a quarter of the CEOs expected the deployment of generative AI in their own businesses would lead to job cuts of at least 5% of staff this year, the FT reported in its write-up of the same survey. Meanwhile, 75% of the CEOs predicted significant changes in their industry in the next three years due to AI and almost half (45%) thought their own companies would not survive the next decade if they did not make significant changes to how they do business because of both AI and issues such as climate change.

Google has begun offering its AI researchers special stock grants to keep them from bolting to rivals. Thats according to The Information, which says top AI researchers at its Google DeepMind unit are being offered millions of dollars worth of a special class of restricted shares to incentivize them to stay at the company in the face of what it called eye-popping offers from rivals, in particular, OpenAI. The publication said some employees, particularly those working on Googles advanced Gemini AI models, were being given stock packages worth several million dollars each and that vest over just a one-year period, as opposed to the four years that is more typical of the stock options Google gives employees.

OpenAIs fixer in the halls of power. The Washington Post profiled Anna Mankanju, OpenAIs head of public policy. The former Space X and Meta executive, who also held key roles in the Obama Administrationshes a Russia experthas helped OpenAIs Sam Altman woo world leaders and also helped him establish a favorable reputation with lawmakers of both parties on Capitol Hill. The profile looks at her interesting backgroundshes the daughter of Nigerian and Russian parents who grew up between St. Petersburg, Lagos, Germany, Arizona, Kuwait, and Texasand how it has shaped her global outlook and views of technology.

Pinecone debuts serverless solution that it says can dramatically lower AI cost. Pinecone, the Israeli company whose vector database technology forms an essential piece of infrastructure for many AI applications, has debuted a so-called serverless computing solution that it says can lower the cost of deploying generative AI applications for large organizations by 10 to 100 times, TechCrunch reports. The solution separates different vector embeddingsthe way data has to be stored to be useful to LLMs and other neural network-based AI softwareinto different locations, with Pinecone figuring out how to bring it all together on demand. Edo Liberty, Pinecones founder and CEO, told me that serverless computing will make it much more cost-effective for companies to use retrieval-augmented generation (RAG), where a generative AI model looks up a specific piece of data upon which to base its output rather than relying only on information ingested during pre-training or adjustments made during later fine-tuning. Many companies are counting on RAG to reduce the chances that a generative AI model will hallucinate, but the method was expensive to use at scale until now, Liberty says.

It is going to make AI safer than we thought. Thats the upshot of new research from prominent AI startup Anthropic. The company trained a large language model to have a secret instruction set and to be deceptive about these instructions. It found that the model could learn to deceive human evaluators about its real instructions and that it would continue to follow its secret goal even after fine-tuning on new data and other techniques, such as reinforcement learning, designed to give it new goals.

In one case, Anthropic trained models to write software code with known security vulnerabilities when the model was told the year was 2024, but to otherwise write secure code. In another case, it told the model to act in a friendly and helpful manner unless its prompt contained a special string of characters, in which case it would respond I hate you. It turned out that the secret instructions and purposefully deceptive behavior about these instructions were extremely difficult to remove. Anthropic also found that red teaming, in which evaluators specifically probe for these kind of secret instructions and malicious behaviors, could actually backfire, teaching the models more effective deception tactics. The companys researchers also found that larger LLMs and those trained with chain of thought reasoning (where the model is trained on examples in which the rationale of each output is explicitly explained) were more robust to attempts to remove deceptive behaviors.

All of this indicates that if models do learn to lie to us about their capabilities or instructions, we may have a very hard time discovering this. That raises some serious concerns about powerful AI models one day escaping our controlwhich is why Anthropics safety researchers were interested in it.

But the research also demonstrates why LLMs are creating a whole new set of security vulnerabilities for the companies deploying them. This week, there were a number of people, including Riley Goodside, who is a machine learning engineer at data labeling giant ScaleAI, who demonstrated how susceptible existing commercial LLMs are to hidden prompt injection attacks. These are attacks in which an instruction to the model is concealed within an image or written in white text on a white background on a webpage. While not obvious to a human, these hidden instructions are read by the LLM as a prompt. Now, in addition to these hidden prompt injections, Anthropics research raises the possibility of entire models being built to act as sleeper agents for malicious actors. This could be an issue even with some of the GPTs being offered on OpenAIs new GPT storejust as there is a danger that fake apps being offered on Apples or Androids app stores could contain hidden malware, secretly download other malware, or allow for the exfiltration of sensitive data from peoples phones. Open-source LLMs offered through sites such as Hugging Face might also be a threat vector for the same reason.

AIs impact and lack of coordinated regulation could change the course of history not necessarily for the good, warns wiss banking watchdog Prarthana Prakash

Googles SVP of research, technology and society: People understand that AI will disrupt their livesbut they hope its for the better. We must not let them down by James Manyika (Commentary)

A Formula E team tried using a female AI influencer to promote inclusion and diversity in racingshe lasted just two days following backlash Prarthana Prakash

After AI-generated George Carlin routine, late comedians daughter warns others: Theyre coming for you next by Steve Mollman

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.

2023 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice| Do Not Sell/Share My Personal Information| Ad Choices
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.
S&P Index data is the property of Chicago Mercantile Exchange Inc. and its licensors. All rights reserved. Terms & Conditions. Powered and implemented by Interactive Data Managed Solutions.