Microsoft and Amazon have cut their A.I. ethics teams as Elon Musk and Steve Wozniak call for a moratorium on tech companies’ ‘out-of-control race’

Tech leaders and public figures who have stepped forward to urge companies to take a more cautious approach to artificial intelligence may have been disappointed Wednesday after reports emerged that the technology giants locked in the battle for A.I. domination are laser-focused on moving fast and winning the race, even if it means chipping away at their own A.I. ethics teams.

Google and Microsoft are racing ahead of the pack in the A.I. arms race, but their fellow Silicon Valley giants are no slouches either. Amazon is tapping machine learning to improve its cloud computing Web Services branch, while social media behemoths Meta and Twitter have doubled down on their own A.I. research too. Meta CEO Mark Zuckerberg signaled A.I. would be a cornerstone of Meta’s new efficiency push during the company’s earnings call earlier this year, while Elon Musk is reportedly attracting talent to develop Twitter’s own version of ChatGPT, OpenAI’s wildly successful A.I.-powered chatbot released late last year.

But as the A.I. race heats up in a murky economic environment that has forced tech companies to lay off more than 300,000 employees since last year, developers are reportedly cutting back on the ethics teams charged with ensuring A.I. is developed safely and responsibly. Amid larger waves of layoffs, Meta, Microsoft, Amazon, Google, and others have downsized their responsible A.I. teams in recent months, the Financial Times reported Wednesday, a development that is unlikely to please critics who were already unhappy with the direction tech companies were taking. Many of the layoffs the newspaper included in its roundup had first been reported by other publications.

Also on Wednesday, several technologists and independent A.I. researchers signed an open letter calling for a six-month pause on advanced A.I. research beyond currently available systems, saying more attention should be paid to the potential effects and consequences of A.I. before companies roll out products. Among the letter’s signatories were Apple cofounder Steve Wozniak and Elon Musk, whose widespread layoffs at Twitter since taking over last year included the social media company’s ethical A.I. team, per the FT report.

The open letter cited an A.I. governance framework established in 2017, known as the Asilomar A.I. Principles, which state that given the potentially monumental impact A.I. could have on humanity, the technology should be “planned for and managed with commensurate care and resources.” But the letter criticized the tech companies leading the A.I. race for failing to abide by these principles.

“This level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter states.

Amazon and Microsoft did not immediately reply to Fortune’s request for comment. Twitter has not had an active press relations team since November. Meta and Google representatives told Fortune that ethics continue to be a cornerstone of their A.I. research.

A Google spokesperson disputed the FT report in a statement to Fortune, emphasizing that ethics research remains a part of the company’s A.I. strategy.

“These claims are inaccurate. Responsible AI remains a top priority at the company, and we are continuing to invest in these teams,” the spokesperson said.

As the tech sector has pivoted to focus on efficiency and strong fundamentals over the past year, projects deemed superfluous, such as A.I. ethics research teams, were among the first on the chopping block.

Twitter’s ethical A.I. team was cut days before the first round of Musk’s layoffs affecting thousands of employees at the company in November, less than a week after he became CEO. Former Twitter employees told Wired at the time that the team had been working on important new research on political bias that could have helped social media platforms avoid unfairly penalizing individual viewpoints. The team was aware Musk intended to eliminate it once he took charge of the company, and hurriedly published months of research into A.I. ethics and disinformation in the weeks before Musk became CEO, Wired reported in February.

Other tech companies have also slashed their A.I. ethics teams in recent layoffs. Microsoft terminated around 10,000 employees in January, including the company’s entire A.I. ethics and society team, Platformer reported earlier this month. The company still has an Office of Responsible AI that sets high-level principles and policies for the development and deployment of A.I. But it no longer has a central team of ethicists dedicated to researching the potential harms of A.I. systems and working on broad mitigation strategies. The team also acted as a consulting body for product teams when they had questions about how to implement various responsible A.I. principles. Members of the original A.I. ethics team were either reassigned to product teams or were laid off.

According to an audio recording leaked to Platformer, Microsoft executives told members of the A.I. ethics team that top executives, including CEO Satya Nadella and CTO Kevin Scott, were putting pressure on the entire company to integrate A.I. technology from OpenAI into numerous Microsoft products as quickly as possible, and that calls to slow down the pace of deployment to ensure such systems were developed ethically were not appreciated.

Google, Microsoft’s main competitor in the A.I. space, has also terminated an unspecified number of responsible A.I. jobs, according to the FT. The company previously fired a top A.I. ethics researcher in 2020 after she had criticized Google’s diversity stance within its A.I. unit, a claim the company disputed. Meta disbanded its Responsible Innovation team in September, which included around two dozen engineers and ethics researchers. Amazon, meanwhile, laid off the ethical A.I. unit at the company’s live-streaming service Twitch last week, according to the FT.

Meta told Fortune that most members of the Responsible Innovation team were still at the company but had been reassigned to work directly with product teams. Last year, Meta moved to decentralize its Responsible A.I. unit, which was distinct from its Responsible Innovation team. Responsible A.I. workers are now more closely integrated with Meta’s product design groups.

“Responsible A.I. continues to be a priority at Meta,” Esteban Arcaute, Meta’s director of Responsible A.I., told Fortune. “We hope to proactively promote and advance the responsible design and operation of AI systems.”

Since OpenAI launched ChatGPT last year, tech giants have piled into the A.I. space in an effort to outdo one another and stake a claim in the rapidly growing market. Proponents of A.I.’s current direction have praised the technology’s disruptive nature and defended its accelerated timeline. But critics of the hotly competitive atmosphere have accused companies of prioritizing profits over safety, and of risking the release of potentially dangerous technologies before they are fully tested.

“A.I. research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” the open letter advocating for a six-month moratorium on A.I. research stated.

The letter said that A.I. developers should use the pause to develop a shared set of safety protocols and guidelines for future A.I. research, which would ensure A.I. systems are “safe beyond a reasonable doubt.” These protocols could be overseen by external and independent experts.

“This does not mean a pause on A.I. development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter said.

Several of the letter’s signatories have been critical of A.I.’s rapid advancement in recent months and the technology’s propensity for mistakes. “The trouble is it does good things for us, but it can make horrible mistakes by not knowing what humanness is,” Steve Wozniak told CNBC in February.

Musk, who was a cofounder and important early investor in OpenAI before leaving its board in 2018, has criticized the San Francisco-based startup for its pivot to profit-seeking in recent years, sparking a war of words with Sam Altman, OpenAI’s CEO. Altman has expressed his own reservations about how A.I. could be misappropriated as more companies try their hand at imitating technology like ChatGPT, warning in an interview with ABC this month that there will be “other people who don’t put some of the safety limits that we put on.”

Other critics have been even more outspoken about the potential risks of A.I., and how important it is to get the technology right. Yuval Harari, a historian and author who has written extensively on the concepts of intelligence and human evolution, was one of the letter’s signatories after co-authoring a critical New York Times guest essay on A.I.’s current direction published last week.

“A race to dominate the market should not set the speed of deploying humanity’s most consequential technology. We should move at whatever speed enables us to get this right,” Harari and his co-authors wrote.

Update: This story was updated with the correct spelling of Esteban Arcaute’s name.
