8 More Companies Pledge to Make A.I. Safe, White House Says

The voluntary safety commitments made by Nvidia, IBM, Palantir and others are part of a packed week of A.I. announcements in Washington.
The White House said on Tuesday that eight more companies involved in artificial intelligence had pledged to voluntarily follow standards for safety, security and trust with the fast-evolving technology.

The companies include Adobe, IBM, Palantir, Nvidia and Salesforce. They joined Amazon, Anthropic, Google, Inflection AI, Microsoft and OpenAI, which initiated an industry-led effort on safeguards in an announcement with the White House in July. The companies have committed to testing and other security measures, which are not regulations and are not enforced by the government.

Grappling with A.I. has become paramount since OpenAI released the powerful ChatGPT chatbot last year. The technology has since been under scrutiny for affecting people’s jobs, spreading misinformation and potentially developing its own intelligence. As a result, lawmakers and regulators in Washington have increasingly debated how to handle A.I.

On Tuesday, Microsoft’s president, Brad Smith, and Nvidia’s chief scientist, William Dally, testified in a hearing on A.I. regulations held by the Senate Judiciary subcommittee on privacy, technology and the law. On Wednesday, Elon Musk, Mark Zuckerberg of Meta, Sam Altman of OpenAI and Sundar Pichai of Google will be among a dozen tech executives meeting with lawmakers in a closed-door A.I. summit hosted by Senator Chuck Schumer, the Democratic leader from New York.

“The president has been clear: Harness the benefits of A.I., manage the risks and move fast — very fast,” the White House chief of staff, Jeff Zients, said in a statement about the eight companies pledging to follow A.I. safety standards. “And we are doing just that by partnering with the private sector and pulling every lever we have to get this done.”

The companies’ commitments include testing future products for security risks and using watermarks to help consumers spot A.I.-generated material. They also agreed to share information about security risks across the industry and to report any potential biases in their systems.

Some civil society groups have complained about the outsize role that tech companies play in discussions about A.I. regulation.

“They have outsized resources and influence policymakers in multiple ways,” said Merve Hickok, the president of the Center for AI and Digital Policy, a nonprofit research group. “Their voices can’t be privileged over civil society.”