Nvidia moves into A.I. services and ChatGPT can now use your credit card

It's been another head-spinning week in A.I. news. Where to start? Bill Gates saying A.I. is as important as the invention of the microprocessor? Nah, I'm going to begin with Nvidia's GTC conference, but I want to encourage you all to read the Eye on A.I. Research section (which comes after the news items), where I will tell you about my own experiment with GPT-4 and why I think it indicates we are not glimpsing the spark of AGI (artificial general intelligence), as a group of Microsoft computer scientists controversially claimed last week.

Now on to Nvidia. The chipmaker, whose specialized graphics processing units have become the workhorses for most A.I. computing, held its annual developers conference, much of which was focused on A.I. The company made a slew of big announcements:

Its next generation of DGX A.I. supercomputers, powered by linked clusters of its H100 GPUs, is now in full production and being made available to major cloud providers and other customers. Each H100 has a built-in Transformer Engine for running the Transformer-based large models that underpin generative A.I. The company says the H100 offers nine times faster training and 30 times faster inference than its previous-generation A100 GPUs, which were themselves considered the best in the field for A.I. performance.

The company has also started offering its own Nvidia DGX Cloud, built on H100 GPUs, through several of the same cloud providers, starting with Oracle and then expanding to Microsoft Azure and Google Cloud. This will allow any company to access A.I. supercomputing resources and software to train its own A.I. models from any desktop browser. The DGX Cloud comes with all those H100s configured and hooked up with Nvidia's own networking equipment.

Meanwhile, the company announced a separate tie-up with Amazon's AWS that will see its H100s power new AWS EC2 clusters that can grow to include up to 20,000 GPUs. These will be configured using networking technology developed by AWS itself, which allows AWS to offer huge systems at potentially lower cost than the Nvidia DGX Cloud service can.

The company announced a slate of its own pre-trained A.I. foundation models, optimized for its own hardware, for generating text (which it calls NeMo) as well as images, 3D rendering, and video (which it calls Picasso). It also announced a set of models it calls BioNeMo that it says will help pharmaceutical and biotech companies accelerate drug discovery by generating protein and chemical structures. And it announced some important initial business customers for these models, including Amgen for BioNeMo and Adobe, Shutterstock, and Getty for Picasso. (More on that in a minute.)

Interestingly, both the DGX Cloud and Nvidia foundation models put the company into direct competition with some of its best customers, including OpenAI, Microsoft, Google, and AWS, all of which are offering companies pre-trained large models of their own and A.I. services in the cloud.

Long-term, one of the most impactful announcements Nvidia made at GTC may have been cuLitho, a machine learning system that can help design future generations of computer chips while consuming far less power than previous methods. The system will help chipmakers design wafers with 2-nanometer-scale transistors, the tiniest size currently on chipmakers' roadmaps, and possibly even smaller ones.

Ok, now back to some of those initial customers Nvidia announced for Picasso. Adobe, Shutterstock, and Getty licensed their own image libraries to Nvidia to use to train Picasso, with what Nvidia and the companies say is a method in place to appropriately compensate photographers who provided photos to those sites. The chipmaker also said it is in favor of a system that would let artists and photographers easily label their works with a text tag that would prevent them from being used to train A.I. image generation technology. This should, in theory, avoid the copyright infringement issues and some of the ethical conundrums that are looming over other text-to-image A.I. systems and which have made it difficult for companies to use A.I.-generated images for their own commercial purposes. (Getty is currently suing Stability AI for alleged copyright infringement in the creation of the training set for Stable Diffusion, for example.)

But it may not be quite as simple as that. Some artists, photographers, legal experts, and journalists have questioned whether Adobe's Stock license really allows the company to use those images for training an A.I. model. And the proposed compensation system has not yet been made public. It's unclear whether creators will be paid a fixed, flat amount for any image used in model training, on the grounds that each image contributes only a tiny fraction to the final model weights, or whether compensation will vary with each new image the model generates (since the model will draw more heavily on some images in response to a particular prompt). If someone uses an A.I. art system to explicitly ape the style and technique of a particular photographer or artist, one would think that creator would be entitled to more compensation than someone else whose work the model ingested during training but which wasn't central to that particular output. But such a system would be even more technically complicated to manage and very difficult for the artists themselves to audit. So how will this work, and will artists think it is fair? We have no idea.

Ok, one of the other big pieces of news last week was that OpenAI connected ChatGPT directly to the internet through a set of plugins. The initial batch includes Expedia's travel sites (so it can look up and book travel), Wolfram (so it can do complex math reliably, an area where large language models have famously struggled), FiscalNote (real-time access to government documents, regulatory decisions, and legal filings), OpenTable (restaurant reservations), and Klarna (so it can buy stuff for you on credit, which you'll have to pay for later). OpenAI billed this as a way to make ChatGPT even more useful in the real world and as a way to reduce the chance that it will hallucinate (make stuff up) or provide out-of-date information in response to questions. Now it has the ability to actually look up answers on the internet.
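To make the mechanics a little more concrete, here is a rough sketch of the general pattern tools like these follow: the model is told what a tool can do, it replies with a structured request instead of plain text when it wants to use the tool, the host software executes that request against an outside service, and the result is fed back into the conversation. To be clear, the JSON format, the restaurant-search tool, and the stubbed response below are invented for illustration; this is not OpenAI's actual plugin protocol.

```python
import json

# Hypothetical tool description the model would see in its prompt (not OpenAI's real format).
TOOL_PROMPT = (
    "You can use a restaurant-search tool by answering with JSON like "
    '{"action": "search_restaurants", "city": "..."} instead of plain text.'
)

def run_tool(action):
    # Stand-in for the real plugin call; a live host would make an HTTP request here.
    if action.get("action") == "search_restaurants":
        return f"3 tables available tonight in {action['city']} (stub data)"
    return "Unknown action."

def handle_model_reply(reply):
    # If the model emitted a tool call, execute it; otherwise pass the text through.
    try:
        action = json.loads(reply)
    except json.JSONDecodeError:
        return reply  # ordinary conversational answer
    # In a real system the tool's output would be appended to the chat and sent back to the model.
    return run_tool(action)

print(handle_model_reply('{"action": "search_restaurants", "city": "Lisbon"}'))
```

The point of the sketch is simply that once the model's words are treated as instructions to execute rather than text for a human to read, whatever it gets wrong gets executed too, which is what makes the questions below more than academic.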

I'm not the only one who thinks that while these plugins sound useful, they are also potentially dangerous. Before, ChatGPT's hallucinations were relatively harmless: they were just words, after all. A human would have to read those words and act on them, or at least copy and paste them into a command prompt, for anything to happen in the real world. In a way, that meant humans were always in the loop. Those humans might not have been paying close enough attention, but at least there was a kind of built-in check on the harm these large language models could do. Now, with these new plugins, if ChatGPT hallucinates, are you going to end up with first-class tickets to Rio you didn't actually want, or a sofa you've bought on credit with Klarna? Who will be liable for such accidents? Will Klarna or your credit card company refund you if you claim ChatGPT misinterpreted your instructions or simply hallucinated? Again, it isn't clear.

Dan Hendrycks, director of the Center for AI Safety in Berkeley, California, told me that competitive pressure seems to be driving the tech companies creating these powerful A.I. systems to take unwarranted risks. "If we were to ask people in A.I. research a few years ago if hooking up this kind of A.I. to the internet was a good or bad thing, they all would have said, 'Man, we wouldn't be stupid enough to do that,'" he says. "Well, things have changed."

Hendrycks says that when he was offering suggestions to the U.S. National Institute of Standards and Technology (NIST) on how it might think about A.I. safety (NIST was in the process of formulating the A.I. framework it released in January), he recommended that such systems not be allowed to post data to internet servers without human oversight. But with the ChatGPT plugins, OpenAI has crossed that line. "They've blown right past it," he says. He worries this new connectivity makes it much more likely that large language models will be used to create and propagate cyberattacks. And longer-term, he worries OpenAI's decision sets a dangerous precedent for what will happen when even more sophisticated and potentially dangerous A.I. systems debut.

Hendrycks says OpenAI's actions show that the tech industry shouldn't be trusted to refrain from dangerous actions. Government regulation, he says, will almost certainly be required. And there are rumblings that it could be coming. Read on for more of this week's A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

The FTC warns against deceptive A.I. systems. Michael Atleson, a lawyer in the Federal Trade Commission's advertising division, published a blog post warning that companies creating A.I. systems that can generate deepfake content could run afoul of the FTC's prohibition on deceptive or unfair conduct because such a tool is essentially designed to deceive, even if there are legitimate use cases for the product. "The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury. Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors," he warned. He also warned brands against using deepfakes, for instance of celebrities, in advertising campaigns, as this might also run afoul of the FTC and result in enforcement action.

And the FTC says the dominance of A.I. by top tech companies raises antitrust concerns. FTC Chair Lina Khan said the regulatory agency is closely monitoring the A.I. boom and is concerned it could solidify the dominance of a few big technology companies, Bloomberg reported. "As you have machine learning that depends on huge amounts of data and also a huge amount of storage, we need to be very vigilant to make sure that this is not just another site for big companies to become bigger," she said at an event hosted by the Justice Department in Washington.

Why Elon Musk really walked away from OpenAI. The billionaire helped cofound OpenAI as a nonprofit A.I. research lab in 2015 and served as its largest initial donor, but he walked away in 2018, citing growing conflicts of interest with Tesla's A.I. ambitions and hinting in tweets at some unspecified disagreements with the rest of the team. Well, a story in Semafor reveals that Musk abandoned OpenAI after a dispute with fellow cofounder Sam Altman and other members of the OpenAI board over who should run the company. Musk, according to the Semafor story, which cited mostly unnamed people familiar with the situation, was concerned that OpenAI, which he had helped create largely to act as a counterweight to Google's ownership of DeepMind, was falling behind its archrival in the race to build powerful A.I. systems. His proposed solution was to take over as CEO himself. But the board and other cofounders objected, and Musk decided to break with the lab, reneging on a pledge to donate $1 billion to the group (he had already delivered about $100 million in donations). That decision, Semafor argues, is part of why Altman, who stepped in as CEO instead of Musk, created a for-profit arm of the company and subsequently agreed to a $1 billion investment from Microsoft, the first step in a partnership that has only grown closer and many times more financially significant since then.

Media executives want to be compensated for content used by A.I. chatbots. The News Media Alliance, a trade body that represents publishers, is preparing to battle Microsoft, Google, and other tech companies to be compensated for any of their content that A.I. chatbots use to formulate responses to questions users ask, the Wall Street Journal reported. Both Microsoft's new Bing chat and Google's Bard, as well as chatbot search engines from companies such as Perplexity and You.com, have the ability to search for news stories and summarize their content as part of the responses they provide to users' questions. But this new search model represents a threat to media companies, whose business model is currently based on advertising associated with people landing on their sites, often from search results, or on content licensing revenue when traditional search engines provide a capsule answer on the search results page. "Clearly, they are using proprietary content; there should be, obviously, some compensation for that," News Corp. CEO Robert Thomson told a recent investor conference.

A.I. chip startup Cerebras makes a bunch of pre-trained models available open-source. The company, which is known for its wafer-scale A.I. chips (think a silicon dinner plate), trained and released a whole family of large language models, ranging from 111 million to 13 billion parameters, under a permissive Apache license. The company says this will help level the playing field between academic A.I. researchers, who are increasingly frozen out of access to the underlying models that power generative A.I., and those who work at the largest tech companies. It will also allow many smaller companies that could never train their own models to deploy large language models without having to pay for potentially expensive API access. Of course, it helps that the models also prove out some of the advantages of Cerebras' CS-2 A.I. chips.
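If you want to kick the tires on these models yourself, they can be loaded with standard open-source tooling. Here is a minimal sketch using the Hugging Face transformers library; the repository name below ("cerebras/Cerebras-GPT-111M") is my assumption about where the smallest checkpoint is hosted, so check Cerebras' release announcement for the official model IDs.

```python
# Minimal sketch: load one of the smaller Cerebras-GPT checkpoints and sample a
# short completion. Assumes the "transformers" and "torch" packages are installed
# and that the repo name below matches Cerebras' actual Hugging Face release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cerebras/Cerebras-GPT-111M"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Generative A.I. will change the chip industry because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The 111-million-parameter model is small enough to run on a laptop CPU; the 13-billion-parameter version is where serious GPU memory, and the kind of hardware Cerebras sells, starts to matter.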

Screenwriters union favors A.I.-created scripts, as long as human authors retain credit. That's according to a story in Variety, which said that the Writers Guild of America (WGA) has proposed allowing A.I. to be used in scriptwriting without altering writers' IP rights, treating A.I. as a tool rather than a writer. The proposal clarifies that A.I.-generated material will not be considered "literary material" or "source material," which are crucial terms for assigning writing credits and determining residual compensation. The WGA's proposal was discussed in a bargaining session with the Alliance of Motion Picture and Television Producers (AMPTP). It remains unclear whether the AMPTP will be receptive to the idea, and the WGA is set to continue bargaining for the next two weeks before updating members on potential next steps and strike possibilities.

The 'spark of AGI'? Not so fast. The most talked-about paper posted on the research repository arxiv.org last week was probably "Sparks of Artificial General Intelligence: Early Experiments with GPT-4." The 154-page opus, on which many of Microsoft's most prominent computer scientists are credited, including Eric Horvitz and Peter Lee, was rightly criticized for being more a marketing document than a scientific artifact. There's plenty in the paper to support the idea that GPT-4 has some general language capabilities, but little to support the notion that the system will prove to be the direct predecessor of artificial general intelligence (AGI), the kind of single system that can do almost every cognitive task a human can do as well as or better than we can. What's more, there are plenty of things in the paper that actually point to the opposite conclusion: that, as deep learning pioneer and Meta chief A.I. scientist Yann LeCun has said, today's large language models might be an off-ramp on the road to AGI. For instance, the authors discuss GPT-4's inability to do long-range planning and make discontinuous leaps in its thought process. They discuss its overconfidence in its own outputs and its tendency to make up facts. There are also whole sections on the potentially negative ramifications of GPT-4 when it comes to disinformation, bias, and employment. That said, the paper highlights some pretty cool GPT-4 capabilities too: the fact that it is pretty good at composing music; that it seems able to keep track of directions and can draw a map based on text directions to a hypothetical house; and that it seems able to answer a lot more theory-of-mind questions (where it has to understand what a human might be thinking in a hypothetical scenario) than earlier language models could.

Now, I've been doing my own small experiments. For instance, last week I asked GPT-4 to play me in tic-tac-toe. (I did this in part because it is, of course, what the character David Lightman asks the computer to play in the movie WarGames, which was one of my childhood favorites.) I was sure GPT-4 would be good at the game. After all, it is apparently okay at chess. But it turns out that GPT-4 is, in fact, a terrible tic-tac-toe player. It doesn't seem to understand the game at all and seems incapable of the basic strategic thinking the game requires. Why this is the case, I'm not sure. One guess is that tic-tac-toe is such a simple game and isn't played competitively, so there may be relatively few examples of the game on the Internet, whereas there are a lot of chess games. Another may have to do with its inability to plan, which the Microsoft researchers encountered.
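If you want to try this at home, the experiment is easy to reproduce through the API rather than the ChatGPT interface. Below is a rough sketch using the openai Python package as it existed when this was written (the ChatCompletion endpoint); the board encoding and prompt wording are my own assumptions, and you would need an OPENAI_API_KEY with GPT-4 access for it to run.

```python
# Rough sketch of asking GPT-4 for a tic-tac-toe move via the chat API.
# Assumes the pre-1.0 "openai" package and an API key with GPT-4 access.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_gpt4_for_move(board):
    # board is a list of 9 strings: "X", "O", or the cell number as a placeholder.
    rows = [" | ".join(board[i:i + 3]) for i in range(0, 9, 3)]
    prompt = (
        "We are playing tic-tac-toe. You are O, I am X. Cells are numbered 1-9, "
        "left to right, top to bottom. Current board:\n"
        + "\n".join(rows)
        + "\nReply with only the number of the empty cell where you want to play."
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].strip()

# Example: X has just taken the center; see whether O's reply makes strategic sense.
board = ["1", "2", "3", "4", "X", "6", "7", "8", "9"]
print(ask_gpt4_for_move(board))
```

A full game loop would just alternate this call with your own moves and update the board each turn.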

Either way, any system that can't at least get to a draw in tic-tac-toe is definitely NOT the spark of AGI.

Elon Musk takes a shot at Bill Gates in ongoing feud, saying the Microsoft founder's understanding of A.I. is limited, by Eleanor Pringle

A Wharton professor gave A.I. tools 30 minutes to work on a business project. The results were superhuman, by Steve Mollman

OpenAI CEO Sam Altman calls Elon Musk a jerk as report says the Tesla CEO was furious about ChatGPT's success, by Prarthana Prakash

Bill Gates says that the A.I. revolution means everyone will have their own white-collar personal assistant, by Tristan Bove

How worried should we be about A.I.'s existential risks? This week Geoff Hinton, one of deep learning's giants, weighed in, revealing in a CBS News interview that he thinks it is "not inconceivable" that powerful A.I. could lead to the extermination of humanity. And he said it was probably correct that at least some people should take this threat seriously and start thinking about how to prevent it from happening, even though he thought the threat was not immediate. Today's large language models are not an existential risk, he said, and won't be in the next year or two. But Hinton said he had revised his timelines for AGI. Having once thought AGI was still 20 to 50 years away, he now thinks it is at most 20 years off.

This has long been a hot-button topic among A.I. researchers, and in particular between the A.I. Safety community and the Responsible A.I. community. The A.I. Safety folks say we should be very worried and need to figure out ways to "align" A.I. systems so they don't wind up killing us all. The Responsible A.I. folks say all this talk of existential risk is just a tremendous distraction (perhaps even an intentional one) from all the harms that today's A.I. systems are doing right now, especially to people of color, women, and other groups less well-represented at big technology companies and venture-backed startups.

Gary Marcus, an emeritus NYU professor of cognitive science who is best known as a skeptic of deep learning approaches to A.I., doesn't often agree with Hinton. But this time, he thinks Hinton has a point. In a blog post Tuesday, he said that while he was mostly concerned about the near-term harms today's A.I. can cause, particularly when it comes to misinformation, he still thought there was maybe a 1% chance that A.I. could pose an existential threat in the future. And, he said, given this, shouldn't some people be thinking hard about ways to prevent that black swan event from happening? As Gary wrote in his blog post: "It's not an either/or situation; current technology already poses enormous risks that we are ill-prepared for. With future technology, things could well get worse. Criticizing people for focusing on the 'wrong risks' (an ever popular sport on Twitter) isn't helping anybody; there's enough risk to go around."

This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays and Fridays. Sign up here.
