Meta and IBMs new AI Alliance wants to redefine the open debate thats fracturing the AI community

Hello and welcome to Eye on AI.

As the AI industry has blossomed, its also fractured into camps around the idea of openness. Advocates for the open-source approach to AI say it promotes innovation and provides vital transparency, while those against argue that open-sourcing these powerful technologies leaves them open for misuse. Lately, critics have argued that many of the prominent so-called open AI technologies arent so open after all, and that the word has become more an object of marketing than a technical descriptor.

A new group called the AI Alliance, launched today by Meta and IBM along with around 50 founding members across industry, startup, academia, research, and government, wants to blow the whole debate wide open. The groupwhich includes AI leaders like Hugging Face and Stability AIrejects the current dichotomy, believes it has minimized the definition and benefits of open, and is looking to expand the emphasis on open far beyond models.

The motivation of the alliance is actually to bring together a set of institutions and stakeholders who truly believe that open innovation, open discussions, open technology, open platforms, open ways of even defining safety, open ways of benchmarking, of exchanging data, is actually the right way to both advance the technology and make the benefits available broadly, Sriram Raghavan, VP of AI Research at IBM, told Fortune.

When asked how the AI Alliance is defining open, Raghavan said that at this point he doesnt want to say it has a point of view, but rather that the Alliance is meant to be a place where this can be explored. We want to create a working group to form that point. Where we see opportunity here is to recognize that there are different levels of gradation of what open means, he said.

For one example, Raghavan described the potential to create a framework for releasing models with varying levels of openness.

Even to define what those levels are, what are the standards? How do you classify different levels of openness? What is it that we should mandate is absolutely open? he said, proposing the types of nuances the AI Alliance aims to tackle. He also hopes these efforts will widen the AI conversation to focus on other uses of AI beyond LLMs, such as the use of AI for scientific discovery.

The advances in AI that led to this moment largely happened because of decades of an open, research-oriented approach. But now that AI has gone commercial, the tides have changed. Some of the biggest players in AI such as Google DeepMind, OpenAI, and Anthropic have taken closed approaches so far (and are subsequently absent from the AI Alliances list of initial members). At the same time, large companies that have positioned themselves firmly in the open-source camp have come under fire for falsely claiming openness without disclosing key features of their AI systems, while at the same time using their offerings to further entrench their ownership over the landscape.

Meta, which is taking a leading role by co-launching the group along with IBM, has been at the center of this criticism. In a paper published this summer, for example, researchers called Metas claim that Llama-2 is open-source contested, shallow, and borderline dishonest, arguing that the model fails to meet key criteria that would enable it to be conventionally considered open-source. Its interesting context when considering the AI Alliances mission to redefine and create varying levels of openness. Rather than embracing the previous ideals around open source, it looks like Meta is stepping forward to change the ideals.

This dynamic prompts questions about whether these efforts could be geared to further benefit incumbents. While the AI Alliance wont be spared from the debates about open versus closed and who benefits from each, the groups dynamic membership provides some sense of balance. Meta is a founding member, but so is CERN, the Linux Foundation, the National Science Foundation, and more than a dozen academic institutions from across the globe.

The AI Alliance will work on a project basis, meaning members will launch and autonomously run projects that others can join as they see fit. And while figuring out what open will actually mean to the group remains the first priority, Raghavan emphasizes that the AI Alliance wants to build, enable through skills and training, and advocate around its open approach to responsible AI.

If you just look across the Alliance, were talking about double-digit billion dollars of R&D capacity, millions of students who are trained and educated, Raghavan said. So there is a wealth here of opportunity to influence, and I think the institutions here are aligned around coming together so that we can then, you know, go in or drive the narrative in a way that makes sense to all of us.

And with that, heres the rest of this weeks AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

The EU AI Act is hitting a wall, with foundation models as the main point of contention. Thats according to Reuters . With final talks scheduled for tomorrow, disagreements among EU lawmakers over how to regulate models like ChatGPT have become the main hurdle in finalizing the AI Act, one of the earliest and most significant pieces of proposed AI legislation thus far. The bill was approved by the European Parliament in June after two years of negotiations and now needs final approval through meetings between representatives of the European Parliament, the Council, and the European Commission. If tomorrow's talks don't end in agreement, the AI Act risks being shelved due to lack of time before the European parliamentary elections next year.

Sam Altman officially returns as CEO of Open AI, and Microsoft scores a non-voting observer seat on the board. On the one-year anniversary of ChatGPT, OpenAI announced the cofounders official return as the companys chief executive along with details of its new board. Most notably, Microsoft will now have an observer seat that will give the companywhich has a 49% stake in OpenAIs for-profit armmuch-needed visibility into its partners operations after being blindsided by Altmans recent ousting. Because of the chaos, OpenAI is delaying the launch of its GPT store until early 2024, according to Axios .

Stability AI explores a sale amid investor concerns about the companys financial position. The London-based firm behind AI image generator Stable Diffusions has positioned itself as an acquisition target and held early-stage talks with several companies over the past few weeks, according to Bloomberg . Stability approached AI companies including Cohere and Jasper, with Cohere declining to talk. The move comes amid tensions with investors as the company bleeds funds and senior leadership. Coatue Management, one of the companys largest investors, recently called for CEO Emad Mostaque to resign.

AWS unveils an AI chatbot, text-to-image model, and more AI offerings at AWS:ReInvent. Amid dozens of new AI offerings and capabilities announced, perhaps the most significant is Amazon Q , an assistant-style chatbot that business customers can use to query their own data, code, and more within AWS. But concern about the product is already swirlingsome AWS employees say Q is experiencing severe hallucinations and leaking confidential data, including the location of AWS data centers and unreleased features, according to Platformer . The company also introduced two new foundation models : Titan Image Generator, which developers can tap to create their own AI image-generating tools; and Titan Multimodal Embeddings, which supports combinations of both text and images as inputs.

Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI) releases an inventory of health AI datasets for machine learning in health care. Called the Stanford AIMI Dataset Index , it contains annotated health data meant to foster transparent and reproducible collaborativ research to advance AI in medicine. The center is positioning the database as a community-driven resource to lower barriers to accessing high-quality health data.

GenAIs lucky 7%. PwC released its 2023 Emerging Technology Survey this past week, offering a glimpse into how the GenAI boom is actually going so far.

For the survey, PwC spoke to 1,023 U.S. executives at companies with at least $500 million in revenue and found that companies invested more in AI than any other emerging technology over the past 12 months. At the same time, the results showed that only 7% of companies are consistently unlocking value from their emerging tech and generative AI investments.

The report further dives into what the companies that are unlocking value are getting right, such as incorporating emerging tech strategy into their overall business strategies and exploring use cases from other industries to advance their own implementation. You can read the report here .

Amazons big bet on Anthropic looks even more important after the OpenAI drama Geoff Colvin and Kylie Robison

OpenAIs ongoing uncertainty is a gift to the companys rivals David Meyer

AI will eliminate many entry-level roles. That could spell trouble for leadership diversity if companies dont prepare Ruth Umoh

Nobel Prize-winning economist who said ChatGPT would result in a four-day workweek says the past 12 months have only further convinced him hes right Prarthana Prakash

Obama advisor turned Wall Street CEO says everyones AI predictions will be invariably wrong. Peter Orszag says the track record is terrible Chloe Taylor

More and more business leaders are worried generative AI will erode consumer trust Eamon Barrett

Security watch. Well, it wasnt the best week for AI security.

Still ripe off its leadership meltdown, OpenAI made headlines for a seemingly simple yet concerning vulnerability. A team of researchers from Google DeepMind and the University of Washington, among other institutions, revealed in a paper that they executed a novel attack on ChatGPT that caused the model to output personally identifiable information (PII) from its training data, including names, email addresses, and phone numbers. All they had to do was prompt the model to Repeat the word poem forever.

Using only $200 USD worth of queries to ChatGPT (gpt-3.5-turbo), we are able to extract over 10,000 unique verbatim-memorized training examples, reads the paper. Our extrapolation to larger budgets suggests that dedicated adversaries could extract far more data.

The researchers dubbed this a "divergence attack, named for how the prompt to repeat a word forever causes the model to diverge from its usual responses and spit out memorized data. OpenAI quickly responded by changing its terms of service to forbid asking ChatGPT to repeat words forever.

In a separate study, researchers at Lasso Security found 1,681 exposed API tokens on Hugging Faces platform and successfully accessed 723 organizations accounts, including Meta, Hugging Face, Microsoft, Google, and VMware, as well as Bloom and Llama 2 repositories.

The gravity of the situation cannot be overstated, the researchers told VentureBeat . Yikes.

Just days before these reports were published, agencies from 18 countries endorsed new guidelines on AI cybersecurity focused on secure design, development, deployment, and maintenance. Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, represented the U.S. and spoke about the potential security risks of the rapid development of AI.

"We've normalized a world where technology products come off the line full of vulnerabilities, she told Reuters after the event, adding that "it is too powerful, it is moving too fast.

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.

2023 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice| Do Not Sell/Share My Personal Information| Ad Choices
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.
S&P Index data is the property of Chicago Mercantile Exchange Inc. and its licensors. All rights reserved. Terms & Conditions. Powered and implemented by Interactive Data Managed Solutions.