Meta’s latest AI model is free for all 

Meta is going all in on open-source AI. The company is today unveiling LLaMA 2, its first large language model that’s available for anyone to use—for free.  Since OpenAI released its hugely popular AI chatbot ChatGPT last November, tech companies have been racing to release models in hopes of overthrowing its supremacy. Meta has been…

Under the hood

Getting LLaMA 2 ready to launch required a lot of tweaking to make the model safer and less likely than its predecessor to spew toxic falsehoods, Al-Dahle says.

Meta has plenty of past gaffes to learn from. Its language model for science, Galactica, was taken offline after only three days. And its previous LLaMA model, which was meant for research purposes only, was leaked online, sparking criticism from politicians who questioned whether Meta was taking proper account of the risks associated with AI language models, such as disinformation and harassment.

To mitigate the risk of repeating these mistakes, Meta applied a mix of different machine learning techniques aimed at improving helpfulness and safety. 

Meta’s approach to training LLaMA 2 had more steps than usual for generative AI models, says Sasha Luccioni, a researcher at AI startup Hugging Face. 

The model was trained on 40% more data than its predecessor. Al-Dahle says there were two sources of training data: data scraped from the web, and a data set that was fine-tuned and tweaked according to feedback from human annotators so the model behaves in a more desirable way. The company says it did not use Meta user data in LLaMA 2, and it excluded data from sites it knew contained lots of personal information.
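The annotator-feedback step described above can be sketched in miniature. This is not Meta's actual pipeline; it is a toy illustration of the general idea behind preference-based fine-tuning: a small reward model learns, from pairs of responses where annotators marked one as better, to score desirable responses higher. The features, example responses, and training loop here are all invented for clarity.

```python
import math

def features(response):
    # Two hand-crafted features standing in for what a real model would learn:
    # a politeness marker and (scaled) response length.
    return [1.0 if "please" in response.lower() else 0.0,
            min(len(response) / 100.0, 1.0)]

def score(weights, response):
    # Reward = weighted sum of features.
    return sum(w * f for w, f in zip(weights, features(response)))

def train_reward_model(preferences, lr=0.5, epochs=200):
    """Fit weights so annotator-preferred responses score higher
    (a Bradley-Terry-style pairwise preference objective)."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for preferred, rejected in preferences:
            # Probability the current model ranks `preferred` above `rejected`.
            p = 1.0 / (1.0 + math.exp(score(w, rejected) - score(w, preferred)))
            # Nudge the weights toward agreeing with the annotator's choice.
            for i, (fp, fr) in enumerate(zip(features(preferred),
                                             features(rejected))):
                w[i] += lr * (1.0 - p) * (fp - fr)
    return w

# Simulated annotator judgments: polite, fuller answers were preferred.
prefs = [
    ("Please find the summary attached below.", "Here."),
    ("Sure, please see the steps: first open the file.", "Figure it out."),
]
w = train_reward_model(prefs)
assert score(w, "Please see the detailed answer.") > score(w, "No.")
```

In full-scale systems, a reward model like this is then used to steer the language model itself, so generations that annotators would prefer become more likely.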

Despite that, LLaMA 2 still spews offensive, harmful, and otherwise problematic language, just like rival models. Meta says it did not remove toxic data from the data set, because leaving it in might help LLaMA 2 detect hate speech better, and removing it could risk accidentally filtering out some demographic groups.  

Nevertheless, Meta’s commitment to openness is exciting, says Luccioni, because it allows researchers like herself to study AI models’ biases, ethics, and efficiency properly. 

The fact that LLaMA 2 is an open-source model will also allow external researchers and developers to probe it for security flaws, which will make it safer than proprietary models, Al-Dahle says. 

Liang agrees. “I’m very excited to try things out and I think it will be beneficial for the community,” he says.