The way we measure progress in AI is terrible

Every time a new AI model is released, it’s typically touted as beating its rivals on a series of benchmarks. OpenAI’s GPT-4o, for example, was launched in May with a compilation of results showing its performance topping every other AI company’s latest model in several tests. The problem is that these benchmarks are poorly designed.
One of the goals of the research was to define a list of criteria that make a good benchmark. “It’s definitely an important problem to discuss the quality of the benchmarks, what we want from them, what we need from them,” says Ivanova. “The issue is that there isn’t one good standard to define benchmarks. This paper is an attempt to provide a set of evaluation criteria. That’s very useful.”

The paper was accompanied by the launch of a website, BetterBench, that ranks the most popular AI benchmarks. Rating factors include whether or not experts were consulted on the design, whether the tested capability is well defined, and other basics—for example, is there a feedback channel for the benchmark, or has it been peer-reviewed?

The MMLU benchmark had the lowest ratings. “I disagree with these rankings. In fact, I’m an author of some of the papers ranked highly, and would say that the lower ranked benchmarks are better than them,” says Dan Hendrycks, director of CAIS, the Center for AI Safety, and one of the creators of the MMLU benchmark. That said, Hendrycks still believes that the best way to move the field forward is to build better benchmarks.

Some think the criteria may be missing the bigger picture. “The paper adds something valuable. Implementation criteria and documentation criteria—all of this is important. It makes the benchmarks better,” says Marius Hobbhahn, CEO of Apollo Research, a research organization specializing in AI evaluations. “But for me, the most important question is, do you measure the right thing? You could check all of these boxes, but you could still have a terrible benchmark because it just doesn’t measure the right thing.”

Essentially, even a perfectly designed benchmark may be useless: one that tests a model’s ability to provide compelling analysis of Shakespeare’s sonnets won’t help someone who is really concerned about AI’s hacking capabilities.

“You’ll see a benchmark that’s supposed to measure moral reasoning. But what that means isn’t necessarily defined very well. Are people who are experts in that domain being incorporated in the process? Often that isn’t the case,” says Amelia Hardy, another author of the paper and an AI researcher at Stanford University.

There are organizations actively trying to improve the situation. For example, a new benchmark from Epoch AI, a research organization, was designed with input from 60 mathematicians and verified as challenging by two winners of the Fields Medal, the most prestigious award in mathematics. The participation of these experts fulfills one of the criteria in the BetterBench assessment. Even the most advanced current models can answer less than 2% of the questions on the benchmark, which means there’s a significant way to go before it is saturated.

“We really tried to represent the full breadth and depth of modern math research,” says Tamay Besiroglu, associate director at Epoch AI. Despite the difficulty of the test, Besiroglu speculates it will take only around four or five years for AI models to score well against it.