Google DeepMind trained a robot to beat humans at table tennis
The system is far from perfect. Although the table tennis bot was able to beat all the beginner-level human opponents it faced, and 55% of those playing at amateur level, it lost every game against advanced players. Still, it’s an impressive advance.
“Even a few months back, we projected that realistically the robot may not be able to win against people it had not played before. The system certainly exceeded our expectations,” says Pannag Sanketi, a senior staff software engineer at Google DeepMind who led the project. “The way the robot outmaneuvered even strong opponents was mind blowing.”
And the research is not all fun and games. In fact, it represents a step toward creating robots that can perform useful tasks skillfully and safely in real environments like homes and warehouses, a long-standing goal of the robotics community. Google DeepMind’s approach to training machines is applicable to many other areas of the field, says Lerrel Pinto, a computer science researcher at New York University who did not work on the project.
“I’m a big fan of seeing robot systems actually working with and around real humans, and this is a fantastic example of this,” he says. “It may not be a strong player, but the raw ingredients are there to keep improving and eventually get there.”
To become a proficient table tennis player, a human needs excellent hand-eye coordination, the ability to move rapidly, and the capacity to make quick decisions in reaction to an opponent—all of which are significant challenges for robots. Google DeepMind’s researchers used a two-part approach to train the system to mimic these abilities: they first used computer simulations to train the system to master its hitting skills, then fine-tuned it using real-world data, allowing it to improve over time.
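The two-part recipe described above—pretrain in simulation, then fine-tune on real-world data—can be illustrated with a deliberately tiny sketch. This is not DeepMind’s actual system; every name and number here is hypothetical. A one-parameter “hitting policy” maps ball speed to paddle speed, is first fit to cheap simulated rallies, and is then adjusted with a smaller learning rate on a handful of noisier “real” samples whose physics differ slightly from the simulator.

```python
import random

def train(w, data, lr, epochs):
    """Fit y ~ w * x by stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            w -= lr * 2 * (pred - y) * x  # gradient step toward the target
    return w

random.seed(0)

# Stage 1 data: the simulator's idealized physics say paddle speed
# should be 0.8 * ball speed, and simulated rallies are cheap to generate.
sim_data = [(x, 0.8 * x) for x in (random.uniform(1, 5) for _ in range(200))]

# Stage 2 data: in the "real world" the true coefficient is 0.9 (a
# hypothetical sim-to-real gap), and we only get a few noisy samples.
real_data = [(x, 0.9 * x + random.gauss(0, 0.01))
             for x in (random.uniform(1, 5) for _ in range(10))]

w = train(0.0, sim_data, lr=0.01, epochs=20)   # stage 1: pretrain in sim
w = train(w, real_data, lr=0.005, epochs=50)   # stage 2: real-world fine-tune
print(round(w, 2))
```

The point of the toy example is the division of labor: the simulator supplies bulk experience to get the policy roughly right, and the scarce real-world data nudges it the rest of the way—mirroring, at minuscule scale, why the researchers combined the two stages rather than relying on either alone.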