Why everyone’s excited about household robots again
I have a chair of shame at home. By that I mean a chair in my bedroom onto which I pile worn clothes that aren’t quite dirty enough to wash. For some inexplicable reason, folding and putting away those clothes feels like an overwhelming task when I go to bed at night, so I dump them on the chair for “later.” I would pay good money to automate that job before the chair disappears under a mountain of clothes.
Thanks to AI, we’re slowly inching towards the goal of household robots that can do our chores. Truly useful household robots we can easily offload tasks to have been a science-fiction fantasy for decades, and they remain the ultimate goal of many roboticists. But robots are clumsy and struggle to do things we find easy. The sorts of robots that can do very complex things, such as surgery, often cost hundreds of thousands of dollars, which makes them prohibitively expensive.
I just published a story on a new robotics system from Stanford called Mobile ALOHA, which researchers used to get a cheap, off-the-shelf wheeled robot to do some incredibly complex things on its own, such as cooking shrimp, wiping stains off surfaces, and moving chairs. They even managed to get it to cook a three-course meal, though that was with human supervision. Read more about it here.
Robotics is at an inflection point, says Chelsea Finn, an assistant professor at Stanford University, who was an advisor for the project. In the past, researchers were constrained by the amount of data they could train robots on. Now there is a lot more data available, and work like Mobile ALOHA shows that with neural networks and more data, robots can learn complex tasks fairly quickly and easily, she says.
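To get a feel for what that looks like in practice, here is a stripped-down sketch of imitation learning, the broad recipe behind systems like Mobile ALOHA: a neural network is trained, with ordinary supervised learning, to copy the actions a human demonstrator took. (This is a toy illustration in PyTorch with made-up dimensions and random stand-in data, not the Stanford team’s actual code or architecture.)

```python
# Toy imitation-learning sketch: train a network to map what the robot
# observes to the action a human demonstrator took in that situation.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 64, 14  # made-up sizes; real systems use camera images and joint targets

# Stand-in demonstration data: observation/action pairs collected by
# teleoperating the robot (here just random numbers for illustration).
demo_obs = torch.randn(1000, OBS_DIM)
demo_act = torch.randn(1000, ACT_DIM)

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(100):
    predicted = policy(demo_obs)                        # what the policy would do
    loss = nn.functional.mse_loss(predicted, demo_act)  # distance from the human's actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At deployment, the trained policy runs in a loop: observe, predict an action, execute.
```

The more demonstrations feed that loss, the more situations the same simple recipe covers, which is why data has become the bottleneck.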
While AI models, such as the large language models that power chatbots, are trained on huge datasets that have been hoovered up from the internet, robots need to be trained on data that has been physically collected. This makes it a lot harder to build vast datasets. A team of researchers at NYU and Meta recently came up with a simple and clever way to work around this problem. They used an iPhone attached to a reacher-grabber stick to record volunteers doing tasks at home. They were then able to train a system called Dobb-E (10 points to Ravenclaw for that name) to complete over 100 household tasks in around 20 minutes. (Read more from Rhiannon Williams here.)
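To see how such recordings might become training data, here is a hypothetical sketch (the directory layout, file formats, and pose representation are all invented for illustration; Dobb-E’s real pipeline is more involved): each video frame gets paired with the gripper’s pose at that moment, and the “action” label for a frame is simply how the pose changes by the next one.

```python
# Hypothetical sketch of turning home recordings into training pairs,
# loosely in the spirit of Dobb-E's data collection. The file names and
# formats below are invented, not the project's actual layout.
import json
from pathlib import Path

def load_recording(recording_dir: Path):
    """Pair each video frame with the gripper pose logged alongside it."""
    poses = json.loads((recording_dir / "poses.json").read_text())  # one pose per frame
    frames = sorted(recording_dir.glob("frames/*.jpg"))
    return list(zip(frames, poses))

def to_training_pairs(recording):
    """Label each frame with how the gripper pose changes by the next frame."""
    pairs = []
    for (frame, pose), (_, next_pose) in zip(recording, recording[1:]):
        action = [b - a for a, b in zip(pose, next_pose)]
        pairs.append({"image": str(frame), "action": action})
    return pairs

dataset = []
for rec_dir in Path("recordings").iterdir():
    dataset.extend(to_training_pairs(load_recording(rec_dir)))
```

Every session a volunteer records adds more such pairs for the same supervised recipe sketched above.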
Mobile ALOHA also debunks a belief held in the robotics community that hardware shortcomings were the main thing holding robots back from doing such tasks, says Deepak Pathak, an assistant professor at Carnegie Mellon University, who was not part of the research team.
“The missing piece is AI,” he says.
AI has also shown promise in getting robots to respond to verbal commands, and in helping them adapt to the often messy environments of the real world. For example, Google’s RT-2 system pairs a vision-language-action model with a robot, which allows the robot to “see” and analyze the world around it and to translate verbal instructions into movement. And a new system from DeepMind called AutoRT uses a similar vision-language model to help robots adapt to unseen environments, plus a large language model to come up with instructions for a fleet of robots.
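Here is a hypothetical sketch of that division of labor (the class and function names are invented stand-ins, not Google’s or DeepMind’s APIs): a vision-language model turns a camera image into a scene description, a language model turns the description into a candidate task, and each robot’s lower-level controller takes it from there.

```python
# Hypothetical AutoRT-style loop: a vision-language model describes the
# scene, a language model proposes a task, and the robot executes it.
# Every name here is an invented stand-in, not a real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Robot:
    capture_image: Callable[[], bytes]  # returns a camera frame
    execute: Callable[[str], None]      # hands a text task to a low-level policy

def direct_fleet(robots: list[Robot],
                 vlm_describe: Callable[[bytes], str],
                 llm_complete: Callable[[str], str]) -> None:
    for robot in robots:
        scene = vlm_describe(robot.capture_image())  # e.g. "a counter with a sponge and a mug"
        prompt = (f"A household robot sees: {scene}. "
                  "Suggest one safe, useful task it could attempt, in one sentence.")
        task = llm_complete(prompt)                  # e.g. "wipe the counter with the sponge"
        robot.execute(task)
```

The appeal of a split like this is that the hard common-sense reasoning lives in big pretrained models, leaving the robot-specific control problem smaller.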
And now for the bad news: even the most cutting-edge robots still cannot do laundry. It’s a chore that is significantly harder for robots than for humans. Crumpled clothes form weird shapes, which makes them hard for robots to perceive and handle.