Case Study
Researchers at Meta are trying new tactics to get robots to follow generalized instructions, solve problems on their own, and navigate the physical world autonomously without a map.
Since Spot, the quadruped robot from Boston Dynamics, first became commercially available in 2019, industrial users have largely developed highly specific but valuable use cases for the robot. Energy companies and manufacturing facilities have deployed Spot to conduct autonomous and remote inspections, saving time and money and freeing workers to focus on higher-value tasks.
But Spot was also designed for another important customer group: researchers who want to use the robot as a platform to push the field forward. For the past several years, AI researchers at Meta have been doing just that.
“The goal of Meta AI and the Fundamental AI Research (FAIR) team is to advance state-of-the-art AI research,” explains Akshara Rai, a research scientist on the FAIR team. “We work on problems that we think are the most important and challenging problems in AI right now. The reason we are working on robotics is because we think the field represents some of the most crucial problems in AI research. You not only have to perceive the world, but also interact with and change the world.”
“Most of what we do is actually public research,” Sergio Arnaud, an AI resident at Meta, adds. “It’s done in more of an academic fashion, as opposed to product development.” Already, the researchers have improved the robot’s ability to use reasoning and planning to locate and retrieve items in unfamiliar spaces, and the team hopes to build on this research by training Spot using first-person video footage filmed by humans conducting everyday tasks.
“If we imagine the next iteration of digital home assistants, they will probably be agents that can perceive the world and also make actions in the world,” Rai says. “To get there, we need to develop high-level reasoning on platforms like Spot.”
In the popular imagination, the word “robot” has long conjured images of Rosie from The Jetsons – a do-it-all autonomous machine that can move, talk, and interact with the world in much the same way a human can. Even as recently as a few years ago, many people expected that completely self-driving cars would be available imminently, illustrating the optimism surrounding the development of autonomous physical machines.
But in reality, the evolution of robots that interact with the physical world has been incremental. Most industrial robots, for instance, are still stationary machines performing repeatable tasks – mechanical arms on manufacturing assembly lines, for example. Meanwhile, the world has watched in astonishment over the past couple of years as artificial intelligence programs have proven capable of producing stunning visual art and highly readable copy.
Zack Jackowski, general manager for Spot at Boston Dynamics, notes that robotics advances fastest when disparate groups – development teams at robotics companies like Boston Dynamics, academics at universities, industrial research and development teams, and researchers at companies like Meta – work in parallel and build on one another’s findings.
“The way that Meta is using Spot is exactly how we hoped people would use the robot when we designed it,” Jackowski says. “Right now, Spot can walk a repeatable path through an industrial facility and keep track of equipment performance, and that’s valuable. We would all love it if we could get to the point where we can say, ‘Hey Spot, go take a look at that pump on the floor there.’ That’s the kind of thing that the Meta team is working on.”
For Meta’s recent research, the FAIR team trained three Spot robots with simulation data, letting the robots “see” what it looks like to retrieve everyday objects in a variety of apartment and office settings. Then, the team – working across three sites in California, New York, and Georgia – tested the robots’ ability to navigate new spaces and overcome unexpected obstacles to retrieve those objects in the real world.
“We were very interested in testing out high-level reasoning and planning,” says Jimmy Yang, a research engineer on Meta’s FAIR team. “We wanted to see if we could give the robot very high-level instructions, and then if the robot could figure out the steps it needed to take to solve the problem.”
Rai says that Meta’s research was more advanced than previous attempts to provide a robot with general commands. “Compared with a more traditional way of doing the same tasks, we found that we could get much higher success because our policies were more robust, and they allowed the robot to deal with disturbances that happened in the real world,” she says. “If the object is not where it is supposed to be, the robot can re-plan based on the environment and the information that the robot has. Spot is already very good at navigating an environment if we give it a map beforehand. The most important thing we’re adding is this generalization to a completely unseen environment.”
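The control flow Rai describes – decompose a high-level goal into skills, execute, and re-plan when the world does not match expectations – can be sketched in a few lines. The sketch below is purely illustrative; none of the helper names come from Meta’s code, and the real system uses learned policies rather than hand-written logic.

```python
# Hypothetical sketch of the plan/execute/re-plan loop described above.
# perceive, plan, and execute are stand-ins for learned components;
# the point is the control flow, not the implementation.

def fetch_object(goal, perceive, plan, execute, max_replans=5):
    """Attempt a high-level instruction, re-planning after disturbances.

    perceive() -> state        current world state from onboard sensors
    plan(state, goal) -> list  sequence of primitive skills to attempt
    execute(step) -> bool      run one skill; False if it is disrupted
    """
    for _ in range(max_replans):
        state = perceive()            # e.g. notice the mug has moved
        steps = plan(state, goal)     # decompose the goal into skills
        if not steps:
            return False              # goal unreachable from this state
        for step in steps:
            if not execute(step):     # disturbance: object moved, path
                break                 # blocked, grasp slipped, etc.
        else:
            return True               # every step succeeded
        # a step failed: loop back, re-perceive, and re-plan
    return False
```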
This research yielded what Meta describes as two major advancements toward general-purpose embodied AI agents capable of performing challenging sensorimotor skills. The first is an artificial visual cortex called VC-1, which matches or outperforms the best-known results on 17 different sensorimotor tasks in virtual environments. The second is a new approach called adaptive (sensorimotor) skill coordination, or ASC, which achieves near-perfect performance on robotic mobile manipulation testing.
Using ASC, Spot succeeded on 98 percent of its attempts to locate and retrieve an object in an unfamiliar space, compared to a 73 percent success rate using more traditional methods.
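Meta released VC-1 publicly alongside this research. As a rough sketch of how a frozen visual cortex serves as a perception backbone, here is a loading pattern modeled on Meta’s eai-vc repository – treat the package path, function names, and return values as assumptions about that codebase rather than a verified interface:

```python
# Sketch: using VC-1 as a frozen visual backbone for a downstream policy.
# The vc_models import is modeled on Meta's public eai-vc repository;
# verify the exact names against the repo before relying on them.
import torch
from vc_models.models.vit import model_utils

# Load the pretrained encoder, its embedding size, and image transforms.
model, embd_size, model_transforms, model_info = model_utils.load_model(
    model_utils.VC1_BASE_NAME
)
model.eval()  # the visual cortex stays frozen; only a policy head trains

def embed(image):
    """Map one camera image to a VC-1 feature vector for the policy."""
    x = model_transforms(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        return model(x).squeeze(0)            # shape: (embd_size,)
```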
Rai says she was surprised by how “taken” she was with Spot as a research platform. “I didn’t expect to be a fan,” she says. “But it’s actually a really robust platform that provides high-level application programming interfaces to control the robot. I’ve worked with a lot of robots, and I was definitely very surprised by how general and robust the APIs are.”
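The interfaces Rai is referring to are exposed through Boston Dynamics’ Spot Python SDK. Here is a minimal sketch of what out-of-the-box control looks like, assuming the bosdyn-client package, with the hostname and credentials as placeholders:

```python
# Minimal Spot SDK sketch: connect, take the lease, power on, and stand.
# The IP address and credentials below are placeholders.
import bosdyn.client
from bosdyn.client.lease import LeaseClient, LeaseKeepAlive
from bosdyn.client.robot_command import RobotCommandClient, blocking_stand

sdk = bosdyn.client.create_standard_sdk('SpotQuickstart')
robot = sdk.create_robot('192.168.80.3')   # robot hostname or IP
robot.authenticate('user', 'password')     # placeholder credentials
robot.time_sync.wait_for_sync()            # required before commanding

lease_client = robot.ensure_client(LeaseClient.default_service_name)
with LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):
    robot.power_on(timeout_sec=20)
    command_client = robot.ensure_client(RobotCommandClient.default_service_name)
    blocking_stand(command_client, timeout_sec=10)  # stand the robot up
```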
Arnaud says that “everything” about working with the robot was a pleasant surprise. “It’s really impressive to see it working,” he says. “It’s so efficient and smooth. It’s amazing.”
That’s exactly the reaction that Jackowski hopes to get from researchers. It’s important, he notes, for roboticists to have access to a platform that offers a high degree of reliability, rather than spending precious research time troubleshooting technical issues. “Spot is the only out-of-the-box solution for mobile manipulation research,” he notes. “You take it out of the box, and it works, and it comes with a warranty, and there’s a service plan.”
“It’s not a science project,” Jackowski adds. “Your science project should be your research, not the robot.”
Boston Dynamics engineers worked for many years to bring Spot to market. Jackowski says that Meta’s work is the “perfect example” of how making the platform available to outside users and researchers is accelerating progress in a way that would have been impossible if Spot were still exclusively an in-house project.
“AI chatbots have access to billions of words, but there’s simply not that much data for physical robots to use to learn,” Arnaud says. “It’s been amazing to see the rate of development, and we’re all very impressed. But to see all of this research translate into generalized robots working in the real world – I think that still requires more time, and more data.”