Trends in Robotics
Thanks for joining us. I'm Chris Thorne. I'm here with Aaron Yabroff and James Kuzio. We'll be discussing the new Atlas hardware design today. I lead the Atlas hardware team. I've been at Boston Dynamics for about 15 years; over half of that was working on Atlas exclusively. Prior to joining BD, I was at the GRASP lab at the University of Pennsylvania, where I received my PhD in Mechanical Engineering. Aaron, can you give us a little bit of background on yourself? I've been working with Boston Dynamics for six years. I originally came on to lead the industrial design for the Stretch program. Prior to that, I worked for design agencies for about 25 years, working in medical devices for the first part of that, and then eventually transitioning over to consumer products. At the tail end of that work, I led the design on the Spot program. Great. James, can you tell us a little bit about yourself and how you came to Boston Dynamics? Sure. I just joined BD about nine months ago. Prior to that, I spent the first part of my career in automotive. And right before this, I was in additive manufacturing. But in between that, I spent 12 years in consumer electronics at Apple, where I led product design for the Mac division. So a lot of people who work at Boston Dynamics love to talk about their first time seeing the robots in real life, so I thought it'd be fun to hear what your experiences were. I'm going to talk about the first time I saw Atlas because that was truly memorable. There are lots of videos of Atlas on YouTube and whatnot, and I've seen them all. But when you're watching a video on YouTube, something in the back of your mind is telling you, this isn't real. This is AI. When you're in the lab and you see Atlas standing in front of you doing its thing, your brain at first wants to say, this can't be real. But then you realize, no, this is right in front of me. This is as real as it gets.
And I'm just getting goosebumps now thinking of that moment because it was-- I don't have that anymore. And that's a shame because it's just so natural to me to be around these robots. But I'll never forget that first day. How about you? The first time I saw a Boston Dynamics robot in person was the Hydraulic Spot. And at this point, I have a hard time remembering seeing it in person versus seeing it on YouTube, having it be kicked by somebody at Boston Dynamics. But the first time I was ever made aware of the company was in the early '90s, when I saw one of the two-legged robots on a video. And it was the most incredible thing I'd ever seen. It seemed like the world was going to change and this was right around the corner. And I never expected that I would be working for the company one day. Yeah. I think for me, the first time I saw the robots, or some of them anyway, was during my interview. I did my presentation, and they took me through the lab. I think I saw Sand Flea, LS3, Cheetah, and maybe a few others, and I just remember thinking it was the coolest thing I'd ever seen. And while we were doing it, they were asking me a ton of questions about the robots. Why do you think we did this? Or how do you think this works? And I didn't even realize at the time that it was still part of the interview. Then I went back to my hotel room and talked to my wife, and I said, I can't believe this is a real job. I can't believe people get paid to do this. I need to work there. This is literally a dream job for a mechanical engineer. So, Aaron, when you first arrived at BD, we had just started our Spot product journey and you were just starting to think about what our design language was for our products. How has your thinking in that area evolved over time? And how does it apply to the new Atlas design? So when I first came on to BD, I'd been working with Spot for three years.
When we first started that program, the real challenge was how do we make this otherworldly robot look like a product? That was really our design goal. And the challenge there is that a lot of companies, a lot of people, really want to make these robots look like the robots that they grew up with, what they expect a robot to look like, which usually is something from science fiction, a movie prop, or a costume. And that's not really delivering on what a product needs to be. So when we were thinking about Spot, we were thinking about, well, how is this robot going to be used? And we didn't quite know. We were still exploring all of the possibilities. So we were thinking about this as a modular platform that our customers could use as they saw fit, allowing them to put their products on the back of it, whether that's instrumentation or something that would measure something during an inspection. We were really looking at how the robot moved and trying to reflect that movement in the design of the robot. So a lot of the cladding that you see on Spot is there to protect the robot while it's navigating its environment, whether it's going up stairs or falling down stairs. It needs to be able to protect its cameras and its sensitive instrumentation. The Stretch robot is a much more traditional robot. It's a purpose-built robot going into a customer environment, in this case, a shipping center or warehouse. Stretch needs to look similar to equipment like fork trucks, equipment that people are used to working around. Atlas is a purpose-built robot that we're putting into an environment with people, and that's a manufacturing environment. And we're prioritizing the tasks that the robot is going to be doing. We're targeting humanoid capability, but we're not targeting a humanoid form. So, James, you started recently at the company. What did you think the first time you saw the concept sketches for the new robot?
So I remember our first meeting after I was hired, but before I started. We met in your office, and on your wall was a life-sized printout of the Atlas robot. And then you started to talk through some of the guiding principles behind the robot: the 24/7 uptime, the desire to have continuous range of motion on the joints, the desire for it to be robust, the desire to improve serviceability by having a lot of reused parts that could quickly be swapped on and off in the event something got damaged. And as you explained that, the robot just made sense. One example is its ability to change its own batteries. Rather than hide the batteries beneath the skin, the batteries are prominently placed on the robot because the robot needs to be able to access them. And so we leaned into that. Yeah. The runtime was really a big product requirement, especially for the industrial use case. We did a lot of thinking about whether we could get away with an internal battery. If we leveraged fast-charging technology, could you have a fleet of robots in a factory that could go fast charge? And then you start to work through it and realize that that would be a lot of power consumption, to have a fleet of robots that need to fast charge. And the continuous runtime was such a priority, especially for Hyundai, because eventually, they want to go into general assembly. And there is really no downtime there. They need robots working 24/7. So it became pretty clear that swapping its own battery was where we had to go. And then, of course, having two batteries means you can always have the robot operational while it's swapping its battery. So that solution presented itself as where we had to go to really make a compelling industrial robot. When I look at the robot, it's obvious to me that it's not a form factor that's possible with just your average available technologies. And you explained to me that these actuators unlock this morphology. Can you say more about that?
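The dual-battery strategy discussed above can be sketched as a simple control loop: with two battery bays, one pack keeps the robot powered while the other is swapped. This is purely an illustrative sketch, not Boston Dynamics code; the class, thresholds, and drain rates are all hypothetical.

```python
# Illustrative sketch of a dual-battery hot-swap policy: the robot
# stays powered on one pack while a depleted pack is replaced, so
# operation is never interrupted. All names and numbers are made up.

SWAP_THRESHOLD = 0.15  # swap a pack when it drops below 15% charge

class DualBatteryRobot:
    def __init__(self):
        self.charge = [1.0, 1.0]  # state of charge of bays A and B

    def powered(self):
        # The robot stays up as long as either pack has charge.
        return any(c > 0.0 for c in self.charge)

    def step(self, drain=0.01):
        # Draw from the fuller pack first (one simple scheduling policy).
        bay = max(range(2), key=lambda i: self.charge[i])
        self.charge[bay] = max(0.0, self.charge[bay] - drain)

    def maybe_swap(self):
        # If a pack is low, swap it at a battery station while the
        # robot keeps running on the other pack.
        for bay, c in enumerate(self.charge):
            if c < SWAP_THRESHOLD:
                self.charge[bay] = 1.0  # fresh pack installed
                return bay
        return None

robot = DualBatteryRobot()
for _ in range(200):
    robot.step()
    robot.maybe_swap()
    assert robot.powered()  # never loses power mid-swap
```

The key property the sketch demonstrates is the one Chris describes: as long as swaps happen before a pack fully drains, the robot never has to go offline to recharge.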
Yeah, I think it's definitely the actuators that make this robot possible. Early on, we invested heavily in actuation technology, which I think paid off, because the actuators deliver, depending on what metric you're interested in, something like 2 to 5 times the performance of anything we could buy today. So that enabled us to put the same actuator in a lot more places on the robot. If you take some of our older robots, we've got unique actuators in different locations because they package better, or their performance characteristics make sense for those locations. But when you have a really compact, power-dense actuator, you can now put the same one in the hip as you can in the ankle. And you unlock all this modularity and simplicity in the robot that you just couldn't get any other way. So by investing in the actuator, we drastically simplified the robot. Most of the structures are just simple structural pieces connecting actuators together. From an engineering standpoint, that reuse is really great because the hardest part about engineering a device is making sure it's really robust and reliable. And the only way to do that is to build a lot of them and find every last little problem. When you're building a robot that has ten of one actuator and, what, thirteen of another, you're already building a lot of motors in just a single robot. You multiply that by a few robots and a few more, and all of a sudden you have a lot of motors. So you're learning a lot. That reuse not only benefits the robot's design, but it benefits the engineering learnings, it benefits the manufacturing, taking advantage of the economies of scale, the sourcing, and the service strategy. Being able to stock a single arm that can be populated on either side of the robot, and then being able to stock two types of motors that can be used in any one of the joints to take a limb that has been taken out of service and bring it back online, it's such a cool approach.
Well, from a design standpoint, the modularity dictates the visual design of the robot. You can't get around it: the shoulder is going to look exactly like the hip. The upper leg is going to look a lot like the-- Upper arm. The right leg and the left leg are the same part, and the same with the arms. So there's no real front and back to those limbs. They're symmetrical. And I think one of the biggest challenges was that if we make a change to one of those actuators, it's going to have an impact across the entire robot. So if we change a dimension, if we make it a little bigger, that change gets multiplied by four in the height of the robot. That was probably the biggest challenge with regards to the commonality across the robot. The challenge with executing these very power-dense motors is dealing with the thermals. They're generating a ton of heat, and that's both an engineering problem and a design problem. So how did you approach that? Yeah, that one was tricky. It might be one of the things I'm most proud of that we were able to accomplish. In the actuator design, we spent a lot of time trying to make it as efficient as possible, so that we weren't having to manage a ton of heat. But like you said, either way, you're managing a ton of heat. So we made the decision to try to go after passive cooling, which would, again, drastically simplify the robot. We don't have to have fans everywhere; there's only one fan in the robot, and it's in the head. There are no fans on any of the actuators. And to accomplish that, we had to do a ton of analysis work to make sure that we could passively cool every actuator through all the behaviors and all the ambient temperature conditions in these industrial settings. And that was a huge challenge. But you should speak to this; I think it probably contributed more than we originally thought to the visual design of the robot. Sure.
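The passive-cooling constraint just described can be illustrated with a first-order steady-state model: the fin surface must shed an actuator's heat losses by natural convection without the housing exceeding its temperature limit. This is a back-of-the-envelope sketch, not the actual thermal analysis; the power level, convection coefficient, and temperature limits below are made-up numbers.

```python
# First-order check for fanless cooling of an actuator, using
# Newton's law of cooling: P = h * A * (T_surface - T_ambient).
# Solve for the minimum fin area A that can shed the heat.
# All numbers here are illustrative, not Atlas specifications.

def required_fin_area(power_w, h_w_per_m2k, t_limit_c, t_ambient_c):
    """Minimum fin area (m^2) to dissipate power_w passively."""
    delta_t = t_limit_c - t_ambient_c
    if delta_t <= 0:
        raise ValueError("ambient exceeds the surface temperature limit")
    return power_w / (h_w_per_m2k * delta_t)

# Example: 40 W of continuous actuator losses, natural convection
# h ~ 10 W/(m^2*K), a 70 C surface limit, and a 40 C worst-case
# factory ambient.
area = required_fin_area(power_w=40.0, h_w_per_m2k=10.0,
                         t_limit_c=70.0, t_ambient_c=40.0)
print(f"{area * 1e4:.0f} cm^2 of fin area needed")  # 1333 cm^2
```

The sketch also shows why fin sizing couples to the visual design, as discussed above: hotter ambients or more dissipation shrink the temperature margin, which grows the required surface area, which grows the fins.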
Well, when we started, that wasn't something that we were focusing on, and it emerged as a goal somewhere mid-project. And I think we had some late-night discussions about, well, can we increase the fins by a couple of millimeters? And I would think about, well, how much taller is that going to make the robot? And now, how do we think about pinch and safety? Because all of these things are squeezing together a little bit more. We took the cooling fins and made them a cosmetic part of the robot. So when you see that on the outside of the robot, on the outside of the legs, on the outside of the arms, that is a functional part of the robot. And we're encouraging airflow. Airflow is behind the padding. And it's great not having to deal with fans and the noise of fans. Yeah. So you mentioned pinch, which was another really big requirement, specifically around handling safety, people handling the robot, and the off-state robot interacting with the environment in different ways. How difficult was it to incorporate pinch safety and handling safety into the robot? It adds a level of complexity that just increases the amount of thinking that you have to put into every part of the robot. What we really wanted was at least a 1-inch gap in all these places where we were concerned about someone being pinched or worried about entrapment. The challenge also is that we're putting cooling fins on those surfaces. So we want to create as much clearance as possible. It's a hot surface. There are fins that we don't want to press somebody's hand against. The safety and the pinch across the entire robot are something that we've put a lot of time into. And it impacts the robot in ways that we weren't expecting. If you need to accommodate a 1-inch gap, that's going to immediately impact the height of the robot. We're concerned about the pinch between the head and the shoulders, or the pinch between the pelvis and the mid-back. And it'll impact the width.
So we're concerned about the pinch in the knee and the pinch in the elbow. To address this, we created offset links. Those offset links make the robot wider. I think the legs were the biggest concern because they're an obvious departure from a human form. And that's what everybody is expecting to see. And we have something very similar on the elbows: the lower arm is offset from the upper arm. And again, this is for safety, and it's to increase the ROM as much as possible. Yeah, in general, I was surprised when we unveiled the robot that a lot of people said what you said, which is, oh, this makes sense. It's like something people hadn't seen, but they thought-- I was convinced we were going to get a ton of people saying, this robot looks really weird. I don't like it. Why doesn't it look like all the other ones? So that was really surprising, to me at least. Well, I think it makes the robot look purposeful. It helps make it look like a piece of equipment that is used for doing something. And I think that if you just have a humanoid, there's a little less to work with; by that, I mean a literal humanoid. It starts just becoming a mannequin, and you're just working toward the same problem that everybody else is working toward. We were really able to focus on solving our customers' problems. And that's ultimately what is driving the shape of the robot and everything you see: what are those tasks that the robot needs to do? And what is the most efficient way of doing them? So when we talk about ROM, that's range of motion, that's really how much the robot can move. What is it able to do with its arms and legs? Where can we put those grippers in space? So if you imagine making a snow angel, lying in the snow and waving your arms and making the largest wings you can, your arms are going to hit your head at one point, and they're going to hit your hips at one point. And that's your maximum range of motion in that plane.
So that is what we're doing for the robot. We want to increase the range of motion as much as possible. And maybe we can move the head out of the way a little bit, or move the hips out of the way a little bit. That's where what we're able to do mechanically sort of crosses over into what we can do with the behavior. There's another piece to that as well, and that is the fields of view for the cameras. Both of these things are invisible geometry around the robot. If we think of the fields of view the same way, we're projecting a rectangle out from the cameras that gets bigger as it gets farther away from the robot. That is so the robot can see its environment, and so it can see its grippers and the other parts of the robot and know where it is in space. And with both of these things, there's a safety consideration as well, where with the cameras, we want to make sure that the robot can see people in the environment. With the ROM, we want to make sure that people don't get injured, and make sure that we have adequate clearance at pinch points. So when we're thinking about the range of motion and the fields of view of the cameras, that's something that is going to directly impact the shape of the robot. Well, that's essentially why, or one of the reasons why, the cameras ended up in the head. Putting them up there gives you a fighting chance of being able to see without the body occluding the view. Yeah. If a human is working in a work cell, they obviously can't see most of what's around them. And that's just accepted fact. If someone takes a step backwards and bumps into something, well, that happens. But with a robot, it's almost unacceptable, because technology should allow you to avoid that situation. So we employ these cameras all around the head, but then it really dictates not just where the cameras go, but what can live around them.
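The "invisible geometry" of a camera's field of view described above can be sketched as a simple visibility test: project a viewing volume out from the head and check whether a point of interest, such as a gripper, falls inside it. This is an illustrative sketch only; the real system fuses multiple cameras with full frustum and occlusion models, and every pose and angle below is hypothetical.

```python
import math

# Sketch of a field-of-view check: model a head camera as a cone
# projected outward and test whether a 3D point (e.g. a gripper
# position) lies inside it. Poses and angles are made up.

def in_fov(point, cam_pos, cam_dir, half_angle_deg):
    """True if `point` lies inside the camera's cone-shaped field of view.

    cam_dir must be a unit vector along the camera's optical axis.
    """
    # Vector from the camera to the point of interest.
    v = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist == 0:
        return True
    # Angle between the camera axis and the direction to the point.
    cos_angle = sum(a * b for a, b in zip(v, cam_dir)) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= half_angle_deg

# Head camera at 1.8 m, looking forward and pitched 20 degrees down,
# with a 60-degree half-angle.
cam_pos = (0.0, 0.0, 1.8)
down = math.radians(20)
cam_dir = (math.cos(down), 0.0, -math.sin(down))  # unit vector

print(in_fov((0.6, 0.0, 1.0), cam_pos, cam_dir, 60))   # gripper ahead: True
print(in_fov((-0.5, 0.0, 1.0), cam_pos, cam_dir, 60))  # behind the head: False
```

A check like this makes the trade-offs in the conversation concrete: moving the camera, re-aiming it (the neck pitch joint), or widening its field of view all change which gripper positions pass the test.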
If it truly has 360 degrees of visibility, it can operate much more freely than a human can because it's totally aware of what's going on around it. And an example where the robot sometimes occludes the cameras is these handles that we've built out on the back of the robot. We've put those there as a safety measure so people can manipulate the robot easily without putting their hands in harm's way. So these are two areas where safety systems are competing with one another. Now, the robot can look around and change its position to see what's around those handles. But these are examples of competing concerns. We initially try to design something with no compromises, but that quickly becomes impossible. And then you need to start trading off features. And I remember this interesting conversation around these handles and how, yes, they do occlude specific angles of view, but it's really important that the operators who might be handling the robot in the unpowered state have something safe to grab onto. So we're going to figure out how to deal with those blind spots through intelligence, and behavior, et cetera, rather than just pure hardware design. Yeah. That's always really difficult to negotiate because there are many stakeholders, and you have to talk to everybody, and nobody's going to get everything that they want. Especially this early, because we'd like to believe that with modern reinforcement learning anything is possible, but it's going to take some time to develop. And we're sitting here in 2026 trying to design a piece of hardware, and we're trying to project forward what behaviors are going to be possible in 2028 and 2030. And we need to know which ones we can bet on and which ones we need to be a bit more conservative about. Because if we bet on a particular behavior that proves harder to implement, well, then there's going to be a performance regression we didn't intend.
And I think that's where the modularity of this design is really going to help us. Because we're not going to get it 100% right. And hopefully, the modularity will allow us to swap out parts of the robot, redesign pieces easily, and do that way faster than we would be able to if we had a much more integrated design. Another great thing about modularity is that it allows us to take a phased approach to the hardware development. Right now, one of the big goals of the program is giving the behaviors team a platform on which they can develop. And whereas we've got lots of hardware challenges we need to solve, they don't all need to be solved today. So we continue to iterate on those subsystems in parallel and intercept them with the program when they're ready, such that as we get to the key program milestones, we have the features we need at that time. But developing those features doesn't slow down the work that has to happen now. Installations at our partner sites will allow us to see, in the real world, how this robot performs. What does it need more of? What can it do with less of? And we can evolve that over the next couple of hardware iterations. Aaron, can you tell us a little bit about how the head industrial design evolved and why it is the way it is today? Yeah. When we were developing the prototype, that's when we first put a round head on the robot, and that was a much more humanoid visual design. The prototype? The prototype, yeah. So over time, I got used to it. And I think that's really key with all of this robot design. You do something that's weird looking, and you really need to give it time to gestate. The difference between prototype and product is that the prototype is really just designed for performance. All of those panels, everything on the robot, is just intended to be functional. With the product, we're trying to make it look good and intentional.
So when we looked at the product version of the head, I think there were a lot of things that I wanted to do, just having thought about it for a while. And one of them was to create that big silicone ring and think of it as a light-up icon. In addition to that, it would be padding. So it's a big, thick silicone ring. It can light up, and it can bump into things. That presents a really nice opportunity for us to explore the UX of the robot. When somebody walks into the environment, what does the robot do? What do people see? Does the robot regard them? And how does that manifest itself in that light? I think that there's also something really nice about that light ring: it avoids us having a face. It's just a ring. It's at the top of the robot. It's a face without becoming too literal. We don't have two eyeballs on the robot. It definitely houses the cameras, and it looks around like a real head. But we don't need to interact with the robot like we would interact with a person. Well, we have a rear light ring also, and then the little antenna post. What are those about? I think there's something really nice about how the front and rear light rings and that mast light came together. There's something iconic. It's a nice showcase for the UX that we're going to be exploring on the robot. Another interesting UX feature was the neck pitch degree of freedom. I know we debated that a lot, whether or not we absolutely needed it, or should we try to simplify the robot even more and get rid of neck pitch. But we determined it was important for potentially interacting with people. Yeah, it's an extra bit of movement. There's another actuator just to handle a nod. In the beginning, yeah, I think we wanted to have some way to acknowledge when somebody walked into the space, maybe gave a command, just a little 10-degree nod. And I think we went through a period of wondering whether we even needed that.
And it turned out we did need it for perception: we needed it to be able to see the robot's feet. We want to be able to see as close to the robot as possible, and in order to do that, just those few degrees looking down really helped. And I think what we use even more is looking up. So if we're reaching towards something on a shelf, we want to lift the head up as much as possible. So James, what have you found to be some of the more challenging aspects of the head design from a compute and sensing perspective? I lead the compute and sensing team here, which is a relatively new team, a team that's increasingly important as we rely on the robot to process that much more information. In this era of AI models and reinforcement learning, it'll be a long time before our friends in software say they have more than enough compute power. So we're being asked to execute something that's extremely efficient. I've worked on over 30 different personal computers in my time at Apple, and this is by far the most challenging computer I've ever worked on, and by far the coolest computer I've ever worked on. And I say 'computer' because the head is just a computer on a neck, though I hesitate to say 'just.' It has to have the performance of a powerful desktop computer, but the robustness of a mobile device. It needs to be waterproof to survive certain environments. Creating a very performant computer that's also waterproof is a really tough challenge, and it's one that we're continuing to iterate on. This is the fifth iteration of the humanoid head at BD. And there will certainly be a sixth and a seventh before we ship this product, because there's still a lot of learning to do. Have you worked on any computer that has to move around and potentially get bumped into things? Well, I've worked on lots of MacBook Pros and MacBook Airs, and those are handled, some of them, not very gently by our customers.
It's very challenging to make those robust, but the types of events that those are expected to survive are nothing compared to a humanoid robot potentially tripping and falling from 2 meters in height and impacting the edge of a table. These are all real scenarios during these early stages of humanoid development. Most companies show lots of videos of robots never falling. The reality is, in the lab, robots fall all the time. And if the head broke every time the robot fell, it would be a disaster. So we need something that's really, really robust. And over time, we have to figure out exactly how robust it needs to be to survive the real-world demands on a robot. Because anytime you make something robust, you're probably trading something else. You're probably trading lightness, or you're trading cost, or you're trading assembly complexity. And so we need to be sure we are not overengineering this. And in order to know that, we have to really think hard about what the robot's environment is in steady state, and what the expectations of the customer are on what a robot can survive. And the unique thing about a robot is that if it does take damage, this robot is designed to be repaired and brought back online in a matter of minutes. So maybe you build that into your strategy: you take advantage of the fact that you can repair it quickly in the calculation of how robust you need to make it. So it's a very challenging problem to solve because there is no right answer. There are lots of potential answers, and we need to decide on the philosophy that we're going to employ in the design of this. Well, this was fun. Thanks for joining us, and visit bostondynamics.com if you want to learn more.
The production version of Atlas is a departure from the typical humanoid form factor, favoring industrial utility over human likeness. Intended for purposeful work in an industrial setting, Atlas has a form factor that signals its role as a machine rather than a companion or friendly assistant. Join two lead hardware engineers and our head of industrial design for a technical discussion of how key product requirements, ranging from passive thermal management to a modular architecture, dictated a bold new vision for a humanoid.
Director - Hardware Innovation
Chris leads the hardware design and development of Atlas, combining technical excellence with cutting-edge technologies to bring one of the world’s most recognizable robots to life. With 15 years of experience at Boston Dynamics, he has been a key contributor to projects including Sandflea, Wildcat, and Stretch, and brings to his work deep knowledge of the company and its mission. Chris received his BS and MS in Mechanical Engineering from Lehigh University and his PhD in Mechanical Engineering from the University of Pennsylvania, where he conducted modular robotics and micro air vehicle research at the GRASP Robotics Laboratory.
Head of Industrial Design
Aaron leads the industrial design group, working closely with the hardware teams to develop the visual and functional design of market ready products. He is a graduate of the Rhode Island School of Design, with over 30 years of experience bringing form and definition to new technologies. Aaron’s design agency background began with high stakes usability and ergonomics for Bay Area medtech startups and industry giants like Medtronic, Abbott, and Siemens, eventually shifting focus to product innovation for consumer and B2B companies. In his time working with Boston Dynamics, Aaron has led the industrial design of the Spot, Stretch, and Atlas products.
Technical Director - Compute and Sensing
James is a seasoned engineering executive with over two decades of experience defining and delivering iconic hardware products. An MIT alumnus, his career has taken him from automotive and motorsports to consumer electronics, additive manufacturing, and now robotics. Most notably, Jim spent more than 10 years at Apple, rising to Director of Mac Product Design, where he oversaw the development of multiple Apple computer product lines. Through his career he has leveraged his expertise in scaling operations and complex product development to help companies optimize their engineering leadership and bring breakthrough technologies to market.