Product News
Introducing improved workflows and mobility for consistent and comprehensive data collection
Agile mobile robots like Spot can collect vast quantities of site data, turbocharging existing analysis tools and enabling teams to focus on action rather than observation. In Spot’s first year on the market, we’ve seen diverse teams across an array of industries put the robot to use. During that time, we worked closely with hundreds of Spot users to understand their application development workflow: how they attach sensors, analyze data, and integrate the robot into their existing systems. We identified common obstacles and mapped out an easier path to implementation.
Spot Release 2.1 acts on those insights, making it easy for you to attach your own sensors, collect and save the data you care about, and integrate that data into your existing systems. With 2.1, we’re launching several features that make Spot immediately useful out of the box for autonomous data collection missions. Operations teams can use these missions to repeatedly collect vital data at dangerous or remote sites.
New in this release:
Attaching new image sensors, like off-the-shelf spherical or thermal cameras, is now as easy as editing an example script and installing its Docker container on the Spot CORE compute payload. The new image sources show up on Spot’s tablet controller, and users can trigger captures both in teleoperation and in the easy-to-use Autowalk autonomy system. Spot can now be used to collect training images for computer vision models, to visualize data and model output live on the tablet controller, and to capture data from custom non-visual sensors like gas detectors or laser scanners.
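As a rough sketch of what consuming one of these custom sources looks like with the Spot Python SDK (bosdyn-client), the snippet below lists available image sources and pulls a frame. The service name spherical-camera-service, the source name spherical-camera, and the credentials are hypothetical placeholders for this example, not names shipped with the release:

```python
# Sketch only: fetch an image from a payload-registered image service.
# 'spherical-camera-service' is a hypothetical service name; the built-in
# cameras live under ImageClient.default_service_name ('image').
import bosdyn.client
from bosdyn.client.image import ImageClient

sdk = bosdyn.client.create_standard_sdk('CustomImageFetcher')
robot = sdk.create_robot('192.168.80.3')   # robot hostname or IP
robot.authenticate('user', 'password')     # replace with real credentials

image_client = robot.ensure_client('spherical-camera-service')

# Custom services implement the same Image API as the built-in cameras.
for source in image_client.list_image_sources():
    print(source.name)

# Capture one frame from the hypothetical custom source and save it.
response = image_client.get_image_from_sources(['spherical-camera'])[0]
with open('capture.jpg', 'wb') as f:
    f.write(response.shot.image.data)
```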
Spot can now attach metadata to images, associating them with the robot’s location, user-defined labels, or custom values such as GPS coordinates from an attached payload. This lets users put their data in context, for example: combining site photos from multiple missions into a single view, sorting images by asset ID, or collecting datasets for computer vision model training. We’ve doubled down on standard data types (JPEG images, JSON and CSV metadata files) to eliminate integration bottlenecks, and we’ve built a high-performance system for developers to write their own data streams into the robot’s logs.
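To make the standard-data-types point concrete, here is a small illustration in plain Python (not the on-robot API) of pairing a JPEG capture with a JSON metadata sidecar carrying a map location, a user label, and payload GPS coordinates. Every field name here is an assumption made for the example:

```python
# Hypothetical illustration of the JPEG + JSON pairing described above;
# field names are invented for the example, not a Boston Dynamics schema.
import json
from datetime import datetime, timezone

def save_capture(image_bytes: bytes, basename: str, waypoint_id: str,
                 label: str, gps: tuple) -> None:
    """Write <basename>.jpg plus <basename>.json so downstream tools can join them."""
    with open(f'{basename}.jpg', 'wb') as f:
        f.write(image_bytes)
    metadata = {
        'captured_at': datetime.now(timezone.utc).isoformat(),
        'waypoint_id': waypoint_id,             # robot's map location
        'label': label,                         # e.g. an asset ID for sorting
        'gps': {'lat': gps[0], 'lon': gps[1]},  # from an attached payload
    }
    with open(f'{basename}.json', 'w') as f:
        json.dump(metadata, f, indent=2)

# Placeholder bytes stand in for a real JPEG capture.
save_capture(b'\xff\xd8\xff\xd9', 'pump_station_07', 'waypoint-42',
             'asset-1138', (42.3601, -71.0589))
```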
Powerful tools aren’t powerful if operators can’t use them in the field, so we’ve streamlined the data collection workflow significantly. Users can capture data manually and autonomously in Autowalk and download it to the tablet’s SD card for easy off-robot use. Common actions and callbacks can be configured on the tablet for quick use during operation. We’ve also made numerous under-the-hood improvements to Spot’s industry-leading locomotion and autonomy, further enabling operators to focus on the job and not the robot.
In summary, Release 2.1 includes:
• Improved data collection during teleoperation and Autowalk missions
• Enhanced robot behaviors
• Pre-configured software improvements for the Spot CORE and CORE AI payloads
• On-robot log access for developers
• Easier system administration
These new features in Release 2.1 unlock Spot’s full data collection potential and set the stage for exciting new capabilities coming early next year: self-charging and remote operation. Upgrade to 2.1 today or contact sales to put Spot’s game-changing technology to work quickly and reliably, right out of the box.