Stretch, our mobile, autonomous robot for the warehouse, tackles the physically demanding work of unloading containers and trailers. Inside the container, the robot picks cases and places them onto a conveyor to be scanned, sorted, and moved through the warehouse. Sometimes, however, picking one box dislodges others, causing them to fall to the ground. It’s a familiar scenario for anyone who works in inbound warehouse operations: floor-loaded containers can be packed haphazardly, and cases can shift during transit.

Stretch recovers from these box falls automatically, locating the fallen boxes, retrieving them, and placing them on the conveyor. It’s important for the robot to do this quickly and without manual intervention, so recoveries should be as fast and seamless as the rest of the automated unloading operation. We have now developed a new behavior that streamlines and speeds up Stretch’s recovery process.

Picking Up the Pace

In the past, Stretch’s recovery behavior relied on the same perception system the robot uses to plan standard box picks. Cameras on Stretch’s mast capture images that the robot processes to determine each box’s location and orientation; in theory, recovering a dropped box is the same as grasping any other box from the container’s floor.

In practice, however, a drop takes the box out of the mast cameras’ current field of view. The robot knows a box has fallen, but not where it has landed. Stretch has to back up, take new pictures of the environment, and process those images to detect the box before retrieving it. This process is effective, but it costs time in a busy inbound warehouse environment where every second counts.

The new recovery behavior eliminates the need to take new pictures, cutting down the time to recover dropped boxes.

Looking with Lidar

In addition to the RGB and time-of-flight cameras on the perception mast, Stretch uses lidar for navigation as well as for safety functions. These lidar units are positioned on all four sides of the robot’s base and together create a point cloud of known surfaces around the robot, including:

  • Unpicked boxes in front of Stretch
  • The container’s walls
  • The conveyor where Stretch is depositing cases
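As a rough sketch of the kind of sensor fusion this implies (the sensor names, frame conventions, and function signatures below are illustrative assumptions, not Stretch’s actual software), the scans from each lidar unit can be transformed into the robot’s base frame and merged into a single cloud:

```python
import numpy as np

def merge_lidar_scans(scans, base_from_sensor):
    """Fuse scans from several lidar units into one point cloud.

    scans: dict mapping a sensor name to an (N, 3) array of points
        in that sensor's own frame.
    base_from_sensor: dict mapping the same names to 4x4 homogeneous
        transforms from the sensor frame to the robot's base frame.
    """
    merged = []
    for name, points in scans.items():
        T = base_from_sensor[name]
        # Rotate and translate sensor-frame points into the base frame.
        merged.append(points @ T[:3, :3].T + T[:3, 3])
    return np.vstack(merged)
```

With all four scans expressed in one common frame, the robot can reason about the geometry around it as a whole rather than one sensor at a time.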

When Stretch needs to recover a dropped box, these known surfaces can be filtered out of the points considered for the box’s potential location. The remaining point cloud data is then clustered to find groups of points that indicate the likely locations of fallen boxes. From these clusters, Stretch identifies corners, edges, and intersections, mapping the location and orientation of any fallen cases.
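One plausible way to implement this filter-and-cluster step uses off-the-shelf tools: DBSCAN for clustering and a principal-axes fit for box orientation. The helper interfaces, thresholds, and parameter values below are assumptions for illustration, not Stretch’s production pipeline:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def find_fallen_boxes(cloud, known_surfaces, margin=0.03,
                      eps=0.05, min_points=30):
    """Return candidate box poses from a fused lidar point cloud.

    cloud: (N, 3) points in the base frame.
    known_surfaces: list of callables; each takes (points, margin) and
        returns a boolean mask marking points that belong to a modeled
        surface (container wall, conveyor, unpicked boxes).
    """
    # 1. Discard points explained by known geometry.
    keep = np.ones(len(cloud), dtype=bool)
    for on_surface in known_surfaces:
        keep &= ~on_surface(cloud, margin)
    residual = cloud[keep]
    if len(residual) == 0:
        return []

    # 2. Cluster the unexplained points; each dense cluster is a
    #    likely fallen box (label -1 marks DBSCAN noise).
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(residual)
    clusters = [residual[labels == k] for k in set(labels) if k != -1]

    # 3. Fit an oriented box to each cluster: the principal axes give
    #    the orientation, and the projected extents give the size.
    boxes = []
    for pts in clusters:
        center = pts.mean(axis=0)
        _, _, axes = np.linalg.svd(pts - center, full_matrices=False)
        coords = (pts - center) @ axes.T
        boxes.append((center, axes, coords.max(0) - coords.min(0)))
    return boxes
```

Each returned tuple gives a candidate box’s center, orientation axes, and approximate dimensions, which is enough information to line up a grasp.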

Now, using only this point cloud data, Stretch can plan a recovery pick trajectory without taking a picture. This novel approach eliminates the need for the robot to move back and process a new image for box detection, significantly speeding up the recovery process and further simplifying inbound warehouse operations.
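To close the loop, a pose fitted from the point cloud can be handed straight to the motion planner. The sketch below assumes the box tuple produced above and a hypothetical planner entry point, plan_to_pose; it simply derives a top-down approach pose from the fitted box:

```python
import numpy as np

def plan_recovery_pick(box, plan_to_pose):
    """Turn a fitted box into a grasp request, with no new camera images.

    box: (center, axes, size) tuple as returned by find_fallen_boxes.
    plan_to_pose: placeholder for the motion-planning entry point.
    """
    center, axes, size = box
    # Assume the smallest principal axis is roughly vertical for a box
    # resting on the floor, and approach its top face from above.
    approach = center + np.array([0.0, 0.0, size[2] / 2.0])
    # Align the gripper with the box's dominant horizontal axis.
    return plan_to_pose(position=approach, align_with=axes[0])
```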