Door traversal with the ATLAS Robot

For more details about this work, see our IEEE-RAS Humanoids 2015 paper. For a deeper treatment, see my Master's thesis on the door task.

Videos –

  • Door detection and walking to the door.
  • Complete door task run in the WPI Robotics Lab at 6x speed.
  • Vision-based door detection.
  • ATLAS robot POV.
  • Door and handle detection.
  • Test run of all capabilities.

Introduction

The main motivation behind this work was to enable humanoid robots to perform tasks that humans do quite easily, so that robots can be fielded in disaster scenarios that are too risky, too dangerous, or outright impossible for humans to enter. This was the idea behind the DARPA Robotics Challenge (DRC), which aimed to advance robotic technologies for assisting humans in responding to natural and man-made disasters. One of the tasks at the DRC Finals was the Door task, in which the robot had to open a door and go through it. The work described here was done by me and other members of the WPI-CMU DRC team for the DRC Finals.

The robot used was Boston Dynamics' ATLAS humanoid. ATLAS has 30 degrees of freedom (DoF), weighs approximately 180 kg, and stands 1.8 m tall. Its lower limbs, torso, and upper arms are driven by hydraulic joints, while the forearms and wrists are electrically powered. It has two Robotiq 3-finger hands as end-effectors, and a Carnegie Robotics Multisense SL, featuring a stereo camera and a spinning Hokuyo UTM-30LX-EW LIDAR, for perception. The robot also carries a battery system and wireless communication hardware so it can operate without a tether; at the DRC Finals there was no safety belay. For simulation, OSRF provides a package called drcsim that runs in the Gazebo simulator.

Fig 1. The Boston Dynamics’ ATLAS robot of the WPI-CMU team at the DRC Finals in Pomona, CA.

Approach

We used a four-step approach to complete the door task, as shown in Fig. 2. In the first step, door detection, the robot detects the door and the door handle. Next, the robot plans footsteps, walks up to the door, and positions itself relative to the detected handle position. The robot then opens the door by grasping and turning the handle, and blocks the door from closing by inserting its left arm. Finally, the robot plans footsteps to traverse through the door to the other side. An event-driven Finite State Machine (FSM), with the sub-tasks as its states, controlled the autonomous execution of the four-step process, with human validation at critical transitions; a minimal sketch of such an FSM follows Fig 2.

Fig 2. Four-step door task approach.
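As an illustration, here is a minimal sketch of such an event-driven FSM in Python. The state and event names are placeholders rather than our actual implementation, and in practice the validation events would only be emitted after an operator confirms the result.

    from enum import Enum, auto

    class DoorState(Enum):
        DETECT_DOOR = auto()
        WALK_TO_DOOR = auto()
        OPEN_DOOR = auto()
        WALK_THROUGH = auto()
        DONE = auto()

    # Event-driven transitions; "validated" events fire only after a human
    # operator confirms the result at that critical transition.
    TRANSITIONS = {
        (DoorState.DETECT_DOOR, "door_pose_validated"): DoorState.WALK_TO_DOOR,
        (DoorState.WALK_TO_DOOR, "stance_reached"): DoorState.OPEN_DOOR,
        (DoorState.OPEN_DOOR, "door_blocked_open"): DoorState.WALK_THROUGH,
        (DoorState.WALK_THROUGH, "traverse_complete"): DoorState.DONE,
    }

    def step(state: DoorState, event: str) -> DoorState:
        # Unrecognized events leave the FSM in place (e.g., to allow retries).
        return TRANSITIONS.get((state, event), state)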

Perception – Door detection

Assumptions –

  • Vertical door edges should be “almost” vertical in the image.
  • Door color should be consistent and the handle color should be significantly different from the door color.

The Carnegie Robotics Multisense head provided the perception data (stereo + LIDAR), and the OpenCV and PCL libraries were used to segment the door and the handle out of that data. The perception data consisted of a 2D color image of the scene, a stereo point cloud, and a LIDAR point cloud.

The first step was contrast stretching and filtering of the 2D image. Histogram equalization was performed to increase the global contrast of the image, and the image was then smoothed with a bilateral filter, which blurs flat regions while preserving edges.
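A minimal sketch of this preprocessing stage with OpenCV in Python (the input file name and filter parameters are illustrative, not our tuned values):

    import cv2

    img = cv2.imread("door_scene.png")  # hypothetical input frame

    # Histogram equalization works on one channel, so equalize the luma
    # channel in YCrCb space to raise global contrast without shifting color.
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    img_eq = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    # Bilateral filtering smooths surfaces while keeping edges sharp.
    img_smooth = cv2.bilateralFilter(img_eq, 9, 75, 75)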

The Canny edge detector was applied to the filtered image to extract edges, and the Probabilistic Hough Transform was applied to the edge image to segment out lines. Lines that were not close to vertical, or that were very short, were then removed; the thresholds for this outlier removal were chosen based on experiments. Line pairs separated by at least a minimum threshold pixel distance were grouped as possible door pairs, considering all permutations of the remaining straight lines. A sketch of this step follows Fig 3.

Fig 3. Segmented vertical lines in the image after Hough Transform and outlier line removal.
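Continuing the sketch above, the edge and line segmentation could look as follows; the Canny and Hough thresholds and the "almost vertical" tolerance are placeholders:

    import numpy as np

    gray = cv2.cvtColor(img_smooth, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Probabilistic Hough transform returns line segments (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                            minLineLength=100, maxLineGap=10)

    vertical = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            # Keep only long, almost-vertical segments (per the assumptions).
            if 80.0 <= angle <= 100.0 and np.hypot(x2 - x1, y2 - y1) >= 100.0:
                vertical.append((x1, y1, x2, y2))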

The 2D image was then reprojected into 3D, and the RANSAC algorithm was used to fit 3D line equations to the lines segmented in 2D. Based on the known width of a generic door (about 90 cm), line pairs with a perpendicular separation of 90 cm +/- 10 cm were selected as door candidates. Redundant line pairs were removed by merging pairs with the same line equations. To validate that a door candidate was a real door, diagonal points in 3D space were sampled and checked for existence on the door plane. The width test is sketched below.
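A sketch of the perpendicular-distance test between two candidate edges, given a point and unit direction for each fitted 3D line (the variable values stand in for the RANSAC line fits):

    import numpy as np

    def edge_separation(p1, d1, p2):
        # Perpendicular distance from a point on line 2 to line 1, assuming
        # the two door edges are close to parallel (both near vertical).
        v = p2 - p1
        return np.linalg.norm(v - np.dot(v, d1) * d1)

    # Dummy values standing in for the fitted lines (point, unit direction).
    p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
    p2, d2 = np.array([0.92, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])

    DOOR_WIDTH, TOL = 0.90, 0.10  # generic door width and tolerance, meters
    is_candidate = abs(edge_separation(p1, d1, p2) - DOOR_WIDTH) <= TOL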

The door normal was calculated as the cross product of the vector joining the two door edges and the direction of one of the edges.
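Continuing the same sketch, the door normal then follows from one cross product:

    # Cross the vector joining the two door edges with the direction of one
    # edge; normalizing gives a unit door normal.
    across = p2 - p1
    normal = np.cross(across, d1)
    normal = normal / np.linalg.norm(normal)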

Fig 4. Detected door in a complex scene.

Handle detection –

  • The mean (µ) and standard deviation (σ) of the pixel values inside the previously detected door region were calculated.
  • Connected Component Analysis (CCA) was used to grow the region of pixels whose values fell within the range µ − σ to µ + σ, and those pixels were colored white.
  • Based on prior knowledge of whether the handle was on the left or the right side of the door, a search region was selected that excluded the central part of the door.
  • Within that region, the first non-white blob from the bottom whose contour area exceeded a threshold was selected as the door handle (a sketch follows Fig 5).
Fig 5. Handle detection in a complex scene from Fig 4.
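A rough sketch of this handle segmentation, assuming the grayscale door region has been cropped out of the image and the handle is on the left third of the door; a simple µ ± σ threshold stands in here for the full CCA region growing:

    import cv2
    import numpy as np

    door_roi = cv2.imread("door_region.png", cv2.IMREAD_GRAYSCALE)  # hypothetical crop
    mu, sigma = door_roi.mean(), door_roi.std()

    # Pixels within one standard deviation of the mean are "door colored"
    # and painted white; everything else is a candidate handle blob.
    door_mask = (door_roi >= mu - sigma) & (door_roi <= mu + sigma)
    candidates = (~door_mask).astype(np.uint8) * 255

    # Restrict the search to the handle-side strip of the door.
    strip = candidates[:, : door_roi.shape[1] // 3]

    # Take the lowest blob in the strip whose contour area clears a threshold.
    # (OpenCV 4.x returns (contours, hierarchy) from findContours.)
    contours, _ = cv2.findContours(strip, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    big = [c for c in contours if cv2.contourArea(c) > 200.0]
    handle = max(big, key=lambda c: cv2.boundingRect(c)[1], default=None)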

The entire door detection algorithm was repeated over a temporal window of 10 frames, and each detection was recorded. After the 10 iterations, the detection with the maximum number of occurrences was selected as the door.
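A small sketch of this voting step, where each frame's detection is quantized so that near-identical detections of the same door land in the same bin; the pose parameterization and grid sizes are placeholders:

    from collections import Counter

    def quantize(door_pose, grid=0.05, ang=5.0):
        # Snap (x, y, z, yaw) to a coarse grid so repeated detections from
        # consecutive frames vote for the same key.
        x, y, z, yaw = door_pose
        return (round(x / grid), round(y / grid), round(z / grid), round(yaw / ang))

    detections = []  # one (x, y, z, yaw) tuple per frame, from the pipeline above
    votes = Counter(quantize(d) for d in detections)
    if votes:
        best_bin, hits = votes.most_common(1)[0]  # detection with max occurrence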

The flow of the algorithm is summarized pictorially below (Fig. 6) –

Fig 6. Door and handle detection algorithm flow.

Manipulation – Motion Planning

Motion planning for a humanoid robot is tricky because several constraints must be satisfied at every timestep; for example, the center of mass must be kept over the support polygon at all times. We used TrajOpt, an optimization-based motion planner, to generate a path from an initial trajectory while satisfying the specified constraints. Trajectory optimization yields solutions quickly for high-dimensional problems like this one, but it can get stuck in local minima and is sensitive to the initial trajectory seed provided. Since TrajOpt had the best speed in benchmark tests against other optimization-based algorithms, we adopted a modified TrajOpt as our motion planning library and set up costs and constraints for each manipulation task.

Constraints –

  • Joint limits constraint
  • Joint posture constraint
  • Cartesian posture constraint
  • Center of mass constraint
  • Collision avoidance constraint

To generate feasible plans, a variety of costs and constraints were set.

  • Approaching Handle – A Cartesian posture constraint was added on the final timestep of the trajectory for approaching the door handle. The parameters of the constraint were the desired position and orientation of the robot's end-effector for grasping the handle, computed from the handle configuration detected by the robot's vision system (a sketch of such a pose constraint follows Fig 9).
  • Turning Handle – During the handle-turning motion, the handle hinge does not translate; it only rotates. Two Cartesian posture constraints were applied to the trajectory. The first was a final-timestep constraint: the offset transform ^{1}T_2 maps the grasper frame C_1 to the current handle hinge frame C_2, and the target frame C_3 is the current handle hinge frame rotated by 80° about the hinge axis. The second was a posture constraint over all trajectory steps, which limited the movement of the handle hinge frame so that, through the coefficients of the posture constraint, it could only rotate about the hinge axis. When the handle was held in the hand, the transform ^{1}T_2 was obtained by adding a minor offset to the current gripper frame configuration, computed by forward kinematics.
Fig 7. Handle turning plan.
  • Pulling the door open – Similar to the handle-turning motion, opening the door also has a rotation-only point, located on the door hinge. The offset transform ^{1}T_2 was therefore calculated using the width of the door, and the target frame C_3 was the current door hinge frame C_2 rotated by 40° about the hinge axis, a value determined experimentally. The movement of the door hinge frame was constrained to rotation about the hinge axis only.
Fig 8. Pulling the door open plan.
  • Blocking the door from closing – After the robot pulled the door open, it had to insert the other hand to prevent the door from closing again. Two Cartesian posture constraints were added for this motion: one kept the hand grasping the handle positioned appropriately during the motion, and the other moved the free hand to a target position behind the door, defined according to the position of the door handle.
Fig 9. Inserting hand to block the door from closing.
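As a concrete illustration of these Cartesian posture constraints, below is a minimal sketch in trajoptpy's JSON problem format: reach a grasp pose at the final timestep while penalizing joint velocity and collisions. The manipulator and link names, pose values, and coefficients are placeholders rather than our actual task parameters, and an OpenRAVE environment `env` with the robot loaded is assumed to exist.

    import json
    import trajoptpy  # TrajOpt Python bindings

    request = {
        "basic_info": {"n_steps": 10, "manip": "left_arm", "start_fixed": True},
        "costs": [
            {"type": "joint_vel", "params": {"coeffs": [1]}},
            {"type": "collision", "params": {"coeffs": [20], "dist_pen": [0.025]}},
        ],
        "constraints": [
            {   # Cartesian posture constraint on the final timestep (grasp pose)
                "type": "pose",
                "params": {
                    "xyz": [0.6, 0.3, 1.0],        # desired hand position (m)
                    "wxyz": [1.0, 0.0, 0.0, 0.0],  # desired hand orientation (quat)
                    "link": "l_hand",              # end-effector link (placeholder)
                    "timestep": 9,
                },
            },
        ],
        "init_info": {"type": "stationary"},  # seed with the current posture
    }

    # env: OpenRAVE Environment with the robot loaded (assumed to exist)
    prob = trajoptpy.ConstructProblem(json.dumps(request), env)
    result = trajoptpy.OptimizeProblem(prob)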

Results

Fig 10. Door and handle detection at various perspectives.
Fig 11. Door and handle detection with the normal visualization on the robot.
Fig 12. The WPI-CMU Atlas robot traversing the door at the DRC Finals in Pomona, CA.

Acknowledgment

This work was sponsored by the Defense Advanced Research Projects Agency, DARPA Robotics Challenge Program, under Contract No. HR0011-14-C-0011. This work would not have been possible without Xianchao Long, my colleague at WPI, the guidance of my advisers, Professor Taskin Padir, Professor Michael Gennert, and Professor Christopher Atkeson, and the other members of the WPI-CMU DRC team.
