Researchers from Carnegie Mellon University (CMU) in the US found they could improve the accuracy of the map by incorporating the arm itself as a sensor, using the angles of its joints to better determine the pose of the camera.
This would be important for a number of applications, including inspection tasks, said Matthew Klingensmith, a PhD student at CMU.
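In practice, "using the arm itself as a sensor" comes down to forward kinematics: chaining the known geometry of each link with the measured joint angles yields the camera's pose without any visual input. A minimal sketch of that idea, assuming a hypothetical planar three-joint arm (an illustration, not the CMU team's code):

```python
import numpy as np

def rot_z(theta):
    """Homogeneous transform for a rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans_x(a):
    """Homogeneous transform for a translation along the x-axis."""
    T = np.eye(4)
    T[0, 3] = a
    return T

def camera_pose(joint_angles, link_lengths):
    """Chain joint rotations and link offsets to get the end-effector
    (camera) pose in the robot's base frame."""
    T = np.eye(4)
    for theta, a in zip(joint_angles, link_lengths):
        T = T @ rot_z(theta) @ trans_x(a)
    return T

# Example: a planar 3-link arm with known joint encoder readings
# (angles in radians, link lengths in metres -- hypothetical values).
pose = camera_pose([0.1, -0.4, 0.25], [0.3, 0.25, 0.15])
print(pose[:3, 3])  # camera position implied by the joint angles
```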
That is important because robots usually have heads that consist of a stick with a camera on it, Srinivasa said. They cannot bend over like a person could to get a better view of a workspace.
However, an eye in the hand is not much good if the robot cannot see its hand and does not know where its hand is relative to objects in its environment.
A popular solution for mobile robots is simultaneous localisation and mapping (SLAM), in which the robot pieces together input from sensors such as cameras, laser radars and wheel odometry to build a 3D map of a new environment and to figure out where it is within that 3D world.
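The map-building half of SLAM can be pictured as back-projecting each depth reading into the world frame using the current pose estimate. A simplified sketch, assuming a pinhole depth camera with hypothetical intrinsics (fx, fy, cx, cy):

```python
import numpy as np

def depth_to_world(depth, fx, fy, cx, cy, T_world_cam):
    """Back-project a depth image into 3D map points, given the
    current estimate of the camera pose T_world_cam (4x4)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole model: pixel -> camera ray
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts_world = (T_world_cam @ pts_cam.T).T[:, :3]
    return pts_world[depth.reshape(-1) > 0]   # drop invalid (zero) depths

# Toy example: a flat 4x4 depth image, 1.5 m everywhere, camera at origin.
depth = np.full((4, 4), 1.5)
pts = depth_to_world(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0,
                     T_world_cam=np.eye(4))
print(pts.shape)  # (16, 3)
```

Each new frame adds points to the map; the accuracy of the map hinges on how good the pose estimate T_world_cam is, which is where the arm's joint encoders come in.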
"There are several algorithms available to build these detailed worlds, but they require accurate sensors and a ridiculous amount of computation," Srinivasa said.
Those algorithms often assume that little is known about the pose of the sensors, as might be the case if the camera was handheld, Klingensmith said.
"Automatically tracking the joint angles enables the system to produce a high-quality map even if the camera is moving very fast or if some of the sensor data is missing or misleading," Klingensmith said.
The researchers demonstrated their Articulated Robot Motion for SLAM (ARM-SLAM) using a small depth camera attached to a lightweight manipulator arm, the Kinova Mico.
In using it to build a 3D model of a bookshelf, they found that it produced reconstructions equivalent to or better than those of other mapping techniques.