Planar P∅P: feature-less pose estimation with applications in UAV localization
IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)
We present a featureless pose estimation method that, in contrast to current Perspective-n-Point (PnP) approaches, does not require n point correspondences to obtain the camera pose, allowing pose estimation from natural shapes that do not necessarily have distinctive features such as corners or intersecting edges. Instead of using n correspondences (e.g., extracted with a feature detector), we use the raw polygonal representation of the observed shape and estimate the pose directly in the pose space of the camera. Compared with a general PnP method, our approach requires neither n point correspondences nor a priori knowledge of the object model (except its scale), which is instead registered from a picture taken at a known robot pose. Moreover, we achieve higher precision because all the information in the shape contour is used to minimize the area between the projected and the observed shape contours. To emphasize that no point correspondences are used between the projected template and the observed contour, we call the method Planar P∅P. The method is demonstrated both in simulation and in a real application consisting of UAV localization, where comparisons with a precise ground truth are provided.
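The core idea, minimizing the area between the projected template contour and the observed contour over the pose space, can be illustrated with a toy 2D sketch. Everything below is an illustrative assumption, not the paper's implementation: the pose is reduced to a planar SE(2) transform (tx, ty, theta), the "observed" contour is a synthetically transformed template, the area between contours is approximated by a rasterized symmetric difference, and a simple coarse-to-fine pattern search stands in for the actual optimizer.

```python
import math

def point_in_polygon(p, poly):
    # Standard ray-casting test: count edge crossings of a horizontal ray.
    x, y = p
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):
            x_int = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_int:
                inside = not inside
    return inside

def transform(poly, tx, ty, th):
    # Apply a planar rigid transform (rotation by th, then translation).
    c, s = math.cos(th), math.sin(th)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in poly]

def sym_diff_area(poly_a, poly_b, step=0.08):
    # Rasterized proxy for the area between the two contours:
    # count grid cells covered by exactly one of the two polygons.
    pts = poly_a + poly_b
    xs, ys = [x for x, _ in pts], [y for _, y in pts]
    area, x = 0.0, min(xs)
    while x < max(xs):
        y = min(ys)
        while y < max(ys):
            p = (x + step / 2, y + step / 2)
            if point_in_polygon(p, poly_a) != point_in_polygon(p, poly_b):
                area += step * step
            y += step
        x += step
    return area

# Hypothetical template: an L-shaped contour of known scale (no rotational
# symmetry, so the pose is unambiguous). The "observed" contour is the
# template moved to a ground-truth pose.
template = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
true_pose = (0.30, -0.20, 0.40)
observed = transform(template, *true_pose)

# Coarse-to-fine pattern search over the pose space, minimizing the
# symmetric-difference area directly (no point correspondences used).
best, span = (0.0, 0.0, 0.0), 0.8
for _ in range(6):
    candidates = [(best[0] + dx * span, best[1] + dy * span, best[2] + dt * span)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dt in (-1, 0, 1)]
    best = min(candidates,
               key=lambda q: sym_diff_area(transform(template, *q), observed))
    span *= 0.5

print(best)  # should land near the ground-truth pose (0.30, -0.20, 0.40)
```

The raster proxy and pattern search are deliberately crude; the point is only that the objective is a function of the whole contour, so every boundary point contributes to the estimate rather than a handful of detected features.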
A. Amor, A. Santamaria-Navarro, F. Herrero, A. Ruiz and A. Sanfeliu, "Planar P∅P: feature-less pose estimation with applications in UAV localization," 2016 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Lausanne, 2016, pp. 15-20.