Dense segmentation-aware descriptors
Book Chapter (2016)
Dense Image Correspondences for Computer Vision
Dense descriptors are becoming increasingly popular in a host of tasks, such as dense image correspondence, bag-of-words image classification, and label transfer. However, extracting descriptors at generic image points, rather than at selected geometric features, requires rethinking how to achieve invariance to nuisance parameters. In this work we pursue invariance to occlusions and background changes by introducing segmentation information into dense feature construction. The core idea is to use segmentation cues to downplay the features coming from image areas that are unlikely to belong to the same region as the feature point. We show how to integrate this idea with dense SIFT, as well as with the dense scale- and rotation-invariant descriptor (SID). We thereby deliver dense descriptors that are invariant to background changes, rotation, and/or scaling. We evaluate our technique on large-displacement motion estimation and wide-baseline stereo, and demonstrate that exploiting segmentation information yields clear improvements.
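The core weighting idea described above can be illustrated with a minimal sketch. This is not the chapter's implementation; it assumes a hypothetical per-pixel segmentation embedding (e.g., a soft-segmentation vector) and shows how spatial pooling around a feature point can downweight pixels whose embedding differs from the center's, so that contributions from likely background regions are suppressed:

```python
import numpy as np

def segmentation_aware_pool(features, embeddings, center, radius, sigma=0.5):
    """Pool dense per-pixel features around `center`, downweighting pixels
    whose segmentation embedding differs from the center pixel's.

    Hypothetical helper for illustration only:
      features   -- (H, W, D) dense per-pixel feature maps
      embeddings -- (H, W, E) per-pixel segmentation embeddings (assumed given)
      center     -- (row, col) of the feature point
    """
    H, W, D = features.shape
    cy, cx = center
    ref = embeddings[cy, cx]            # embedding at the feature point
    acc = np.zeros(D)
    total = 0.0
    for y in range(max(0, cy - radius), min(H, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(W, cx + radius + 1)):
            # Soft mask: close to 1 when the pixel likely shares the
            # center's region, close to 0 for a different region.
            d2 = np.sum((embeddings[y, x] - ref) ** 2)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            acc += w * features[y, x]
            total += w
    return acc / max(total, 1e-12)
```

With this weighting, a background change on the far side of a segmentation boundary barely perturbs the pooled descriptor, which is the invariance the abstract targets.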
Keywords: computer vision, feature extraction.
E. Trulls Fortuny, I. Kokkinos, A. Sanfeliu and F. Moreno-Noguer. Dense segmentation-aware descriptors. In Dense Image Correspondences for Computer Vision, 83-107. Springer, 2016.