Publication

Dense segmentation-aware descriptors

Book Chapter (2016)

Book Title

Dense Image Correspondences for Computer Vision

Publisher

Springer

Pages

83-107

Doc link

http://dx.doi.org/10.1007/978-3-319-23048-1_5

Abstract

Dense descriptors are becoming increasingly popular in a host of tasks, such as dense image correspondence, bag-of-words image classification, and label transfer. However, the extraction of descriptors at generic image points, rather than at selected geometric features, requires rethinking how to achieve invariance to nuisance parameters. In this work we pursue invariance to occlusions and background changes by introducing segmentation information within dense feature construction. The core idea is to use segmentation cues to downplay the features coming from image areas that are unlikely to belong to the same region as the feature point. We show how to integrate this idea with dense SIFT, as well as with the dense scale- and rotation-invariant descriptor (SID). We thereby deliver dense descriptors that are invariant to background changes, rotation, and/or scaling. We explore the merit of our technique in conjunction with large displacement motion estimation and wide-baseline stereo, and demonstrate that exploiting segmentation information yields clear improvements.
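The core idea of downweighting descriptor contributions from pixels likely to lie in a different region than the feature point can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the exponentiated-distance affinity, the `lam` parameter, and the single-histogram SIFT-like aggregation are all simplifying assumptions.

```python
import numpy as np

def segmentation_weights(embeddings, center, lam=1.0):
    """Affinity of each pixel's soft-segmentation embedding to the feature
    point's embedding; pixels probably in a different region get weights
    near zero. (Hypothetical helper; lam is an arbitrary bandwidth.)"""
    d = np.linalg.norm(embeddings - embeddings[center], axis=-1)
    return np.exp(-lam * d)

def masked_orientation_histogram(patch, embeddings, center, n_bins=8):
    """Toy SIFT-like orientation histogram over a patch, with each pixel's
    gradient-magnitude vote scaled by its segmentation affinity."""
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)                      # orientation in (-pi, pi]
    flat_emb = embeddings.reshape(-1, embeddings.shape[-1])
    w = segmentation_weights(flat_emb, center).reshape(patch.shape)
    bins = ((ori + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), (mag * w).ravel())  # weighted votes
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

# Example: a patch split into two regions; the feature point sits in the
# left region, so gradients from the right region are downweighted.
patch = np.zeros((9, 9)); patch[:, 5:] = 1.0
emb = np.zeros((9, 9, 1)); emb[:, 5:, 0] = 1.0   # stand-in soft segmentation
h = masked_orientation_histogram(patch, emb, center=40)  # 40 = pixel (4, 4)
```

In the full descriptors of the chapter this masking is applied per spatial bin of dense SIFT (and to the log-polar sampling of SID), not to one global histogram as in this toy version.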

Categories

computer vision, feature extraction

Scientific reference

E. Trulls Fortuny, I. Kokkinos, A. Sanfeliu and F. Moreno-Noguer. Dense segmentation-aware descriptors. In Dense Image Correspondences for Computer Vision, pages 83-107. Springer, 2016.