Publication

Abstract

Recent advances in 3D human shape estimation build upon parametric representations that model the shape of the naked body very well, but are not well suited to representing clothing geometry. In this paper, we present an approach to model dressed humans and predict their geometry from single images. We contribute to three fundamental aspects of the problem: a new dataset, a novel shape parameterization algorithm, and an end-to-end deep generative network for predicting shape. First, we present 3DPeople, a large-scale synthetic dataset with 2.5 million photo-realistic images of 80 subjects performing 70 activities and wearing diverse outfits. Besides providing textured 3D meshes for clothes and body, we annotate the dataset with segmentation masks, skeletons, depth maps, normal maps and optical flow. Together, these annotations make 3DPeople suitable for a wide range of tasks.
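
To make the per-frame annotations concrete, the sketch below shows how one might assemble a single multi-modal sample (RGB image, segmentation mask, skeleton, depth, normals, optical flow) from such a dataset. The directory layout, file names, and array formats are illustrative assumptions, not the actual 3DPeople release structure:

```python
from dataclasses import dataclass
from pathlib import Path

import numpy as np
from PIL import Image


@dataclass
class Sample:
    """One annotated frame with the modalities listed in the abstract."""
    rgb: np.ndarray           # photo-realistic rendering, H x W x 3
    segmentation: np.ndarray  # per-pixel body/clothing segmentation mask
    depth: np.ndarray         # per-pixel depth map, H x W
    normals: np.ndarray       # per-pixel surface normals, H x W x 3
    flow: np.ndarray          # optical flow to the next frame, H x W x 2
    skeleton: np.ndarray      # 3D joint positions, J x 3


def load_sample(root: Path, subject: str, action: str, frame: int) -> Sample:
    """Load one frame from a hypothetical on-disk layout.

    All sub-directory and file names below are assumptions made for
    illustration; the real dataset defines its own structure.
    """
    base = root / subject / action
    name = f"{frame:04d}"
    return Sample(
        rgb=np.asarray(Image.open(base / "rgb" / f"{name}.png")),
        segmentation=np.asarray(Image.open(base / "segmentation" / f"{name}.png")),
        depth=np.load(base / "depth" / f"{name}.npy"),
        normals=np.load(base / "normals" / f"{name}.npy"),
        flow=np.load(base / "flow" / f"{name}.npy"),
        skeleton=np.load(base / "skeleton" / f"{name}.npy"),
    )
```

Grouping all modalities of a frame into one record like this is a common convention for multi-task training loaders, since each task (segmentation, depth estimation, pose estimation, flow) can pick the fields it needs from the same sample.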

Categories

Computer vision.

Author keywords

3D reconstruction, 3DPeople dataset

Scientific reference

A. Pumarola, J. Sanchez, G. P. T. Choi, A. Sanfeliu and F. Moreno-Noguer. 3DPeople: Modeling the geometry of dressed humans. In International Conference on Computer Vision (ICCV), Seoul, 2019, pp. 2242–2251.