Publication

Integrating human body mocaps into Blender using RGB images

Conference Article

Conference

International Conference on Advances in Computer-Human Interactions (ACHI)

Edition

13th

Pages

285-290

Doc link

https://www.thinkmind.org/index.php?view=article&articleid=achi_2020_5_280_20138

Abstract

Reducing the complexity and cost of Motion Capture (MoCap) systems has been of great interest in recent years. Unlike other systems that rely on depth-range cameras, we present an algorithm that works as a MoCap system with a single Red-Green-Blue (RGB) camera and is fully integrated into off-the-shelf rendering software. This makes our system easily deployable in outdoor and unconstrained scenarios. Our approach builds upon three main modules. The first estimates the 2D body pose from a single input RGB image; the second estimates the 3D human pose from the previously computed 2D coordinates; and the last computes the joint rotations needed to bring the 3D virtual human model to the target 3D point coordinates. We quantitatively evaluate the first two modules on synthetic images, and provide qualitative results of the overall system on real images recorded with a webcam.
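The three-module structure described in the abstract (RGB image → 2D pose → 3D pose → joint rotations) can be sketched as a simple pipeline. This is a minimal illustrative sketch, not the authors' implementation: every function name and the toy joint math below are hypothetical placeholders standing in for the paper's learned models and Blender integration.

```python
import math

def estimate_2d_pose(image):
    # Module 1 (placeholder): a real system would run a 2D pose
    # detector on the RGB image. Here we return fixed (x, y) pixel
    # coordinates for a few joints.
    return {"hip": (320, 400), "knee": (320, 480), "ankle": (320, 560)}

def lift_to_3d(pose_2d):
    # Module 2 (placeholder): lift 2D joints to 3D. A real system
    # would regress depth; this stub simply appends z = 0.0.
    return {j: (x, y, 0.0) for j, (x, y) in pose_2d.items()}

def compute_joint_rotations(pose_3d):
    # Module 3 (placeholder): solve for rotations that move each bone
    # of the virtual model toward its target 3D position. As a toy
    # stand-in, return each joint's in-plane angle relative to the hip.
    hx, hy, _ = pose_3d["hip"]
    return {j: math.atan2(y - hy, x - hx)
            for j, (x, y, _) in pose_3d.items() if j != "hip"}

def mocap_pipeline(image):
    # Chain the three modules: image -> 2D pose -> 3D pose -> rotations.
    pose_2d = estimate_2d_pose(image)
    pose_3d = lift_to_3d(pose_2d)
    return compute_joint_rotations(pose_3d)

rotations = mocap_pipeline(image=None)  # no real image in this sketch
print(sorted(rotations))  # -> ['ankle', 'knee']
```

In the actual system, the output of the third module would drive the bone rotations of a rigged human model inside Blender, frame by frame.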

Categories

Computer vision

Author keywords

MoCap; 2D, 3D human pose estimation; Synthetic human model; Action mimic.

Scientific reference

J. Sanchez and F. Moreno-Noguer. Integrating human body mocaps into Blender using RGB images, 13th International Conference on Advances in Computer-Human Interactions, 2020, Valencia, pp. 285-290.