Generative Human Geometry Distribution
KAUST
Pipeline

(a) Given a human pose, we randomly synthesize realistic geometry with natural clothing details. (b) Given a normal image, we synthesize high-quality geometry in novel poses. All visual results are rendered from point clouds.

Abstract

Realistic human geometry generation is an important yet challenging task, requiring both the preservation of fine clothing details and the accurate modeling of clothing-pose interactions. Geometry distributions, which model the geometry of a single human as a probability distribution, provide a promising representation for high-fidelity synthesis. However, applying geometry distributions to human generation requires learning a dataset-level distribution over numerous individual geometry distributions. To address the resulting challenges, we propose a novel 3D human generative framework that, for the first time, models the distribution of human geometry distributions. Our framework operates in two stages: first, generating a human geometry distribution, and second, synthesizing high-fidelity humans by sampling from this distribution. We validate our method on two tasks: pose-conditioned 3D human generation and single-view-based novel pose generation. Experimental results demonstrate that our approach achieves the best quantitative results in realism and geometric fidelity, outperforming state-of-the-art generative methods.
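To make the two-stage design concrete, below is a minimal sketch of how such a sampler could be organized, assuming a flow-style Euler denoiser for both stages. All names (DistCodeDenoiser, PointDenoiser, denoise_code, sample_points), the network shapes, and the 72-dimensional pose vector are illustrative assumptions, not the paper's released implementation:

```python
import torch

class DistCodeDenoiser(torch.nn.Module):
    """Stage 1: denoises a latent geometry-distribution code, conditioned on pose.
    (Hypothetical stand-in; the paper's actual network is not specified here.)"""
    def __init__(self, pose_dim=72, code_dim=512):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(code_dim + pose_dim + 1, 1024),
            torch.nn.SiLU(),
            torch.nn.Linear(1024, code_dim),
        )

    def forward(self, code, pose, t):
        t_col = t.expand(code.shape[0], 1)               # broadcast scalar time
        return self.net(torch.cat([code, pose, t_col], dim=-1))


class PointDenoiser(torch.nn.Module):
    """Stage 2: velocity field that transports Gaussian samples onto the
    clothed-human surface, conditioned on the generated distribution code."""
    def __init__(self, code_dim=512):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3 + code_dim + 1, 512),
            torch.nn.SiLU(),
            torch.nn.Linear(512, 3),
        )

    def forward(self, x, code, t):
        t_col = t.expand(x.shape[0], 1)
        code_rep = code.expand(x.shape[0], -1)           # one code, shared by all points
        return self.net(torch.cat([x, code_rep, t_col], dim=-1))


@torch.no_grad()
def denoise_code(pose, stage1, code_dim=512, steps=64):
    """Stage 1: Euler-integrate a distribution code out of Gaussian noise."""
    code = torch.randn(1, code_dim)
    for i in range(steps):
        t = torch.tensor([[1.0 - i / steps]])
        code = code + stage1(code, pose, t) / steps
    return code


@torch.no_grad()
def sample_points(code, stage2, num_points, steps=64):
    """Stage 2: draw an arbitrary number of surface points from one fixed code."""
    pts = torch.randn(num_points, 3)
    for i in range(steps):
        t = torch.tensor([[1.0 - i / steps]])
        pts = pts + stage2(pts, code, t) / steps
    return pts


pose = torch.randn(1, 72)                                # e.g. an SMPL pose vector
code = denoise_code(pose, DistCodeDenoiser())
points = sample_points(code, PointDenoiser(), num_points=100_000)
print(points.shape)                                      # torch.Size([100000, 3])
```

The key design point this sketch tries to capture is that stage 2 conditions every point on one shared code, so nothing in the architecture fixes the number of points that can be sampled.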

(a) Our method enables infinite sampling to synthesize high-fidelity geometry.
(b) We obtain a realistic human shape through a denoising process.
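Reading (a) and (b) against the hypothetical helpers sketched above: one fixed distribution code can be sampled at any point budget, and the shape emerges progressively through the Euler denoising loop.

```python
# Reuses the hypothetical DistCodeDenoiser / PointDenoiser helpers from above.
stage1, stage2 = DistCodeDenoiser(), PointDenoiser()
pose = torch.randn(1, 72)
code = denoise_code(pose, stage1)

# (a) The distribution has no fixed resolution: the same code can be sampled
# at any density, from a quick preview to a dense point cloud for rendering.
preview = sample_points(code, stage2, num_points=10_000)
dense = sample_points(code, stage2, num_points=1_000_000)

# (b) The realistic shape is reached gradually: each Euler step inside
# sample_points() denoises the Gaussian point set a little further.
```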

Animation

Pose-conditioned Random Generation

Single-view-based Novel Pose Generation
