Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Cross-Modal Perceptionist: Can Face Geometry be Gleaned from Voices?

Cho-Ying Wu, Chin-Cheng Hsu, Ulrich Neumann

2022-03-18 · CVPR 2022 · Tasks: 3D Face Modelling, Image Generation, 3D Face Reconstruction
Paper · PDF · Code

Abstract

This work digs into a root question in human perception: can face geometry be gleaned from one's voice? Previous works that study this question only adopt developments in image synthesis, converting voices into face images to show correlations; but working in the image domain unavoidably involves predicting attributes that voices cannot hint at, such as facial textures, hairstyles, and backgrounds. We instead investigate the ability to reconstruct 3D faces, concentrating only on geometry, which is much more physiologically grounded. We propose an analysis framework, Cross-Modal Perceptionist, under both supervised and unsupervised learning. First, we construct a dataset, Voxceleb-3D, which extends Voxceleb with paired voices and face meshes, making supervised learning possible. Second, we use a knowledge distillation mechanism to study whether face geometry can still be gleaned from voices without paired voice and 3D face data, under limited availability of 3D face scans. We break the core question down into four parts and perform visual and numerical analyses in response. Our findings echo those in physiology and neuroscience about the correlation between voices and facial structures. The work provides explainable foundations for future human-centric cross-modal learning. See our project page: https://choyingw.github.io/works/Voice2Mesh/index.html
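The unsupervised setting described above rests on knowledge distillation: a teacher that already maps images to meshes supervises a voice-to-mesh student, so no paired voice/3D-scan data is needed. The following is a minimal sketch of that idea only; the network shapes, feature dimensions, and linear "networks" are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- this page does not state the real ones.
N_VERTS = 100          # toy mesh with 100 vertices (real face meshes are larger)
VOICE_DIM, IMG_DIM = 64, 128

def teacher_mesh_from_image(img_feat, W_t):
    """Frozen, pretrained image-to-mesh teacher (a stand-in linear map)."""
    return (W_t @ img_feat).reshape(N_VERTS, 3)

def student_mesh_from_voice(voice_feat, W_s):
    """Voice-to-mesh student being trained."""
    return (W_s @ voice_feat).reshape(N_VERTS, 3)

def distillation_loss(student_mesh, teacher_mesh):
    """MSE between student and teacher vertex positions: the student
    learns geometry from the teacher's predictions instead of from
    paired voice / 3D-scan ground truth."""
    return float(np.mean((student_mesh - teacher_mesh) ** 2))

# Train the student on one voice/image pair with plain gradient descent.
W_t = rng.normal(size=(N_VERTS * 3, IMG_DIM)) * 0.01   # frozen teacher weights
W_s = rng.normal(size=(N_VERTS * 3, VOICE_DIM)) * 0.01  # student weights
img_feat = rng.normal(size=IMG_DIM)
voice_feat = rng.normal(size=VOICE_DIM)

target = teacher_mesh_from_image(img_feat, W_t)  # teacher output = soft target
for _ in range(200):
    pred = student_mesh_from_voice(voice_feat, W_s)
    grad = 2.0 / pred.size * np.outer((pred - target).ravel(), voice_feat)
    W_s -= 0.5 * grad

print(distillation_loss(student_mesh_from_voice(voice_feat, W_s), target))
```

The loss drives the student's mesh toward the teacher's, which is the mechanism that lets the geometry signal transfer across modalities even when 3D scans are scarce.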

Results

Task | Dataset | Metric | Value | Model
3D Face Reconstruction | Voxceleb-3D | ARE-CR | 0.0457 | CMP (supervised)
3D Face Reconstruction | Voxceleb-3D | ARE-ER | 0.0152 | CMP (supervised)
3D Face Reconstruction | Voxceleb-3D | ARE-FR | 0.0186 | CMP (supervised)
3D Face Reconstruction | Voxceleb-3D | ARE-MR | 0.0169 | CMP (supervised)
3D Face Reconstruction | Voxceleb-3D | Mean ARE | 0.0241 | CMP (supervised)
3D Face Reconstruction | Voxceleb-3D | ARE-CR | 0.048 | CMP (unsupervised)
3D Face Reconstruction | Voxceleb-3D | ARE-ER | 0.0181 | CMP (unsupervised)
3D Face Reconstruction | Voxceleb-3D | ARE-FR | 0.0169 | CMP (unsupervised)
3D Face Reconstruction | Voxceleb-3D | ARE-MR | 0.0174 | CMP (unsupervised)
3D Face Reconstruction | Voxceleb-3D | Mean ARE | 0.0251 | CMP (unsupervised)
3D Face Modelling | Voxceleb-3D | ARE-CR | 0.0457 | CMP (supervised)
3D Face Modelling | Voxceleb-3D | ARE-ER | 0.0152 | CMP (supervised)
3D Face Modelling | Voxceleb-3D | ARE-FR | 0.0186 | CMP (supervised)
3D Face Modelling | Voxceleb-3D | ARE-MR | 0.0169 | CMP (supervised)
3D Face Modelling | Voxceleb-3D | Mean ARE | 0.0241 | CMP (supervised)
3D Face Modelling | Voxceleb-3D | ARE-CR | 0.048 | CMP (unsupervised)
3D Face Modelling | Voxceleb-3D | ARE-ER | 0.0181 | CMP (unsupervised)
3D Face Modelling | Voxceleb-3D | ARE-FR | 0.0169 | CMP (unsupervised)
3D Face Modelling | Voxceleb-3D | ARE-MR | 0.0174 | CMP (unsupervised)
3D Face Modelling | Voxceleb-3D | Mean ARE | 0.0251 | CMP (unsupervised)
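The reported Mean ARE is consistent with being the unweighted average of the four per-region ARE values (the page does not expand the -CR/-ER/-FR/-MR region abbreviations), which can be checked directly:

```python
# Per-region ARE values copied from the results table above.
supervised   = [0.0457, 0.0152, 0.0186, 0.0169]  # ARE-CR, ARE-ER, ARE-FR, ARE-MR
unsupervised = [0.048,  0.0181, 0.0169, 0.0174]

mean_sup = sum(supervised) / len(supervised)
mean_unsup = sum(unsupervised) / len(unsupervised)

print(round(mean_sup, 4))    # matches the reported Mean ARE of 0.0241
print(round(mean_unsup, 4))  # matches the reported Mean ARE of 0.0251
```

Note that the supervised model wins overall (0.0241 vs. 0.0251 Mean ARE), but the unsupervised model is slightly better on the FR region (0.0169 vs. 0.0186).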

Related Papers

fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
Synthesizing Reality: Leveraging the Generative AI-Powered Platform Midjourney for Construction Worker Detection (2025-07-17)
FashionPose: Text to Pose to Relight Image Generation for Personalized Fashion Visualization (2025-07-17)
A Distributed Generative AI Approach for Heterogeneous Multi-Domain Environments under Data Sharing Constraints (2025-07-17)
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
FADE: Adversarial Concept Erasure in Flow Models (2025-07-16)
CharaConsist: Fine-Grained Consistent Character Generation (2025-07-15)
CATVis: Context-Aware Thought Visualization (2025-07-15)