Towards Realistic Generative 3D Face Models

Aashish Rai1         Hiresh Gupta*1         Ayush Pandey*1         Francisco Vicente Carrasco1        
Shingo Jason Takagi2         Amaury Aubel2         Daeil Kim2         Aayush Prakash2         Fernando de la Torre1        
1Carnegie Mellon University          2Facebook/Meta



In recent years, there has been significant progress in 2D generative face models, fueled by applications such as animation, synthetic data generation, and digital avatars. However, because they lack 3D information, these 2D models often struggle to disentangle facial attributes such as pose, expression, and illumination, which limits their editing capabilities. To address this limitation, this paper proposes a 3D controllable generative face model that produces high-quality albedo and precise 3D shape by leveraging existing 2D generative models. By combining 2D face generative models with semantic face manipulation, the method enables detailed editing of 3D rendered faces. The proposed framework uses an alternating descent optimization over shape and albedo, with differentiable rendering used to learn high-quality shape and albedo without 3D supervision.
Moreover, this approach outperforms state-of-the-art (SOTA) methods on the well-known NoW benchmark for shape reconstruction. It also surpasses SOTA reconstruction models in recovering the identities of rendered faces across novel poses by an average of 10%. Additionally, the paper demonstrates direct control of expressions in 3D faces by exploiting the latent space, leading to text-based editing of 3D faces.
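The alternating-descent idea above can be illustrated with a minimal toy sketch: freeze albedo and take a gradient step on shape, then freeze shape and step on albedo, repeating until the rendered output matches a target. Everything here is a stand-in, not the paper's pipeline: `render()` is a placeholder elementwise product rather than a real differentiable renderer, and the variable names are illustrative only.

```python
import numpy as np

def render(shape, albedo):
    # Placeholder for a differentiable renderer; here just an
    # elementwise product so the example stays self-contained.
    return shape * albedo

def loss(shape, albedo, target):
    return float(np.mean((render(shape, albedo) - target) ** 2))

rng = np.random.default_rng(0)
target = rng.uniform(0.2, 1.0, size=(8, 8))   # toy "observed" image
shape = np.full((8, 8), 0.5)
albedo = np.full((8, 8), 0.5)

lr = 0.1
history = [loss(shape, albedo, target)]
for _ in range(50):
    # Shape step (albedo frozen): per-pixel gradient of the squared
    # error; pixels decouple under this toy renderer.
    resid = render(shape, albedo) - target
    shape = shape - lr * 2.0 * resid * albedo
    # Albedo step (shape frozen).
    resid = render(shape, albedo) - target
    albedo = albedo - lr * 2.0 * resid * shape
    history.append(loss(shape, albedo, target))
```

With both factors updated in alternation, the reconstruction error drops steadily even though shape and albedo are only ever optimized one at a time, which is the property the real framework relies on.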

3D generative face model, novel views, and expression synthesis. a) High-resolution 3D shape and albedo recovered from a StyleGAN2-generated image; novel views can be rendered using the estimated face model. b) Editing of 3D faces with text: the method allows expression manipulation by modifying the latent code with the CLIP model.
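The text-based editing in b) amounts to nudging a latent code so that the embedding of the rendered face moves toward a target text embedding. The toy sketch below captures only that optimization pattern: `embed()` here is a fixed linear map standing in for CLIP's image encoder applied to a rendered face, and `text_emb` stands in for CLIP's text encoder output; neither is the actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
d_latent, d_embed = 16, 8
W = rng.standard_normal((d_embed, d_latent))  # toy "image encoder"

def embed(w):
    return W @ w

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

text_emb = rng.standard_normal(d_embed)  # toy "text encoder" output
w = rng.standard_normal(d_latent)        # initial latent code

lr = 0.05
start = cosine(embed(w), text_emb)
for _ in range(200):
    # Gradient ascent on cosine similarity; closed-form gradient
    # w.r.t. the image embedding, pulled back through the linear map.
    img = embed(w)
    ni, nt = np.linalg.norm(img), np.linalg.norm(text_emb)
    grad_img = text_emb / (ni * nt) - (img @ text_emb) * img / (ni**3 * nt)
    w = w + lr * (W.T @ grad_img)
end = cosine(embed(w), text_emb)
```

In the actual method, the same ascent direction would flow through CLIP and the renderer, so the latent edit changes the face's expression rather than an abstract vector.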

3D Face Reconstruction

Randomly generated coarse mesh, detailed mesh, and rendered faces from our model, corresponding to the input 2D face.

Illumination Control

We can control the direction of the light using spherical harmonics (SH) parameters. The video shows illumination control from random directions.
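Concretely, SH lighting up to band 2 uses 9 coefficients per color channel, and the shading at a surface normal is a dot product between the SH basis evaluated at that normal and the lighting coefficients. A minimal sketch (the specific light values below are illustrative, not the paper's parameters):

```python
import numpy as np

def sh_basis(n):
    """First 9 real SH basis functions evaluated at unit normal n=(x,y,z)."""
    x, y, z = n
    return np.array([
        0.282095,                        # l=0, constant band
        0.488603 * y,                    # l=1, linear bands
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,                # l=2, quadratic bands
        1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])

def shade(normal, sh_coeffs):
    """Irradiance at `normal` under a light given by 9 SH coefficients."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return float(sh_basis(n) @ sh_coeffs)

# A light "from above": energy in the constant and +z linear bands.
light = np.zeros(9)
light[0] = 1.0   # ambient term
light[2] = 0.8   # z-aligned band -> brighter for upward-facing normals

up = shade([0.0, 0.0, 1.0], light)     # normal facing the light
down = shade([0.0, 0.0, -1.0], light)  # normal facing away
```

Rotating the light direction simply redistributes weight among the linear (and quadratic) coefficients, which is what the randomized directions in the video correspond to.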


    License information coming soon...


  	@article{rai2023towards,
  		title={Towards Realistic Generative 3D Face Models},
  		author={Rai, Aashish and Gupta, Hiresh and Pandey, Ayush and Carrasco, Francisco Vicente and Takagi, Shingo Jason and Aubel, Amaury and Kim, Daeil and Prakash, Aayush and De la Torre, Fernando},
  		journal={arXiv preprint arXiv:2304.12483},
  		year={2023}
  	}