Jie Yang*, Kaichun Mo*, Yu-Kun Lai, Leonidas J. Guibas, and Lin Gao, DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation, ACM Transactions on Graphics (TOG) 2022 (presented at SIGGRAPH 2022)

Abstract:

3D shape generation is a fundamental operation in computer graphics. While significant progress has been made, especially with recent deep generative models, it remains a challenge to synthesize high-quality shapes with rich geometric details and complex structure in a controllable manner. To tackle this, we introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes, where the two key aspects of a shape, geometry and structure, are encoded synergistically to ensure the plausibility of the generated shapes while remaining disentangled as much as possible. This supports a range of novel shape generation applications with disentangled control, such as interpolating structure while keeping geometry unchanged, and vice versa. To achieve this, we learn structure and geometry simultaneously, each through a hierarchical variational autoencoder (VAE), with bijective mappings between the two hierarchies at each level. In this manner, we effectively encode geometry and structure in separate latent spaces while ensuring their compatibility: the structure guides the geometry, and vice versa. At the leaf level, part geometry is represented by a conditional part VAE that encodes high-quality geometric details, guided by the structure context as its condition. Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods.
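To make the two-latent-space design concrete, here is a minimal PyTorch sketch of the core idea: two VAE branches with separate latent codes, where the decoded structure feature conditions the geometry decoder. All class names, feature dimensions, and loss weights below are illustrative assumptions, not the paper's implementation, which operates hierarchically over part trees rather than on flat feature vectors.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchVAE(nn.Module):
    """One VAE branch (structure or geometry); optionally conditioned at decoding."""
    def __init__(self, in_dim, z_dim, cond_dim=0):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * z_dim))          # -> (mu, logvar)
        self.dec = nn.Sequential(nn.Linear(z_dim + cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu, logvar

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def decode(self, z, cond=None):
        if cond is not None:                  # concatenate the conditioning context
            z = torch.cat([z, cond], dim=-1)
        return self.dec(z)

class DisentangledShapeVAE(nn.Module):
    """Separate latent spaces for structure and geometry; the decoded structure
    feature conditions the geometry decoder, echoing the conditional part VAE."""
    def __init__(self, struct_dim=128, geo_dim=512, z_dim=64):  # dims are assumptions
        super().__init__()
        self.struct_vae = BranchVAE(struct_dim, z_dim)
        self.geo_vae = BranchVAE(geo_dim, z_dim, cond_dim=struct_dim)

    def forward(self, struct_feat, geo_feat):
        mu_s, lv_s = self.struct_vae.encode(struct_feat)
        z_s = self.struct_vae.reparameterize(mu_s, lv_s)
        struct_rec = self.struct_vae.decode(z_s)

        mu_g, lv_g = self.geo_vae.encode(geo_feat)
        z_g = self.geo_vae.reparameterize(mu_g, lv_g)
        geo_rec = self.geo_vae.decode(z_g, cond=struct_rec)  # structure guides geometry
        return struct_rec, geo_rec, (mu_s, lv_s), (mu_g, lv_g)

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)), averaged over the batch."""
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

# Toy training step on random features (placeholders for real part encodings).
model = DisentangledShapeVAE()
struct_feat, geo_feat = torch.randn(4, 128), torch.randn(4, 512)
struct_rec, geo_rec, (mu_s, lv_s), (mu_g, lv_g) = model(struct_feat, geo_feat)
loss = (F.mse_loss(struct_rec, struct_feat) + F.mse_loss(geo_rec, geo_feat)
        + 1e-3 * (kl_divergence(mu_s, lv_s) + kl_divergence(mu_g, lv_g)))
loss.backward()

With two shapes encoded this way, swapping their structure codes z_s while holding the geometry codes z_g fixed changes structure with geometry held constant (and vice versa), which is the kind of disentangled control the abstract describes.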

Bibtex:

@article{yang2022dsg,
  title={DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation},
  author={Yang, Jie and Mo, Kaichun and Lai, Yu-Kun and Guibas, Leonidas J. and Gao, Lin},
  journal={ACM Transactions on Graphics (TOG)},
  year={2022}
}