L. Yi, H. Huang, D. Liu, E. Kalogerakis, H. Su, and L. Guibas, "Deep part induction from articulated object pairs," SIGGRAPH Asia 2018 Technical Papers. ACM, 2018: 209.


Object functionality is often expressed through part articulation, as when the two rigid parts of a pair of scissors pivot against each other to perform the cutting function. Such articulations are often similar across objects within the same functional category. In this paper we explore how the observation of different articulation states provides evidence for the part structure and motion of 3D objects. Our method takes as input a pair of unsegmented shapes representing two different articulation states of two functionally related objects, and induces their common parts along with their underlying rigid motion. This is a challenging setting: we assume no prior shape structure, no prior shape category information, and no consistent shape orientation; the articulation states may belong to objects of different geometry; and we allow the inputs to be noisy and partial scans, or point clouds lifted from RGB images. Our method learns a neural network architecture with three modules that respectively propose correspondences, estimate 3D deformation flows, and perform segmentation. To achieve optimal performance, our architecture alternates between correspondence, deformation flow, and segmentation prediction iteratively in an ICP-like fashion. Our results demonstrate that our method significantly outperforms state-of-the-art techniques in the task of discovering articulated parts of objects. In addition, our part induction is object-class agnostic and successfully generalizes to new and unseen objects.
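The abstract describes alternating between correspondence proposal, deformation-flow estimation, and segmentation in an ICP-like loop. The paper implements each step with a learned neural module; purely as an illustration of the alternation pattern, the sketch below replaces them with hypothetical classical stand-ins (nearest-neighbour matching, per-point flow, and flow-consensus labelling). All function names here are assumptions for the sketch, not the authors' API.

```python
import numpy as np

def propose_correspondences(src, dst):
    # Hypothetical stand-in for the learned correspondence module:
    # match each source point to its nearest destination point.
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return d.argmin(axis=1)

def estimate_flow(src, dst, corr):
    # Stand-in for the flow module: displacement carrying each
    # source point onto its matched target.
    return dst[corr] - src

def segment_by_flow(flow, thresh=1e-3):
    # Stand-in for the segmentation module: points whose flow agrees
    # with the mean flow are grouped as one rigid part (label 1).
    mean = flow.mean(axis=0)
    return (np.linalg.norm(flow - mean, axis=1) < thresh).astype(int)

def icp_like_alternation(src, dst, iters=5):
    # Alternate correspondence -> flow -> segmentation, applying the
    # estimated deformation each round, as in the abstract's loop.
    cur = src.copy()
    labels = np.zeros(len(src), dtype=int)
    for _ in range(iters):
        corr = propose_correspondences(cur, dst)
        flow = estimate_flow(cur, dst, corr)
        labels = segment_by_flow(flow)
        cur = cur + flow
    return cur, labels

# Toy usage: align a point cloud with a rigidly translated copy.
np.random.seed(0)
pts = np.random.rand(50, 3)
moved = pts + np.array([0.5, 0.0, 0.0])
aligned, labels = icp_like_alternation(pts, moved)
```

In this toy setting the whole cloud moves as a single rigid part, so the segmentation collapses to one label; the paper's contribution is handling multiple rigid parts, noise, and partial scans with learned modules in place of these heuristics.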


@inproceedings{yi2018deep,
  title={Deep part induction from articulated object pairs},
  author={Yi, Li and Huang, Haibin and Liu, Difan and Kalogerakis, Evangelos and Su, Hao and Guibas, Leonidas},
  booktitle={SIGGRAPH Asia 2018 Technical Papers},
  year={2018},
  organization={ACM}
}