Learning Generalizable Final-State Dynamics of 3D Rigid Objects

CVPR 2019 Workshop on 3D Scene Understanding for Vision, Graphics, and Robotics

Download Video: HD (MP4, 1080p, 146 MB)

Abstract

Humans have a remarkable ability to predict the effect of physical interactions on the dynamics of objects. Endowing machines with this ability would enable important applications in areas like robotics and autonomous vehicles. In this work, we focus on predicting the dynamics of 3D rigid objects, in particular an object's final resting position and total rotation when subjected to an impulsive force. Unlike previous work, our approach is capable of generalizing to unseen object shapes, an important requirement for real-world applications. To achieve this, we represent object shape as a 3D point cloud that is used as input to a neural network, making our approach agnostic to appearance variation. The design of our network is informed by an understanding of physical laws. We train our model with data from a physics engine that simulates the dynamics of a large number of shapes. Experiments show that we can accurately predict the resting position and total rotation for unseen object geometries.
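To make the pipeline concrete, the following is a minimal, purely illustrative sketch of a point-cloud model of the kind described above: a shared per-point MLP followed by a symmetric max pool (so the prediction is invariant to point ordering), with the applied impulse concatenated to the global feature before two output heads for resting position and total rotation. This is not the authors' released model; the layer sizes, the impulse encoding, and the single-angle rotation output are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(hidden=32, feat=16):
    """Randomly initialized weights; a real model would be trained on
    simulated trajectories from a physics engine."""
    return {
        "W1": rng.standard_normal((3, hidden)) * 0.1,        # shared per-point MLP
        "W2": rng.standard_normal((hidden + 3, feat)) * 0.1, # global feature + impulse
        "W_pos": rng.standard_normal((feat, 3)) * 0.1,       # head: resting position (x, y, z)
        "W_rot": rng.standard_normal(feat) * 0.1,            # head: total rotation (scalar)
    }

def predict(params, points, impulse):
    """points: (N, 3) object point cloud; impulse: (3,) applied impulse vector.
    Returns (predicted resting position, predicted total rotation)."""
    h = np.maximum(points @ params["W1"], 0.0)  # per-point features (ReLU)
    g = h.max(axis=0)                           # max pool -> order-invariant global feature
    z = np.maximum(np.concatenate([g, impulse]) @ params["W2"], 0.0)
    return z @ params["W_pos"], z @ params["W_rot"]

params = init_params()
cloud = rng.standard_normal((128, 3))           # stand-in for an object shape
pos, rot = predict(params, cloud, np.array([1.0, 0.0, 0.0]))
```

Because the only aggregation across points is a max pool, shuffling the input cloud leaves the prediction unchanged, which is the property that lets a single network consume arbitrary shapes.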

Downloads


Citation

BibTeX, 1 KB

@inproceedings{RempeDynamics2019,
  author={Rempe, Davis and Sridhar, Srinath and Wang, He and Guibas, Leonidas J.},
  title={Learning Generalizable Final-State Dynamics of 3D Rigid Objects},
  booktitle={CVPR Workshop on 3D Scene Understanding for Vision, Graphics, and Robotics},
  year={2019}
}

Acknowledgments

This work was supported by a grant from the Toyota-Stanford Center for AI Research, NSF grant IIS-1763268, and a Vannevar Bush Faculty Fellowship. We would also like to thank Amazon for kindly donating AWS credits for this project.

Contact

For questions and clarifications, please get in touch with:
Davis Rempe
drempe@stanford.edu
