Object Learning for 6D Pose Estimation and Grasping from RGB-D Videos of In-hand Manipulation
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- IEEE International Conference on Intelligent Robots and Systems, 2021, 00, pp. 4831-4838
- Issue Date:
- 2021-01-01
Closed Access
Filename | Description | Size
---|---|---
Object_Learning_for_6D_Pose_Estimation_and_Grasping_from_RGB-D_Videos_of_In-hand_Manipulation.pdf | Published version | 5.06 MB
This item is closed access and not available.
Object models are highly useful for robots as they enable tasks such as detection, pose estimation and manipulation. However, models are not always easily available, especially in real-world domains of operation such as people's homes. This work presents a pipeline that generates high-quality object reconstructions from human in-hand manipulation, removing the need for specialised or expensive hardware. Missing data, due to occlusion or unseen sides, is explicitly handled by incorporating shape completion. We demonstrate the usability of the reconstructions by applying both a model-based and a CNN-based object pose estimator, the latter trained on synthetic images generated with state-of-the-art texture synthesis. Using our pipeline to cheaply generate object models and synthetic RGB training images, we achieve competitive performance compared to baselines that require an elaborate set-up to construct models or large amounts of annotated data. Object grasping is also enabled by learning with the reconstructions in simulation and then executing with a real robot. These evaluations show that our reconstructions are comparable to those made under near-perfect conditions and enable 6D object pose estimation as well as real-world grasping.