3D-DAT: 3D-Dataset Annotation Toolkit for Robotic Vision
- Publisher: Institute of Electrical and Electronics Engineers (IEEE)
- Publication Type: Conference Proceeding
- Citation: Proceedings - IEEE International Conference on Robotics and Automation, May 2023, pp. 9162-9168
- Issue Date: 2023-01-01
Closed Access
Filename | Description | Size
---|---|---
3D-DAT_3D-Dataset_Annotation_Toolkit_for_Robotic_Vision.pdf | Published version | 5.41 MB
This item is closed access and not available.
Robots operating in the real world are expected to detect, classify, segment, and estimate the pose of objects to accomplish their task. Modern approaches using deep learning not only require large volumes of data but also pixel-accurate annotations in order to evaluate the performance, and therefore the safety, of these algorithms. At present, publicly available tools for annotating data are scarce, and those that are available rely on depth sensors, which precludes their use for transparent, metallic, and other non-Lambertian objects. To address this issue, we present a novel method for creating valuable datasets that can be used in these more difficult cases. Our key contribution is a purely RGB-based scene-level annotation approach that uses a neural radiance field-based method to automatically align objects. A set of user studies demonstrates the accuracy and speed of our approach over a purely manual or depth-sensor-assisted pipeline. We provide an open-source implementation of each component and a ROS-based recorder for capturing data with an eye-in-hand robot system. Code will be made available at https://github.com/markus-suchi/3D-DAT.
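The abstract mentions a ROS-based recorder for capturing data with an eye-in-hand robot. The following is a minimal sketch of what such a recorder might look like, assuming a ROS 1 setup; the topic name, TF frame names, output layout, and the `EyeInHandRecorder` class are illustrative assumptions, not the toolkit's actual code (see the linked repository for the real implementation).

```python
#!/usr/bin/env python
# Illustrative sketch only: a minimal ROS 1 recorder in the spirit of the one
# described in the abstract. Topic names, frame names, and the output layout
# are assumptions, not the 3D-DAT implementation.
import json
import os

import cv2
import rospy
import tf2_ros
from cv_bridge import CvBridge
from sensor_msgs.msg import Image


class EyeInHandRecorder(object):
    def __init__(self, out_dir, image_topic, base_frame, camera_frame):
        self.out_dir = out_dir
        os.makedirs(out_dir, exist_ok=True)
        self.bridge = CvBridge()
        self.tf_buffer = tf2_ros.Buffer()
        self.tf_listener = tf2_ros.TransformListener(self.tf_buffer)
        self.base_frame = base_frame
        self.camera_frame = camera_frame
        self.poses = []   # one camera pose (from TF) per saved frame
        self.count = 0
        rospy.Subscriber(image_topic, Image, self.on_image, queue_size=1)

    def on_image(self, msg):
        # Look up the camera pose at the image timestamp; skip frames where
        # the transform is not yet available.
        try:
            tf = self.tf_buffer.lookup_transform(
                self.base_frame, self.camera_frame, msg.header.stamp,
                rospy.Duration(0.1))
        except (tf2_ros.LookupException, tf2_ros.ExtrapolationException,
                tf2_ros.ConnectivityException):
            return

        # Save the RGB frame to disk.
        rgb = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        cv2.imwrite(os.path.join(self.out_dir, "%06d.png" % self.count), rgb)

        # Record translation and quaternion alongside the frame index.
        t, q = tf.transform.translation, tf.transform.rotation
        self.poses.append({
            "frame": self.count,
            "translation": [t.x, t.y, t.z],
            "rotation_xyzw": [q.x, q.y, q.z, q.w],
        })
        self.count += 1

    def save(self):
        with open(os.path.join(self.out_dir, "poses.json"), "w") as f:
            json.dump(self.poses, f, indent=2)


if __name__ == "__main__":
    rospy.init_node("eye_in_hand_recorder")
    recorder = EyeInHandRecorder(
        out_dir="scene_001",
        image_topic="/camera/color/image_raw",        # assumed topic name
        base_frame="base_link",                       # assumed robot base frame
        camera_frame="camera_color_optical_frame")    # assumed camera frame
    rospy.on_shutdown(recorder.save)
    rospy.spin()
```

Looking up the transform at each image's timestamp keeps frames and camera poses synchronized, which is what an RGB-only, NeRF-style reconstruction pipeline needs as input in place of depth data.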