Towards Robust Perception for Assistive Robotics: An RGB-Event-LiDAR Dataset and Multi-Modal Detection Pipeline

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
2024 IEEE 20th International Conference on Automation Science and Engineering (CASE), 2024, 00, pp. 920-925
Issue Date:
2024-10-23
Abstract:
The increasing adoption of human-robot interaction presents opportunities for technology to positively impact lives, particularly for people with visual impairments, through applications such as guide-dog-like assistive robotics. We present a pipeline exploring the perception and intelligent disobedience required by such a system. A dataset of two people moving in and out of view has been prepared to compare RGB-based and event-based multi-modal dynamic object detection, using LiDAR data for 3D position localisation. Our analysis highlights challenges in accurate 3D localisation using 2D image-LiDAR fusion, indicating the need for further refinement. Compared to the performance of the frame-based detection algorithm utilised (YOLOv4), current cutting-edge event-based detection models appear limited to contextual scenarios such as automotive platforms. This is highlighted by weak precision and recall over varying confidence and Intersection over Union (IoU) thresholds when frame-based detections are used as ground truth. We have therefore publicly released this dataset, containing RGB, event, point cloud, and Inertial Measurement Unit (IMU) data, along with ground-truth poses for the two people in the scene, to fill a gap in the current landscape of publicly available datasets and to assist the development of safer and more robust algorithms in the future: https://uts-ri.github.io/revel
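The 2D image-LiDAR fusion described in the abstract can be illustrated with a minimal sketch: project LiDAR points into the image using the camera-LiDAR extrinsics and camera intrinsics, then take a robust statistic over the points falling inside a 2D detection box to estimate the object's 3D position. The function and parameter names (localise_detection, T_cam_lidar, K, box_xyxy) and the median-based aggregation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def localise_detection(points_lidar, box_xyxy, T_cam_lidar, K):
    """Estimate the 3D position of a 2D detection by projecting LiDAR
    points into the image and taking the median of those that fall
    inside the bounding box (a sketch, not the paper's pipeline)."""
    # Transform points from the LiDAR to the camera frame (homogeneous coords).
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Project into pixel coordinates with the pinhole intrinsics K.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Select points whose projection lies inside the detection box.
    x1, y1, x2, y2 = box_xyxy
    mask = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
           (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    if not mask.any():
        return None

    # The median is more robust than the mean to background points
    # caught inside the box, one source of the localisation error the
    # abstract mentions.
    return np.median(pts_cam[mask], axis=0)

# Example usage with synthetic data (identity extrinsics, simple intrinsics).
points = np.random.uniform([-2.0, -2.0, 1.0], [2.0, 2.0, 8.0], size=(5000, 3))
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
print(localise_detection(points, (250, 180, 400, 300), T, K))
```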