Leveraging Deep Learning Based Object Detection for Localising Autonomous Personal Mobility Devices in Sparse Maps
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- 2019 IEEE Intelligent Transportation Systems Conference, ITSC 2019, 2019, pp. 4081-4086
- Issue Date:
- 2019-10-01
This item is open access.
© 2019 IEEE. This paper presents a low-cost, resource-efficient localisation approach for autonomous driving in GPS-denied environments. One of the most challenging aspects of traditional landmark-based localisation in the context of autonomous driving is the need to detect landmarks accurately and frequently. We leverage the state-of-the-art deep learning framework YOLO (You Only Look Once) to carry out this important perceptual task using data obtained from monocular cameras. Bearing-only information extracted from the YOLO detections is fused with vehicle odometry using an Extended Kalman Filter (EKF) to generate an estimate of the location of the autonomous vehicle, together with its associated uncertainty. This approach achieves real-time sub-metre localisation accuracy using only a sparse map of an outdoor urban environment. The broader motivation of this research is to improve the safety and reliability of Personal Mobility Devices (PMDs) through autonomous technology. Thus, all the ideas presented here are demonstrated on an instrumented mobility scooter platform.
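As a rough illustration of the fusion step described in the abstract, the sketch below shows a bearing-only EKF over a planar vehicle pose, with bearings derived from the horizontal centre of a detection bounding box under a simple pinhole-camera assumption. The class and function names, the unicycle motion model, the camera calibration, and the noise parameters are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def bearing_from_detection(u_centre, image_width, horizontal_fov):
    """Convert the horizontal pixel centre of a detection bounding box into a
    bearing (radians) relative to the camera's optical axis, assuming an
    undistorted pinhole camera (hypothetical calibration)."""
    focal_px = (image_width / 2.0) / np.tan(horizontal_fov / 2.0)
    return np.arctan2(u_centre - image_width / 2.0, focal_px)

class BearingOnlyEKF:
    """Minimal EKF over the vehicle pose [x, y, theta]: predicted with
    odometry, corrected with bearing-only observations of landmarks whose
    positions are stored in a sparse map."""

    def __init__(self, x0, P0, Q, r_bearing, landmarks):
        self.x = np.asarray(x0, dtype=float)   # pose [x, y, theta]
        self.P = np.asarray(P0, dtype=float)   # 3x3 pose covariance
        self.Q = np.asarray(Q, dtype=float)    # odometry process noise
        self.R = np.array([[r_bearing]])       # bearing noise variance (rad^2)
        self.landmarks = landmarks             # {landmark_id: (lx, ly)}

    def predict(self, v, omega, dt):
        """Propagate the pose with a unicycle odometry model."""
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           wrap_angle(th + omega * dt)])
        # Jacobian of the motion model with respect to the state.
        F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                      [0.0, 1.0,  v * dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, landmark_id, measured_bearing):
        """Correct the pose with one bearing-only observation of a mapped
        landmark (bearing measured relative to the vehicle heading)."""
        lx, ly = self.landmarks[landmark_id]
        x, y, th = self.x
        dx, dy = lx - x, ly - y
        q = dx * dx + dy * dy
        predicted = wrap_angle(np.arctan2(dy, dx) - th)
        # Jacobian of the bearing measurement with respect to [x, y, theta].
        H = np.array([[dy / q, -dx / q, -1.0]])
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        innovation = wrap_angle(measured_bearing - predicted)
        self.x = self.x + (K @ np.array([innovation])).ravel()
        self.x[2] = wrap_angle(self.x[2])
        self.P = (np.eye(3) - K @ H) @ self.P
```

In this sketch, each YOLO detection of a mapped landmark contributes a single angular measurement, which is enough to correct the odometry-only pose estimate when landmarks are observed from sufficiently different viewpoints over time.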