Improving Worst Case Visual Localization Coverage via Place-Specific Sub-Selection in Multi-Camera Systems

Date

2022

Authors

Hausler, S.
Xu, M.
Garg, S.
Chakravarty, P.
Shrivastava, S.
Vora, A.
Milford, M.

Journal Title

IEEE Robotics and Automation Letters

Type

Journal article

Citation

IEEE Robotics and Automation Letters, 2022; 7(4):10112-10119

Statement of Responsibility

Stephen Hausler, Ming Xu, Sourav Garg, Punarjay Chakravarty, Shubham Shrivastava, Ankit Vora, and Michael Milford

Abstract

6-DoF visual localization systems utilize principled approaches rooted in 3D geometry to perform accurate camera pose estimation of images to a map. Current techniques use hierarchical pipelines and learned 2D feature extractors to improve scalability and increase performance. However, despite gains in typical recall@0.25m-type metrics, these systems still have limited utility for real-world applications like autonomous vehicles because of their worst areas of performance: the locations where they provide insufficient recall at a certain required error tolerance. Here we investigate the utility of using place-specific configurations, where a map is segmented into a number of places, each with its own configuration for modulating the pose estimation step, in this case selecting a camera within a multi-camera system. On the Ford AV benchmark dataset, we demonstrate substantially improved worst-case localization performance compared to using off-the-shelf pipelines, minimizing the percentage of the dataset that has low recall at a certain error tolerance, as well as improved overall localization performance. Our proposed approach is particularly applicable to the crowdsharing model of autonomous vehicle deployment, where a fleet of AVs regularly traverses a known route.

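The core mechanism described in the abstract, segmenting the mapped route into places and keying a per-place camera choice off that segmentation, can be illustrated with a minimal sketch. The fixed-length place segmentation, the PLACE_TO_CAMERA lookup table, and the localize_fn callback below are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of place-specific camera sub-selection for a
# multi-camera 6-DoF localization pipeline. Names and the segmentation
# scheme are illustrative assumptions, not the paper's actual code.

from typing import Callable, Dict, Optional

import numpy as np

# Per-place configuration: for each place segment of the mapped route,
# store the camera index that previously gave the best recall at the
# required error tolerance (e.g. learned from earlier fleet traversals).
PLACE_TO_CAMERA: Dict[int, int] = {
    0: 2,  # e.g. place 0 -> right-facing camera
    1: 0,  # e.g. place 1 -> front-facing camera
}


def coarse_place_id(distance_along_route_m: float,
                    place_length_m: float = 10.0) -> int:
    """Assign a place ID by chopping the route into fixed-length segments."""
    return int(distance_along_route_m // place_length_m)


def localize_with_subselection(
    images: Dict[int, np.ndarray],
    distance_along_route_m: float,
    localize_fn: Callable[[np.ndarray, int], Optional[np.ndarray]],
    default_camera: int = 0,
) -> Optional[np.ndarray]:
    """Run pose estimation using only the camera selected for the current place.

    `localize_fn(image, camera_idx)` stands in for a hierarchical pipeline
    step (retrieval, 2D-3D matching, PnP + RANSAC) returning a 6-DoF pose
    or None on failure.
    """
    place = coarse_place_id(distance_along_route_m)
    camera = PLACE_TO_CAMERA.get(place, default_camera)
    image = images.get(camera)
    if image is None:
        return None
    return localize_fn(image, camera)
```

In this sketch the per-place camera table would be built offline from prior traversals of the same route, which is why such a scheme fits the deployment model mentioned above, where a fleet repeatedly covers a known route.
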
Rights

© 2022 IEEE.
