Authors: Hausler, S.; Xu, M.; Garg, S.; Chakravarty, P.; Shrivastava, S.; Vora, A.; Milford, M.
Title: Improving Worst Case Visual Localization Coverage via Place-Specific Sub-Selection in Multi-Camera Systems
Type: Journal article
Citation: IEEE Robotics and Automation Letters, 2022; 7(4):10112-10119
ISSN: 2377-3766
DOI: 10.1109/LRA.2022.3191174
Handle: https://hdl.handle.net/2440/138436
Repository record dates: 2023-05-17; 2023-05-17; 2022; 2023-05-16
Repository record number: 641981
Language: en
Rights: © 2022 IEEE.
Keywords: Autonomous vehicle navigation; deep learning methods; localization; multi-camera system
ORCID: Garg, S. [0000-0001-6068-3307]

Abstract: 6-DoF visual localization systems utilize principled approaches rooted in 3D geometry to perform accurate camera pose estimation of images against a map. Current techniques use hierarchical pipelines and learned 2D feature extractors to improve scalability and increase performance. However, despite gains in typical recall@0.25m-type metrics, these systems still have limited utility for real-world applications like autonomous vehicles because of their worst areas of performance: the locations where they provide insufficient recall at a certain required error tolerance. Here we investigate the utility of using place-specific configurations, where a map is segmented into a number of places, each with its own configuration for modulating the pose estimation step, in this case selecting a camera within a multi-camera system. On the Ford AV benchmark dataset, we demonstrate substantially improved worst-case localization performance compared to using off-the-shelf pipelines, minimizing the percentage of the dataset which has low recall at a certain error tolerance, as well as improved overall localization performance. Our proposed approach is particularly applicable to the crowdsharing model of autonomous vehicle deployment, where a fleet of AVs is regularly traversing a known route.
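
The core idea in the abstract, segmenting the map into places and giving each place its own camera choice for the pose estimation step, can be illustrated with a minimal sketch. This is not the authors' implementation: the names (assign_place, localize_with_place_config, best_camera_for_place, and the localize callable) are hypothetical, and the per-place camera table is assumed to have been built offline from earlier traverses of the route.

```python
import numpy as np

def assign_place(position_xy, place_centroids):
    """Map a coarse position estimate to the nearest pre-defined place segment."""
    dists = np.linalg.norm(place_centroids - np.asarray(position_xy), axis=1)
    return int(np.argmin(dists))

def localize_with_place_config(images_by_camera, coarse_position_xy,
                               place_centroids, best_camera_for_place, localize):
    """Select the camera configured for the current place, then estimate the pose.

    images_by_camera:      dict camera_id -> image from the multi-camera rig
    place_centroids:       (P, 2) array of place segment centres (hypothetical layout)
    best_camera_for_place: list mapping place index -> preferred camera_id,
                           chosen offline, e.g. from per-place recall statistics
    localize:              any 6-DoF pose estimator, e.g. a hierarchical
                           retrieval + feature-matching pipeline
    """
    place_id = assign_place(coarse_position_xy, place_centroids)
    camera_id = best_camera_for_place[place_id]
    return localize(images_by_camera[camera_id], place_id)
```

In this sketch the per-place camera table would be computed once, offline, by picking whichever camera gives the highest recall at the required error tolerance in each place on previous traverses; this fits the crowdsharing deployment model described in the abstract, where a fleet regularly re-drives a known route and so accumulates the per-place statistics needed to make that choice.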