DSpace Community: Computer Vision
https://hdl.handle.net/2440/15000
Updated: 2024-03-18T15:54:54Z
https://hdl.handle.net/2440/140263
Title: ScanMix: Learning from Severe Label Noise via Semantic Clustering and Semi-Supervised Learning
Author: Sachdeva, R.; Cordeiro, F.R.; Belagiannis, V.; Reid, I.; Carneiro, G.
Abstract: We propose a new training algorithm, ScanMix, that explores semantic clustering and semi-supervised learning (SSL) to achieve superior robustness to severe label noise and competitive robustness to non-severe label noise, in comparison with state-of-the-art (SOTA) methods. ScanMix is based on the expectation-maximisation (EM) framework, where the E-step estimates the latent variable that clusters the training images based on their appearance and classification results, and the M-step optimises the SSL classification and learns effective feature representations via semantic clustering. We present a theoretical result showing the correctness and convergence of ScanMix, and empirical results showing that ScanMix achieves SOTA results on CIFAR-10/-100 (with symmetric, asymmetric and semantic label noise), Red Mini-ImageNet (from the Controlled Noisy Web Labels benchmark), Clothing1M and WebVision. In all benchmarks with severe label noise, our results are competitive with the current SOTA.
Date: 2023-01-01
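The E/M alternation described in the abstract can be illustrated with a minimal sketch. This is a toy clustering loop, not the authors' implementation: ScanMix's M-step also trains the SSL classifier and the feature extractor, which are omitted here; the function name and the deterministic initialisation are assumptions for illustration.

```python
import numpy as np

def scanmix_style_em(features, k=2, iters=10):
    """Toy E/M alternation (an illustrative sketch, not the authors' code).
    E-step: assign each sample to its nearest cluster centre;
    M-step: re-fit the centres from the current assignment."""
    # deterministic init: spread the initial centres over the data
    centres = features[np.linspace(0, len(features) - 1, k).astype(int)].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # E-step: latent cluster assignment from appearance (feature distance)
        dists = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # M-step: update each centre; in ScanMix proper this step also
        # optimises the SSL classifier and the feature representation
        for c in range(k):
            if (labels == c).any():
                centres[c] = features[labels == c].mean(axis=0)
    return labels, centres
```

The point of the alternation is that the cluster assignments (latent variables) and the model parameters are never optimised jointly; each step holds the other fixed, which is what makes the convergence analysis tractable.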
https://hdl.handle.net/2440/135813
Title: The Road to Know-Where: An Object-and-Room Informed Sequential BERT for Indoor Vision-Language Navigation
Author: Qi, Y.; Pan, Z.; Hong, Y.; Yang, M.H.; Van Den Hengel, A.; Wu, Q.
Abstract: Vision-and-Language Navigation (VLN) requires an agent to find a path to a remote location on the basis of natural-language instructions and a set of photo-realistic panoramas. Most existing methods take the words in the instructions and the discrete views of each panorama as the minimal unit of encoding. However, this requires a model to match different nouns (e.g., TV, table) against the same input view feature. In this work, we propose an object-informed sequential BERT to encode visual perceptions and linguistic instructions at the same fine-grained level, namely objects and words. Our sequential BERT also enables the visual-textual clues to be interpreted in light of the temporal context, which is crucial to multi-round VLN tasks. Additionally, we enable the model to identify the relative direction (e.g., left/right/front/back) of each navigable location and the room type (e.g., bedroom, kitchen) of its current and final navigation goal, as such information is widely mentioned in instructions implying the desired next and final locations. We thus enable the model to know where the objects lie in the images, and to know where they stand in the scene. Extensive experiments demonstrate the effectiveness of our method compared with several state-of-the-art methods on three indoor VLN tasks: REVERIE, NDH, and R2R. Project repository: https://github.com/YuankaiQi/ORIST
Date: 2022-01-01
https://hdl.handle.net/2440/128441
Title: Satellite pose estimation with deep landmark regression and nonlinear pose refinement
Author: Chen, B.; Cao, J.; Parra Bustos, A.; Chin, T.
Abstract: We propose an approach to estimate the 6DOF pose of a satellite, relative to a canonical pose, from a single image. Such a problem is crucial in many space proximity operations, such as docking, debris removal, and inter-spacecraft communications. Our approach combines machine learning and geometric optimisation: it predicts the coordinates of a set of landmarks in the input image, associates the landmarks with their corresponding 3D points on an a priori reconstructed 3D model, then solves for the object pose using non-linear optimisation. Our approach is not only novel for this specific pose estimation task, which helps to further open up a relatively new domain for machine learning and computer vision, but it also demonstrates superior accuracy and won first place in the recent Kelvins Pose Estimation Challenge organised by the European Space Agency (ESA).
Date: 2019-01-01
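The final stage of the pipeline, nonlinear refinement of the pose from 2D-3D landmark correspondences, can be sketched with a Gauss-Newton loop on the reprojection error. This is a deliberately simplified illustration, not the paper's solver: rotation is fixed to identity and only translation is refined, and the pinhole model and function names are assumptions.

```python
import numpy as np

def project(points3d, t, f=100.0):
    """Pinhole projection of 3D landmarks after translating by t
    (rotation fixed to identity to keep the sketch minimal)."""
    p = points3d + t
    return f * p[:, :2] / p[:, 2:3]

def refine_translation(points3d, obs2d, t0, iters=20, f=100.0):
    """Toy nonlinear pose refinement: Gauss-Newton on the reprojection
    error. The actual method solves the full 6DOF pose; this sketch
    refines translation only, which keeps the Jacobian 3-dimensional."""
    t = np.asarray(t0, dtype=float)
    for _ in range(iters):
        r = (project(points3d, t, f) - obs2d).ravel()  # residuals
        # numerical Jacobian of the residuals w.r.t. t
        J = np.zeros((r.size, 3))
        eps = 1e-6
        for j in range(3):
            dt = np.zeros(3); dt[j] = eps
            J[:, j] = ((project(points3d, t + dt, f) - obs2d).ravel() - r) / eps
        t -= np.linalg.solve(J.T @ J, J.T @ r)  # Gauss-Newton step
    return t
```

The same structure extends to the full 6DOF case by parameterising rotation (e.g., with a rotation vector) and widening the Jacobian to six columns.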
https://hdl.handle.net/2440/117299
Title: Deep learning for 2D scan matching and loop closure
Author: Li, J.; Zhan, H.; Chen, B.; Reid, I.; Lee, G.
Editor: Bicchi, A.; Okamura, A.
Abstract: Although 2D LiDAR based Simultaneous Localization and Mapping (SLAM) is a relatively mature topic nowadays, the loop closure problem remains challenging due to the lack of distinctive features in 2D LiDAR range scans. Existing research can be roughly divided into correlation-based approaches (e.g., scan-to-submap matching) and feature-based methods (e.g., bag-of-words (BoW)). In this paper, we solve loop closure detection and relative pose transformation using 2D LiDAR within an end-to-end deep learning framework. The algorithm is verified with simulation data and on an Unmanned Aerial Vehicle (UAV) flying in an indoor environment. The loop detection ConvNet alone achieves an accuracy of 98.2% in loop closure detection. With a verification step using the scan matching ConvNet, the false positive rate drops to around 0.001%. The proposed approach processes 6000 pairs of raw LiDAR scans per second on an Nvidia GTX1080 GPU.
Date: 2017-01-01
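The two-stage detect-then-verify pipeline described in the abstract can be sketched as a pair of score thresholds: the detection network proposes candidate loop closures, and the scan-matching network vetoes pairs it cannot align. This is an illustrative sketch only; the function name, thresholds, and score inputs are assumptions standing in for the two ConvNets.

```python
import numpy as np

def two_stage_loop_closure(det_scores, ver_scores, det_t=0.5, ver_t=0.5):
    """Hypothetical two-stage pipeline: a loop-detection network proposes
    candidate scan pairs, and a scan-matching network verifies them.
    A pair is accepted only when both stages agree, which is how the
    verification step suppresses the detector's false positives."""
    candidates = det_scores >= det_t   # stage 1: loop detection
    verified = ver_scores >= ver_t     # stage 2: scan-match verification
    return candidates & verified       # final accepted loop closures
```

Requiring agreement between two independently trained networks is what drives the reported false positive rate down by several orders of magnitude relative to detection alone.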