|Title:||Fast neural architecture search of compact semantic segmentation models via auxiliary cells|
|Citation:||Proceedings: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019 / vol.2019-June, pp.9126-9135|
|Publisher:||Computer Vision Foundation / IEEE|
|Series/Report no.:||IEEE Conference on Computer Vision and Pattern Recognition|
|Conference Name:||IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (16 Jun 2019 - 20 Jun 2019 : Long Beach, USA)|
|Authors:||Vladimir Nekrasov, Hao Chen, Chunhua Shen, Ian Reid|
|Abstract:||Automated design of neural network architectures tailored for a specific task is an extremely promising, albeit inherently difficult, avenue to explore. While most results in this domain have been achieved on image classification and language modelling problems, here we concentrate on dense per-pixel tasks, in particular, semantic image segmentation using fully convolutional networks. In contrast to the aforementioned areas, the design choices of a fully convolutional network require several changes, ranging from the sort of operations that need to be used - e.g., dilated convolutions - to solving a more difficult optimisation problem. In this work, we are particularly interested in searching for high-performance compact segmentation architectures, able to run in real-time using limited resources. To achieve that, we intentionally over-parameterise the architecture during training via a set of auxiliary cells that provide an intermediate supervisory signal and can be omitted during the evaluation phase. The design of the auxiliary cell is emitted by a controller, a neural network with a fixed structure trained using reinforcement learning. More crucially, we demonstrate how to efficiently search for these architectures within limited time and computational budgets. In particular, we rely on a progressive strategy that terminates the training of non-promising architectures early, and on Polyak averaging coupled with knowledge distillation to speed up convergence. Quantitatively, in 8 GPU-days our approach discovers a set of architectures performing on par with the state of the art among compact models on semantic segmentation, pose estimation and depth prediction tasks. Code will be made available here: https://github.com/drsleep/nas-segm-pytorch.|
|Appears in Collections:||Computer Science publications|