|Title:||Template-based automatic search of compact semantic segmentation architectures|
|Citation:||Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV 2020), 2020, vol.abs/1904.02365, pp.1969-1978|
|Series/Report no.:||IEEE Winter Conference on Applications of Computer Vision|
|Conference Name:||IEEE Winter Conference on Applications of Computer Vision (WACV) (1 Mar 2020 - 5 Mar 2020 : Snowmass Village, USA)|
|Author:||Vladimir Nekrasov, Chunhua Shen, Ian Reid|
|Abstract:||Automatic search of neural architectures for various vision and natural language tasks is becoming a prominent tool, as it enables the discovery of high-performing structures on any dataset of interest. Nevertheless, on more difficult domains, such as dense per-pixel classification, current automatic approaches are limited in their scope: due to their strong reliance on existing image classifiers, they tend to search only for a handful of additional layers, and the discovered architectures still contain a large number of parameters. In contrast, in this work we propose a novel solution able to find light-weight and accurate segmentation architectures starting from only a few blocks of a pre-trained classification network. To this end, we progressively build up a methodology that relies on templates of sets of operations and predicts which template to apply at each step and how many times, while also generating the connectivity structure and downsampling factors. All these decisions are made by a recurrent neural network that is rewarded based on the score of the emitted architecture on the holdout set and trained using reinforcement learning. One discovered architecture achieves 63.2% mean IoU on CamVid and 67.8% on CityScapes while having only 270K parameters.|
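The search loop described in the abstract — a controller that samples a template choice, a repeat count, a connection to an earlier block, and a downsampling factor at each step, and is then updated with reinforcement learning from a holdout score — can be illustrated with a minimal toy sketch. Everything here is a hypothetical simplification, not the authors' implementation: the template names are invented, the controller is a set of per-step tabular logits rather than a recurrent network, and the holdout mIoU is replaced by a cheap proxy reward.

```python
import math
import random

random.seed(0)

# Hypothetical template names; the paper's actual templates are sets of operations.
TEMPLATES = ["conv3x3-bn-relu", "sep-conv", "pool-proj"]
MAX_REPEATS = 3          # repeat counts 1..3 (index 0..2)
DOWNSAMPLE = [1, 2]      # candidate stride factors


def softmax(logits):
    m = max(logits)
    e = [math.exp(x - m) for x in logits]
    s = sum(e)
    return [x / s for x in e]


def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1


class Controller:
    """Toy stand-in for the recurrent controller: independent logits per step."""

    def __init__(self, num_steps=3, lr=0.1):
        self.lr = lr
        self.num_steps = num_steps
        self.logits = {
            "template": [[0.0] * len(TEMPLATES) for _ in range(num_steps)],
            "repeats": [[0.0] * MAX_REPEATS for _ in range(num_steps)],
            "stride": [[0.0] * len(DOWNSAMPLE) for _ in range(num_steps)],
        }

    def sample_architecture(self):
        decisions = []
        for t in range(self.num_steps):
            step = {k: sample(softmax(self.logits[k][t])) for k in self.logits}
            # Connectivity: attach each block to a uniformly chosen earlier block.
            step["input"] = random.randrange(t + 1)
            decisions.append(step)
        return decisions

    def update(self, decisions, reward, baseline):
        # REINFORCE: raise logits of sampled choices in proportion to the advantage.
        adv = reward - baseline
        for t, step in enumerate(decisions):
            for key in self.logits:
                probs = softmax(self.logits[key][t])
                for i in range(len(probs)):
                    grad = (1.0 if i == step[key] else 0.0) - probs[i]
                    self.logits[key][t][i] += self.lr * adv * grad


def proxy_reward(decisions):
    # Hypothetical stand-in for holdout mIoU: prefer fewer repeats (compactness).
    return 1.0 - sum(d["repeats"] for d in decisions) / (len(decisions) * MAX_REPEATS)


ctrl = Controller()
baseline = 0.0
for _ in range(200):
    arch = ctrl.sample_architecture()
    r = proxy_reward(arch)
    ctrl.update(arch, r, baseline)
    baseline = 0.9 * baseline + 0.1 * r  # moving-average baseline for variance reduction
```

Under this proxy reward the controller drifts toward single-repeat blocks, mirroring how the real reward (validation score of the emitted network) would steer the search; swapping in an actual train-and-evaluate step is where the cost of the method lives.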
|Appears in Collections:||Aurora harvest 4|
Computer Science publications
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.