|Title:||Architecture search of dynamic cells for semantic video segmentation|
|Citation:||Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV '20), 2020, pp. 1959-1968|
|Series/Report no.:||IEEE Winter Conference on Applications of Computer Vision|
|Conference Name:||IEEE Winter Conference on Applications of Computer Vision (WACV) (1 Mar 2020 - 5 Mar 2020 : Snowmass Village, CO, USA)|
|Author(s):||Vladimir Nekrasov, Hao Chen, Chunhua Shen, Ian Reid|
|Abstract:||In semantic video segmentation the goal is to acquire consistent dense semantic labelling across image frames. To this end, recent approaches have relied on manually arranged operations applied on top of static semantic segmentation networks, with the most prominent building block being optical flow, which provides information about scene dynamics. Related to that is the line of research concerned with speeding up static networks by approximating their expensive parts with cheaper alternatives while propagating information from previous frames. In this work we attempt to generalise those methods: instead of manually designing contextual blocks that connect per-frame outputs, we propose a neural architecture search solution, in which the choice of operations, together with their sequential arrangement, is predicted by a separate neural network. We show that such a generalisation leads to stable and accurate results across common benchmarks, such as the CityScapes and CamVid datasets. Importantly, the proposed methodology takes only 2 GPU-days, finds high-performing cells and does not rely on expensive optical flow computation.|
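The abstract only outlines the search procedure; the concrete operation set and controller are defined in the paper itself. The following is a minimal, hypothetical sketch of the general idea: a stand-in "controller" samples a sequence of operations (a dynamic cell) from a candidate pool, and the cell is then applied to fused features from the previous and current frames. All names and operations here are illustrative placeholders, not the paper's actual search space.

```python
import random

# Hypothetical candidate operations for the searched cell (the paper's real
# search space is not reproduced here); each op maps a feature list to a
# feature list of the same length.
CANDIDATE_OPS = {
    "conv3x3": lambda x: [v * 0.9 for v in x],
    "sep_conv": lambda x: [v * 0.8 for v in x],
    "skip": lambda x: x,
    "global_pool": lambda x: [sum(x) / len(x)] * len(x),
}

def sample_cell(num_steps, rng):
    """Controller stand-in: predicts/samples an operation sequence."""
    ops = list(CANDIDATE_OPS)
    return [rng.choice(ops) for _ in range(num_steps)]

def apply_cell(cell, prev_features, cur_features):
    """Fuse previous- and current-frame features, then apply the sampled ops."""
    fused = [p + c for p, c in zip(prev_features, cur_features)]
    for name in cell:
        fused = CANDIDATE_OPS[name](fused)
    return fused

rng = random.Random(0)
cell = sample_cell(3, rng)                      # the searched "dynamic cell"
out = apply_cell(cell, [1.0, 2.0], [0.5, 0.5])  # toy per-frame features
print(cell, out)
```

In the actual method the controller is itself a trained neural network scoring candidate architectures, rather than a uniform random sampler as sketched here.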
|Appears in Collections:||Aurora harvest 8; Computer Science publications|