Please use this identifier to cite or link to this item: http://hdl.handle.net/2440/129920
Type: Conference paper
Title: Architecture search of dynamic cells for semantic video segmentation
Author: Nekrasov, V.
Chen, H.
Shen, C.
Reid, I.D.
Citation: Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV 2020): 1959-1968 (preprint: CoRR abs/1904.02371)
Publisher: IEEE
Issue Date: 2020
ISSN: 0018-9219
Statement of Responsibility: Vladimir Nekrasov, Hao Chen, Chunhua Shen, Ian Reid
Abstract: In semantic video segmentation the goal is to acquire consistent dense semantic labelling across image frames. To this end, recent approaches have relied on manually arranged operations applied on top of static semantic segmentation networks, with the most prominent building block being optical flow, which provides information about scene dynamics. Related to that is the line of research concerned with speeding up static networks by approximating their expensive parts with cheaper alternatives, while propagating information from previous frames. In this work we attempt to come up with a generalisation of those methods: instead of manually designing contextual blocks that connect per-frame outputs, we propose a neural architecture search solution, where the choice of operations, together with their sequential arrangement, is predicted by a separate neural network. We show that such a generalisation leads to stable and accurate results across common benchmarks, such as the CityScapes and CamVid datasets. Importantly, the proposed methodology takes only 2 GPU-days, finds high-performing cells and does not rely on the expensive optical flow computation.
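
The abstract describes searching for a small "dynamic cell" whose operations fuse the current frame's segmentation features with features propagated from the previous frame, with the choice of operations predicted by a separate controller network. Below is a minimal, hypothetical sketch of that idea in PyTorch; the candidate operation set, the two-input cell layout, the Controller class and all tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a searched "dynamic cell" fusing per-frame features.
# Operation names, controller design and shapes are assumptions for illustration.
import torch
import torch.nn as nn


def candidate_ops(channels):
    """Candidate operations a searched cell can choose from (assumed set)."""
    return nn.ModuleList([
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),               # 3x3 conv
        nn.Conv2d(channels, channels, kernel_size=3, padding=2, dilation=2),   # dilated 3x3 conv
        nn.Identity(),                                                         # skip connection
    ])


class DynamicCell(nn.Module):
    """Fuses current-frame and previous-frame features with sampled operations."""

    def __init__(self, channels, op_indices):
        super().__init__()
        ops = candidate_ops(channels)
        # op_indices[0] transforms the previous-frame features,
        # op_indices[1] transforms the current-frame features.
        self.prev_op = ops[op_indices[0]]
        self.curr_op = ops[op_indices[1]]
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_curr, feat_prev):
        fused = torch.cat([self.curr_op(feat_curr), self.prev_op(feat_prev)], dim=1)
        return self.fuse(fused)


class Controller(nn.Module):
    """Tiny controller that samples the two operation choices of a cell."""

    def __init__(self, num_ops):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(2, num_ops))

    def sample(self):
        dist = torch.distributions.Categorical(logits=self.logits)
        choice = dist.sample()                  # one operation index per cell input
        log_prob = dist.log_prob(choice).sum()  # usable for a REINFORCE-style update
        return choice.tolist(), log_prob


# Usage: sample a cell and apply it to dummy per-frame features.
controller = Controller(num_ops=3)
op_indices, log_prob = controller.sample()
cell = DynamicCell(channels=16, op_indices=op_indices)
feat_t = torch.randn(1, 16, 32, 32)          # current-frame features
feat_t_minus_1 = torch.randn(1, 16, 32, 32)  # previous-frame features
out = cell(feat_t, feat_t_minus_1)
print(out.shape)  # torch.Size([1, 16, 32, 32])
```

In a full search loop of the kind the abstract alludes to, each sampled cell would be trained briefly, its validation score used as a reward to update the controller (for example via REINFORCE), and the best-performing cells kept for final training.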
Rights: ©2020 IEEE
RMID: 1000011367
DOI: 10.1109/WACV45572.2020.9093531
Appears in Collections: Computer Science publications

Files in This Item:
There are no files associated with this item.

