|Title:||Multi-scale dense networks for deep high dynamic range imaging|
|Citation:||Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision, 2019 / pp.41-50|
|Series/Report no.:||IEEE Winter Conference on Applications of Computer Vision|
|Conference Name:||IEEE Winter Conference on Applications of Computer Vision (WACV) (07 Jan 2019 - 11 Jan 2019 : Waikoloa Village, HI, USA)|
|Author:||Qingsen Yan, Dong Gong, Pingping Zhang, Qinfeng Shi, Jinqiu Sun, Ian Reid, Yanning Zhang|
|Abstract:||Generating a high dynamic range (HDR) image from a set of sequential exposures is a challenging task for dynamic scenes. The most common approaches align the input images to a reference image before merging them into an HDR image, but artifacts often appear when scene motion is large. State-of-the-art deep learning methods can address this problem effectively. In this paper, we propose a novel deep convolutional neural network for HDR generation that aims to produce more vivid images. The key idea of our method is a coarse-to-fine scheme that gradually reconstructs the HDR image with a multi-scale architecture and residual networks. By learning the relative changes between the inputs and the ground truth, our method not only produces artifact-free images but also restores missing information. Furthermore, we compare against existing methods for HDR reconstruction and show high-quality results from a set of low dynamic range (LDR) images. We evaluate the results in qualitative and quantitative experiments; our method consistently produces better results than existing state-of-the-art approaches in challenging scenes.|
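The coarse-to-fine, multi-scale residual scheme described in the abstract can be illustrated schematically. The sketch below is not the paper's network: `predict_residual` is a hypothetical placeholder for the learned sub-network at each scale, and the naive averaging/nearest-neighbour operators stand in for the dense-network layers. It only shows the control flow of reconstructing an HDR estimate at the coarsest scale and refining it upward with residual corrections.

```python
import numpy as np

def downsample(img):
    # Naive 2x downsample by averaging 2x2 blocks (stand-in for strided conv).
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample(img):
    # Nearest-neighbour 2x upsample (stand-in for a learned upsampling layer).
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def coarse_to_fine_reconstruct(ldr_stack, predict_residual, n_scales=3):
    """Merge a stack of aligned LDR exposures coarse-to-fine.

    `ldr_stack` has shape (num_exposures, H, W); `predict_residual(x, s)`
    is a hypothetical placeholder for the learned residual sub-network
    operating at scale index `s`.
    """
    # Build a multi-scale pyramid from a naive average of the exposures.
    base = np.mean(ldr_stack, axis=0)
    pyramid = [base]
    for _ in range(n_scales - 1):
        pyramid.append(downsample(pyramid[-1]))

    # Start at the coarsest scale, then refine upward with residuals.
    hdr = pyramid[-1] + predict_residual(pyramid[-1], n_scales - 1)
    for s in range(n_scales - 2, -1, -1):
        up = upsample(hdr)[: pyramid[s].shape[0], : pyramid[s].shape[1]]
        hdr = up + predict_residual(pyramid[s], s)  # residual corrects the upsampled estimate
    return hdr
```

With a zero residual predictor the pipeline degenerates to pyramid averaging, which makes the data flow easy to verify; in the paper the residual at each scale is produced by a trained dense network.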
|Keywords:||Image reconstruction; optical imaging; adaptive optics; dynamic range; merging; cameras|
|Rights:||© 2019 IEEE|
|Appears in Collections:||Computer Science publications|
Files in This Item:
There are no files associated with this item.