Dual-attention-guided network for ghost-free high dynamic range imaging
dc.contributor.author | Yan, Q. | |
dc.contributor.author | Gong, D. | |
dc.contributor.author | Shi, J.Q. | |
dc.contributor.author | van den Hengel, A. | |
dc.contributor.author | Shen, C. | |
dc.contributor.author | Reid, I. | |
dc.contributor.author | Zhang, Y. | |
dc.date.issued | 2021 | |
dc.description.abstract | Ghosting artifacts caused by moving objects and misalignments are a key challenge in constructing high dynamic range (HDR) images. Current methods first register the input low dynamic range (LDR) images using optical flow before merging them. This process is error-prone and often causes ghosting in the resulting merged image. We propose a novel dual-attention-guided end-to-end deep neural network, called DAHDRNet, which produces high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use dual-attention modules to guide the merging according to the reference image. DAHDRNet thus exploits both spatial attention and feature channel attention to achieve ghost-free merging. The spatial attention modules automatically suppress undesired components caused by misalignments and saturation, and enhance fine details in the non-reference images. The channel attention modules adaptively rescale channel-wise features by considering the inter-dependencies between channels. The dual-attention approach is applied recurrently to further improve feature representation, and thus alignment. A dilated residual dense block is devised to make full use of hierarchical features and increase the receptive field when hallucinating missing details. To recover photo-realistic images, we employ a hybrid loss function consisting of a perceptual loss, a total variation loss, and a content loss. Although DAHDRNet is not flow-based, it can be applied to flow-based registration to reduce artifacts caused by optical-flow estimation errors. Experiments on different datasets show that the proposed DAHDRNet achieves state-of-the-art quantitative and qualitative results. | |
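For illustration, the dual-attention design described in the abstract maps naturally onto two small modules: a squeeze-and-excitation-style block for the channel attention and a reference-guided soft mask for the spatial attention. The sketch below assumes PyTorch; every class name, argument, and default (e.g. ChannelAttention, reduction=16) is a hypothetical reconstruction from the abstract, not the authors' released implementation.

```python
# Hypothetical sketch of the two attention blocks named in the abstract.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Rescales channel-wise features using inter-channel dependencies
    (squeeze-and-excitation pattern)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: one value per channel
        self.fc = nn.Sequential(              # excite: per-channel weights
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))      # adaptive channel rescaling

class SpatialAttention(nn.Module):
    """Computes a soft per-pixel mask for a non-reference feature map,
    conditioned on the reference features, to suppress regions corrupted
    by misalignment or saturation."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),                     # per-pixel weights in [0, 1]
        )

    def forward(self, non_ref: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
        mask = self.attn(torch.cat([non_ref, ref], dim=1))
        return non_ref * mask                 # attenuate unreliable content

# Toy usage on 64-channel feature maps; shapes are preserved throughout.
ref, non_ref = torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128)
guided = SpatialAttention(64)(non_ref, ref)
refined = ChannelAttention(64)(guided)        # -> (1, 64, 128, 128)
```

Applying the pair recurrently, as the abstract describes, would amount to feeding `refined` back through the same modules for a fixed number of iterations.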
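The hybrid loss can likewise be sketched as a weighted sum of its three named terms. Assumptions to flag: PyTorch and torchvision, frozen VGG-19 features for the perceptual term, L1 for the content term, and placeholder weights; none of these specifics are given in this record.

```python
# Hypothetical sketch of the hybrid loss (content + perceptual + total variation).
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

class HybridLoss(torch.nn.Module):
    def __init__(self, w_perc: float = 0.01, w_tv: float = 1e-4):
        super().__init__()
        # Frozen early VGG-19 layers supply features for the perceptual term.
        self.vgg = vgg19(weights="IMAGENET1K_V1").features[:16].eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.w_perc, self.w_tv = w_perc, w_tv

    @staticmethod
    def total_variation(x: torch.Tensor) -> torch.Tensor:
        # Penalises abrupt differences between neighbouring pixels.
        return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
               (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        content = F.l1_loss(pred, target)                         # pixel fidelity
        perceptual = F.l1_loss(self.vgg(pred), self.vgg(target))  # feature fidelity
        return content + self.w_perc * perceptual + self.w_tv * self.total_variation(pred)
```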
dc.description.statementofresponsibility | Qingsen Yan, Dong Gong, Javen Qinfeng Shi, Anton van den Hengel, Chunhua Shen, Ian Reid and Yanning Zhang | |
dc.identifier.citation | International Journal of Computer Vision, 2021; 130(1) | |
dc.identifier.doi | 10.1007/s11263-021-01535-y | |
dc.identifier.issn | 0920-5691 | |
dc.identifier.issn | 1573-1405 | |
dc.identifier.orcid | Shi, J.Q. [0000-0002-9126-2107] | |
dc.identifier.orcid | van den Hengel, A. [0000-0003-3027-8364] | |
dc.identifier.orcid | Shen, C. [0000-0002-8648-8718] | |
dc.identifier.orcid | Reid, I. [0000-0001-7790-6423] | |
dc.identifier.uri | https://hdl.handle.net/2440/133427 | |
dc.language.iso | en | |
dc.publisher | Springer Science and Business Media LLC | |
dc.relation.grant | http://purl.org/au-research/grants/arc/DP140102270 | |
dc.relation.grant | http://purl.org/au-research/grants/arc/DP160100703 | |
dc.rights | © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021 | |
dc.source.uri | https://doi.org/10.1007/s11263-021-01535-y | |
dc.subject | High dynamic range imaging; de-ghosting; attention mechanism; deep learning | |
dc.title | Dual-attention-guided network for ghost-free high dynamic range imaging | |
dc.type | Journal article | |
pubs.publication-status | Published |