Dual-attention-guided network for ghost-free high dynamic range imaging

dc.contributor.author: Yan, Q.
dc.contributor.author: Gong, D.
dc.contributor.author: Shi, J.Q.
dc.contributor.author: van den Hengel, A.
dc.contributor.author: Shen, C.
dc.contributor.author: Reid, I.
dc.contributor.author: Zhang, Y.
dc.date.issued: 2021
dc.description.abstract: Ghosting artifacts caused by moving objects and misalignments are a key challenge in constructing high dynamic range (HDR) images. Current methods first register the input low dynamic range (LDR) images using optical flow before merging them. This process is error-prone and often causes ghosting in the resulting merged image. We propose a novel dual-attention-guided end-to-end deep neural network, called DAHDRNet, which produces high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use dual-attention modules to guide the merging according to the reference image. DAHDRNet thus exploits both spatial attention and feature-channel attention to achieve ghost-free merging. The spatial attention modules automatically suppress undesired components caused by misalignments and saturation, and enhance fine details in the non-reference images. The channel attention modules adaptively rescale channel-wise features by considering the inter-dependencies between channels. The dual-attention approach is applied recurrently to further improve the feature representation, and thus the alignment. A dilated residual dense block is devised to make full use of the hierarchical features and to enlarge the receptive field when hallucinating missing details. We employ a hybrid loss function consisting of a perceptual loss, a total variation loss, and a content loss to recover photo-realistic images. Although DAHDRNet is not flow-based, it can be applied to flow-based registration to reduce artifacts caused by optical-flow estimation errors. Experiments on different datasets show that the proposed DAHDRNet achieves state-of-the-art quantitative and qualitative results.
dc.description.statementofresponsibility: Qingsen Yan, Dong Gong, Javen Qinfeng Shi, Anton van den Hengel, Chunhua Shen, Ian Reid and Yanning Zhang
dc.identifier.citation: International Journal of Computer Vision, 2021; 130(1)
dc.identifier.doi: 10.1007/s11263-021-01535-y
dc.identifier.issn: 0920-5691
dc.identifier.issn: 1573-1405
dc.identifier.orcid: Shi, J.Q. [0000-0002-9126-2107]
dc.identifier.orcid: van den Hengel, A. [0000-0003-3027-8364]
dc.identifier.orcid: Shen, C. [0000-0002-8648-8718]
dc.identifier.orcid: Reid, I. [0000-0001-7790-6423]
dc.identifier.uri: https://hdl.handle.net/2440/133427
dc.language.iso: en
dc.publisher: Springer Science and Business Media LLC
dc.relation.grant: http://purl.org/au-research/grants/arc/DP140102270
dc.relation.grant: http://purl.org/au-research/grants/arc/DP160100703
dc.rights: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021
dc.source.uri: https://doi.org/10.1007/s11263-021-01535-y
dc.subject: High dynamic range imaging; de-ghosting; attention mechanism; deep learning
dc.title: Dual-attention-guided network for ghost-free high dynamic range imaging
dc.type: Journal article
pubs.publication-status: Published
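
The abstract above names four concrete components: spatial attention, channel attention, a dilated residual dense block, and a hybrid loss. This record carries no code, so the following is only a minimal PyTorch sketch of what those components could look like; all module names, channel counts, the reduction ratio, growth rate, layer counts, and loss weights are illustrative assumptions, not the authors' DAHDRNet implementation.

```python
# Illustrative sketch of the components named in the abstract.
# Hyper-parameters and names are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """Per-pixel attention over a non-reference feature, conditioned on
    the reference feature, to suppress misaligned/saturated regions."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, non_ref, ref):
        return non_ref * self.net(torch.cat([non_ref, ref], dim=1))


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style rescaling that models the
    inter-dependencies between feature channels."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),           # global channel descriptor
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat):
        return feat * self.fc(feat)  # per-channel reweighting


class DilatedResidualDenseBlock(nn.Module):
    """Densely connected dilated convolutions: a larger receptive field
    for hallucinating missing detail, plus a local residual path."""
    def __init__(self, channels=64, growth=32, n_layers=4, dilation=2):
        super().__init__()
        self.layers = nn.ModuleList()
        c = channels
        for _ in range(n_layers):
            self.layers.append(
                nn.Conv2d(c, growth, 3, padding=dilation, dilation=dilation))
            c += growth
        self.fuse = nn.Conv2d(c, channels, 1)  # 1x1 local feature fusion

    def forward(self, x):
        feats = [x]
        for conv in self.layers:
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual


def hybrid_loss(pred, target, perceptual_fn, w_perc=0.01, w_tv=1e-4):
    """Content (L1) + perceptual + total-variation terms. The weights
    are placeholders; perceptual_fn (e.g. a VGG feature distance) is
    supplied by the caller."""
    content = F.l1_loss(pred, target)
    perceptual = perceptual_fn(pred, target)
    tv = ((pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()
          + (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean())
    return content + w_perc * perceptual + w_tv * tv
```

In the pipeline the abstract describes, the spatial and channel attention would gate non-reference features against the reference before merging (e.g. `ChannelAttention()(SpatialAttention()(f_nonref, f_ref))`), with the dual-attention step applied recurrently; how the recurrence and merging network are wired is not specified here and is left out of the sketch.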
