Dual-attention-guided network for ghost-free high dynamic range imaging

Date

2021

Authors

Yan, Q.
Gong, D.
Shi, J.Q.
van den Hengel, A.
Shen, C.
Reid, I.
Zhang, Y.

Type

Journal article

Citation

International Journal of Computer Vision, 2021; 130(1)

Statement of Responsibility

Qingsen Yan, Dong Gong, Javen Qinfeng Shi, Anton van den Hengel, Chunhua Shen, Ian Reid and Yanning Zhang

Abstract

Ghosting artifacts caused by moving objects and misalignments are a key challenge in constructing high dynamic range (HDR) images. Current methods first register the input low dynamic range (LDR) images using optical flow before merging them. This process is error-prone, and often causes ghosting in the resulting merged image. We propose a novel dual-attention-guided end-to-end deep neural network, called DAHDRNet, which produces high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use dual-attention modules to guide the merging according to the reference image. DAHDRNet thus exploits both spatial attention and feature channel attention to achieve ghost-free merging. The spatial attention modules automatically suppress undesired components caused by misalignments and saturation, and enhance the fine details in the non-reference images. The channel attention modules adaptively rescale channel-wise features by considering the inter-dependencies between channels. The dual-attention approach is applied recurrently to further improve feature representation, and thus alignment. A dilated residual dense block is devised to make full use of the hierarchical features and increase the receptive field when hallucinating missing details. We employ a hybrid loss function, which consists of a perceptual loss, a total variation loss, and a content loss to recover photo-realistic images. Although DAHDRNet is not flow-based, it can be applied to flow-based registration to reduce artifacts caused by optical-flow estimation errors. Experiments on different datasets show that the proposed DAHDRNet achieves state-of-the-art quantitative and qualitative results.
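The abstract describes spatial attention maps that suppress misaligned or saturated content in the non-reference LDR images before merging. The following is a minimal illustrative sketch of that gating idea, not the paper's implementation: DAHDRNet produces its attention maps with learned convolutions, whereas `score_fn` here is a hypothetical hand-crafted stand-in for that learned scoring network.

```python
import math

def sigmoid(x):
    """Logistic function used to squash raw attention scores into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def spatial_attention(ref, nonref, score_fn):
    """Gate non-reference features with a per-position attention map.

    ref, nonref : flat lists of feature values at corresponding positions.
    score_fn    : hypothetical stand-in for the learned scoring module;
                  maps a (ref, nonref) pair to a raw score.
    """
    attention = [sigmoid(score_fn(r, n)) for r, n in zip(ref, nonref)]
    # Positions that disagree with the reference get low attention and are
    # suppressed; well-aligned positions pass through nearly unchanged.
    return [a * n for a, n in zip(attention, nonref)]

# Toy example: score each position by how well ref and nonref agree.
score = lambda r, n: 8.0 * (1.0 - abs(r - n))
ref    = [0.5, 0.5, 0.5]
nonref = [0.5, 0.9, 1.5]   # last position badly misaligned
out = spatial_attention(ref, nonref, score)
```

In this toy run the aligned first position is passed through almost unchanged, while the misaligned last position is attenuated, which is the qualitative behaviour the abstract attributes to the spatial attention modules.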

Rights

© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021
