Super-Resolving Cross-Domain Face Miniatures by Peeking at One-Shot Exemplar

dc.contributor.authorLi, P.
dc.contributor.authorYu, X.
dc.contributor.authorYang, Y.
dc.contributor.conferenceIEEE/CVF International Conference on Computer Vision (ICCV) (10 Oct 2021 - 17 Oct 2021 : Montreal, QC, Canada)
dc.date.issued2022
dc.description.abstractConventional face super-resolution methods usually assume testing low-resolution (LR) images lie in the same domain as the training ones. Due to different lighting conditions and imaging hardware, domain gaps between training and testing images inevitably occur in many real-world scenarios. Neglecting those domain gaps would lead to inferior face super-resolution (FSR) performance. However, how to transfer a trained FSR model to a target domain efficiently and effectively has not been investigated. To tackle this problem, we develop a Domain-Aware Pyramid-based Face Super-Resolution network, named DAP-FSR. Our DAP-FSR makes the first attempt to super-resolve LR faces from a target domain by exploiting only a single pair of high-resolution (HR) and LR exemplar images in that domain. To be specific, our DAP-FSR first employs its encoder to extract the multi-scale latent representations of the input LR face. Since only one target domain example is available, we propose to augment the target domain data by mixing the latent representations of the target domain face with those of source domain faces, and then feed the mixed representations to the decoder of our DAP-FSR. The decoder generates new face images resembling the target domain image style. The generated HR faces are in turn used to optimize our decoder to reduce the domain gap. By iteratively updating the latent representations and our decoder, our DAP-FSR is adapted to the target domain, thus achieving authentic and high-quality upsampled HR faces. Extensive experiments on three benchmarks validate the effectiveness and superior performance of our DAP-FSR compared to state-of-the-art methods.
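The latent-mixing augmentation described in the abstract can be sketched as a convex interpolation between source-domain latent codes and the single target-domain latent. This is a minimal illustrative sketch, not the paper's implementation: the function name `mix_latents`, the per-sample uniform mixing coefficient, and the flat 512-dimensional latents are assumptions, whereas DAP-FSR operates on multi-scale pyramid representations and iteratively fine-tunes its decoder on the mixed outputs.

```python
import numpy as np

def mix_latents(z_source, z_target, rng):
    """Augment the target domain by mixing latent codes.

    For each source-domain latent, draw a mixing coefficient
    alpha in [0, 1] and interpolate toward the single
    target-domain latent, yielding new codes that lean toward
    the target style.
    """
    alphas = rng.uniform(0.0, 1.0, size=(z_source.shape[0], 1))
    return alphas * z_target + (1.0 - alphas) * z_source

# Toy usage: 8 source latents, one target-domain latent.
rng = np.random.default_rng(0)
z_src = rng.normal(size=(8, 512))   # batch of source-domain latents
z_tgt = rng.normal(size=(1, 512))   # the single target-domain latent
z_mix = mix_latents(z_src, z_tgt, rng)
```

In the full method, the decoder would map each mixed code back to an image resembling the target style, and those generated HR faces would supervise further decoder updates.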
dc.description.statementofresponsibilityPeike Li, Xin Yu, Yi Yang
dc.identifier.citationProceedings / IEEE International Conference on Computer Vision. IEEE International Conference on Computer Vision, 2022, pp.4449-4459
dc.identifier.doi10.1109/ICCV48922.2021.00443
dc.identifier.isbn9781665428125
dc.identifier.issn1550-5499
dc.identifier.orcidYu, X. [0000-0001-9890-5489] [0000-0002-0269-5649] [0000-0002-3388-9606] [0000-0002-6265-9519]
dc.identifier.urihttps://hdl.handle.net/2440/148900
dc.language.isoen
dc.publisherIEEE
dc.publisher.placeUnited States
dc.relation.granthttp://purl.org/au-research/grants/arc/DP200100938
dc.rights© 2021 IEEE
dc.source.urihttps://doi.org/10.1109/iccv48922.2021.00443
dc.subjectface super-resolution (FSR)
dc.subjectlow-resolution (LR)
dc.titleSuper-Resolving Cross-Domain Face Miniatures by Peeking at One-Shot Exemplar
dc.typeConference paper
pubs.publication-statusPublished
