Blindly assess image quality in the wild guided by a self-adaptive hyper network
Date
2020
Authors
Su, S.
Yan, Q.
Zhu, Y.
Zhang, C.
Ge, X.
Sun, J.
Zhang, Y.
Type
Conference paper
Citation
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 3664-3673
Statement of Responsibility
Shaolin Su, Qingsen Yan, Yu Zhu, Cheng Zhang, Xin Ge, Jinqiu Sun, Yanning Zhang
Conference Name
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (14 Jun 2020 - 19 Jun 2020 : virtual online)
Abstract
Blind image quality assessment (BIQA) for authentically distorted images has always been a challenging problem, since images captured in the wild contain varied content and diverse types of distortions. The vast majority of prior BIQA methods focus on predicting synthetic image quality, but fail when applied to real-world distorted images. To address this challenge, we propose a self-adaptive hyper network architecture to blindly assess image quality in the wild. We separate the IQA procedure into three stages: content understanding, perception rule learning, and quality prediction. After extracting image semantics, a perception rule is established adaptively by a hyper network and then adopted by a quality prediction network. In our model, image quality can be estimated in a self-adaptive manner, and the model thus generalizes well to diverse images captured in the wild. Experimental results verify that our approach not only outperforms state-of-the-art methods on challenging authentic image databases but also achieves competitive performance on synthetic image databases, even though it is not explicitly designed for the synthetic task.
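The three-stage pipeline in the abstract (content understanding, perception rule learning, quality prediction) can be sketched in miniature: a hyper network maps an image's semantic features to the weights of a small predictor, so the prediction rule itself adapts per image. The sketch below is a minimal NumPy illustration of that idea only; all layer sizes and the feature extractor stand-in are assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT = 16   # assumed size of the semantic feature vector (stage 1 output)
HID = 8     # assumed hidden width of the per-image quality predictor

# Hyper network parameters (stage 2: perception rule learning).
# It emits all weights and biases of the target quality predictor:
# W1 (FEAT x HID), b1 (HID), w2 (HID), b2 (scalar).
n_target = FEAT * HID + HID + HID + 1
H = rng.normal(scale=0.1, size=(FEAT, n_target))

def predict_quality(semantic_feat):
    """Stage 3: run the per-image predictor whose weights the hyper network generated."""
    theta = semantic_feat @ H                      # image-specific parameters
    i = 0
    W1 = theta[i:i + FEAT * HID].reshape(FEAT, HID); i += FEAT * HID
    b1 = theta[i:i + HID]; i += HID
    w2 = theta[i:i + HID]; i += HID
    b2 = theta[i]
    h = np.maximum(semantic_feat @ W1 + b1, 0.0)   # ReLU hidden layer
    return float(h @ w2 + b2)                      # scalar quality score

feat = rng.normal(size=FEAT)  # stand-in for extracted image semantics
score = predict_quality(feat)
```

Because the predictor's parameters are a function of the input's semantics rather than fixed after training, two images with different content are scored by different perception rules, which is the self-adaptive behavior the abstract describes.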
Rights
©2020 IEEE