Learning a no-reference quality metric for single-image super-resolution

Date

2017

Authors

Ma, C.
Yang, C.-Y.
Yang, X.
Yang, M.-H.

Type

Journal article

Citation

Computer Vision and Image Understanding, 2017; 158:1-16

Statement of Responsibility

Chao Ma, Chih-Yuan Yang, Xiaokang Yang, Ming-Hsuan Yang

Abstract

Numerous single-image super-resolution algorithms have been proposed in the literature, but few studies address the problem of performance evaluation based on visual perception. While most super-resolution images are evaluated with full-reference metrics, the effectiveness of such metrics is unclear, and the required ground-truth images are not always available in practice. To address these problems, we conduct human subject studies using a large set of super-resolution images and propose a no-reference metric learned from visual perceptual scores. Specifically, we design three types of low-level statistical features in both the spatial and frequency domains to quantify super-resolved artifacts, and learn a two-stage regression model to predict the quality scores of super-resolution images without referring to ground-truth images. Extensive experimental results show that the proposed metric is effective and efficient in assessing the quality of super-resolution images in accordance with human perception.
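The abstract's pipeline (low-level statistical features in the spatial and frequency domains, followed by a two-stage regression onto perceptual scores) can be illustrated with a minimal sketch. The features and regressors below are simplified stand-ins chosen for brevity, not the paper's actual feature design or learned models; the synthetic images and scores are placeholder data.

```python
import numpy as np

def extract_features(img):
    """Toy low-level statistics (illustrative stand-ins, not the paper's
    exact features): spatial-domain moments and a frequency-domain energy ratio."""
    # Spatial-domain statistics of pixel intensities.
    spatial = np.array([img.mean(), img.std()])
    # Frequency-domain statistic: fraction of spectral energy outside the
    # lowest frequencies, a crude proxy for high-frequency artifacts.
    spectrum = np.abs(np.fft.fft2(img))
    total = spectrum.sum()
    low = spectrum[:4, :4].sum()
    freq = np.array([(total - low) / total])
    return np.concatenate([spatial, freq])

def fit_linear(X, y):
    """Least-squares linear regressor (a stand-in for each learned stage)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Z: np.hstack([Z, np.ones((Z.shape[0], 1))]) @ w

# --- Two-stage regression sketch on synthetic data ---------------------
rng = np.random.default_rng(0)
imgs = rng.random((50, 16, 16))   # placeholder "super-resolution" images
scores = rng.random(50)           # placeholder perceptual quality scores

X = np.stack([extract_features(im) for im in imgs])

# Stage 1: regress a partial score from each feature group separately.
stage1_spatial = fit_linear(X[:, :2], scores)
stage1_freq = fit_linear(X[:, 2:], scores)
partial = np.stack([stage1_spatial(X[:, :2]),
                    stage1_freq(X[:, 2:])], axis=1)

# Stage 2: combine the partial scores into one predicted quality score.
stage2 = fit_linear(partial, scores)
pred = stage2(partial)
print(pred.shape)  # one predicted score per image
```

At test time no ground-truth image is involved anywhere in the pipeline, which is what makes the metric no-reference: only statistics of the super-resolved image itself feed the regressors.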

Rights

© 2017 Elsevier Inc. All rights reserved.
