|Title:||Self-paced kernel estimation for robust blind image deblurring|
|Citation:||Proceedings of the IEEE International Conference on Computer Vision (ICCV 2017), 2017 / vol.2017, pp.1670-1679|
|Series/Report no.:||IEEE International Conference on Computer Vision|
|Conference Name:||IEEE International Conference on Computer Vision (ICCV 2017) (22 Oct 2017 - 29 Oct 2017 : Venice, Italy)|
|Author:||Dong Gong, Mingkui Tan, Yanning Zhang, Anton van den Hengel, Qinfeng Shi|
|Abstract:||The challenge in blind image deblurring is to remove the effects of blur with limited prior information about the nature of the blur process. Existing methods often assume that the blurred image is produced by linear convolution with additive Gaussian noise. However, even a small number of outliers violating this model can significantly degrade the kernel estimation process and, in turn, the resulting image quality. Previous methods mainly rely on simple but unreliable heuristics to identify outliers for kernel estimation. Rather than attempting to identify outliers to the model a priori, we instead propose to sequentially identify inliers and gradually incorporate them into the estimation process. The self-paced kernel estimation scheme we propose represents a generalization of existing self-paced learning approaches, in which we gradually detect and include reliable inlier pixel sets in a blurred image for kernel estimation. Moreover, we automatically activate a subset of significant gradients w.r.t. the reliable inlier pixels, and then update the intermediate sharp image and the kernel accordingly. Experiments on both synthetic data and real-world images with various kinds of outliers demonstrate the effectiveness and robustness of the proposed method compared to state-of-the-art methods.|
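The core idea in the abstract, gradually admitting "easy" inliers into the estimation rather than rejecting outliers up front, follows the general self-paced learning pattern. Below is a minimal illustrative sketch of that pattern on a toy robust-regression problem, not the paper's deblurring algorithm: the model is refit only on samples whose loss falls below a threshold, and the threshold (`lam0`, grown by `growth`, both hypothetical parameters chosen here for illustration) is relaxed each round so harder samples are incorporated later.

```python
import numpy as np

def self_paced_fit(X, y, lam0=1.0, growth=4.0, iters=6):
    """Generic self-paced learning loop (illustration only, not the
    paper's kernel-estimation method): alternately select low-loss
    'inlier' samples and refit, relaxing the loss threshold each round."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # warm start on all samples
    lam = lam0
    for _ in range(iters):
        residual = (X @ w - y) ** 2            # per-sample loss under current model
        v = residual <= lam                    # binary self-paced inlier weights
        if not v.any():                        # nothing selected: take the easiest sample
            v = residual == residual.min()
        w, *_ = np.linalg.lstsq(X[v], y[v], rcond=None)  # refit on inliers only
        lam *= growth                          # relax threshold: admit harder samples
    return w, v

# Toy data: y = 2x, with one gross outlier at index 7.
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 2.0 * X[:, 0]
y[7] += 50.0
w, v = self_paced_fit(X, y)   # recovers w ≈ 2 with sample 7 never admitted
```

Because the outlier's loss stays far above the threshold for the chosen number of rounds, it is simply never selected, which is the same intuition the abstract describes for excluding outlier pixels from kernel estimation.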
|Rights:||© 2017 IEEE|
|Appears in Collections:||Computer Science publications|