Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/111344
Type: Conference paper
Title: From motion blur to motion flow: a deep learning solution for removing heterogeneous motion blur
Author: Gong, D.
Yang, J.
Liu, L.
Zhang, Y.
Reid, I.
Shen, C.
van den Hengel, A.
Shi, Q.
Citation: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, vol. 2017-January, pp. 3806-3815
Publisher: IEEE
Publisher Place: Online
Issue Date: 2017
Series/Report no.: IEEE Conference on Computer Vision and Pattern Recognition
ISBN: 9781538604588
ISSN: 1063-6919
Conference Name: 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) (21 Jul 2017 - 26 Jul 2017 : Honolulu, HI)
Statement of Responsibility: Dong Gong, Jie Yang, Lingqiao Liu, Yanning Zhang, Ian Reid, Chunhua Shen, Anton van den Hengel, Qinfeng Shi
Abstract: Removing pixel-wise heterogeneous motion blur is challenging due to the ill-posed nature of the problem. The predominant solution is to estimate the blur kernel by adding a prior, but the extensive literature on the subject indicates the difficulty of identifying a prior which is suitably informative and general. Rather than imposing a prior based on theory, we propose instead to learn one from the data. Learning a prior over the latent image would require modeling all possible image content. The critical observation underpinning our approach, however, is that learning the motion flow instead allows the model to focus on the cause of the blur, irrespective of the image content. Not only is this a much easier learning task, it also avoids the iterative process through which latent image priors are typically applied. Our approach directly estimates the motion flow from the blurred image through a fully-convolutional deep neural network (FCN) and recovers the unblurred image from the estimated motion flow. Our FCN is the first universal end-to-end mapping from a blurred image to the dense motion flow. To train the FCN, we simulate motion flows to generate synthetic blurred-image/motion-flow pairs, thus avoiding the need for human labeling. Extensive experiments on challenging realistic blurred images demonstrate that the proposed method outperforms the state of the art.
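
The following is a minimal, illustrative PyTorch sketch of the kind of pipeline the abstract describes: a small fully-convolutional network that maps a blurred image to a dense per-pixel motion flow, trained against simulated flows. The class name MotionFlowFCN, the layer sizes, the simulate_flow helper, and the MSE loss are all assumptions for illustration, not the authors' implementation; the paper itself discretizes the motion flow and trains with a classification loss, and the final non-blind deconvolution step that recovers the sharp image from the estimated flow is omitted here.

    # Minimal sketch, assuming PyTorch. Architecture and training loss are
    # illustrative stand-ins, not the paper's actual network.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MotionFlowFCN(nn.Module):
        """Fully-convolutional net: blurred RGB image -> dense 2-channel motion flow."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),   # H/2
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),  # H/4
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # H/2
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),   # H
                nn.Conv2d(32, 2, 3, padding=1),  # per-pixel (u, v) motion vector
            )

        def forward(self, blurred):
            return self.decoder(self.encoder(blurred))

    def simulate_flow(batch, height, width):
        """Hypothetical stand-in for the paper's simulated motion flows:
        a smooth random field made by upsampling coarse noise."""
        coarse = torch.randn(batch, 2, height // 8, width // 8)
        return F.interpolate(coarse, size=(height, width),
                             mode="bilinear", align_corners=False) * 4.0

    if __name__ == "__main__":
        net = MotionFlowFCN()
        blurred = torch.rand(1, 3, 64, 64)       # placeholder for a synthetically blurred image
        target_flow = simulate_flow(1, 64, 64)   # synthetic ground-truth flow (no human labeling)
        pred_flow = net(blurred)
        loss = F.mse_loss(pred_flow, target_flow)  # the paper uses a classification loss instead
        loss.backward()
        print(tuple(pred_flow.shape), float(loss))

In the paper's full pipeline, each estimated motion vector defines a per-pixel blur kernel, and the sharp image is then recovered with non-blind deconvolution; the sketch above covers only the blurred-image-to-flow mapping and the synthetic supervision.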
Rights: © 2017 IEEE
DOI: 10.1109/CVPR.2017.405
Grant ID: http://purl.org/au-research/grants/arc/DP160100703
http://purl.org/au-research/grants/arc/CE140100016
http://purl.org/au-research/grants/arc/FL130100102
http://purl.org/au-research/grants/arc/DE170101259
Published version: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8097368
Appears in Collections: Aurora harvest 3
Computer Science publications

Files in This Item:
There are no files associated with this item.

