Temporally-coherent novel video synthesis using texture-based priors

Date

2009

Authors

Shahrokni, A.
Woodford, O.
Reid, I.

Type

Journal article

Citation

IPSJ Transactions on Computer Vision and Applications, 2009; 1:72-81

Statement of Responsibility

Ali Shahrokni, Oliver Woodford, Ian Reid

Abstract

In this paper we propose a method to construct a virtual sequence for a camera moving through a static environment, given an input sequence from a different camera trajectory. Existing image-based rendering techniques can generate photorealistic images given a set of input views, though the output images almost unavoidably contain small regions where the colour has been incorrectly chosen. In a single image these artifacts are often hard to spot, but become more obvious when viewing a real image with its virtual stereo pair, and even more so when a sequence of novel views is generated, since the artifacts are rarely temporally consistent. To address this problem of consistency, we propose a new spatio-temporal approach to novel video synthesis. Our method exploits epipolar geometry to impose constraints on temporal coherence of the rendered views. The pixels in the output video sequence are modelled as nodes of a 3-D graph. We define an MRF on the graph which encodes photoconsistency of pixels as well as texture priors in both space and time. Unlike methods based on scene geometry, which yield highly connected graphs, our approach results in a graph whose degree is independent of scene structure. The MRF energy is therefore tractable and we solve it for the whole sequence using a state-of-the-art message passing optimisation algorithm. We demonstrate the effectiveness of our approach in reducing temporal artifacts.
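The abstract describes modelling output pixels as nodes of a graph and minimising an MRF energy (data terms plus coherence priors) with a message passing algorithm. As a generic illustration of that style of inference — not the authors' actual 3-D graph, photoconsistency terms, or texture priors — the following sketch runs exact min-sum message passing on a toy chain-structured MRF; all costs are invented toy numbers.

```python
import numpy as np

def chain_mrf_map(unary, pairwise):
    """Exact MAP labelling of a chain MRF by forward-backward min-sum.

    unary    : (n_nodes, n_labels) data costs
               (analogue of a per-pixel photoconsistency term).
    pairwise : (n_labels, n_labels) transition costs
               (analogue of a spatio-temporal coherence prior).
    Returns the energy-minimising label for each node.
    """
    n, k = unary.shape
    msg = np.zeros((n, k))             # min cost to reach node i with each label
    back = np.zeros((n, k), dtype=int)  # backpointers for decoding
    msg[0] = unary[0]
    for i in range(1, n):
        # total[a, b] = cost so far with label a at node i-1, plus the
        # pairwise cost of switching to label b at node i
        total = msg[i - 1][:, None] + pairwise
        back[i] = np.argmin(total, axis=0)
        msg[i] = unary[i] + np.min(total, axis=0)
    # Backtrack from the cheapest final label
    labels = np.empty(n, dtype=int)
    labels[-1] = int(np.argmin(msg[-1]))
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i, labels[i]]
    return labels

# Toy example: 4 nodes, 2 labels, Potts-style smoothness prior.
unary = np.array([[0.0, 2.0],
                  [2.0, 0.0],
                  [0.1, 1.0],
                  [0.0, 3.0]])
potts = np.array([[0.0, 1.5],
                  [1.5, 0.0]])
print(chain_mrf_map(unary, potts))  # → [0 0 0 0]
```

On a chain this dynamic program is exact; the paper's 3-D spatio-temporal graph contains loops, which is why it needs an approximate message passing optimiser rather than this closed-form recursion.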

Rights

Copyright © 2009 by the Information Processing Society of Japan
