Learning graphs to model visual objects across different depictive styles

Files

RA_hdl_108052.pdf (7.34 MB)
  (Restricted Access)

Date

2014

Authors

Wu, Q.
Cai, H.
Hall, P.

Editors

Fleet, D.
Pajdla, T.
Schiele, B.
Tuytelaars, T.

Type:

Conference paper

Citation

Lecture Notes in Computer Science, 2014 / Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds), Part VII, pp. 313-328

Statement of Responsibility

Qi Wu, Hongping Cai, and Peter Hall

Conference Name

13th European Conference on Computer Vision (ECCV) (6 Sep 2014 - 12 Sep 2014 : Zurich, Switzerland)

Abstract

Visual object classification and detection are major problems in contemporary computer vision. State-of-the-art algorithms allow thousands of visual objects to be learned and recognized under a wide range of variations, including changes in lighting, occlusion, point of view, and object instance. Only a small fraction of the literature addresses the problem of variation in depictive style (photographs, drawings, paintings etc.). This is a challenging gap, but the ability to process images of all depictive styles, and not just photographs, has potential value across many applications. In this paper we model visual classes using a graph with multiple labels on each node; weights on arcs and nodes indicate relative importance (salience) to the object description. Visual class models can be learned from examples drawn from a database that contains photographs, drawings, paintings etc. Experiments show that our representation is able to improve upon Deformable Part Models for detection and Bag of Words models for classification.
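The representation described in the abstract — a class model as a graph whose nodes carry multiple labels (e.g. one per depictive style) and whose nodes and arcs carry salience weights — can be sketched minimally as follows. This is an illustrative sketch only, not the authors' implementation; all class and attribute names here are assumptions.

```python
# Minimal sketch (not the authors' implementation) of a visual class model:
# nodes carry multiple labels (one per depictive style) plus a salience
# weight, and arcs between nodes carry their own salience weights.

class Node:
    def __init__(self, labels, weight=1.0):
        self.labels = dict(labels)   # e.g. {"photo": "head", "drawing": "head"}
        self.weight = weight         # node salience

class ClassModel:
    def __init__(self):
        self.nodes = []
        self.arcs = {}               # (i, j) -> arc salience

    def add_node(self, labels, weight=1.0):
        self.nodes.append(Node(labels, weight))
        return len(self.nodes) - 1   # index of the new node

    def add_arc(self, i, j, weight=1.0):
        self.arcs[(i, j)] = weight

# Toy usage: a two-part model with a weighted arc between the parts.
model = ClassModel()
a = model.add_node({"photo": "head", "painting": "head"}, weight=0.9)
b = model.add_node({"photo": "body", "painting": "body"}, weight=0.7)
model.add_arc(a, b, weight=0.5)
```

In the paper's setting, such a model would be learned from a database mixing photographs, drawings, and paintings; the multiple labels per node are what let a single model span depictive styles.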

Rights

© Springer International Publishing Switzerland 2014
