|Title:||Artistic image classification: an analysis on the PRINTART database|
|Citation:||Proceedings of the 12th European Conference on Computer Vision, held in Florence, Italy, 7-13 October, 2012 / A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato and C. Schmid (eds.): pp.143-157|
|Series/Report no.:||Lecture Notes in Computer Science; 7575|
|Conference Name:||European Conference on Computer Vision (12th : 2012 : Florence, Italy)|
|Author:||Gustavo Carneiro, Nuno Pinho da Silva, Alessio Del Bue and João Paulo Costeira|
|Abstract:||Artistic image understanding is an interdisciplinary research field of increasing importance for the computer vision and art history communities. For computer vision scientists, the problem offers challenges where new techniques can be developed; for art historians, it promises new automatic tools for art analysis. On the positive side, artistic images are generally constrained by compositional rules and artistic themes. However, the low-level texture and color features exploited for photographic image analysis are less effective here, because the color and texture patterns describing the visual classes in artistic images are inconsistent. In this work, we present a new database of 988 monochromatic artistic images with a global semantic annotation, a local compositional annotation, and a pose annotation of human subjects and animal types. In total, 75 visual classes are annotated: 27 relate to the theme of the art image, and 48 can be localized in the image with bounding boxes. Of these 48 classes, 40 have pose annotations, with 37 denoting human subjects and 3 representing animal types. We also provide a complete evaluation of several algorithms recently proposed for image annotation and retrieval, and we then present an algorithm achieving remarkable performance over the most successful algorithm proposed so far for this problem. Our main goal with this paper is to make this database, the evaluation process, and the benchmark results available to the computer vision community.|
|Rights:||© Springer-Verlag Berlin Heidelberg 2012|
|Appears in Collections:||Computer Science publications|
Files in This Item:
|RA_hdl_77055.pdf||Restricted Access||2.76 MB||Adobe PDF|