Style Modeling for Tagging Personal Photo Collections

M. Duan, Adrian Ulges, Thomas Breuel, X.-Q. Wu
Proceedings of the International Conference on Image and Video Retrieval, Santorini, Greece, ACM, New York, US, July 2009


While current image annotation methods treat each input image individually, users in practice tend to take multiple pictures at the same location, with the same setup, or over the same trip, such that the images to be labeled come in groups sharing a coherent "style". We present an approach for annotating such style-consistent batches of pictures. The method is inspired by previous work in handwriting recognition and models style as a latent random variable. For each style, a separate image annotation model is learned. When annotating a batch of images, style is inferred using maximum likelihood over the whole batch, and the style-specific model is used for an accurate tagging. In quantitative experiments on the COREL dataset and real-world photo stock downloaded from Flickr, we demonstrate that - by making use of the additional information that images come in style-consistent groups - our approach outperforms several baselines that tag images individually. Relative performance improvements of up to 80% are achieved, and on the COREL-5K benchmark the proposed method gives a mean recall/precision of 39%/25%, which is the best result reported to date.
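The batch-level style inference described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: `style_models` maps each style to a hypothetical per-image likelihood function, images are assumed conditionally independent given the style, and the style maximizing the joint log-likelihood of the whole batch is selected before tagging with that style's model.

```python
import math

def infer_style(batch, style_models):
    """Pick the style whose model maximizes the joint likelihood of the
    whole batch (images assumed independent given the latent style)."""
    best_style, best_ll = None, -math.inf
    for style, likelihood in style_models.items():
        # Sum of per-image log-likelihoods = log of the joint batch likelihood.
        ll = sum(math.log(likelihood(img)) for img in batch)
        if ll > best_ll:
            best_style, best_ll = style, ll
    return best_style

# Hypothetical toy models: each returns a likelihood for a scalar image feature.
style_models = {
    "beach":  lambda x: 0.9 if x > 0.5 else 0.1,
    "indoor": lambda x: 0.1 if x > 0.5 else 0.9,
}
batch = [0.8, 0.7, 0.9]  # features of one style-consistent batch
print(infer_style(batch, style_models))  # → beach
```

Once the style is fixed for the batch, every image in it would be tagged by the annotation model trained for that style, which is what lets the shared context disambiguate individually ambiguous images.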




@inproceedings{DUAN2009,
	Title = {Style Modeling for Tagging Personal Photo Collections},
	Author = {M. Duan and Adrian Ulges and Thomas Breuel and X.-Q. Wu},
	BookTitle = {Proceedings of the International Conference on Image and Video Retrieval},
	Month = {7},
	Year = {2009},
	Publisher = {ACM}
}

Last modified: 30.08.2016