Identifying Relevant Frames in Weakly Labeled Videos for Training Concept Detectors

Adrian Ulges, Christian Schulze, Daniel Keysers, Thomas Breuel
Proceedings of the 7th ACM International Conference on Image and Video Retrieval (CIVR 2008), pages 9-16, Niagara Falls, Canada, ACM, July 2008

Abstract:

A key problem in the automatic detection of semantic concepts (like "interview" or "soccer") in video streams is the manual acquisition of adequate training sets. Recently, we have proposed to use online videos downloaded from portals like youtube.com for this purpose, where the tags provided by users during video upload serve as ground truth annotations. The problem with such training data is that it is weakly labeled: annotations are only provided at the video level, and many shots of a video may be "non-relevant", i.e., not visually related to a tag. In this paper, we present a probabilistic framework for learning from such weakly annotated training videos in the presence of irrelevant content. In this framework, the relevance of keyframes is modeled as a latent random variable that is estimated during training. In quantitative experiments on real-world online videos and TV news data, we demonstrate that the proposed model leads to significantly increased robustness against irrelevant content and to better generalization of the resulting concept detectors.
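
As a rough illustration of the underlying idea (not the authors' actual model), the sketch below treats the relevance of each keyframe in a concept-tagged video as a latent variable and estimates it with an EM-style procedure. It uses a toy spherical-Gaussian setup in NumPy with a fixed background component fitted on a generic frame pool; the function name, features, and parameters are hypothetical stand-ins for whatever frame descriptors and relevance model are actually used.

import numpy as np

def em_frame_relevance(tagged_feats, generic_feats, n_iter=30):
    """Estimate P(relevant | frame) for keyframes of concept-tagged videos.

    tagged_feats  : (n, d) descriptors of keyframes from videos tagged with the
                    concept (weak labels: some frames are off-topic).
    generic_feats : (m, d) descriptors from a generic frame pool, used to fit a
                    fixed background model of non-relevant content.
    """
    d = tagged_feats.shape[1]
    # Fixed spherical-Gaussian background model fitted on the generic pool.
    mu_bg, var_bg = generic_feats.mean(0), generic_feats.var() + 1e-6
    # Initialize the concept component on all tagged frames, relevance prior 0.5.
    mu_rel, var_rel = tagged_feats.mean(0), tagged_feats.var() + 1e-6
    pi = 0.5

    def log_gauss(x, mu, var):
        return -0.5 * (((x - mu) ** 2).sum(1) / var + d * np.log(2 * np.pi * var))

    for _ in range(n_iter):
        # E-step: posterior relevance of each tagged frame under the current model.
        log_rel = np.log(pi) + log_gauss(tagged_feats, mu_rel, var_rel)
        log_bg = np.log(1.0 - pi) + log_gauss(tagged_feats, mu_bg, var_bg)
        m = np.maximum(log_rel, log_bg)
        resp = np.exp(log_rel - m) / (np.exp(log_rel - m) + np.exp(log_bg - m))
        # M-step: re-fit the concept component and the relevance prior;
        # the background model stays fixed.
        mu_rel = (resp[:, None] * tagged_feats).sum(0) / resp.sum()
        var_rel = (resp * ((tagged_feats - mu_rel) ** 2).sum(1)).sum() / (d * resp.sum()) + 1e-6
        pi = float(np.clip(resp.mean(), 1e-3, 1.0 - 1e-3))
    return resp

# Toy usage: on-topic frames cluster away from the background; the EM posteriors
# separate them, so detector training can down-weight the off-topic keyframes.
rng = np.random.default_rng(0)
on_topic = rng.normal(2.0, 0.5, size=(80, 2))
off_topic = rng.normal(-1.0, 1.0, size=(40, 2))
tagged = np.vstack([on_topic, off_topic])
generic = rng.normal(-1.0, 1.0, size=(200, 2))
post = em_frame_relevance(tagged, generic)
print("mean P(relevant), on-topic frames :", round(post[:80].mean(), 2))
print("mean P(relevant), off-topic frames:", round(post[80:].mean(), 2))

In the paper itself, such relevance estimates serve to discount irrelevant keyframes when training the concept detectors; the Gaussian components above are only placeholders for the actual frame representation and relevance model.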

Files:

  2008-IUPR-05Jun_1853.pdf

BibTex:

@inproceedings{ULGE2008,
	Title     = {Identifying Relevant Frames in Weakly Labeled Videos for Training Concept Detectors},
	Author    = {Adrian Ulges and Christian Schulze and Daniel Keysers and Thomas Breuel},
	BookTitle = {Proceedings of the 7th ACM International Conference on Image and Video Retrieval},
	Address   = {Niagara Falls, Canada},
	Publisher = {ACM},
	Pages     = {9--16},
	Month     = jul,
	Year      = {2008}
}

Last modified: 30.08.2016