Action recognition by exploring data distribution and feature correlation

Publication Type:
Conference Proceeding
Citation:
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2012, pp. 1370 - 1377
Issue Date:
2012-10-01
Human action recognition in videos draws strong research interest in computer vision because of its promising applications in video surveillance, video annotation, interactive gaming, etc. However, the amount of video data containing human actions is growing exponentially, which makes managing these resources challenging. Given a database with huge volumes of unlabeled videos, manually assigning specific action types to them is prohibitive. Considering that it is much easier to obtain a small number of labeled videos, a practical solution is to build a mechanism that annotates actions automatically by leveraging the limited labeled videos. Motivated by this intuition, we propose an automatic video annotation algorithm that integrates semi-supervised learning and shared structure analysis into a joint framework for human action recognition. We apply our algorithm to both synthetic and realistic video datasets, including KTH [20], the CareMedia dataset [1], the YouTube Action dataset [12], and its extended version, UCF50 [2]. Extensive experiments demonstrate that the proposed algorithm outperforms the compared algorithms for action recognition. Most notably, our method has a distinct advantage over the compared algorithms when only a few labeled samples are available. © 2012 IEEE.
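To illustrate the semi-supervised setting the abstract describes (a few labeled videos among many unlabeled ones), the following is a minimal sketch of graph-based label propagation on toy 2-D features. This is an assumed, generic illustration of semi-supervised classification, not the paper's joint framework; the feature data, RBF bandwidth, and iteration count are all hypothetical choices.

```python
import numpy as np

# Toy features: two Gaussian clusters standing in for two action classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(3.0, 0.3, (20, 2))])
y = np.full(40, -1)          # -1 marks an unlabeled sample
y[0], y[20] = 0, 1           # only one labeled example per class

# RBF affinity matrix and row-normalized transition matrix.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)
np.fill_diagonal(W, 0.0)
T = W / W.sum(axis=1, keepdims=True)

# Iteratively spread label mass along the graph, clamping labeled rows.
F = np.zeros((40, 2))
labeled = y >= 0
F[labeled, y[labeled]] = 1.0
for _ in range(100):
    F = T @ F
    F[labeled] = 0.0
    F[labeled, y[labeled]] = 1.0

pred = F.argmax(axis=1)      # predicted class for every sample
```

With two well-separated clusters, the single labeled point in each cluster is enough to label all of its neighbors, which mirrors the abstract's claim that leveraging a few labeled samples can annotate a large unlabeled pool.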