Parallel lasso for large-scale video concept detection

Publication Type:
Journal Article
Citation:
IEEE Transactions on Multimedia, 2012, 14(1), pp. 55-65
Issue Date:
2012-02-01
Abstract:
Existing video concept detectors are generally built upon kernel-based machine learning techniques such as support vector machines, regularized least squares, and logistic regression. However, building robust detectors raises scalability issues, because the learning process has to cope with high-dimensional multi-modality visual features and large numbers of keyframe examples. In this paper, we propose parallel lasso (Plasso), which introduces parallel distributed computation to significantly improve the scalability of lasso ($\ell_1$-regularized least squares). In a preprocessing step, we apply a parallel incomplete Cholesky factorization to approximate the covariance statistics, and we then optimize the model parameters with a parallel primal-dual interior-point method that exploits the Sherman-Morrison-Woodbury formula. For a dataset with $n$ samples in a $d$-dimensional space, Plasso significantly reduces both the computational time and the storage space complexities of lasso, provided that the system has $m$ processors and that the reduced dimension produced by the incomplete Cholesky factorization is much smaller than the original dimension $d$. Furthermore, we develop a kernel extension of the proposed linear algorithm with a sample reweighting schema, which achieves similar time and space complexity improvements for a dataset with $n$ training examples. Experimental results on the TRECVID video concept detection challenges suggest that the proposed method obtains significant time and space savings for training effective detectors with limited communication overhead. © 2012 IEEE.
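The abstract names two numerical building blocks: an incomplete (pivoted) Cholesky factorization that replaces the full covariance statistics with a low-rank factor, and the Sherman-Morrison-Woodbury formula that turns each regularized linear solve into operations on that factor. The sketch below is not the authors' parallel Plasso implementation; it is a minimal single-machine NumPy illustration of those two ingredients under simplifying assumptions, with a toy random positive semidefinite matrix standing in for the covariance and all function names chosen here for illustration.

```python
import numpy as np

def pivoted_cholesky(K, max_rank, tol=1e-10):
    """Incomplete (pivoted) Cholesky: return G (n x r) with K ~= G @ G.T."""
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()        # residual diagonal
    G = np.zeros((n, max_rank))
    for j in range(max_rank):
        i = int(np.argmax(d))                  # pivot on largest residual entry
        if d[i] <= tol:                        # matrix is numerically rank j
            return G[:, :j]
        G[:, j] = (K[:, i] - G[:, :j] @ G[i, :j]) / np.sqrt(d[i])
        d -= G[:, j] ** 2
    return G

def woodbury_solve(diag_d, G, b):
    """Solve (diag(diag_d) + G @ G.T) x = b via Sherman-Morrison-Woodbury.

    Costs O(n r^2 + r^3) instead of the O(n^3) of a dense solve.
    """
    Dinv_b = b / diag_d
    Dinv_G = G / diag_d[:, None]
    small = np.eye(G.shape[1]) + G.T @ Dinv_G  # r x r capacitance matrix
    return Dinv_b - Dinv_G @ np.linalg.solve(small, G.T @ Dinv_b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, true_rank, lam = 500, 20, 1e-2
    A = rng.standard_normal((n, true_rank))
    K = A @ A.T                                # toy low-rank PSD "covariance"
    G = pivoted_cholesky(K, max_rank=50)       # stops early near the true rank
    b = rng.standard_normal(n)

    x_fast = woodbury_solve(np.full(n, lam), G, b)    # low-rank solve
    x_ref = np.linalg.solve(lam * np.eye(n) + K, b)   # dense reference solve
    print("factor rank:", G.shape[1])
    print("relative error:", np.linalg.norm(x_fast - x_ref) / np.linalg.norm(x_ref))
```

The same pattern is what makes the low-rank preprocessing pay off: once the covariance is represented by a thin factor, every regularized solve inside an iterative optimizer touches only that factor rather than the full matrix.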