Multi-task support vector machines for feature selection with shared knowledge discovery

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Signal Processing, 2016, 120, pp. 746-753
Issue Date:
2016-03-01
Abstract:
Feature selection is an effective way to reduce computational cost and improve feature quality in large-scale multimedia analysis systems. In this paper, we propose a novel feature selection method in which a hinge loss function with an ℓ2,1-regularization term is used to learn a sparse feature selection matrix for each learning task. Meanwhile, shared information across the multiple tasks is also exploited by imposing a constraint that globally restricts the combined feature selection matrices to be low-rank. A convex optimization method is adopted within the framework by minimizing the trace norm of the matrix rather than minimizing its rank directly, and gradient descent is then applied to find the global optimum. Extensive experiments have been conducted on eight datasets covering different multimedia applications, including action recognition, face recognition, object recognition and scene recognition. The experimental results demonstrate that the proposed method outperforms the compared approaches. In particular, when the shared information across the multiple tasks is highly beneficial to multi-task learning, clear improvements can be observed.
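For illustration only (this sketch is not taken from the paper), an objective of the kind described in the abstract can be written as follows, where W_t denotes the feature selection matrix of task t, ℓ_hinge a (multi-class) hinge loss, ‖·‖_{2,1} the row-sparsity norm, ‖·‖_* the trace norm used as a convex surrogate for the rank of the stacked matrices, and λ, γ are assumed trade-off parameters:

\min_{W_1,\dots,W_T}\ \sum_{t=1}^{T}\left[\sum_{i=1}^{n_t}\ell_{\mathrm{hinge}}\!\left(y_i^{t},\,W_t^{\top}x_i^{t}\right)+\lambda\,\lVert W_t\rVert_{2,1}\right]+\gamma\,\bigl\lVert[\,W_1,\dots,W_T\,]\bigr\rVert_{*}

Replacing the rank with the trace norm keeps the problem convex, which is what allows the gradient-descent procedure mentioned in the abstract to reach a global optimum.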