NPrSVM: Nonparallel sparse projection support vector machine with efficient algorithm

Publisher:
Elsevier BV
Publication Type:
Journal Article
Citation:
Applied Soft Computing Journal, 2020, 90, pp. 106142-106142
Issue Date:
2020-05-01
File:
1-s2.0-S156849462030082X-main.pdf (Published Version, Adobe PDF, 2.34 MB)
Abstract:
The recently proposed projection twin support vector machine (PTSVM) is an excellent nonparallel classifier. However, PTSVM employs the least-squares loss function to measure its within-class empirical risk, resulting in several drawbacks, such as a non-sparse decision function, sensitivity to outliers, expensive matrix inversion, and inconsistency between the linear and nonlinear models. To alleviate these issues, in this paper we propose a novel nonparallel sparse projection support vector machine (NPrSVM). Unlike the original PTSVM, which squeezes the projected values of within-class instances toward their own class center, NPrSVM aims to cluster them as tightly as possible within an insensitive tube. Specifically, our NPrSVM has the following attractive merits: (i) Benefiting from the L1-norm symmetric Hinge loss function, NPrSVM not only enjoys a sparse decision function but also improves robustness to outliers. (ii) The dual formulation of NPrSVM no longer involves matrix inversion during training, which greatly reduces computing time compared to PTSVM. (iii) While the nonlinear formulation of PTSVM is not a direct extension of the linear one, the linear and nonlinear versions of NPrSVM are consistent. (iv) An efficient dual coordinate descent algorithm is further designed for NPrSVM to handle large-scale classification. Finally, the feasibility and effectiveness of NPrSVM are validated by extensive experiments on both synthetic and real-world datasets.
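
To make the abstract's central idea concrete, the sketch below illustrates an epsilon-insensitive, L1-norm "symmetric hinge" penalty on the projected values of a single class, i.e., the kind of within-class risk the abstract contrasts with PTSVM's least-squares loss. This is only an illustration under stated assumptions, not the authors' formulation or code: the names `w`, `X_pos`, and `eps`, as well as the choice of the projected median as the tube center, are hypothetical; the actual NPrSVM is trained by solving dual problems with a coordinate descent algorithm, as the abstract states.

```python
# Illustrative sketch only (not the authors' code): an epsilon-insensitive,
# L1-norm symmetric hinge penalty on the projected values of one class.
# `w`, `X_pos`, `eps`, and the median tube center are hypothetical choices.
import numpy as np

def within_class_tube_loss(w, X_pos, eps=0.1):
    """Penalize projected values falling outside an eps-tube around a
    robust center: only samples outside the tube contribute (sparsity),
    and deviations grow linearly (robustness to outliers)."""
    proj = X_pos @ w                      # projected values w^T x_i
    center = np.median(proj)              # a robust tube center (assumption)
    dev = np.abs(proj - center)           # absolute deviation from the center
    return np.maximum(0.0, dev - eps).sum()

# Toy usage: random 2-D points of one class and a candidate projection axis.
rng = np.random.default_rng(0)
X_pos = rng.normal(size=(20, 2))
w = np.array([1.0, -0.5])
print(within_class_tube_loss(w, X_pos))
```

Under this kind of loss, samples whose projections already lie inside the tube incur no penalty, which is where the sparseness and outlier robustness claimed in the abstract come from; a least-squares penalty, by contrast, charges every sample quadratically.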