On Sparse Feature Attacks in Adversarial Learning

Publication Type:
Conference Proceeding
Citation:
Proceedings - IEEE International Conference on Data Mining (ICDM 2014), pp. 1013-1018
Issue Date:
2014-01-01
File:
ad.pdf (Published version, Adobe PDF, 267.61 kB)
© 2014 IEEE. Adversarial learning is the study of machine learning techniques deployed in non-benign environments. Example applications include classifiers for detecting spam email, network intrusion detection, and credit card scoring. As the range of application domains of machine learning grows, the possibility and opportunity for adversarial behavior will only increase. Until now, the standard assumption in modeling adversarial behavior has been to empower an adversary to change all features of the classifier at will, with the adversary paying a cost proportional to the size of the attack. We refer to this form of adversarial behavior as a dense feature attack. However, the aim of an adversary is not just to subvert a classifier but to transform the data in such a way that spam continues to appear like spam to the user as much as possible. We demonstrate that an adversary achieves this objective by carrying out a sparse feature attack. We design an algorithm showing how a classifier should be constructed to be robust against sparse adversarial attacks. Our main insight is that sparse feature attacks are best defended by classifiers that use l1 regularizers.
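The following is a minimal, hypothetical sketch of the two ideas the abstract describes, not a reproduction of the paper's algorithm: an adversary that perturbs only a few high-weight features (a sparse feature attack) and an l1-regularized linear classifier as the suggested defense. The dataset, attack budget, and the greedy attack heuristic are illustrative assumptions; it only requires numpy and scikit-learn.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for "spam vs. ham" data (illustrative assumption).
X, y = make_classification(n_samples=500, n_features=50, n_informative=10,
                           random_state=0)

# l1-regularized (sparse-weight) classifier, in the spirit of the abstract's
# main insight; C controls the regularization strength.
clf_l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

# l2-regularized baseline for comparison.
clf_l2 = LogisticRegression(penalty="l2", solver="liblinear", C=0.1).fit(X, y)


def sparse_attack(x, clf, budget=3):
    """Greedy sparse feature attack (illustrative): perturb at most `budget`
    features, choosing those whose weights most strongly influence the score,
    and push each toward the benign class. The rest of the example is left
    unchanged, so the "spam" still looks like spam to the user."""
    x_adv = x.copy()
    w = clf.coef_.ravel()
    for j in np.argsort(-np.abs(w))[:budget]:
        x_adv[j] -= np.sign(w[j])  # move the decision score toward "ham"
    return x_adv


x = X[y == 1][0]                         # one "spam" example
x_adv = sparse_attack(x, clf_l2, budget=3)
print("l2 model before/after attack:",
      clf_l2.predict([x])[0], clf_l2.predict([x_adv])[0])
print("l1 model on the same perturbation:",
      clf_l1.predict([x])[0], clf_l1.predict([x_adv])[0])
print("nonzero weights (l1 vs l2):",
      np.count_nonzero(clf_l1.coef_), np.count_nonzero(clf_l2.coef_))

The intuition this sketch gestures at is that an l1 regularizer concentrates weight on a small, informative set of features, which changes how much leverage a budget-limited (sparse) adversary has; the paper's actual formulation and robustness analysis should be consulted for the precise claims.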