Is feature selection secure against training data poisoning?

Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

294 Scopus citations

Abstract

Learning in adversarial settings is becoming an important task for application domains where attackers may inject malicious data into the training set to subvert normal operation of data-driven technologies. Feature selection has been widely used in machine learning for security applications to improve generalization and computational efficiency, although it is not clear whether its use may be beneficial or even counterproductive when training data are poisoned by intelligent attackers. In this work, we shed light on this issue by providing a framework to investigate the robustness of popular feature selection methods, including LASSO, ridge regression and the elastic net. Our results on malware detection show that feature selection methods can be significantly compromised under attack (we can reduce LASSO to almost random choices of feature sets by careful insertion of less than 5% poisoned training samples), highlighting the need for specific countermeasures.
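The abstract describes evaluating how much a feature selector's output changes once a small fraction of poisoned samples enters the training set. The snippet below is a minimal, hypothetical sketch of that kind of stability measurement only: it inserts randomly crafted samples (not the optimization-based poisoning attack developed in the paper) and compares the LASSO-selected feature sets before and after contamination via their Jaccard overlap. Dataset, alpha value, and the contamination strategy are all illustrative assumptions.

```python
# Hypothetical sketch: how much does a LASSO-selected feature set change
# when a small fraction of crafted samples is added to the training data?
# This is NOT the paper's attack; it only illustrates the stability
# measurement that such an evaluation relies on.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Clean training data with a sparse ground-truth model.
X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       noise=1.0, random_state=0)

def selected_features(X, y, alpha=0.1):
    """Indices of features with non-zero LASSO coefficients."""
    model = Lasso(alpha=alpha, max_iter=10000).fit(X, y)
    return set(np.flatnonzero(model.coef_))

def jaccard(a, b):
    """Overlap between two feature sets (1 = identical, 0 = disjoint)."""
    return len(a & b) / max(len(a | b), 1)

clean_set = selected_features(X, y)

# Inject a 5% fraction of crude adversarial-looking points: random feature
# vectors paired with an extreme response value.
frac = 0.05
n_poison = int(frac * len(y))
X_p = rng.normal(0.0, X.std(), size=(n_poison, X.shape[1]))
y_p = np.full(n_poison, y.max())

poisoned_set = selected_features(np.vstack([X, X_p]),
                                 np.concatenate([y, y_p]))

print(f"clean features:    {sorted(clean_set)}")
print(f"poisoned features: {sorted(poisoned_set)}")
print(f"Jaccard overlap with {frac:.0%} injected samples: "
      f"{jaccard(clean_set, poisoned_set):.2f}")
```

An overlap near 1 means the selected feature set is unchanged; the paper's result is that a carefully optimized (rather than random) 5% injection can drive this overlap for LASSO down to what random feature choices would give.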

Original language: English
Title of host publication: 32nd International Conference on Machine Learning, ICML 2015
Editors: David Blei, Francis Bach
Publisher: International Machine Learning Society (IMLS)
Pages: 1689-1698
Number of pages: 10
ISBN (Electronic): 9781510810587
State: Published - 2015
Event: 32nd International Conference on Machine Learning, ICML 2015 - Lille, France
Duration: 6 Jul 2015 - 11 Jul 2015

Publication series

Name: 32nd International Conference on Machine Learning, ICML 2015
Volume: 2

Conference

Conference: 32nd International Conference on Machine Learning, ICML 2015
Country/Territory: France
City: Lille
Period: 6/07/15 - 11/07/15
