Adversarial attack, defense, and applications with deep learning frameworks
- Publication Type: Chapter
- Citation: Advanced Sciences and Technologies for Security Applications, 2019, pp. 1 - 25
- Issue Date: 2019-01-01
Closed Access
Filename | Description | Size
---|---|---
Yin2019_Chapter_AdversarialAttackDefenseAndApp.pdf | Published version | 1.08 MB
This item is closed access and not available.
© Springer Nature Switzerland AG 2019. In recent years, deep learning frameworks have been applied in many domains and achieved promising performance. However, recent work has demonstrated that deep learning frameworks are vulnerable to adversarial attacks: a trained neural network can be manipulated by small perturbations added to legitimate samples. In the computer vision domain, these small perturbations can be imperceptible to humans. As deep learning techniques have become a core component of many security-critical applications, including identity-recognition cameras, malware-detection software, and self-driving cars, adversarial attacks have become a crucial security threat to many deep learning applications in the real world. In this chapter, we first review some state-of-the-art adversarial attack techniques for deep learning frameworks in both white-box and black-box settings. We then discuss recent methods to defend against adversarial attacks on deep learning frameworks. Finally, we explore recent work applying adversarial attack techniques to some popular commercial deep learning applications, such as image classification, speech recognition, and malware detection. These projects demonstrate that many commercial deep learning frameworks are vulnerable to malicious cyber security attacks.
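To make the abstract's notion of "small perturbations added to legitimate samples" concrete, below is a minimal sketch of one classic white-box attack of the kind the chapter surveys, the fast gradient sign method (FGSM). This is an illustrative assumption, not code from the chapter; the model, labels, and the epsilon value are placeholders.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example from a legitimate input x with true
    label y, bounding the perturbation by epsilon in the L-infinity norm."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss: the sign of the gradient.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Clamp so the perturbed image stays a valid input in [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

With a small epsilon (e.g. 0.03 for inputs scaled to [0, 1]), the perturbed image is typically indistinguishable from the original to a human observer, yet can flip the network's prediction; black-box variants achieve a similar effect without access to the model's gradients.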