Generating Natural Language Adversarial Examples
Moustafa Alzantot and 5 other authors

Abstract: Deep neural networks (DNNs) are vulnerable to adversarial examples: perturbations to correctly classified examples that cause the model to misclassify. In the image domain, these perturbations are often virtually indistinguishable to human perception, causing humans and state-of-the-art models to disagree. In the natural language domain, however, small perturbations are clearly perceptible, and replacing a single word can drastically alter the semantics of a document. Given these challenges, we use a black-box population-based optimization algorithm to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models with success rates of 97% and 70%, respectively. We additionally demonstrate that 92.3% of the successful sentiment analysis adversarial examples are classified to their original label by 20 human annotators, and that the examples are perceptibly quite similar. Finally, we discuss an attempt to use adversarial training as a defense, but it fails to yield improvement, demonstrating the strength and diversity of our adversarial examples. We hope our findings encourage researchers to pursue improving the robustness of DNNs in the natural language domain.
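The attack the abstract describes is a black-box, population-based (genetic) search over word substitutions. The sketch below illustrates that general idea only; it is not the authors' released implementation. The `predict` callable (a black-box classifier returning class probabilities), the `neighbors` synonym table (e.g., nearest neighbors in an embedding space), and all hyperparameters are illustrative assumptions, and the paper's full method applies additional semantic and syntactic filtering of candidate replacements that is omitted here.

```python
import random
from typing import Callable, Dict, List, Sequence, Tuple


def genetic_attack(
    tokens: List[str],
    target_label: int,
    predict: Callable[[List[str]], Sequence[float]],  # assumed black-box model: tokens -> class probabilities
    neighbors: Dict[str, List[str]],                  # assumed synonym table: word -> candidate replacements
    pop_size: int = 20,
    max_generations: int = 50,
    seed: int = 0,
) -> Tuple[List[str], bool]:
    """Minimal sketch of a black-box, population-based search for a
    word-substitution adversarial example (not the authors' code)."""
    rng = random.Random(seed)

    def perturb(cand: List[str]) -> List[str]:
        # Mutation: swap one randomly chosen word for one of its neighbors.
        positions = [i for i, w in enumerate(cand) if neighbors.get(w)]
        out = list(cand)
        if positions:
            i = rng.choice(positions)
            out[i] = rng.choice(neighbors[out[i]])
        return out

    def fitness(cand: List[str]) -> float:
        # Fitness: probability the model assigns to the attacker's target label.
        return predict(cand)[target_label]

    population = [perturb(tokens) for _ in range(pop_size)]
    best = population[0]
    for _ in range(max_generations):
        scores = [fitness(c) for c in population]
        best = population[max(range(pop_size), key=scores.__getitem__)]
        probs = predict(best)
        if max(range(len(probs)), key=probs.__getitem__) == target_label:
            return best, True  # model now predicts the target label
        # Elitism (keep the best candidate), then fill the next generation
        # by fitness-proportional parent sampling + uniform crossover + mutation.
        total = sum(scores)
        weights = [s / total for s in scores] if total > 0 else None
        children = [best]
        while len(children) < pop_size:
            p1, p2 = rng.choices(population, weights=weights, k=2)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            children.append(perturb(child))
        population = children
    return best, False  # search budget exhausted without success


# Hypothetical usage: with a toy classifier and a tiny synonym table,
# genetic_attack(["the", "movie", "was", "great"], target_label=0,
#                predict=my_model, neighbors={"great": ["fine", "good"]})
# returns a perturbed token list and a success flag.
```

The elitism step guarantees the best candidate is never lost, while fitness-proportional sampling biases the search toward substitutions that raise the target-label probability; the success rates quoted in the abstract come from the authors' full attack, not this simplified sketch.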
Comments: Accepted in EMNLP 2018 (Conference on Empirical Methods in Natural Language Processing)
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:1804.07998 [cs.CL] (or arXiv:1804.07998v2 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.1804.07998
From: Moustafa Alzantot
[v1] Sat, 21 Apr 2018 17:02:20 UTC (159 KB)
[v2] Mon, 24 Sep 2018 20:29:35 UTC (29 KB)