Direction-aggregated Attack for Transferable Adversarial Examples

Research output: Contribution to journal › Journal article › Academic › peer review

10 Citations (Scopus)
136 Downloads (Pure)

Abstract

Deep neural networks are vulnerable to adversarial examples crafted by imposing imperceptible changes on the inputs. However, these adversarial examples are most successful in white-box settings, where the model and its parameters are available. Finding adversarial examples that transfer to other models, or that are developed in a black-box setting, is significantly more difficult. In this article, we propose the Direction-aggregated adversarial attack, which delivers transferable adversarial examples. Our method aggregates gradient directions during the attack process to keep the generated adversarial examples from overfitting to the white-box model. Extensive experiments on ImageNet show that our proposed method significantly improves the transferability of adversarial examples and outperforms state-of-the-art attacks, especially against adversarially trained models. The best average attack success rate of our proposed method reaches 94.6% against three adversarially trained models and 94.8% against five defense methods. Our results also reveal that current defense approaches do not prevent transferable adversarial attacks.
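
The abstract gives only a high-level description of the method. As a rough illustration, the sketch below shows what a direction-aggregated variant of an iterative sign-gradient attack could look like in PyTorch: at each step the loss gradient is averaged over several randomly perturbed copies of the current adversarial example, and the aggregated direction drives the update. The sampling scheme, the number of directions n_directions, the noise scale sigma, and the step sizes are illustrative assumptions, not the algorithm as published in the article.

    # Hypothetical sketch of a direction-aggregated iterative attack
    # (illustrative only; not the authors' exact published algorithm).
    import torch
    import torch.nn.functional as F

    def direction_aggregated_attack(model, x, y, eps=16/255, alpha=2/255,
                                    steps=10, n_directions=8, sigma=0.05):
        # n_directions and sigma are assumed hyperparameters for this sketch.
        x_adv = x.clone().detach()
        for _ in range(steps):
            grad_sum = torch.zeros_like(x)
            for _ in range(n_directions):
                # Sample a nearby point and compute the loss gradient there.
                x_near = (x_adv + sigma * torch.randn_like(x_adv)).clamp(0, 1)
                x_near = x_near.detach().requires_grad_(True)
                loss = F.cross_entropy(model(x_near), y)
                grad_sum = grad_sum + torch.autograd.grad(loss, x_near)[0]
            # The aggregated direction drives a sign-gradient step.
            direction = grad_sum / n_directions
            x_adv = x_adv.detach() + alpha * direction.sign()
            # Project back into the epsilon ball around the clean input x.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
        return x_adv.detach()

In a transferability evaluation, adversarial examples produced this way against a white-box surrogate model would then be fed to held-out black-box or adversarially trained models to measure the attack success rate.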

Original language: English
Article number: 60
Number of pages: 22
Journal: ACM Journal on Emerging Technologies in Computing Systems
Volume: 18
Issue number: 3
DOIs
Status: Published - Jul 2022

Bibliographic note

Publisher Copyright:
© 2022 Copyright held by the owner/author(s)
