Privacy Preserving Statistical Detection of Adversarial Instances

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Adversarial instances are malicious inputs designed by attackers to cause a classification model to make a false prediction, e.g. in spam detection. Effective solutions have been proposed to detect and block adversarial instances in real time. Still, the proposed approaches fail to detect adversarial instances over private input (as required by many online platforms analyzing sensitive personal data). In this work, we propose a novel framework that applies a statistical test to detect adversarial instances when the data under analysis are in private format. The practical feasibility of our approach in terms of computation cost is shown through an experimental evaluation.
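The abstract does not specify which statistical test the framework uses, nor how it operates under homomorphic encryption. As a hedged illustration of the general idea only, the sketch below runs a common statistical detector for adversarial batches, a kernel two-sample test based on Maximum Mean Discrepancy (MMD), on plaintext data; the function names, the synthetic data, and the choice of test are assumptions for illustration, not the authors' method.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # Pairwise RBF kernel values between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.1):
    # Biased estimate of the squared Maximum Mean Discrepancy between
    # the sample distributions of X and Y; near zero when they match.
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(200, 10))    # stand-in for clean inputs
same = rng.normal(0.0, 1.0, size=(200, 10))     # second clean batch
suspect = rng.normal(0.5, 1.0, size=(200, 10))  # shifted batch, a crude proxy
                                                # for an adversarial distribution

# The shifted batch should show a clearly larger discrepancy from the
# clean reference sample than a second clean batch does.
print(mmd2(clean, suspect), mmd2(clean, same))
```

In a privacy-preserving setting such as the one the paper targets, the kernel and mean computations would have to be evaluated over encrypted inputs (e.g. homomorphically), which is where the computation cost studied in the experimental evaluation comes from.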

Original language: English
Title of host publication: Proceedings - 2020 IEEE 29th International Conference on Enabling Technologies
Subtitle of host publication: Infrastructure for Collaborative Enterprises, WETICE 2020
Publisher: IEEE Computer Society
Pages: 159-164
Number of pages: 6
ISBN (Electronic): 9781728169750
DOIs
Publication status: Published - Sep 2020
Event: 29th IEEE International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises, WETICE 2020 - Virtual, Bayonne, France
Duration: 4 Nov 2020 - 6 Nov 2020

Conference

Conference: 29th IEEE International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises, WETICE 2020
Country: France
City: Virtual, Bayonne
Period: 4/11/20 - 6/11/20

Keywords

  • Adversarial Machine Learning
  • Homomorphic Encryption
  • Privacy
