
Incrementing Adversarial Robustness with Autoencoding for Machine Learning Model Attacks

Abstract

Machine learning is now widely used, and attacks against the machine learning process have emerged alongside it. Such model attacks can cause misclassification, disrupt decision mechanisms, and evade filters. In this study, robustness against these attacks is improved by means of an autoencoder, demonstrated with non-targeted attacks against a model trained on the MNIST dataset. The results and improvements are presented for the most common and important attack method, the non-targeted attack.
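
The sketch below illustrates the general idea of an autoencoder-based defense as described in the abstract; it is not the paper's implementation. A denoising autoencoder is trained on MNIST and placed in front of a classifier so that perturbed inputs are projected back toward the clean data manifold before prediction. Gaussian noise stands in here for the non-targeted adversarial perturbations used in the paper, and all layer sizes and hyperparameters are illustrative assumptions.

# Minimal sketch of an autoencoder "filtering" defense on MNIST (assumptions noted above).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., None] / 255.0
x_test = x_test.astype("float32")[..., None] / 255.0

# Denoising autoencoder: reconstruct clean digits from perturbed ones.
autoencoder = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.UpSampling2D(2),
    layers.Conv2D(1, 3, activation="sigmoid", padding="same"),
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Gaussian noise is a stand-in for adversarial perturbations in this sketch.
x_noisy = np.clip(x_train + 0.2 * np.random.normal(size=x_train.shape), 0.0, 1.0)
autoencoder.fit(x_noisy, x_train, epochs=5, batch_size=128, verbose=1)

# Simple MNIST classifier trained on clean data.
classifier = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
classifier.fit(x_train, y_train, epochs=3, batch_size=128, verbose=1)

# Defense at inference time: pass inputs through the autoencoder first.
def predict_with_defense(x):
    return classifier.predict(autoencoder.predict(x, verbose=0), verbose=0)

print(np.argmax(predict_with_defense(x_test[:5]), axis=1), y_test[:5])

In this setup the classifier itself is unchanged; robustness comes from the reconstruction step, which removes much of the perturbation before the input reaches the decision mechanism.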

Publication
27th IEEE Signal Processing and Applications Conference
Date