%0 Journal Article
%T Proposing a Robust Method Against Adversarial Attacks Using Scalable Gaussian Process and Voting
%J Nashriyyah-i Muhandisi-i Barq va Muhandisi-i Kampyutar-i Iran
%I Iranian Research Institute for Electrical Engineering
%Z 16823745
%A Mehran Safayani
%A Pooyan Shalbafan
%A Seyed Hashem Ahmadi
%A Mahdieh Falah Aliabadi
%A Abdolreza Mirzaei
%D 1400
%\ 1400/12/14
%V 4
%N 19
%P 275-288
%! Proposing a Robust Method Against Adversarial Attacks Using Scalable Gaussian Process and Voting
%K Neural networks
%K Gaussian process
%K scalable Gaussian process
%K adversarial examples
%X In recent years, the vulnerability of machine learning models has drawn increasing attention, showing that learning models are not highly robust against attacks. One of the best-known attacks is the injection of adversarial examples into the model, against which neural networks, and deep neural networks in particular, are the most vulnerable. Adversarial examples are generated by adding a small amount of purposeful noise to the original examples, so that a human observer notices no change in the data while machine learning models misclassify it. Gaussian processes are among the most successful methods for modeling data uncertainty, yet they have received little attention in the field of adversarial examples. One reason may be their high computational cost, which limits their use in real-world problems. In this paper, a scalable Gaussian process model based on random features is used. In addition to retaining the ability of Gaussian processes to model data uncertainty properly, this model is also desirable in terms of computational cost. A voting-based procedure is then presented to counter adversarial examples. In addition, a method based on automatic relevance determination is proposed to weight the important regions of the images and incorporate them into the kernel function of the Gaussian process. The results show that the proposed model performs very well against the fast gradient sign attack compared with competing methods.
%U http://rimag.ir/fa/Article/28868