Machine Learning Robustness to Adversarial Examples: Methodology and Defenses
(1) Institut Teknologi Sains Bandung
(2) Institut Teknologi Sains Bandung
(3) Institut Teknologi Sains Bandung
(4) Institut Teknologi Sains Bandung
(5) Institut Teknologi Sains Bandung
(*) Corresponding Author
Abstract
Full Text: PDF (Indonesian)

References
J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Jun. 2016, pp. 779–788. doi: 10.1109/CVPR.2016.91.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, 1998, doi: 10.1109/5.726791.
C. Szegedy et al., “Intriguing properties of neural networks,” 2nd Int. Conf. Learn. Represent. ICLR 2014 - Conf. Track Proc., Dec. 2013, [Online]. Available: http://arxiv.org/abs/1312.6199
I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., pp. 1–11, 2015.
D. Eagleman, The Brain: The Story of You. Canongate Books, 2015.
N. Carlini and D. Wagner, “Towards Evaluating the Robustness of Neural Networks,” in 2017 IEEE Symposium on Security and Privacy (SP), IEEE, May 2017, pp. 39–57. doi: 10.1109/SP.2017.49.
T. Mitchell, “Introduction to machine learning,” Mach. Learn., vol. 7, pp. 2–5, 1997.
C. M. Bishop, Pattern Recognition and Machine Learning. Springer, New York, 2006.
J. MacQueen, “Some methods for classification and analysis of multivariate observations,” in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, 1967, pp. 281–297.
R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction. MIT press, 2018.
Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015, doi: 10.1038/nature14539.
K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, Nov. 1997, doi: 10.1162/neco.1997.9.8.1735.
I. Goodfellow, Y. Bengio, and A. Courville, Deep learning. MIT press, 2016.
D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 2015.
L. Fei-Fei and P. Perona, “A Bayesian hierarchical model for learning natural scene categories,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2005, pp. 524–531.
S. Amari, “Backpropagation and stochastic gradient descent method,” Neurocomputing, vol. 5, no. 4–5, pp. 185–196, 1993.
L. Bottou, “Large-scale machine learning with stochastic gradient descent,” in Proceedings of COMPSTAT’2010: 19th International Conference on Computational Statistics, Paris, France, Aug. 22–27, 2010, pp. 177–186.
A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards Deep Learning Models Resistant to Adversarial Attacks,” 6th Int. Conf. Learn. Represent. ICLR 2018 - Conf. Track Proc., pp. 1–28, Jun. 2017, [Online]. Available: http://arxiv.org/abs/1706.06083
H.-Y. Chen et al., “Improving Adversarial Robustness via Guided Complement Entropy,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), IEEE, Oct. 2019, pp. 4880–4888. doi: 10.1109/ICCV.2019.00498.
K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps,” 2nd Int. Conf. Learn. Represent. ICLR 2014 - Work. Track Proc., pp. 1–8, Dec. 2013, [Online]. Available: http://arxiv.org/abs/1312.6034
K. Warr, Strengthening Deep Neural Networks, 1st ed. O’Reilly Media, Inc., 2019.
L. van der Maaten and G. Hinton, “Visualizing Data using t-SNE,” J. Mach. Learn. Res., 2008, [Online]. Available: https://jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf
R. Huang, B. Xu, D. Schuurmans, and C. Szepesvari, “Learning with a Strong Adversary,” pp. 1–12, 2015, [Online]. Available: http://arxiv.org/abs/1511.03034
N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks,” in 2016 IEEE Symposium on Security and Privacy (SP), IEEE, May 2016, pp. 582–597. doi: 10.1109/SP.2016.41.
A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” 5th Int. Conf. Learn. Represent. ICLR 2017 - Work. Track Proc., pp. 1–14, Jul. 2016, [Online]. Available: http://arxiv.org/abs/1607.02533
C. Xiao, B. Li, J. Y. Zhu, W. He, M. Liu, and D. Song, “Generating adversarial examples with adversarial networks,” IJCAI Int. Jt. Conf. Artif. Intell., vol. 2018-July, pp. 3905–3911, 2018, doi: 10.24963/ijcai.2018/543.
S. Baluja and I. Fischer, “Learning to attack: Adversarial transformation networks,” 32nd AAAI Conf. Artif. Intell. AAAI 2018, no. 1, pp. 2687–2695, 2018.
M. Bojarski et al., “End to end learning for self-driving cars,” arXiv preprint arXiv:1604.07316, 2016.
A. K. Jain and S. Z. Li, Handbook of face recognition, vol. 1. Springer, 2011.
X. Yuan, P. He, Q. Zhu, and X. Li, “Adversarial Examples: Attacks and Defenses for Deep Learning,” IEEE Trans. Neural Networks Learn. Syst., vol. 30, no. 9, pp. 2805–2824, Sep. 2019, doi: 10.1109/TNNLS.2018.2886017.
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90.
M. Hein and M. Andriushchenko, “Formal guarantees on the robustness of a classifier against adversarial manipulation,” Adv. Neural Inf. Process. Syst., vol. 30, 2017.
L. Beggel, M. Pfeiffer, and B. Bischl, “Robust anomaly detection in images using adversarial autoencoders,” in Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2019, Würzburg, Germany, September 16–20, 2019, Proceedings, Part I, 2020, pp. 206–222.
F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu, and J. Zhu, “Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Jun. 2018, pp. 1778–1787. doi: 10.1109/CVPR.2018.00191.
A. Athalye, N. Carlini, and D. Wagner, “Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples,” 35th Int. Conf. Mach. Learn. ICML 2018, vol. 1, pp. 436–448, Feb. 2018, [Online]. Available: http://arxiv.org/abs/1802.00420
A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” in 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings, 2017.
DOI: https://doi.org/10.30998/faktorexacta.v18i2.26078
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
