Please use this identifier to cite or link to this item:
https://elib.bsu.by/handle/123456789/248666
Title: | A comparative study of white-box and black-box adversarial attacks to the deep neural networks with different architectures |
Authors: | Voynov, D. M.; Kovalev, V. A. |
Keywords: | ЭБ БГУ::Natural and Exact Sciences::Cybernetics |
Issue Date: | 2020 |
Publisher: | Minsk : BSU |
Citation: | Computer Technologies and Data Analysis (CTDA’2020) : proceedings of the II International Scientific and Practical Conference, Minsk, 23–24 April 2020 / Belarusian State University ; ed. board: V. V. Skakun (ed.-in-chief) [et al.]. – Minsk : BSU, 2020. – P. 185–189. |
Abstract: | A few years ago it was discovered that deep convolutional neural networks (CNNs) are vulnerable to so-called adversarial attacks. An adversarial attack is a subtle modification of an original image such that the changes are almost invisible to the human eye, yet the network misclassifies the modified image. In this work, we concentrate on biomedical images, which play a key role in disease diagnosis and in monitoring various treatment processes. We present detailed results on the success rate of both white-box and black-box untargeted attacks against five popular deep CNN architectures: InceptionV3, Xception, ResNet50, DenseNet121, and MobileNet. |
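The abstract mentions white-box untargeted attacks on standard CNN architectures. As a purely illustrative aid (not taken from the paper itself), the sketch below shows one of the simplest such attacks, the Fast Gradient Sign Method; the framework (PyTorch/torchvision), the model choice, and the epsilon value are all assumptions.

```python
# Hypothetical sketch of a white-box untargeted attack (FGSM) on a pretrained CNN.
# This is NOT the authors' experimental setup; all names and parameters are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

# One of the architectures named in the abstract; weights are ImageNet, not biomedical.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

def fgsm_untargeted(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return a copy of `image` perturbed so the change is small but the loss
    for the true class increases, which tends to flip the prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the gradient of the loss w.r.t. the input pixels.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid range (input normalization is omitted for brevity).
    return adversarial.clamp(0.0, 1.0).detach()

# Usage: x is a (1, 3, 224, 224) float tensor in [0, 1], y the true class index.
# x_adv = fgsm_untargeted(x, torch.tensor([y]))
```

A black-box attack, by contrast, would not use the model's gradients, relying instead only on the model's outputs or on transfer from a substitute model.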
Description: | Section «Machine and Deep Learning Systems» |
URI: | https://elib.bsu.by/handle/123456789/248666 |
ISBN: | 978-985-566-942-6 |
Appears in Collections: | 2020. Computer Technologies and Data Analysis (CTDA’2020) |
Files in This Item:
File | Description | Size | Format
---|---|---|---
185-189.pdf | | 317.37 kB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.