Incontri del Giovedì (Thursday Meetings) - 13 April 2023

"Giovani di IEIIT" (Young Researchers of IEIIT) edition: Dr. Sara Narteni, Dr. Alberto Carlevaro

 

CNR-IEIIT organizes a biweekly seminar series called "Incontri del giovedì" (Thursday Meetings), in which the research topics that characterize the Institute are addressed with the support of prominent speakers from scientific, academic, and industrial research. The series takes a cross-cutting view of application domains and technological areas, with an eye toward their evolution.

The seminar, in its "Giovani di IEIIT" edition, was held online on Thursday, 13 April 2023 at 5:30 PM on the Microsoft Teams platform.

Link to the recorded event:

https://youtu.be/M1i0hkSkFwQ

Speakers: Dr. Sara Narteni (PhD), Dr. Alberto Carlevaro (PhD)

Seminar title:

Countermeasures against adversarial machine learning based on eXplainable and Reliable Artificial Intelligence

Abstract:

Machine learning (ML) algorithms are nowadays widely adopted in different contexts to make autonomous decisions and predictions. The large volumes of data shared in recent years have made ML algorithms more accurate and reliable, thanks to more thorough training and testing. An important issue to consider when deploying ML algorithms is adversarial machine learning: attacks that craft manipulated data to mislead the algorithm's decisions.
In this talk, we will present our research on new approaches able to detect and mitigate adversarial machine learning attacks against an ML system. In particular, we investigate the Carlini-Wagner (CW), fast gradient sign method (FGSM), and Jacobian-based saliency map (JSMA) attacks. The aim of the work is to exploit detection algorithms as countermeasures to these attacks. Initially, we performed tests using canonical ML algorithms with hyperparameter optimization to improve the metrics. We then adopted original reliable-AI algorithms, based either on eXplainable AI (the Logic Learning Machine) or on Support Vector Data Description (SVDD). The results show that classical algorithms may fail to identify an adversarial attack, while the reliable-AI methodologies are more likely to detect one correctly. The proposed methodology was evaluated in terms of a good balance between false positive rate (FPR) and false negative rate (FNR) on real-world application datasets: Domain Name System (DNS) tunneling, vehicle platooning, and Remaining Useful Life (RUL) prediction. In addition, a statistical analysis was performed to improve the robustness of the trained models, including an evaluation of their runtime and memory consumption.
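To give a concrete sense of one of the attacks named above, here is a minimal, hypothetical sketch of the fast gradient sign method (FGSM) against a toy logistic-regression model. The model weights, input, and epsilon value are illustrative assumptions, not taken from the speakers' work; FGSM simply perturbs the input in the direction of the sign of the loss gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """FGSM against a logistic-regression model (toy example).

    Computes x_adv = x + eps * sign(dL/dx), where for the binary
    cross-entropy loss of logistic regression dL/dx = (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad = (p - y) * w              # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad)

# Hypothetical model and benign input (illustrative values)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])            # w.x + b = 0.8 > 0, predicted class 1
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=0.6)
# Each feature moves by exactly eps, yet the predicted class flips
print(np.max(np.abs(x_adv - x)))                        # 0.6
print(sigmoid(np.dot(w, x) + b) > 0.5,
      sigmoid(np.dot(w, x_adv) + b) > 0.5)              # True False
```

The detection approaches discussed in the talk aim to flag such perturbed inputs before they mislead the deployed model.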

 

Recordings of the previous seminars in the series are available on the Institute's YouTube channel in the "Incontri del Giovedì" playlist at the following link: https://cutt.ly/playlist-incontri-del-giovedi