   __ __         __   ________  __ 
  / // /__ _____/ /__/ ___/ _ \/ / 
 / _  / _ `/ __/  '_/ (_ / // / /__
/_//_/\_,_/\__/_/\_\\___/____/____/

Torta Ahogada track
            
talk.exe

NeuroInvasion: Penetrating the Core of Artificial Intelligence

Chen Shiri
Cyber Security Researcher, Accenture Security

This presentation delves into my new research and methodologies for attacking Deep Neural Networks (DNNs) and AI models in black-box environments, where the attacker has no access to internal parameters.
Traditionally, adversarial attacks require access to a model's internals (white-box access), which limits their applicability in black-box settings. This talk introduces **two innovative techniques** that bypass this restriction. Attendees will gain a deep understanding of how these techniques work, from identifying a model's architecture through **model enumeration** to adapting **white-box attack strategies** for black-box models.
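To make these two ideas concrete, here is a minimal, generic sketch of how such an attack chain is commonly assembled; it is not the speaker's actual method. A handful of probe inputs are sent to the target, the local candidate architecture whose predictions agree most with the target's is picked as a surrogate, and a white-box FGSM perturbation crafted on that surrogate is transferred to the black-box model. The candidate list, the simulated black-box target, and all function names below are illustrative assumptions.

```python
# Generic sketch of the two ideas named in the abstract, under heavy
# assumptions -- NOT the speaker's actual technique.
#   1. "Model enumeration": guess the target's architecture by checking which
#      local candidate agrees most often with the black-box top-1 predictions.
#   2. Transfer attack: run white-box FGSM on the best-matching surrogate and
#      submit the result to the black-box target.
# The "black box" is simulated here by a hidden local ResNet-50 queried only
# for labels; in practice it would be a remote prediction API.

import torch
import torch.nn.functional as F
from torchvision import models

_hidden_target = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def query_black_box_top1(images: torch.Tensor) -> torch.Tensor:
    """Label-only access: no gradients, no parameters, just top-1 class ids."""
    with torch.no_grad():
        return _hidden_target(images).argmax(dim=1)

def enumerate_architecture(probes: torch.Tensor, candidates: dict) -> str:
    """Return the candidate whose top-1 labels agree most with the target's."""
    target_labels = query_black_box_top1(probes)
    scores = {}
    for name, model in candidates.items():
        with torch.no_grad():
            local_labels = model(probes).argmax(dim=1)
        scores[name] = (local_labels == target_labels).float().mean().item()
    return max(scores, key=scores.get)

def fgsm_on_surrogate(surrogate, image, label, epsilon=0.03):
    """One-step FGSM using gradients of the local surrogate only."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(image), label)
    loss.backward()
    # Step along the sign of the surrogate's loss gradient.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()

if __name__ == "__main__":
    candidates = {
        "resnet18": models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval(),
        "mobilenet_v3": models.mobilenet_v3_small(
            weights=models.MobileNet_V3_Small_Weights.DEFAULT).eval(),
    }
    probes = torch.rand(8, 3, 224, 224)            # stand-in probe images
    surrogate = candidates[enumerate_architecture(probes, candidates)]

    x = torch.rand(1, 3, 224, 224)                 # stand-in input image
    y = query_black_box_top1(x)                    # target's current label for x
    x_adv = fgsm_on_surrogate(surrogate, x, y)
    print("label flipped:", query_black_box_top1(x_adv).item() != y.item())
```

The only contact with the target is through `query_black_box_top1`; all gradient computation happens on the local surrogate, which is what lets a white-box attack strategy be reused in a black-box setting.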
