hackgdl.exe
   __ __         __   ________  __ 
  / // /__ _____/ /__/ ___/ _ \/ / 
 / _  / _ `/ __/  '_/ (_ / // / /__
/_//_/\_,_/\__/_/\_\\___/____/____/

Torta Ahogada Track
            
talk.exe

Testing LLM Algorithms While AI Tests Us

In an era where artificial intelligence (AI) and Large Language Models (LLMs) are becoming integral to our digital interactions, ensuring their security and usability is paramount. This presentation embarks on a journey through the compelling intersection of these two pivotal domains within the automation landscape. The discourse unpacks cutting-edge methodologies, techniques, and tools employed in threat modeling, API testing, and red teaming, all aimed at fortifying security measures within these artificial narrow intelligence systems. Engage in a thought-provoking exploration of how we, as users and developers, can strategically plan and implement tests for GenAI & LLM systems, ensuring their robustness and reliability. The presentation not only demystifies the complexities of security testing in LLMs but also sparks a conversation about our daily interactions with GenAI, prompting us to ponder our conscious and subconscious engagements with these technologies.

Rob Ragan
Principal Architect & Researcher

Oscar Salazar
Principal Security Consultant

Yael Basurto
Abraham Vargas