Practical LLM and Agentic AI Attacks

Offensive Security

Practical LLM and Agentic AI Attacks Banner

Intermediate

6 Hours

18 Chapters

Ready to break AI? Dive into the offensive side of AI security in this compact workshop that blends theory with attacks in the wild. Whether you’re new or experienced, you’ll explore practical attacks on applications powered by LLMs and agentic AI, guided by the OWASP Top 10 for LLM Applications and the OWASP Agentic AI — Threats and Mitigations guide. Our Hands-on labs will immerse you in real-world scenarios, equipping you with technical insights on emerging threats and mitigations in this rapidly evolving field.

Workshop Summary

The digital landscape is experiencing a major transformation as LLMs and agentic systems are rapidly integrated into real-world applications. Even non-programmers can now launch AI-based startups, known as "AI wrappers," in just days using AI-assisted programming and vibe coding. This rapid integration has created a new, complex, and often misunderstood attack surface ripe for exploration. This workshop is designed to arm you with the knowledge and practical skills to navigate this new frontier from an adversarial perspective, moving beyond high-level theory and into the trenches of AI exploitation.

This intensive, hands-on course blends foundational concepts with real-world attack scenarios. We dive deep into the mechanics of how to subvert, manipulate, and compromise AI-powered applications. Our curriculum is structured around two of the most critical and up-to-date industry guides, providing a comprehensive framework for understanding these emerging threats:

  • OWASP Top 10 for LLM Applications 2025 : We will use this essential guide as our roadmap to explore the most critical vulnerabilities in LLM-powered applications. You will learn to execute practical attacks like prompt injection, insecure output handling, excessive agency, and sensitive information disclosure.
  • OWASP Agentic AI — Threats and Mitigations : In February 2025, the OWASP Agentic Security Initiative (ASI) published the “Agentic AI—Threats and Mitigations” guide to provide a threat-model-based reference of emerging agentic threats and discuss mitigations. We will explore the unique attack vectors as documented in the guide targeting agentic systems, including tool misuse, agent hijacking, and privilege compromise attacks that can compromise entire ecosystems.

This is not a passive lecture series. A significant amount of your time will be spent in our custom-built, hands-on lab environments . You will engage in real-world scenarios, tackling challenges that mirror the vulnerabilities found in today’s AI-powered applications. Through this immersive approach, you will actively apply the concepts you learn, a process that solidifies your understanding and builds muscle memory for real-world engagements. Solution Guides are included for each exercise lab to help you through the workshop should you feel stuck.

We will also review public reports and security advisories for each attack type covered, providing a clear understanding of how theoretical risks manifest as exploitable vulnerabilities in production applications you see in the wild.

Who Should Enroll?

This workshop is ideal for:

  • Penetration Testers and Red Teamers looking to expand their toolkit to include AI systems.
  • Security Researchers eager to explore new and emerging vulnerability classes.
  • Application Security (AppSec) Professionals tasked with securing their organization’s AI deployments.
  • Software Developers and AI/ML Engineers who want to understand the adversarial mindset to build more resilient systems.

By the end of this workshop, you will not only understand the emerging threats in AI but will have the practical experience to assess, exploit, and ultimately help secure these complex systems. Join us to move beyond the hype and gain the tangible skills required to lead in the new era of AI security.

Get Certified!

Complete this workshop to earn a Certificate of Completion with 6 CEU credits.

Certificate with CEU