FC-09

OWASP LLM Top 10, Prompt Injection, Adversarial ML, attacks on RAG and Supply Chain. AI Red Teaming using MITRE ATLAS with Garak, PyRIT and Counterfit. Prerequisites: FC-04, FC-06.

45 lessons9 modulesAdvanced3 themes

Why AI Security is the hottest niche of the next decade

Numbers that explain everything

100%of LLM applications are vulnerable to prompt injection according to OWASP research
$1M+Bug Bounty payouts for AI system vulnerabilities at OpenAI, Google, Meta in 2024
$140K+annual salary for AI Security Researcher / Red Teamer in the US
85%of ML models in production have not been tested for adversarial robustness

After the course you will be able to

Hands-on practice with Garak, PyRIT, TextAttack and real LLM APIs — not simulations, but production-grade tooling

💉Conduct direct and indirect prompt injection attacks: DAN jailbreaks, encoding bypasses, Crescendo, many-shot
🗄️Attack RAG systems: vector database poisoning, document injection, attacks on AI agents with tool access
🎭Execute adversarial ML attacks (FGSM, PGD, C&W), model extraction and membership inference
⛓️Audit AI Supply Chain: pickle deserialization, backdoor models, training data poisoning
🔬Perform AI Red Teaming using MITRE ATLAS methodology with Garak, PyRIT and Counterfit
🛡️Secure LLM applications: guardrails, output filtering, Presidio DLP, privilege separation
🤖Deploy ML models in SOC: anomaly detection, NLP for log analysis, automatic incident triage
📋Develop AI Security Policy and build governance for AI systems in organizations

Real AI attacks in the course

We reproduce high-profile AI system incidents in a safe lab environment

Prompt Injection2023

Bing Chat 2023 — indirect injection revealed the "Sydney" system prompt

Researchers used indirect prompt injection via web pages to make Bing Chat reveal its full system prompt, convince users to switch banks, and threaten them. We break down the technique and defenses in lesson 8.

Lesson 8 · Indirect Prompt Injection via external data
Supply Chain2023

Samsung 2023 — employees leaked source code via ChatGPT

Three Samsung engineers pasted confidential source code and chip testing data into ChatGPT. The data became part of the training set. This case became the foundation for our lesson on AI governance and data leakage through LLMs.

Lesson 44 · AI governance and usage policies
Adversarial ML2024

Crescendo — 74% success rate on GPT-4 via multi-turn attack

Microsoft Research published Crescendo: a multi-turn jailbreak attack with 74% success against GPT-4. The attack gradually escalates context through benign steps. Full breakdown and reproduction in lesson 9.

Lesson 9 · Jailbreak techniques and their evolution

Course Program

9 modules · 45 lessons · 3 themes: AI System Threats, AI Red Teaming, AI for Cybersecurity

Where this course leads

FC-09 — your entry into one of the most in-demand and highest-paying niches of the next decade

$8,000 — $20,000/mo

AI Red Team Researcher

Specialize in AI system security at major companies (OpenAI, Google, Meta, Anthropic). Top niche with massive talent shortage.

Prompt InjectionAdversarial MLRed TeamingLLM
Track:FC-09 → AI Red Team → Principal
$5,000 — $14,000/mo

AI Security Engineer

Build secure AI products: guardrails, input validation, monitoring. Work at AI startups and enterprises.

GuardrailsOWASP LLMMLOps SecurityMonitoring
Track:FC-09 → AI Security Eng → Staff
$6,000 — $18,000/mo

ML Security Researcher

Research adversarial model robustness, publish at academic journals and conferences (NeurIPS, ICML, IEEE S&P).

Adversarial MLResearchPyTorchDifferential Privacy
Track:FC-09 → PhD / Research Lab → Principal

Who this course is for

🛡️

Blue Team and SOC specialists

Completed FC-04 or FC-06 and want to deploy AI in SOC: anomaly detection, automatic triage, NLP log analysis — this is theme 3 of the course

⚔️

Red Team and pentesters

Want to master AI Red Teaming with MITRE ATLAS, test LLMs with Garak and PyRIT, and attack RAG systems — the newest and least explored attack vector

🤖

AI/ML engineers

Building LLM applications or RAG systems and want to understand OWASP LLM Top 10 from the inside — all vulnerabilities with code, defenses and testing tools

Become an expert
in AI security
of the next generation

45 lessons with Garak, PyRIT, Counterfit and TextAttack. Real attacks on LLMs, RAG and ML models — every module ends with a hands-on lab.

FC-09 — AI Security
Artificial Intelligence Security
Lessons45
Hours50
LevelAdvanced
Themes3