This course provides a practical and strategic foundation in securing AI systems. As organisations rapidly adopt AI tools and platforms, new attack surfaces are emerging that traditional cybersecurity approaches cannot address. This hands-on course will equip cybersecurity professionals and technical teams with the knowledge, tools, and frameworks needed to protect AI-driven systems from evolving threats.
Prerequisites
-
Foundational understanding of cybersecurity principles
-
Basic knowledge of machine learning concepts
-
Familiarity with web application security and APIs
-
Command line experience in Linux environments
Intended Audience
-
AI attack surface awareness: Understand how AI systems introduce new vulnerabilities, including prompt injection, data poisoning, and adversarial attacks.
-
LLM security fundamentals: Apply the OWASP Top 10 for Large Language Models to identify and mitigate key risks.
-
Hands-on defence: Use AI security tools to detect and prevent malicious activity, implement guardrails, and protect sensitive data.
-
Red teaming and testing: Simulate real-world adversarial scenarios and strengthen AI defences.
-
Governance and compliance: Navigate the NIST AI Risk Management Framework, EU AI Act, and emerging AI security regulations.
Check out our other AI courses here!
12 lessons cover:
Introduction to AI Security
This module sets the foundation for understanding how AI security differs from traditional cybersecurity. You’ll explore the unique threat landscape created by intelligent systems, including how attack surfaces expand through AI integrations. We’ll also discuss Shadow AI, governance gaps, and how emerging threats are reshaping security strategies.
Prompt Injection and Attack Vectors
Learn how attackers exploit AI systems through direct and indirect prompt injection. Using real-world case studies, you’ll examine how these techniques work in practice and develop strategies to detect and mitigate prompt-based threats before they cause harm.
OWASP Top 10 for LLM Applications
This module introduces the OWASP Top 10 for Large Language Models, highlighting the most critical vulnerabilities in AI systems. You’ll gain practical knowledge on preventing data leakage, protecting system prompts, and securing retrieval-augmented generation (RAG) pipelines and vector stores.
Adversarial AI and Model Attacks
Dive into the mechanics of adversarial attacks on AI models, including evasion, data poisoning, and model extraction. You’ll also explore transfer attacks and adversarial examples, learning how to implement defensive countermeasures and continuous monitoring to protect your models in production.
Jailbreaking and Guardrail Bypass
Understand how jailbreak techniques are used to override safety systems in language models. This module covers context manipulation, meta-query attacks, and practical testing frameworks that help you identify and mitigate these bypasses.
AI Supply Chain Security
AI systems often rely on third-party models, datasets, and integrations, introducing new risks. Here, you’ll learn how to secure the AI supply chain through robust procurement processes, vendor assessments, and contractual safeguards that minimise exposure to hidden vulnerabilities.
Deepfakes and Synthetic Media
Explore how deepfakes and synthetic media are created, the risks they pose, and their growing use in cyberattacks. You’ll evaluate the latest detection technologies, authentication methods, and organisational defence strategies to counter these evolving threats.
AI Security Tools and Platforms
This module introduces leading AI security tools such as Lakera Guard and Microsoft Prompt Shields, as well as open-source red teaming frameworks. You’ll learn how to integrate these solutions into your security workflows to detect and prevent threats effectively.
Governance and Regulation
AI security isn’t just technical—it’s also regulatory. Here, you’ll explore the NIST AI Risk Management Framework, EU AI Act requirements, and how to build internal governance programmes that align innovation with compliance and accountability.
AI Red Teaming and Security Testing
Learn how to apply structured threat modelling and red teaming methodologies to AI systems. You’ll map attack surfaces, run realistic adversarial scenarios, and ensure your security testing meets regulatory and industry standards.
Incident Response and Recovery
This module focuses on AI-specific incident response. You’ll learn how to detect, contain, and recover from prompt injection, model poisoning, and other AI-driven attacks. We’ll also cover forensic analysis, post-incident hardening, and automated detection workflows.
Capstone Project
In the final module, you’ll apply everything learned throughout the course. You’ll design and defend a secure AI architecture, participate in a red team vs blue team exercise, and present your mitigation strategies, gaining practical, job-ready experience in securing intelligent systems.