☁️ The Cloud Security Guy 🤖

☁️ The Cloud Security Guy 🤖

The AI Security Learning Paths That Actually Matter in 2026

From GenAI Foundations to Agentic AI Security and AI Governance

Taimur Ijlal's avatar
Taimur Ijlal
Jan 24, 2026
∙ Paid

January 2026 is almost over, and if you work in cybersecurity, it probably feels like AI has already outpaced your learning plans.

The problem isn’t lack of content. It’s lack of structure.

Most people are trying to learn “AI security” as if it’s a single skill. In reality, it has already split into multiple tracks, each with different starting points, responsibilities, and outcomes.

Below are four practical AI security learning tracks for 2026, depending on where you are today and where you want to go next along. I have also added special discounts for my best-selling courses that will help you achieve these goals

☁️ The Cloud Security Guy 🤖 is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Track 1: AI Security – Beginners Track

For cybersecurity professionals new to AI risk and security

This track is for people who understand cybersecurity but feel unsure when conversations shift to AI.

You don’t need to become a data scientist. You need to understand why AI systems break differently, why traditional controls fall short, and how AI risk fits into security programs.

Recommended Courses:

AI Risk and Cybersecurity Course
This course is designed specifically for security and risk professionals entering AI security. It explains why AI introduces new trust, risk, and security management requirements that conventional controls do not address, and how to adapt your security mindset accordingly.
👉 Avail the discounted price HERE

Generative AI – Risk and Cybersecurity Masterclass
A foundational overview of how GenAI systems work, where their risks come from, and how security needs to adapt.
👉 Avail the discounted price HERE

This track builds the mental models you’ll need before going deeper into frameworks, engineering, or governance.


Track 2: AI Governance, Risk & Compliance

For CISOs, GRC professionals, and security leaders

AI governance is not just policy writing. It is about accountability, ownership, decision-making, and control in systems that behave probabilistically.

This track focuses on turning AI governance from vague principles into enforceable security and risk practices.

Recommended Courses:

Responsible AI for CISOs and Cybersecurity Professionals (NEW)
A practical, security-first course that shows how Responsible AI maps to real controls, ownership models, and incident response processes.
👉 Avail the discounted price HERE

AI Regulations and Frameworks Crash Course (2025)
Covers the EU AI Act, NIST AI RMF, ISO 42001, and Google SAIF, with a focus on how they affect real security and risk decisions.
👉 Avail the discounted price HERE

The NIST AI Risk Management Framework (RMF) Masterclass
A deep dive into applying NIST AI RMF to real AI risk assessments and governance workflows.
👉 Avail the discounted price HERE

The EU AI Act Compliance Masterclass
Focused guidance on understanding AI risk categories, obligations, and compliance strategies under the EU AI Act.
👉 Avail the discounted price HERE


Track 3: GenAI Security

For engineers and architects securing GenAI applications

This track is for people actively working with GenAI systems and who need to secure them beyond surface-level controls.

GenAI introduces hallucinations, prompt injection, insecure orchestration, and over-reliance on AI outputs. These are not edge cases anymore—they are production risks.

Recommended learning sequence

Securing GenAI with Best Practice Frameworks
A practical guide to securing GenAI systems using Google SAIF and the AWS Generative AI Scoping Matrix.
👉 Avail the discounted price HERE

Social Engineering with GenAI
Explores how GenAI changes phishing, pretexting, and manipulation, and what defenders need to do differently.
👉 Avail the discounted price HERE

This track is ideal for AppSec, cloud security, platform security, and blue team engineers working on AI-powered products.


Track 4: Agentic AI Security (Including Vibe Coding)

For advanced practitioners working with autonomous systems

Once AI systems start taking actions, chaining tools, and making decisions, the security model changes again.

Agentic AI introduces risks like tool misuse, privilege escalation, goal misalignment, cascading failures, and silent automation errors. Vibe coding amplifies these risks by allowing AI to generate and modify code at speed, often without full human understanding.

Recommended learning sequence

Agentic AI – Risk and Security Masterclass
A comprehensive introduction to agentic AI systems and their unique security challenges.
👉 Avail the discounted price HERE

Threat Modeling for Agentic AI Masterclass
A hands-on course using MAESTRO and OWASP Agentic AI Threats to identify, assess, and mitigate agentic risks.
👉 Avail the discounted price HERE

Vibe Coding Risk and Security Course (NEW)
Focused on the security implications of AI-assisted coding, including over-trust, prompt misuse, and insecure automation pipelines.
👉 Avail the discounted price HERE


How to Use These Tracks

You don’t need to complete all four.

  • If you’re new to AI security, start with Track 1

  • If you own risk and compliance, start with Track 2

  • If you’re building GenAI systems, start with Track 3

  • If you’re pushing into autonomy and agents, start with Track 4

Over time, the most effective AI security professionals will understand all four perspectives.

A Note for Paid Subscribers

If you’re a paid Substack subscriber, then you get access to the first track for free. Thank you for supporting this newsletter !

User's avatar

Continue reading this post for free, courtesy of Taimur Ijlal.

Or purchase a paid subscription.
© 2026 Cloud Security Guy · Privacy ∙ Terms ∙ Collection notice
Start your SubstackGet the app
Substack is the home for great culture