Episodes

  • You Can't Trust What You Can't Verify — The Case for AI Model Identity
    Apr 28 2026

    Most organizations deploying AI today cannot answer a deceptively simple question: which model is actually running in their environment?

    It is not a hypothetical concern. Model substitution, supply chain compromise, adversarial fine-tuning, and jurisdictional compliance gaps are all live risk vectors — and the industry has largely been relying on contractual guarantees from AI vendors rather than technical controls to address them.

    That gap is exactly what Project VAIL was built to close.

    In this episode I sat down with Manish Shah, Co-founder and CEO of Project VAIL (Verifiable Artificial Intelligence Layer). Manish is a repeat founder with 20+ years of company-building experience, including as co-founder of LiveRamp, and he is now bringing that background to one of the most consequential unsolved problems in AI security: provably knowing and verifying which model is executing in your environment at runtime.

    VAIL’s approach combines two core technologies. Behavioral fingerprinting creates a unique, verifiable identity for AI models based on how they actually behave during inference, without relying on access to model weights or architecture. ZkTorch, developed in collaboration with researchers at UIUC, brings zero-knowledge proofs to large generative AI models for the first time at practical scale, enabling cryptographic verification of model computations without exposing sensitive model internals.
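
    To build intuition for the behavioral side, here is a minimal sketch of the general idea, assuming a fixed probe set, deterministic decoding, and a simple hash — all illustrative assumptions, not VAIL's actual fingerprinting method:

    ```python
    # Illustrative sketch only: fingerprint a model by hashing its responses to
    # a fixed probe set. This stands in for the general idea of behavioral
    # fingerprinting; it is not VAIL's algorithm.
    import hashlib
    from typing import Callable, Iterable


    def behavioral_fingerprint(generate: Callable[[str], str], probes: Iterable[str]) -> str:
        """Digest of a black-box model's outputs on a fixed, ordered probe set.

        Deterministic decoding (e.g. temperature 0) is assumed so that repeated
        runs of the same model reproduce the same fingerprint.
        """
        h = hashlib.sha256()
        for prompt in probes:
            h.update(prompt.encode("utf-8"))
            h.update(generate(prompt).encode("utf-8"))
        return h.hexdigest()


    if __name__ == "__main__":
        probes = ["2 + 2 = ?", "Name the capital of France.", "Reverse the string 'abc'."]
        model_a = lambda p: p.upper()  # toy stand-ins for two different models
        model_b = lambda p: p.lower()
        print(behavioral_fingerprint(model_a, probes))  # stable for model_a
        print(behavioral_fingerprint(model_b, probes))  # differs from model_a
    ```

    In practice, exact-match hashing is far too brittle for nondeterministic generation, which is part of what makes behavioral fingerprinting at real scale a genuinely hard problem rather than a weekend script.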

    We covered a lot of ground in this conversation, including:

    • Why behavioral fingerprinting is a fundamentally different and more resilient approach to model identification
    • How model identity becomes a critical security primitive as agentic AI deployments expand
    • Detecting prohibited and derivative models, including open-source models derived from Chinese-origin foundations like DeepSeek and Qwen
    • Where frameworks like NIST AI RMF and the EU AI Act fall short on model verification requirements
    • How verified model fingerprints fit into zero-trust architectures for AI systems and agentic workflows
    • What standardization for verifiable AI needs to look like and which bodies should be driving it

    Model verification is not a niche research problem. It is becoming a foundational requirement for AI governance, compliance, and security in regulated industries and high-stakes deployments alike.

    This episode gives you both the technical grounding and the strategic context to understand why.

    1 min
  • Securing the Vibe: Tanya Janca on AI-Generated Code, Mythos, and the New AppSec Reality
    Apr 27 2026

    A new episode of the Resilient Cyber Show just dropped, and this one is a conversation I’ve been looking forward to for a long time.

    I sat down with Tanya Janca, better known to most of the AppSec world as SheHacksPurple. Tanya is the best-selling author of Alice and Bob Learn Application Security and Alice and Bob Learn Secure Coding, an OWASP Lifetime Distinguished Member, CEO of She Hacks Purple Consulting, and one of the most recognized voices in application security and developer education on the planet.

    The timing of this conversation could hardly be better. The OWASP Top 10 2025 was announced at the Global AppSec Conference last year, with two new categories (Software Supply Chain Failures and Mishandling of Exceptional Conditions) and SSRF folded into Broken Access Control. Recently, Anthropic released the Claude Mythos Preview system card, documenting a model that has already found thousands of high-severity zero-day vulnerabilities autonomously, including bugs in every major operating system and web browser, and a 27-year-old vulnerability in OpenBSD.

    In other words, AppSec is at a hinge moment, and Tanya is exactly the right person to think out loud with about it.

    Here’s what we get into:

    • What the OWASP Top 10 2025 got right, what it missed, and how teams should actually use it
    • AI-generated code, “vibe coding,” and Tanya’s brand-new free prompt library for secure coding with AI assistants, SecureMyVibe.ca
    • What Mythos-class capabilities mean for the offense/defense asymmetry AppSec has always lived with
    • How AI is genuinely changing the SDLC, where it creates lift, where it creates noise, and where it creates entirely new attack surface
    • Architecting real defenses at the prompt layer, across MCP servers, and inside RAG pipelines, not just bolting content filters onto the front door
    • Why developers are the new attack surface, and why a lot of what gets labeled as “supply chain attacks” lately is really a developer compromise that cascaded into the supply chain
    • Tanya’s threat model, defense framework, and maturity model for protecting developers themselves
    • DevSec Station, Tanya’s new podcast delivering 5–10 minute secure coding lessons in a format built for how developers actually consume content
    • What she’d change tomorrow about how AppSec programs are built and run if she could change just one thing

    This is one of those conversations that ranges from the practical (what to do Monday morning) to the philosophical (what does it even mean to “secure software” when an AI can find more zero-days in a weekend than a red team finds in a year?). Tanya brings the rare combination of deep technical chops, real teaching ability, and genuine warmth that makes a hard subject feel approachable.

    If you lead an AppSec program, write code for a living, run a security team trying to keep up with AI-assisted development, or you’re just trying to figure out where this whole industry is heading, this is the episode for you.

    Resources from the episode:

    • SecureMyVibe
    • DevSec Station Podcast (Tanya’s new show)
    • She Hacks Purple Consulting
    • Alice and Bob Learn Application Security and Alice and Bob Learn Secure Coding
    • OWASP Top 10 2025 — https://owasp.org/Top10/2025/
    • Claude Mythos Preview System Card — Anthropic

    Thanks for being here. If this episode landed for you, the best thing you can do is share it with one person on your team who’d find it useful; that’s how this newsletter and show grow.


    38 min
  • AI and the Future of Secure Coding
    Apr 16 2026

    What happens to application security when AI agents start writing most of the code?

    Jack Cable knows both sides of this problem better than almost anyone. As a Senior Technical Advisor at CISA, he helped architect the Secure by Design initiative that challenged the entire software industry to stop shipping insecure products and expecting customers to clean up the mess. Now, as the founder of Corridor, he's building at the center of a question that didn't exist two years ago: how do you govern, secure, and trust code that no human wrote?

    In this episode, Jack walks us through the journey from federal cybersecurity policy to startup founder, and why he believes we're at an inflection point that makes everything before it look manageable. We talk about why a decade of shift-left never actually fixed the vulnerability backlog, and why the rise of coding agents (Cursor, Claude Code, Codex, and the internal tools enterprises are quietly building) is about to make that backlog look quaint.

    Jack makes the case for a new category he's helping define called Agentic Security Coding Management, and explains what separates it from the SAST tools and ASPM platforms security teams already have. We get into the uncomfortable duality of AI as both the source of the problem and the proposed solution, the frontier labs showing up in AppSec with unclear intentions, and the market confusion that's leaving CISOs struggling to tell real governance from repackaged scanning.

    We spend the back half of the conversation on the hard questions. What does real governance of AI-generated code actually look like when thousands of developers are running agents in parallel? Is it policy enforcement at the agent level, provenance tracking, runtime attestation, or something nobody has built yet? And drawing on his time at CISA, Jack shares where he sees regulation heading: liability frameworks, mandatory disclosure, and what happens if we get the policy either too heavy or too absent at the exact wrong moment.

    Whether you're a CISO trying to get ahead of this, a founder building in the space, or a developer watching your workflow transform in real time, this is the conversation that frames where AppSec goes from here.

    24 min
  • Your AI Agent Is Running As Root
    Apr 8 2026

    When you fire up Claude Code, Cursor, or any AI coding agent, it launches with your full system permissions: your SSH keys, cloud credentials, browser passwords, every file on your machine. Most developers never think twice about it.

    Luke Hinds did. And then he built something about it.

    Luke is the creator of Sigstore, the cryptographic signing infrastructure now used by PyPI, Homebrew, GitHub, and Google as the industry standard for software supply chain security. In this episode, he joins Chris to talk about why he's watching the industry make the exact same mistake it made a decade ago, and what he built to try to stop it.

    We cover the full picture:

    • Why application-layer guardrails and system prompts fundamentally fail as security boundaries for AI agents, and what kernel-level enforcement actually means
    • The .md file as an emerging control-plane attack surface
    • The OpenClaw wake-up call and what the skills marketplace ecosystem gets structurally wrong about trust and provenance
    • The approval fatigue problem and Anthropic's 17% false negative rate on Claude Code's auto-mode classifier
    • Extending SLSA and Sigstore attestation frameworks to AI-generated code (a rough sketch of the provenance idea follows below)
    • Why LLM-as-a-judge may not be the silver bullet many are hoping for
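
    To make the provenance idea concrete, here is a minimal sketch of an in-toto/SLSA-style statement for a file produced by a coding agent. The in-toto Statement envelope is real, but the predicate type URI and the agent-related fields are hypothetical assumptions for illustration, not a published schema or the approach discussed in the episode:

    ```python
    # Illustrative sketch: build an unsigned, in-toto-style provenance statement
    # for one AI-generated file. The predicateType URI and predicate fields are
    # hypothetical; in practice the statement would be signed (e.g. via Sigstore)
    # and verified in CI before the artifact is merged or deployed.
    import hashlib
    import json


    def provenance_statement(path: str, agent: str, model: str, prompt_digest: str) -> dict:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return {
            "_type": "https://in-toto.io/Statement/v1",
            "subject": [{"name": path, "digest": {"sha256": digest}}],
            "predicateType": "https://example.com/ai-code-provenance/v0.1",  # hypothetical
            "predicate": {
                "agent": agent,                 # which coding agent produced the file
                "model": model,                 # underlying model identifier
                "promptDigest": prompt_digest,  # hash of the prompt/transcript, not the text
            },
        }


    if __name__ == "__main__":
        # Write a stand-in "AI-generated" file so the example is self-contained.
        with open("generated_example.py", "w") as f:
            f.write("print('hello from an AI-generated file')\n")
        stmt = provenance_statement("generated_example.py", "example-agent", "example-model-v1", "sha256:...")
        print(json.dumps(stmt, indent=2))
    ```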

    Luke also makes a broader argument about where this is all heading — volumes of AI-generated code growing faster than human capacity to review it, junior engineers being priced out of the industry, and an aging cohort of engineers who can actually read and reason about code at depth. It's a candid, technically grounded conversation from someone who's been in open source security for 20+ years and has seen this movie before.

    nono is at nono.sh: one line to install, one line to run. No excuse not to try it.

    45 min
  • The 350 Million Problem: Securing the Businesses No One Else Will
    Mar 17 2026

    Show Description

    Joe Levy is the CEO of Sophos and a 30-year cybersecurity veteran who has held technical and executive roles across some of the industry's most recognizable brands. In this episode, we dig into a stat that should reframe how the entire industry thinks about its mission: out of roughly 359 million businesses worldwide, fewer than 32,000 have a CISO. That's less than one in 10,000 organizations with a security strategy leader — and it's a number Joe worked with Cybersecurity Ventures to quantify for the first time.
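
    The ratio is easy to sanity-check; the figures are the episode's, the check is just division:

    ```python
    # Sanity check of the CISO coverage ratio quoted above.
    businesses = 359_000_000  # roughly 359 million businesses worldwide
    cisos = 32_000            # fewer than 32,000 of them have a CISO

    ratio = cisos / businesses
    print(f"{ratio:.6f}")                      # ~0.000089
    print(f"about 1 in {round(1 / ratio):,}")  # ~1 in 11,219, i.e. fewer than 1 in 10,000
    ```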

    We explore what that structural gap means for how vendors build products, why the cybersecurity market is a 40-year-old market failure where spending goes up every year but outcomes don't improve, and how Sophos is betting that agentic AI can deliver CISO-level intuition to the hundreds of millions of organizations that could never conceive of hiring one. Joe breaks down where AI is genuinely delivering in security operations — and where the industry is overselling — drawing from Sophos's experience running the world's largest MDR service with 36,000 customers.

    We also get into Sophos's Pacific Rim disclosure, a five-year engagement with a Chinese nation-state actor targeting their firewalls that Joe calls the highest form of threat intelligence sharing. He walks through the calculus of going public with that story, including the kernel-level monitoring they deployed on a handful of devices to stay one step ahead of the attacker. Plus, we discuss the SecureWorks acquisition, the CTO-to-CEO transition, competing with hyperscalers like Microsoft, and what the next chapter looks like for a billion-dollar PE-backed security company approaching maturity with Thoma Bravo.


    Show Notes

    • The cybersecurity poverty line quantified: out of 359 million businesses worldwide, fewer than 32,000 have a CISO — less than one in 10,000 — and this leadership gap compounds with the skills shortage and what Joe calls an "AI-enhanced market for lemons" where information asymmetry between buyers and vendors is getting worse
    • The real problem isn't missing technology — most organizations already have endpoints and firewalls — it's misconfigurations, ignored alerts, undeployed agents, and no SOC to respond, which is why secure-by-default design and hybrid product-service models like MDR create more predictable outcomes than tools alone
    • AI in the SOC is overhyped but not pure hype: Sophos runs 36,000 MDR customers and says the vast majority of Tier 1 (triage, false positive management) and Tier 2 (investigation, response) can now be performed by agents — but the industry lacks standard vocabulary for metrics like MTTR, letting vendors be "intentionally opaque" about what "response" actually means
    • Joe introduces the concept of "humans as the accountability API" in an agentic world — AI can approximate analyst intuition, but someone still needs to be held accountable for remediation decisions, and a fully autonomous SOC may just be "a protection product with a very long data pipeline"
    • The Pacific Rim story: Sophos spent five years engaged with a Chinese nation-state actor targeting their firewalls, deployed a kernel implant on fewer than a handful of attacker-controlled devices to observe exploit development in real time, and concealed targeted fixes among 150 other patches to avoid tipping off the adversary
    • Sophos's CISO Advantage program aims to deliver the intuitions of a skilled security leader to the hundreds of millions of organizations that could never hire one — Joe calls it fixing a 40-year-old market failure and says they're shipping it this year


    45 min
  • Before the Breach: The Zero Day Clock and the Race Against Exploitation
    Mar 11 2026


    Show Description

    The Zero Day Clock is ticking — and the numbers should make every security leader uncomfortable. In this episode, I sit down with Sergej Epp, CISO at a leading security firm, who built the Zero Day Clock after a weekend experiment in which he used AI to discover vulnerabilities firsthand. What he found shocked him: with no professional vulnerability research background and just a few hours of work, he was successfully finding zero days across major security projects using AI models and basic scaffolding.

    Sergej breaks down his concept of the "Verifier's Law" — the idea that offense has the cheapest verifier in cybersecurity because feedback is binary and instant (you either popped a shell or you didn't), while defense operates in a space where validation is expensive, ambiguous, and slow. We dig into what this asymmetry means for the industry, why 20 years of warnings from Ross Anderson, Bruce Schneier, Halvar Flake, and others have gone unheeded, and whether coordinated disclosure models are broken now that AI can reverse engineer a patch into a working exploit in minutes.

    We also discuss the tension between regulation and deregulation playing out in the U.S. and EU, why the answer might be outcome-based accountability rather than prescriptive compliance, and what a realistic defensible posture actually looks like when the mean time to exploit for actively exploited vulnerabilities is under two days — while most organizations are still operating on 30-day patch cycles.


    Show Notes

    • Sergej shares how a weekend AI experiment led him to discover multiple zero days across major security projects with no professional vulnerability research experience — and why that should alarm the entire industry
    • The "Verifier's Law" explained: offense has cheap, deterministic validators (pop a shell, exfiltrate data, trigger an XSS) while defense faces expensive, ambiguous validation (parsing SIM alerts, measuring security posture), giving AI-accelerated offense a structural advantage
    • The Zero Day Clock synthesizes 3,500+ CVE-exploit pairs and shows the mean time to exploit for actively exploited vulnerabilities is now under two days — while organizations still operate on 14-to-30-day patch cycles
    • 20 years of ignored warnings: from Ross Anderson's 2001 economics paper through Bruce Schneier, Halvar Flake's "the patch is the advisory" insight, and DARPA's Cyber Grand Challenge — the industry has consistently failed to act on clear signals
    • AI can now reverse engineer patches to identify underlying flaws and generate working exploits in minutes, potentially breaking coordinated disclosure models and compressing the window between patch release and active exploitation to near zero
    • The regulation paradox: the EU risks overregulating AI in ways that hamper defenders while attackers face no such constraints, while the U.S. is pushing deregulation that may remove the only forcing function for vendor accountability — Sergej and Chris discuss outcome-based regulation as a potential middle path
    • Defenders have a data advantage: by understanding their own environments, infrastructure, and processes, security teams can detect AI-driven attacks through behavioral anomalies like hallucinated API calls, non-existent user accounts, and other artifacts of AI-generated attack playbooks (a minimal sketch of this idea follows after this list)
    • The Zero Day Clock's real power is as a board-level communication tool — a single slide that translates the patching gap into a number executives and policymakers can't ignore, shifting the conversation from "are we compliant?" to "are we fast enough?"
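
    As a toy illustration of the behavioral-anomaly point above, the check below flags access events that reference endpoints or accounts that do not exist in the environment. The log fields, endpoint list, and account list are invented for the example; real detections would draw on an organization's actual inventory and telemetry:

    ```python
    # Illustrative sketch: flag events that reference things that do not exist
    # in your environment (hallucinated endpoints, unknown accounts). Field
    # names and the allowlists are assumptions made up for this example.
    KNOWN_ENDPOINTS = {"/api/v1/users", "/api/v1/orders", "/healthz"}
    KNOWN_ACCOUNTS = {"alice", "bob", "svc-backup"}


    def suspicious(event: dict) -> list[str]:
        """Return reasons an access event looks like an AI-hallucinated probe."""
        reasons = []
        if event.get("path") not in KNOWN_ENDPOINTS:
            reasons.append(f"request to non-existent endpoint {event.get('path')!r}")
        if event.get("user") not in KNOWN_ACCOUNTS:
            reasons.append(f"reference to unknown account {event.get('user')!r}")
        return reasons


    if __name__ == "__main__":
        events = [
            {"user": "alice", "path": "/api/v1/users"},
            {"user": "admin_backup_2", "path": "/api/v2/exportAllCredentials"},
        ]
        for e in events:
            for reason in suspicious(e):
                print("ALERT:", reason)
    ```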
    5 min
  • Securing the Future with Autonomous Defense
    Feb 23 2026

    Summary:

    In this conversation, Chris Hughes and Stanislav Fort discuss the transformative role of AI in cybersecurity, particularly in vulnerability management. Stanislav shares insights on how AI can discover zero-day vulnerabilities in widely used codebases, the challenges of balancing AI-driven discoveries with quality assurance, and the importance of proactive security measures. They also explore the economic sustainability of AI in cybersecurity, the burden on maintainers, and the ongoing arms race between defenders and attackers. The discussion emphasizes the potential for AI to significantly enhance software security and the aspiration towards achieving zero vulnerabilities in critical infrastructure.


    Takeaways:

    • AI is revolutionizing vulnerability management in cybersecurity.
    • The ability to find long-hidden vulnerabilities is unprecedented.
    • AI can enhance both offensive and defensive security measures.
    • Proactive security integration into development pipelines is essential.
    • The quality of vulnerability reports is declining due to AI-generated noise.
    • Maintainers face increasing burdens from rapid AI-driven discoveries.
    • AI can help secure open source projects effectively.
    • Sustainability in AI cybersecurity requires financial backing.
    • The arms race between attackers and defenders is intensifying with AI.
    • Achieving zero vulnerabilities is an aspirational yet achievable goal.


    Chapters

    00:00 Introduction to AI in Cybersecurity
    02:52 The Evolution of AI and Vulnerability Discovery
    05:45 AI's Impact on Software Development
    08:59 Discovering Zero-Day Vulnerabilities
    11:48 The Great Bifurcation in Security Research
    14:52 Balancing AI-Driven Discoveries and Quality
    17:59 Proactive Security Measures in Software Development
    20:53 The Role of AI in Securing Open Source Projects
    23:54 Sustainability of AI in Cybersecurity
    27:07 Addressing the Burden on Maintainers
    30:09 The Tension Between Autonomy and Security
    33:03 The Arms Race Between Defenders and Attackers
    36:12 Aiming for Zero Vulnerabilities
    38:58 Conclusion and Future Outlook

    41 min
  • Selling Cyber: Deal Flow and Market Signals with Momentum Cyber
    Feb 18 2026

    In this episode of Resilient Cyber I catch up with Momentum Cyber's Founder & CEO, Eric McAlpine.

    We unpack 2025's M&A and capital markets activity, drawing on Momentum Cyber's 2025 Cybersecurity Almanac Report, and dig into some of the overlooked and untold details under the hood of cyber M&A, building world-class teams, and more.

    42 min