Episodes

  • Episode 319 - Vercel Breach, Security vs. Compliance, Pull Request Flows w/ AI Agents
    Apr 21 2026
    Episode 319 covers a range of industry developments, focusing primarily on the recent Vercel security incident and the evolving landscape of AI-driven compliance. The hosts detail how a Vercel employee's use of a consumer-level Context AI plan led to a workspace compromise via a leaked OAuth token, eventually allowing attackers to access sensitive environment variables. That incident leads into a critical discussion of the SOC 2 provider Delve, with the hosts addressing allegations of "fake" compliance automation and the broader point that passing an audit framework does not inherently equate to real security. The episode also explores the future of the Pull Request (PR) flow, debating whether traditional human-led code reviews are "dead" given the massive volume of code generated by AI agents. While the hosts acknowledge that startups are moving toward autonomous commits, Seth argues that the PR concept is evolving into a system of agentic attestation and guardrails rather than disappearing entirely. The episode concludes with community survey results on this shift and a reminder about the hosts' upcoming training sessions in Singapore.
  • Episode 318 - Slack Impersonation, Mythos, Vulnerability Research Future
    Apr 14 2026
    Episode 318 examines critical vulnerabilities and the evolving impact of AI on the security industry. The episode details a recent sophisticated impersonation and malware attack targeting open-source Slack communities, including their own, where attackers spoofed Seth's identity to distribute malicious links via Google Sites. The hosts express significant frustration with Slack's lack of built-in impersonation controls, comparing the flaw to the inherent trust issues in the Git protocol. A major portion of the discussion focuses on the "leak" of Anthropic's highly capable Mythos model and its potential to disrupt the market. They analyze how such frontier model announcements contribute to massive stock market volatility for traditional security firms while simultaneously creating an "intense echo chamber" regarding AI's ability to replace human practitioners. Referencing Thomas Ptacek's thesis, they debate whether AI agents will soon supplant human vulnerability research for common bug classes, shifting the human role toward high-level governance and "context infusion". Ultimately, the hosts advocate for autonomous defense and rigorous evaluation frameworks to manage "reasoning drift" and the exploding velocity of AI-generated code.
  • Episode 317 - (Post-RSAC/BSidesSF), Supply Chain Security, Future of SDLC
    Mar 31 2026
    Ken Johnson and Seth Law reflect on the 2026 RSA Conference and BSidesSF, noting an industry-wide "awakening" regarding the high costs and engineering complexity of operationalizing AI security tools. A major focus is the recent "supply chain attack hell," specifically the compromise of the Axios HTTP client through dual-account breaches that allowed attackers to bypass legitimate OIDC deploy setups via a misconfigured npm CLI. The malware was particularly evasive, deleting itself and replacing its package.json with a clean version post-execution. The hosts also discuss the emergence of the "Agentic Development Lifecycle" (ADLC), where engineering teams increasingly "commit on time" rather than on features, creating a volume of code that traditional security gates cannot manage. They debate Thomas Ptacek's thesis that AI agents will soon "supplant" human vulnerability research for common bug classes, shifting the human role toward high-level governance and "context infusion". Economically, they highlight how Anthropic's security announcements contributed to nearly half a trillion dollars in market-value losses for traditional security firms, as investors increasingly bet on frontier models to consume established security domains.
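    One coarse defense against the post-execution tampering described above is to diff what is installed on disk against the tarball actually published to the registry. The Python sketch below illustrates that idea under stated assumptions; it is not a tool from the episode, and a real checker would normalize the metadata fields that npm itself may add on install:

      # Sketch: compare an installed npm package's package.json with the copy
      # inside the tarball published on registry.npmjs.org. A mismatch can
      # indicate post-install tampering, e.g. malware rewriting its own
      # package.json to hide install hooks. Illustrative only: some npm
      # versions add metadata fields on install, so normalize before diffing.
      import io
      import json
      import tarfile
      import urllib.request

      def published_package_json(name: str, version: str) -> dict:
          meta_url = f"https://registry.npmjs.org/{name}/{version}"
          with urllib.request.urlopen(meta_url) as resp:
              tarball_url = json.load(resp)["dist"]["tarball"]
          with urllib.request.urlopen(tarball_url) as resp:
              data = resp.read()
          with tarfile.open(fileobj=io.BytesIO(data), mode="r:gz") as tar:
              member = tar.extractfile("package/package.json")
              if member is None:
                  raise FileNotFoundError("tarball has no package/package.json")
              return json.load(member)

      def check(name: str, project_dir: str = ".") -> None:
          with open(f"{project_dir}/node_modules/{name}/package.json") as f:
              local = json.load(f)
          remote = published_package_json(name, local["version"])
          if local != remote:
              print(f"WARNING: on-disk package.json for {name} differs from registry")
          else:
              print(f"{name}@{local['version']}: package.json matches the registry")

      if __name__ == "__main__":
          check("axios")  # package name and project path are placeholders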
  • Episode 316 - w/Coffee, Chaos, and ProdSec - Agentic Development Lifecycle
    Mar 17 2026
    In episode 316 of Absolute AppSec, hosts Ken Johnson and Seth Law participate in a crossover with Kurt Hendle and Cameron Walters from the Coffee, Chaos, and ProdSec podcast to discuss the radical transformation of security roles in an AI-driven landscape. The guests share origin stories rooted in gaming and "mischievous" curiosity, which evolved into deep careers in security architecture and engineering. The primary discussion centers on the industry's shift toward an "Agentic Development Lifecycle" (ADLC), where the sheer volume of AI-generated code renders traditional manual review gates obsolete. This acceleration risks a "rubber stamp" culture where developers approve fixes in seconds rather than minutes, potentially leading to a mountain of technical debt. Consequently, the role of security is shifting from manual bug finding to high-level governance and "context infusion," requiring practitioners to manage AI agents that automate complex tasks. Economically, the group highlights how frontier model announcements have caused massive market volatility, wiping billions from traditional security stocks. Ultimately, they conclude that while older "primitive" tools are failing, professionals who lean into AI as a "superpower" for governance and oversight will be essential for navigating this new, non-deterministic reality.
  • Episode 315 - Risks of "AI-Native" Security Products, Rapid Software Development
    Mar 3 2026
    In episode 315 of Absolute AppSec, Ken Johnson and Seth Law discuss the rapidly evolving challenges of securing software in an era of AI-assisted development. The hosts provide updates on their "Harnessing LLMs for Application Security" training, noting that the field is changing so fast that they must constantly update their exercises to include new agents and advanced tools like Claude Code. A primary concern raised is the "naivete" of many new security tools, where prompts are often automatically generated by AI rather than expertly crafted, causing a loss of essential nuance. The hosts also warn against AI companies building security products without specialized expertise, citing a zero-click exploit in the "Comet" AI browser that could exfiltrate sensitive secrets via calendar summaries. As development teams now ship code at "AI speed," the hosts argue that traditional AppSec methods are too slow, necessitating a strategic pivot toward automated design reviews, governance, and observability rather than just chasing individual vulnerabilities. Despite the inherent risks and the ongoing difficulty of managing AI reasoning drift, they remain optimistic that these tools can eventually unlock more efficient, hands-off AppSec workflows if managed with proper guardrails and deterministic oversight.
  • Episode 314 - LLM AppSec Disruption, Limitations of AI in Security, AppSec Oversight
    Feb 24 2026
    In this episode, the hosts discuss the seismic shift in the application security landscape triggered by the rise of Large Language Models (LLMs) and Anthropic’s "Claude Code". They highlight the massive economic repercussions of these AI advancements, noting that billions in market value were wiped from traditional cybersecurity stocks as investors begin to believe frontier models might eventually write perfectly secure code. The hosts critique the industry's historical reliance on "checkbox" compliance tools like SAST, DAST, and SCA, arguing that these "archaic" methods are being replaced by AI-native strategies capable of reasoning through complex logic flaws. While they acknowledge that AI can suffer from "reasoning drift" and still requires deterministic validation to avoid false positives, they emphasize that security professionals must adapt by building custom "skills" and focusing on governance and observability. The discussion concludes that as developers move to "AI speed," the traditional role of the AppSec professional is evolving into a "Jarvis-like" orchestrator who manages automated workflows and infuses institutional knowledge into AI agents to maintain oversight without slowing down production.
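    As a concrete illustration of pairing probabilistic output with deterministic validation, the hypothetical Python sketch below (not from the episode) accepts an LLM-flagged finding only when the cited line actually matches a known sink pattern, filtering hallucinated line numbers before triage:

      # Hypothetical sketch: gate LLM-reported findings behind a deterministic
      # re-check. A finding is accepted only if the flagged line really matches
      # a regex for the claimed bug class; everything else is rejected or
      # routed to a human. The sink patterns are illustrative, not exhaustive.
      import re
      from dataclasses import dataclass

      SINKS = {
          "command_injection": re.compile(r"os\.system\(|subprocess\..*shell=True"),
          "sql_injection": re.compile(r'execute\(.*[%+].*\)|f".*(SELECT|INSERT)'),
      }

      @dataclass
      class Finding:
          path: str        # file the model flagged
          line: int        # 1-indexed line number the model cited
          bug_class: str   # key into SINKS

      def validate(finding: Finding) -> bool:
          """Return True only if the flagged line matches the expected sink."""
          pattern = SINKS.get(finding.bug_class)
          if pattern is None:
              return False  # unknown bug class: send to a human instead
          with open(finding.path) as f:
              lines = f.readlines()
          if not 1 <= finding.line <= len(lines):
              return False  # the model cited a line that does not exist
          return bool(pattern.search(lines[finding.line - 1]))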
  • Episode 313 - AppSec Role Evolution, AI Skills & Risks, Phishing AI Agents
    Feb 17 2026
    Ken Johnson and Seth Law examine the intensifying pressure on security practitioners as AI-driven development causes an unprecedented acceleration in industry velocity. A primary theme is the emergence of "shadow AI," where developers utilize unauthorized AI coding assistants and personal agents, introducing significant data classification risks and supply chain vulnerabilities. The discussion dives into technical concepts like AI agent "skills"—markdown files providing specialized directions—and the corresponding security risks found in new skill registries, such as malicious tools designed to exfiltrate credentials and crypto assets. The hosts also review 1Password’s SCAM (Security Comprehension Awareness Measure), highlighting broad performance gaps in an AI's ability to detect phishing, with some models failing up to 65% of the time. To manage these unpredictable systems, the hosts advocate for a shift toward high-level validation roles, emphasizing the need for Subject Matter Expertise to combat "reasoning drift" and maintain safety through test-driven development and periodic "checkpoints". Ultimately, they conclude that while AI can simulate expertise, human oversight remains vital to secure the probabilistic nature of modern agentic workflows.
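    To make the "skills" concept concrete: a skill is essentially a markdown file of instructions the agent ingests, so a first-pass registry audit can be plain pattern matching over that text. A minimal sketch follows; the suspicious patterns are illustrative examples, not a vetted ruleset:

      # Illustrative sketch: flag agent "skill" markdown files whose
      # instructions touch credentials or ship data off-host. The patterns
      # are examples only; a real audit would need a far richer ruleset.
      import re
      import sys
      from pathlib import Path

      SUSPICIOUS = [
          (re.compile(r"curl\s+.*https?://", re.I), "outbound upload"),
          (re.compile(r"\.aws/credentials|AWS_SECRET_ACCESS_KEY", re.I), "AWS credential access"),
          (re.compile(r"\.ssh/id_|id_rsa", re.I), "SSH private key access"),
          (re.compile(r"wallet\.dat|seed phrase", re.I), "crypto asset material"),
      ]

      def audit_skill(path: Path) -> list[str]:
          text = path.read_text(errors="replace")
          return [reason for pattern, reason in SUSPICIOUS if pattern.search(text)]

      if __name__ == "__main__":
          # Usage: python audit_skills.py <directory-of-skill-files>
          for skill_file in Path(sys.argv[1]).rglob("*.md"):
              if hits := audit_skill(skill_file):
                  print(f"{skill_file}: {', '.join(hits)}")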
  • Episode 312 - Vibe Coding Risks, Burnout, AppSec Scorecards
    Feb 10 2026
    In episode 312 of Absolute AppSec, the hosts discuss the double-edged sword of "vibe coding", noting that while AI agents often write better functional tests than humans, they frequently struggle with nuanced authorization patterns and incur "upkeep costs" as foundation models change behavior over time. A central theme of the episode is that the greatest security risk to an organization is not AI itself, but an exhausted security team. The hosts explore how burnout often manifests as "silent withdrawal" and emphasize that managers must proactively draw out these issues within organizations that often treat security as a mere cost center. Additionally, they review new defensive strategies, such as TrapSec, a framework for deploying canary API endpoints to detect malicious scanning. They also highlight the value of security scorecarding—pioneered by companies like Netflix and GitHub—as a maturity activity that provides a holistic, blame-free view of application health by aggregating multiple metrics. The episode concludes with a reminder that technical tools like Semgrep remain essential for efficiency, even as practitioners increasingly leverage the probabilistic creativity of LLMs.
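    Since the episode does not detail TrapSec's actual interface, the following is a generic, minimal Flask sketch of the canary-endpoint idea it describes: a decoy route no legitimate client is ever given, so any request to it is logged as probable scanning:

      # Generic canary-endpoint sketch (not TrapSec's actual API): a decoy
      # route that no legitimate client should ever call, so any request to
      # it is treated as a probable scanner or attacker and logged for
      # alerting. The route path is an invented example.
      import logging
      from flask import Flask, request, jsonify

      app = Flask(__name__)
      alert_log = logging.getLogger("canary")
      logging.basicConfig(level=logging.WARNING)

      # Path chosen to look like a forgotten internal endpoint.
      @app.route("/api/v1/internal/backup-keys")
      def canary():
          alert_log.warning(
              "canary hit: ip=%s ua=%s",
              request.remote_addr,
              request.headers.get("User-Agent", "-"),
          )
          # Return a boring 404 so the prober learns nothing from the response.
          return jsonify(error="not found"), 404

      if __name__ == "__main__":
          app.run(port=8080)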