Episodi

  • The Blueprint for the Future: Implementing AI for the Intelligence Community
    Oct 14 2025

    In this episode, we will dive deep into this crucial topic. You will gain a comprehensive understanding of:

    • The importance and benefits of AI for intelligence.

    • Essential use cases across the intelligence cycle—from Planning and Direction to Analysis and Dissemination.

    • Key insights on effectively implementing AI within the Intelligence Community.

    Our goal is to explore how AI can be harnessed to enhance intelligence operations and the critical decision-making processes. Stay with us as we guide you through the future of intelligence!

    Mostra di più Mostra meno
    15 min
  • Applied Intelligence: Mastering the Craft
    Oct 12 2025

    Welcome to the Deep Dive. In this episode, drawing on Accenture’s extensive analysis into The art of AI maturity.

    Artificial intelligence (AI) has evolved from a scientific concept to a societal constant, and companies across industries are relying on and investing in AI to drive logistics, improve customer service, and increase efficiency. However, despite these ever-expanding use cases, most organizations are barely scratching the surface of AI’s full potential.

    Accenture’s research found that only 12% of firms globally have advanced their AI maturity enough to achieve superior growth and business transformation. We call these companies the “AI Achievers”. Achieving this high performance requires understanding that while there is a science to AI, there is also an art to AI maturity. Achievers succeed not through a single sophisticated capability, but by combining strengths across strategy, processes, and people.

    Advancing AI maturity is no longer a choice, but an opportunity facing every industry and leader. In this episode, we will discuss how Accenture, the global consulting firm, defines the AI maturity journey

    Mostra di più Mostra meno
    13 min
  • The Perilous World of AI Data Security
    Sep 10 2025

    In this episode, we’re diving into one of the most critical challenges in artificial intelligence—data security. From supply chain risks and maliciously modified data to data drift that can quietly erode accuracy, protecting information throughout the AI system lifecycle is essential.

    We’ll explore insights from global cybersecurity agencies, including best practices and mitigation strategies designed to safeguard the integrity of data that powers AI and machine learning systems. Because in the end, the quality and security of data determine the trustworthiness of AI itself.

    So, let’s unpack how securing data can strengthen the future of AI.

    Mostra di più Mostra meno
    20 min
  • Decoding the NIST AI Risk Framework: Building Trustworthy AI in a Complex World
    Sep 2 2025

    In this episode, we explore the NIST Artificial Intelligence Risk Management Framework, also known as the AI RMF 1.0. Released in January 2023, this free resource from NIST is designed to help organizations manage the unique risks of AI while promoting responsible and trustworthy use.

    We’ll break down the seven characteristics of trustworthy AI—like safety, security, accountability, fairness, and more—and dive into the four core functions: Govern, Map, Measure, and Manage. These principles guide organizations through the entire AI lifecycle, ensuring AI systems are not only powerful but also reliable and ethical.

    So, if you’re looking to strengthen your understanding of AI risk management and build trust in the future of AI, you’re in the right place. Let’s get started with the NIST AI RMF 1.0.

    Mostra di più Mostra meno
    22 min
  • Beyond the Buzzwords: How Goldman Sachs Manages Cyber Risk
    Aug 27 2025

    In this episode, we’re diving into how Goldman Sachs, one of the world’s leading investment banks, manages cyber risk. Forget the buzzwords—this is about real-world strategies in operational resilience, business continuity, and disaster recovery. You’ll hear how these practices protect clients, stabilize markets, and keep the firm running through disruption. Our goal? To give you a clear shortcut to understanding Goldman’s multi-layered approach to digital security and operational stability.

    Mostra di più Mostra meno
    20 min
  • Prioritizing Cybersecurity Risk and Opportunity in Enterprise Management
    Aug 25 2025

    In this episode, we unpack NIST IR 8286B-upd1, which guides organizations on aligning cybersecurity risk with enterprise goals. We cover how to prioritize risks, choose effective responses (accept, avoid, transfer, mitigate), and use the Cybersecurity Risk Register (CSRR) to communicate clearly with leadership. We also highlight the value of considering both threats and opportunities to strengthen enterprise resilience.

    Mostra di più Mostra meno
    20 min
  • Evolving the Standard for Scoring Software Vulnerabilities
    Apr 8 2025

    In this episode, we dive into the work of the CVSS Special Interest Group (SIG), part of the Forum of Incident Response and Security Teams (FIRST). The CVSS SIG is the driving force behind the Common Vulnerability Scoring System—an essential standard used worldwide to measure and prioritize the severity of software vulnerabilities. We explore the group’s efforts in shaping CVSS version 4.0, including key updates, new documentation, a roadmap for the future, and community-driven surveys. Whether you’re a cybersecurity pro or just curious about how digital risk is quantified, this episode sheds light on the evolving mission to strengthen vulnerability management across the industry.

    Mostra di più Mostra meno
    18 min
  • New Security Risk: Why LLM Scale Doesn't Deter Backdoor Attacks
    Oct 12 2025

    Today, we are discussing a startling finding that fundamentally challenges how we think about protecting large language models (LLMs) from malicious attacks. We’re diving into a joint study released by Anthropic, the UK AI Security Institute, and The Alan Turing Institute.

    As you know, LLMs like Claude are pretrained on immense amounts of public text from across the internet, including blog posts and personal websites. This creates a significant risk: malicious actors can inject specific text to make a model learn undesirable or dangerous behaviors, a process widely known as poisoning. One major example of this is the introduction of backdoors. These are specific phrases, like the trigger

    Now, previous research often assumed that attackers needed to control a percentage of the training data. If true, attacking massive, frontier models would require impossibly large volumes of poisoned content.

    But the largest poisoning investigation to date has found a surprising result. In their experimental setup, they found that poisoning attacks require a near-constant number of documents regardless of model and training data size. This completely challenges the assumption that larger models need proportionally more poisoned data.

    The key takeaway is alarming: researchers found that as few as 250 malicious documents were sufficient to successfully produce a "backdoor" vulnerability in LLMs ranging from 600 million parameters up to 13 billion parameters—a twenty-fold difference in size. Creating just 250 documents is trivial compared to needing millions, meaning data-poisoning attacks may be far more practical and accessible than previously believed.

    We’ll break down the technical details, including the specific "denial-of-service" attack they tested, which forces the model to produce random, gibberish text when it encounters the trigger. We will also discuss why these findings favor the development of stronger defenses and what questions remain open for future research.

    Stay with us as we explore the vital implications of this major security finding on LLM deployment and safety.

    Mostra di più Mostra meno
    13 min