Episodes

  • AI Agents vs. Real Innovation: What Actually Happened at re:Invent 2025
    Dec 31 2025

    Welcome to Let’s Talk Shop! In this episode, host Elias Khnaser is joined by cloud industry expert Sanjeev Mohan to break down the biggest takeaways from AWS re:Invent.

The landscape of cloud and enterprise tech is shifting rapidly, and keeping up with the "onslaught" of new models and technologies can be overwhelming for IT professionals. We dive deep into why this year’s re:Invent felt different, from the "AI agent"-saturated keynotes to the game-changing announcements that actually matter for your business.

    We discuss:

► The Multi-Cloud Pivot: Why AWS finally "threw in the towel" and embraced multi-cloud with the new AWS Interconnect.

    ► AI Factories: A look at AWS’s new approach to on-prem AI infrastructure.

    ► Nova Forge vs. RAG: Sanjeev explains why Nova Forge is a differentiator, allowing companies to build and tune proprietary frontier models for just $100k/year.

    ► The Future of Interconnectivity: How the industry is moving toward pointing compute at data, regardless of which cloud provider holds it.

► 2026 Predictions: What’s next for the market as AI continues to "take the oxygen out of the room".

    Sign Up Now for my online course "The Cloud Strategy Master Class":
    ► On my web site: https://lnkd.in/gcxcrX and use promo code LINKEDIN20 to receive a 20% discount.
    ► On Udemy: https://www.udemy.com/course/cloud-strategy-master-class/

    PODCASTS: Listen wherever you get your podcasts:
    ► Let's Talk Shop: http://letstalkshop.buzzsprout.com
    ► Reality Distortion Fields RDFs Podcast: https://youtu.be/88z1UiVaV00

    Follow me:
    ► TikTok: @ekhnaser
    ► Instagram: @ekhnaser
    ► Twitter: @ekhnaser
    ► LinkedIn: https://www.linkedin.com/in/eliaskhnaser/
    ► Website: www.eliaskhnaser.com

    56 min
  • How AWS is Moving Beyond 'Bolt-On AI' to Full Autonomy
    Dec 18 2025

    How close are we to building Jarvis from Iron Man? Host Elias Khnaser sits down with Ali Maaz, AWS's leader for Go-to-Market Developer Services, to discuss how Amazon Q (Kiro) is transforming from a simple coding assistant to an Autonomous Agent and a peer on your team.

    Recorded live at AWS re:Invent 2025, this conversation dives deep into the future of enterprise software development and Cloud Operations.

    Key Takeaways:

    ► The Evolution of AI Agents: Why the biggest problem isn't coding, but the planning cycle—and how Kiro is solving it by arbitrating between product and engineering teams.
► Autonomous Agents: Our first look at Kiro Autonomous Agents, designed to handle work like bug fixes directly from Slack or Teams without a human ever opening a laptop.
    ► The Trust Factor: How AWS builds validation and trust with Property Based Testing, making Kiro a reliable, productive teammate.
    ► Cloud Ops Revolution: A major announcement focusing on a new agent specifically for Cloud Operations and DevOps to reduce Mean Time to Resolution (MTTR) and detect state/policy drift.
    ► AWS Differentiation: How AWS remains focused on customer-driven innovation, viewing internal teams (like Amazon.com) as just one of their largest and most important customers.

The era of "bolt-on AI" is ending; the next step is AI-driven development and operations. Tune in to see how you can "get out of the way" and let AI manage your next big project.

00:00:00 Intro & Guest Welcome: Ali Maaz, AWS Developer Services
00:01:11 The "Jarvis" Question: How Close is Amazon Q to Iron Man's AI?
    00:01:46 Beyond Coding: Kiro's Role in the Planning Cycle (PR-FAQ)
    00:03:43 Announcement 1: Kiro Autonomous Agent (From Assistant to Peer)
    00:05:21 Building Trust: Validation, Oversight, and the Human in the Loop
    00:06:46 Automated Reasoning & Property Based Testing (AI Validating AI)
    00:07:34 Announcement 2: Kiro Powers & Personalized Context for ISVs
    00:10:40 Agent Core: Policy Management & Evaluation for Production-Grade Agents
    00:12:43 AWS Differentiation: Why We are Customer-Obsessed, Not Competitor-Obsessed
    00:14:12 New Agent Announced: Focused on Cloud Operations & DevOps
    00:16:33 The AI Evolution: Moving to "AI Managed" and "Get Out of the Way"
    00:17:47 Conclusion & Wrap-up

    Sign Up Now for my online course "The Cloud Strategy Master Class":
    ► On my web site: https://lnkd.in/gcxcrX and use promo code LINKEDIN20 to receive a 20% discount.
    ► On Udemy: https://www.udemy.com/course/cloud-strategy-master-class/

    PODCASTS: Listen wherever you get your podcasts:
    ► Let's Talk Shop: http://letstalkshop.buzzsprout.com
    ► Reality Distortion Fields RDFs Podcast: https://youtu.be/88z1UiVaV00

    Follow me:
    ► TikTok: @ekhnaser
    ► Instagram: @ekhnaser
    ► Twitter: @ekhnaser
    ► LinkedIn: https://www.linkedin.com/in/eliaskhnaser/
    ► Website: www.eliaskhnaser.com

    18 min
  • Storage Architect to AI: Is Your Data Performance Fast Enough?
    Dec 18 2025

Is storage the new bottleneck in the age of AI? Elias Khnaser and Asad Khan, Senior Director of Google Cloud Storage, discuss this question in depth. While all the spotlight is on fast, expensive GPUs and TPUs, Elias and Asad go back to basics.

In the past, the CPU was never the bottleneck; slow storage was. Today, AI training and inferencing workloads require feeding high-cost GPUs/TPUs with data at unprecedented speed to keep them from sitting idle and wasting millions of dollars.

    Key Takeaways:

    ► The shift: Why high-performance storage is now mission-critical for maximizing your ROI on massive GPU clusters.
    ► How Google Cloud is solving the data performance problem by moving beyond HDDs to intelligent SSD tiering.
    ► Deep dive into Google Cloud Storage solutions for AI, including Anywhere Cache and Rapid Store, designed to automatically handle caching, prefetching, and high-performance throughput across all zones without the customer having to worry about colocation.
    ► The importance of data APIs for researchers: object storage (GCS) vs. full POSIX compliance (Lustre).
    ► The truth: The best AI performance isn't just about the fastest chip—it's the correct configuration of GPUs, storage, and networking.

    00:00:00 Intro & Guest Welcome: Asad Khan, Google Cloud Storage
    00:01:19 GCS, Lustre, & the Full Google Cloud Storage Portfolio
    00:02:00 Is Storage Dead? The GPU vs. Storage Conversation
    00:03:12 The New AI Bottleneck: Why GPUs Sit Idle (Wasting Money)
    00:06:39 From Cheap Scale to High-Performance Cloud Storage
    00:08:22 The Two Dimensions of AI Storage: SSDs & APIs
    00:10:37 Anywhere Cache: Automatic High-Performance Caching
    00:13:15 How Storage Differs for AI Training vs. Inferencing
    00:15:35 Rapid Store and Full POSIX Compliance with Lustre
    00:18:26 The True Formula for AI Performance (It's Not Just the GPU)
    00:20:39 Sony Honda Mobility Case Study: Lustre in Action
    00:23:41 Traditional vs. AI Customers: Different Storage Priorities
    00:27:07 The Future: Unlocking Insights from Unstructured Enterprise Data
    00:33:40 Final Thoughts & Key Takeaways

    Sign Up Now for my online course "The Cloud Strategy Master Class":
    ► On my web site: https://lnkd.in/gcxcrX and use promo code LINKEDIN20 to receive a 20% discount.
    ► On Udemy: https://www.udemy.com/course/cloud-strategy-master-class/

    PODCASTS: Listen wherever you get your podcasts:
    ► Let's Talk Shop: http://letstalkshop.buzzsprout.com
    ► Reality Distortion Fields RDFs Podcast: https://youtu.be/88z1UiVaV00

    Follow me:
    ► TikTok: @ekhnaser
    ► Instagram: @ekhnaser
    ► Twitter: @ekhnaser
    ► LinkedIn: https://www.linkedin.com/in/eliaskhnaser/
    ► Website: www.eliaskhnaser.com

    35 min
  • AI in the Enterprise: Real World Use Cases & Reskilling
    Dec 13 2025

The honeymoon phase of AI is over. Elias Khnaser and Pankaj Kumar, Executive Partner at IBM Consulting, discuss the practical realities of deploying Agentic AI in the enterprise, beyond simple chatbots.

    Recorded live from AWS re:Invent 2025, this episode answers the biggest question facing executives: How do we do AI?

    We dive into a real-world case study of a major gas utility company (powering Las Vegas) that is completely reimagining its workflow to address one of its biggest problems: high-bill customer calls. Discover how the solution moves far beyond automating the contact center by using a multi-pronged approach that analyzes customer usage, infrastructure data, and weather patterns.

    Key Takeaways:
    ► Why the "boring work"—data governance, cloud architecture, and security—is the mandatory foundation for successful enterprise AI deployment.
► The strategic, phased approach: How the utility customer first executed a full cloud migration (data center exit to AWS) and SAP RISE before bolting on Agentic AI.
► The technology stack: How they integrated Amazon Bedrock, LangChain, and LangGraph to create a comprehensive, agile solution.
    ► The Job Question: An honest conversation about the impact of Agentic AI on jobs. Is it mass firing, or a necessary focus on workforce reskilling and filling hard-to-fill contact center roles?

    #IBMPartner

    Sign Up Now for my online course "The Cloud Strategy Master Class":
    ► On my web site: https://lnkd.in/gcxcrX and use promo code LINKEDIN20 to receive a 20% discount.
    ► On Udemy: https://www.udemy.com/course/cloud-strategy-master-class/

    PODCASTS: Listen wherever you get your podcasts:
    ► Let's Talk Shop: http://letstalkshop.buzzsprout.com
    ► Reality Distortion Fields RDFs Podcast: https://youtu.be/88z1UiVaV00

    Follow me:
    ► TikTok: @ekhnaser
    ► Instagram: @ekhnaser
    ► Twitter: @ekhnaser
    ► LinkedIn: https://www.linkedin.com/in/eliaskhnaser/
    ► Website: www.eliaskhnaser.com

    9 min
  • From Cloud to On-Prem: Gemini, GPUs, and the AI Anywhere Vision
    Oct 23 2025

The AI revolution is here, but what about enterprises dealing with sensitive data, regulatory compliance, and low-latency requirements? They can't always move to the public cloud, but now they don't have to choose between compliance and innovation.

In this episode of Let's Talk Shop, host Elias Khnaser sits down with leaders from two technology giants: Justin Boitano, VP of Enterprise AI at NVIDIA, and Rohan Grover, Senior Director and Head of Product for Google Distributed Cloud.

    They break down the deep technical partnership that is making the "AI Anywhere" vision a reality, allowing customers to run Google's cutting-edge Gemini 2.5 models directly on-premises using NVIDIA GPU servers. Discover how this collaboration uses confidential computing on both CPUs and NVIDIA Blackwell GPUs to secure sensitive customer data and proprietary model weights, turning previously inaccessible "dark data" into a source of competitive advantage.

If you work in the public sector, finance, healthcare, oil and gas, or any enterprise with strict data sovereignty rules, this discussion of on-prem GenAI and Google Distributed Cloud's managed and customer-owned deployment models is a must-watch.

    👍 Like this video and Subscribe for more insights on cloud and enterprise tech!

    Sign Up Now for my online course "The Cloud Strategy Master Class":
    ► On my web site: https://lnkd.in/gcxcrX and use promo code LINKEDIN20 to receive a 20% discount.
    ► On Udemy: https://www.udemy.com/course/cloud-strategy-master-class/

    PODCASTS: Listen wherever you get your podcasts:
    ► Let's Talk Shop: http://letstalkshop.buzzsprout.com
    ► Reality Distortion Fields RDFs Podcast: https://youtu.be/88z1UiVaV00

    Follow me:
    ► TikTok: @ekhnaser
    ► Instagram: @ekhnaser
    ► Twitter: @ekhnaser
    ► LinkedIn: https://www.linkedin.com/in/eliaskhnaser/
    ► Website: www.eliaskhnaser.com

    55 min
  • Google Cloud’s AI Infrastructure Strategy | TPU, NVIDIA Blackwell & More
    Oct 23 2025

    In this episode, we dive deep into AI infrastructure at Google Cloud—what it means, why it matters, and how it’s evolving.

    Our guest shares insights from over 8 years at Google and previous experience as a hardware engineer at IBM, bringing a unique perspective on the nuts and bolts that power today’s AI revolution. We explore:

    ✅ The foundations of AI infrastructure—chips, networking, storage, and workload-optimized systems

✅ How Google’s custom hardware (TPUs and the Arm-based Axion CPUs) differentiates it from AWS, Microsoft, Oracle, and IBM

    ✅ The concept of the AI Hypercomputer—a reference architecture combining hardware, software, and flexible consumption models

    ✅ Key announcements from Google, including NVIDIA Blackwell, GB200, Ironwood TPUs, and Cluster Director

    ✅ Why inference (not just training) is now the hot topic—and how Google helps customers lower the cost per inference

    From hardware assembly roots to leading AI infrastructure strategy, this conversation highlights how Google builds and scales the systems behind Gemini, Vertex AI, and beyond.

    📌 If you’re curious about the future of AI infrastructure, supercomputing, and how enterprises can actually run large-scale AI workloads efficiently—this one’s for you.


    🔔 Don’t forget to like, comment, and subscribe for more in-depth discussions on technology and innovation!

    Sign Up Now for my online course "The Cloud Strategy Master Class":
    ► On my web site: https://lnkd.in/gcxcrX and use promo code LINKEDIN20 to receive a 20% discount.
    ► On Udemy: https://www.udemy.com/course/cloud-strategy-master-class/

    PODCASTS: Listen wherever you get your podcasts:
    ► Let's Talk Shop: http://letstalkshop.buzzsprout.com
    ► Reality Distortion Fields RDFs Podcast: https://youtu.be/88z1UiVaV00

    Follow me:
    ► TikTok: @ekhnaser
    ► Instagram: @ekhnaser
    ► Twitter: @ekhnaser
    ► LinkedIn: https://www.linkedin.com/in/eliaskhnaser/
    ► Website: www.eliaskhnaser.com

    40 min
  • Google Cloud WAN: A New Era for Enterprise Networking Powered by AI
    Jul 31 2025

    The world of enterprise technology is evolving, and networking is more critical than ever. In this episode of Let's Talk Shop, host Elias Khnaser sits down with Muninder Singh Sambi, the General Manager and Vice President of Google Cloud's Cloud Networking.

    Forget everything you thought you knew about networking. Muninder explains why a robust and intelligent network is the secret sauce for a successful AI strategy. They discuss Google's massive global network, including its vast subsea cable infrastructure, and the innovative new products announced at Google Next.

    What You'll Learn:

    ► The Four Pillars of an AI Strategy: Understand the essential components, from AI infrastructure to data management and, most importantly, networking.

    ► The Power of Google Cloud WAN: Discover how this new, managed backbone service can simplify and secure enterprise networking, offering a potential 40% reduction in total cost of ownership.

► Cloud WAN in Action: Learn how companies like Nestlé and Citadel Securities are leveraging Google's network to accelerate their business journeys.

    ► Openness in the Cloud: Muninder addresses the concept of multi-cloud and explains how Google Cloud WAN is designed to be an open ecosystem, allowing you to connect to applications and services wherever they are hosted.

    ► Why Google's Network is Different: Uncover the unique redundancy and reliability features, including Google's multi-shard architecture and proprietary subsea cables, that set its network apart from competitors.

    Whether you're an IT professional, a thought leader, or just curious about the future of enterprise networking, this episode will challenge your assumptions and provide valuable insights into how the cloud is shaping the future of connectivity.

    Additional Resources:

► Nestlé's network transformation with Cloud WAN:
https://www.youtube.com/watch?v=mHLlU7mjuvY

► BRK2-133: Google’s AI-powered next-gen global network: Built for the Gemini era:
https://www.youtube.com/watch?v=oZN9kUIVLOU

► BRK3-043: Best practices for designing and deploying Cross-Cloud network security:
https://www.youtube.com/watch?v=X0LQTHc1FOw


    🔔 Don’t forget to like, comment, and subscribe for more in-depth discussions on technology and innovation!

    Sign Up Now for my online course "The Cloud Strategy Master Class":
    ► On my web site: https://lnkd.in/gcxcrX and use promo code LINKEDIN20 to receive a 20% discount.
    ► On Udemy: https://www.udemy.com/course/cloud-strategy-master-class/

    PODCASTS: Listen wherever you get your podcasts:
    ► Let's Talk Shop: http://letstalkshop.buzzsprout.com
    ► Reality Distortion Fields RDFs Podcast: https://youtu.be/88z1UiVaV00

    Follow me:
    ► TikTok: @ekhnaser
    ► Instagram: @ekhnaser
    ► Twitter: @ekhnaser
    ► LinkedIn: https://www.linkedin.com/in/eliaskhnaser/
    ► Website: www.eliaskhnaser.com

    41 min
  • The Digital Backbone: How Equinix Connects Everything for Modern Enterprises
    May 27 2025

Think Equinix is just about colocation? Think again! In this insightful episode of "Let's Talk Shop," host Elias Khnaser sits down with Arun Dev, VP of Interconnection Services at Equinix, to explore how Equinix is revolutionizing digital infrastructure far beyond its traditional roots.

    We dive deep into the power of interconnection and virtual networking, revealing how enterprises like IHG are modernizing their networks to achieve incredible scale and agility across global operations. Discover how Equinix helps solve real-world challenges, from simplifying complex legacy networks to enabling seamless hybrid and multi-cloud strategies.

    Arun sheds light on:

► The true value of Equinix's global ecosystem: More than 260 data centers across 75 metros in 35 countries, and an unparalleled network of 2,000+ network providers and 3,000+ cloud and IT companies.

► What "interconnection services" truly means at Equinix: Secure, private, low-latency connectivity to financial exchanges, hyperscalers, and beyond.

► The magic of Equinix Fabric: On-demand, virtual connectivity across regions, driven by APIs and SDKs, allowing you to spin up connections in seconds and scale bandwidth on the fly.

► Real-world enterprise transformation: The IHG success story, and how virtualized networking with Equinix helped them serve 115 million mobile app users with a flawless experience.

► Equinix's role in the Age of AI: How current network limitations are driving urgency for modernization and how Equinix is uniquely positioned to handle demanding AI workloads at the edge.

► The interconnected edge: Why Equinix's global footprint makes it the ideal partner for delivering low-latency experiences, especially for use cases like in-store retail innovation.

► Complementary cloud strategies: Understanding how Equinix works with hyperscaler backbones and offers a neutral abstraction layer for seamless multi-cloud connectivity, even between competing cloud providers.

► Future of intelligent networking: Equinix's vision for AI-driven network optimization, predictive insights, and cost-saving recommendations for customers.

    🔔 Don’t forget to like, comment, and subscribe for more in-depth discussions on technology and innovation!

    Sign Up Now for my online course "The Cloud Strategy Master Class":
    ► On my web site: https://lnkd.in/gcxcrX and use promo code LINKEDIN20 to receive a 20% discount.
    ► On Udemy: https://www.udemy.com/course/cloud-strategy-master-class/

    PODCASTS: Listen wherever you get your podcasts:
    ► Let's Talk Shop: http://letstalkshop.buzzsprout.com
    ► Reality Distortion Fields RDFs Podcast: https://youtu.be/88z1UiVaV00

    Follow me:
    ► TikTok: @ekhnaser
    ► Instagram: @ekhnaser
    ► Twitter: @ekhnaser
    ► LinkedIn: https://www.linkedin.com/in/eliaskhnaser/
    ► Website: www.eliaskhnaser.com

    47 min