Episodes

  • Why ChatGPT is "Useless" for Banks (and what actually works)
    Mar 18 2026

    Can AI be trusted with our most sensitive financial data?

    Most of us use AI by typing a prompt into ChatGPT and hoping for the best. But when you’re a wealth manager or a global bank, "hoping for the best" isn't an option. You can't just upload confidential client data into a public model, and you definitely can’t afford an AI "hallucination" when millions are on the line.

    In this episode, I sit down with Suvrat Bansal, the founder and CEO of Clarista. Suvrat shares his journey from the databases of Citigroup to becoming the Chief Data Officer at UBS, and eventually launching a startup that is fundamentally changing how enterprises interact with their data.

    We dive into the "Netflix moment" that sparked the idea for Clarista, why Suvrat decided to throw away nine months of work to get the technology right, and his "zero data footprint" philosophy that allows AI to talk to data without ever copying it.

    Timestamps:
    01:16 – Welcome & introducing Suvrat Bansal
    02:07 – Suvrat’s journey: From Citigroup databases to Wall Street leadership
    04:10 – The leap from Chief Data Officer at UBS to Founder
    08:05 – Why ChatGPT doesn't work for Finance (and what Clarista does differently)
    11:10 – Data Sovereignty: The importance of "going to the data" instead of copying it
    14:32 – Why a startup should pursue patents early on
    16:40 – The hardest moment: Throwing away nine months of investment to build it right
    19:45 – Advice for aspiring founders: Don't wait for the "right moment"

    🔗 Referenced in this episode

    Columbia Business School — Where Suvrat pursued his MBA → business.columbia.edu

    🤝 Connect with Suvrat Bansal

    🌐 Clarista: clarista.ai
    💼 LinkedIn: https://linkedin.com/in/suvrat-bansal-b45651


    🎙 Connect with Sachin & 101 Talks

    🎧 Spotify: https://open.spotify.com/show/4zLUooX...
    🍎 Apple Podcasts: https://podcasts.apple.com/us/podcast...
    💼 LinkedIn – Sachin Menon: https://linkedin.com/in/sachin-menon-techsigma-technology
    🌐 Website: https://www.techsigmaglobal.com

    If this made you think differently, share it with a leader who needs to hear it. 🙏

    🎙 About 101 Talks
    Hosted by Sachin Menon, 101 Talks brings together thinkers, builders & leaders shaping the future. We don't chase trends. We chase clarity. Subscribe so you never miss a conversation that matters.

    #101Talks #SachinMenon #AIStrategy #ResponsibleAI #FutureOfWork #Leadership #Clarista #SuvratBansal #FinanceAI #EnterpriseAI #DataSovereignty #StartupFounder #Entrepreneurship #WallStreet #AIInFinance #FounderStory

    23 min
  • "She Advises CEOs on AI. Here's What They're Too Afraid to Admit." | Alice Stein's unfiltered take
    Mar 11 2026

    🎙️ AI didn't arrive quietly. It kicked the door open.

    Everyone talks about what AI can do. Very few decide how it's used. Alice Stein lives in that gap — between Silicon Valley speed and boardroom reality. She doesn't build models. She helps leaders decide what to do, and what NOT to do.

    This conversation wasn't about tools. It was about judgment. Not speed — responsibility.

    ━━━━━━━━━━━━━━━━━━━━━━━
    🎯 ABOUT ALICE STEIN
    ━━━━━━━━━━━━━━━━━━━━━━━
    Founder of Stein & Partners. 25+ years across web development, digital strategy, real estate, financial services & AI consulting. Former Compass Real Estate leader managing a $1B territory with 150 sales professionals. MIT-connected strategist. She sits at the intersection of strategy, trust, and AI.

    ━━━━━━━━━━━━━━━━━━━━━━━
    ⏱️ TIMESTAMPS
    ━━━━━━━━━━━━━━━━━━━━━━━
    00:00 — Cold Open: Stop Obsessing Over AGI
    01:02 — Intro: Between Ambition and Accountability
    01:59 — Alice's Origin: Med School Dropout to AI Strategist
    04:27 — Change Management & The Randstad Merger
    05:16 — Compass Real Estate & Leading AI Adoption
    08:26 — Immigrant Household, Two Paths & Career Pressure
    09:20 — MIT: How Innovation Becomes Contagious
    12:36 — Lifelong Learning in the Age of AI
    13:04 — What CEOs Are Actually Afraid Of
    15:36 — The Hardest Truth Alice Tells Leaders
    16:34 — Risk, Opportunity or Responsibility — What Leaders Miss
    18:48 — The AI Question Boards Should Ask But Don't
    20:04 — The Human Skill AI Will Never Replace
    22:05 — Forget AGI. Double Down on This Instead
    24:43 — Is AGI Really Coming? Alice's Honest Take
    27:41 — One Piece of Advice for Indian Boardrooms
    29:32 — The Mistake Indian Companies Must NOT Copy from Silicon Valley
    30:39 — ⚡ Rapid Fire Round
    32:04 — What Will Leadership Mean When Intelligence Becomes Cheap?

    ━━━━━━━━━━━━━━━━━━━━━━━
    🤝 CONNECT WITH ALICE STEIN
    ━━━━━━━━━━━━━━━━━━━━━━━
    🌐 Stein & Partners: https://www.alicesteinandpartners.com/
    🔗 LinkedIn: https://www.linkedin.com/in/alicestein/

    ━━━━━━━━━━━━━━━━━━━━━━━
    📡 CONNECT WITH SACHIN & 101 TALKS
    ━━━━━━━━━━━━━━━━━━━━━━━
    🎧 Spotify: https://open.spotify.com/show/4zLUooXgeNmPqDn90i558R
    🍎 Apple Podcasts: https://podcasts.apple.com/us/podcast/that-bluecoat-guy/id1874932215
    🔗 LinkedIn – Sachin Menon: https://www.linkedin.com/in/sachin-menon-techsigma-technology/
    🌐 Website: https://www.techsigmaglobal.com
    ━━━━━━━━━━━━━━━━━━━━━━━

    33 min
  • “AI Isn’t About Tools. It’s About Who You Become.” | Sash Mohapatra on the AI Shift
    Mar 9 2026

    If AI still feels optional to you, this conversation might change your mind.

    In this episode of 101 Talks, I’m joined by Sash Mohapatra —
    former Microsoft leader (20 years), founder of The Rift, and creator of Pollzy.

    Sash doesn’t talk about AI tools or hype.
    He talks about capability, mindset, systems, and upgrading your future self.

    We go deep into:
    + Why layoffs can become launchpads
    + Why capability beats information
    + How non-technical professionals can actually use AI
    + Why AI is a geopolitical shift, not just a tech trend
    + And how Sash accidentally built Pollzy using AI — in public

    This episode is especially for:
    + Professionals feeling overwhelmed by AI
    + Founders, operators, and builders
    + Anyone wondering “Where do I even start?”

    ⏱️ Timestamps

    00:00 – Why AI is not optional anymore
    00:50 – 20 years at Microsoft → walking away to build
    03:57 – Layoffs: how to face them the right way
    06:47 – Creativity, music, and engineering mindset
    10:00 – What the “AI Shift” really means
    12:46 – Separating AI signal from noise
    16:22 – Curiosity as a survival skill
    16:46 – Why Sash built Pollzy
    21:22 – “Capability beats information” explained
    23:59 – Fear, shame & resistance to AI
    27:05 – AI as a geopolitical advantage (US vs Asia)
    31:32 – Rapid fire: habits, beliefs & tools
    34:00 – “Be scared of people who are not scared of AI”
    34:33 – Final takeaway

    📚 Book Recommendation

    Sapiens – Yuval Noah Harari
    👉 Buy here: https://www.amazon.in/dp/0099590085
    (A book that deeply shaped Sash’s worldview on humanity, power, and progress.)

    🔗 Connect with the Speaker

    LinkedIn → https://www.linkedin.com/in/shasmo/
    The Rift → https://www.therift.ai/
    Pollzy → https://pollzy.co/

    🤝 Let’s Connect

    If this conversation made you think, let’s stay connected on LinkedIn:
    👉 https://www.linkedin.com/in/sachin-menon-techsigma-technology/

    35 min
  • Will AI Replace Professionals in 10 Years? | Vincent Teyssier on the Next Era of Work
    Mar 4 2026

    This Conversation Felt Different.

    Some episodes are about technology.
    Some are about careers.

    This one was about consequences.

    When I sat down with Vincent Teyssier, a technologist who started coding at 10, served in the Air Force, studied justice and ethics at Harvard, and now builds AI systems in wealth management - I knew this wouldn’t be a surface-level AI discussion.

    But I didn’t expect this level of clarity.

    We talked about:
    Why massive job displacement may happen sooner than we think
    Why most AI builders can’t truly think about consequences
    Why leadership is measured by delivery, not perception
    Why exposure matters more than curriculum in an AI-first world
    And why AI might soon become your teammate

    One line that stayed with me:

    “It’s not because you have a generative AI hammer that every problem is a nail.”

    This episode isn’t about hype.
    It’s about discipline, ethics, trade-offs, and how serious builders are actually thinking.

    Vincent doesn’t sugarcoat the future.
    He talks about job displacement, robotics, agent farms, and AI elitism — bluntly.

    But he also talks about opportunity.
    About surfing the wave of change instead of resisting it.

    If you're a:

    Student trying to stay relevant
    Founder building with AI
    Engineer transitioning into leadership
    Or professional wondering where this is headed

    This conversation will stretch how you think.

    ⏱️ Chapters

    00:40 – Coding at 10 & dropping out
    03:10 – What the Air Force teaches about limits
    05:50 – Why study justice & ethics at Harvard?
    09:30 – Do AI builders think about consequences?
    11:20 – Economic shock & AI displacement
    14:00 – The rise of generalist specialists
    17:30 – Exposure is greater than curriculum in AI
    20:00 – The hardest leadership lesson
    23:00 – AI in wealth management: hallucinations & risk
    27:50 – Guardrails, prompt injection & security
    31:20 – AI agents as team members
    33:20 – Rapid fire
    34:40 – Final reflections

    Course recommended by Vincent:

    🔗 Harvard justice course by Michael Sandel
    https://www.youtube.com/playlist?list=PL30C13C91CFFEFEA6

    About the Guest

    Vincent has worked across military systems, telco, fintech, NGOs, and private equity-backed environments.
    Today he operates at the intersection of AI, finance, and ethics — where mistakes are expensive and trust is non-negotiable.
    🔗 LinkedIn: https://www.linkedin.com/in/vincent-teyssier/
    🔗 BetterSG: https://better.sg/

    Connect with Me

    🎙 101 Talks with Sachin Menon
    🔗 Spotify: https://open.spotify.com/show/4zLUooXgeNmPqDn90i558R?si=daa1b1912f164f25
    🔗 Apple Podcasts: https://podcasts.apple.com/us/podcast/that-bluecoat-guy/id1874932215
    🔗 My LinkedIn: https://www.linkedin.com/in/sachin-menon-techsigma-technology/

    About 101 Talks

    101 Talks explores leadership, technology, and judgment.
    Because the future won’t just be shaped by intelligence.
    It will be shaped by responsibility.

    35 min
  • AI and Leadership: What Every Founder Must Know in 2026
    Feb 27 2026

    This episode is not about AI.
    It’s about something far more uncomfortable - leadership and responsibility in an age of intelligence we don’t fully control.

    In this conversation, I sit down with Dino Perone — a leader who has scaled billion-dollar revenue engines, spent over two decades inside AT&T, and is now building AI that understands human emotion.

    But this isn’t a conversation about tools, trends, or hype.

    It’s about:
    + What leadership looks like when machines influence human behavior
    + Why execution matters more than perfect strategy
    + The real risk of AI — not replacement, but over-dependence
    + And the one thing leaders must protect when everything else is changing

    We go deep into:
    + Military leadership → corporate scale → AI empathy
    + Building trust in a world filled with uncertainty
    + Why values aren’t optional in AI: they’re the foundation

    One line that stayed with me:
    “There is no perfect plan. Execution is infinitely more important.”

    If you’re a founder, operator, student, or someone trying to make sense of where AI is taking us — this conversation will challenge how you think about leadership.

    🧠 ABOUT THE SHOW

    101 Talks with Sachin Menon explores the intersection of:
    Leadership
    Technology
    Human judgment

    Because the future isn’t just built on intelligence.
    It’s built on how we choose to use it.

    ⏱️ TIMESTAMPS

    00:00 – Why AI without responsibility is dangerous
    01:00 – Leadership lessons from the military
    04:00 – Scale, patience & execution inside AT&T
    06:30 – Why AI + empathy is the next frontier
    10:40 – What AI is really changing in sales leadership
    12:30 – The line between innovation & responsibility
    14:10 – The biggest risk of emotional AI
    16:30 – Missing values in today’s AI systems
    18:10 – What defines a leader under pressure
    19:30 – Skills that matter in an AI-first world
    20:55 – Rapid fire
    22:10 – What leaders must protect in the future

    Connect with Dino Perone:
    https://www.linkedin.com/in/dinoperone-cro/

    If this conversation made you pause, question, or rethink how you see AI — that means we did our job.

    AI will keep moving fast.
    The real question is whether our thinking can keep up.

    Let me know your biggest takeaway in the comments.

    Connect with me here:
    LinkedIn: https://www.linkedin.com/in/sachin-menon-techsigma-technology/

    Subscribe for more conversations with global leaders shaping the future of AI and business.

    23 min
  • LLMs Are Not the Beginning of AI. Here’s the Truth. | Dr. Sam Li on the Real AI Journey
    Feb 20 2026

    My guest today is Dr. Sam Li, Global AI Leader, Board Advisor, and someone who was building AI long before it became the loudest word in business.

    AI didn’t begin with LLMs. It didn’t start with ChatGPT. And it definitely didn’t start with hype.

    In this conversation, we go beyond tools and trends. We talk about responsibility, leadership, adaptability, and what it really takes to build meaningful AI systems.

    In this episode, we discuss:

    • Why LLMs are not the beginning of AI
    • The evolution from traditional ML to modern agentic systems
    • The difference between a Chief AI Officer and Head of ML
    • Why AI should make life easier, not replace your thinking
    • What students should actually be learning in the AI era

    One thing that stood out to me personally was this — the hardest AI question isn’t “Can we build it?” It’s “Should we build it?”

    Dr. Sam also shares insights from working across enterprises, consulting, academia, and boardrooms — giving a rare perspective that connects code, classrooms, and C-suites.

    If you are a student, an AI professional, a founder, or someone trying to understand where this AI wave is heading — this conversation will help you zoom out and think clearly.

    Because AI will keep moving fast.

    The real question is whether our thinking can keep up.

    📘 Book Recommendation from Dr. Sam:
    The First 90 Days by Michael Watkins
    https://www.amazon.com/First-90-Days-Strategies-Expanded/dp/1422188612

    🔎 References Mentioned in the Episode:

    • Multi-Agent Systems: https://en.wikipedia.org/wiki/Multi-agent_system
    • ELIZA (early AI chatbot): https://en.wikipedia.org/wiki/ELIZA
    • Large Language Models (LLMs): https://en.wikipedia.org/wiki/Large_language_model
    • Responsible AI & Governance (OECD AI Principles): https://oecd.ai/en/ai-principles

    Timestamps
    00:00 The Journey of AI: From Labs to Boardrooms
    03:05 Early Career and the Transition to AI
    06:05 Understanding Multi-Agent Systems
    08:34 AI in the Business Context
    10:51 Shaping Decisions in AI Strategy
    13:05 Misconceptions in Boardrooms about AI
    15:47 The Role of Chief AI Officer vs. Head of ML
    19:46 Skills for the Future: What Students Should Learn
    25:11 The Importance of Adaptability in AI Development
    26:22 Understanding the Evolution of AI Models
    28:48 India's Position in AI Leadership
    31:31 Rapid Fire Insights on AI and Personal Preferences
    35:22 The Role of AI Governance in Innovation

    Connect with Dr. Sam Li:
    https://www.linkedin.com/in/dr-sam-li-weixian-844033ba

    If this conversation made you pause, question, or rethink how you see AI — that means we did our job.

    AI will keep moving fast.
    The real question is whether our thinking can keep up.

    Let me know your biggest takeaway in the comments.

    Connect with me here:
    LinkedIn: https://www.linkedin.com/in/sachin-menon-techsigma-technology/

    Subscribe for more conversations with global leaders shaping the future of AI and business.

    39 min
  • AI Won’t Replace You — But It Will Magnify You | Waqas Aliemuddin on AI, Education & the Future
    Feb 18 2026

    Waqas Aliemuddin joins me on 101 Talks for one of the most honest conversations I’ve had about AI — not the hype, not flashy demos, but the real human side of it. As the founder of AI4ALL and KAABIL, Waqas talks about building access to AI for non-technical people, why education needs a complete rethink, and why AI should augment humans instead of replacing them.

    In this conversation, Waqas Aliemuddin discusses the importance of making AI education accessible to everyone, especially those without a technical background. He emphasizes the need for practical knowledge and the integration of AI into various domains. Waqas also highlights the role of research in enhancing AI education, the cultural differences in AI adoption between Asia and the West, and the ethical considerations surrounding AI use. He concludes with insights on the future of work, the potential widening of inequality due to AI, and the importance of understanding one's value in an AI-driven economy.

    What stood out most is Waqas’ mindset as a “forever student of life.” From personal learning experiences to building platforms that convert AI fear into AI confidence, this conversation goes deep into how AI will shape careers, education, and inequality over the next decade - and what individuals can do right now to stay relevant.

    If you’re a student, early-career professional, founder, or simply curious about how AI fits into YOUR future, this one’s for you.

    📌 Key Takeaways

    + AI is an augmentation system, not a replacement.
    + The real gap today is education vs real-world application.
    + Slow learning creates depth; speed alone doesn’t.

    Timestamps:

    00:00 Introduction to AI for Everyone
    02:02 The Gap in Education
    04:07 Building AI4ALL and KAABIL
    05:03 AI4ALL: Making AI Accessible
    06:14 Integration Over Balance
    07:29 The Importance of Research
    11:02 AI Adoption in Asia vs. the West
    12:21 Missed Opportunities in AI Education
    13:52 Ethics and Responsibility in AI
    15:53 Inequality and AI
    17:09 The Future of Work with AI
    19:01 Rapid Fire Round: Insights and Reflections
    19:16 Book recommendation: Games People Play
    21:04 Asia’s biggest AI advantage
    21:32 The one question students should ask
    22:27 Final thoughts: staying human in an AI world

    📚 Book Recommendation

    📖 Games People Play — Eric Berne
    Waqas mentions this book as one that shaped his thinking in 2025. It explores human psychology, behavioral patterns, and transactional analysis.
    🔗 Official publisher page: https://www.penguinrandomhouse.com/books/12725/games-people-play-by-eric-berne-md/

    🤝 Connect with Waqas
    💼 LinkedIn: https://www.linkedin.com/in/waqasaliemuddin/

    📩 Connect With Me
    If this episode resonated with you, I’d love to hear your thoughts 👇
    💼 LinkedIn: https://www.linkedin.com/in/sachin-menon-techsigma-technology/

    23 min
  • Inside Cornell: How Ivy League Students Really Use AI | A Cornell Student’s Bold Take | Tianyi Chen
    Feb 12 2026

    What does AI look like through the eyes of someone who is actually growing up with it?

    In this episode, I sit down with Tianyi Chen, a student leader at Cornell who isn’t just using AI — she’s building communities around it.

    Tianyi is:

    • Founder of the Computational Sustainability Club
    • Executive Vice President of the Cornell Data Journal
    • Deeply involved in AI, sustainability and leadership on campus

    And here’s what impressed me — she doesn’t treat AI as a shortcut.

    She treats it as a responsibility.

    We spoke about:
    • Why students should “solve first, prompt later”
    • Why AI can accelerate sustainability instead of destroying it
    • Why leadership among smart peers isn’t about IQ — it’s about trust
    • Why she chose startup life over research
    • And why AI can define a tear… but can’t understand why you cried

    This generation isn’t waiting for policies or permission. They are building.

    And Tianyi represents that mindset.

    🔑 Key Takeaways From The Conversation

    ✔ AI should assist your thinking — not replace it
    ✔ Sustainability needs data acceleration, not just ESG buzzwords
    ✔ Leadership is about facilitation, not intellectual superiority
    ✔ Community building is invisible work before visible impact
    ✔ Interdisciplinary skills = future-proof careers
    ✔ Startup environments build adaptability faster than classrooms

    🎉 Fun Facts From The Podcast

    • Best place she thinks clearly? Library stacks — you can hear a pin drop.
    • Her AI tool of choice? Notion AI note taker.
    • Sleep schedule? 10 PM to 6 AM — non-negotiable.
    • Late-night debugging? Absolutely not.
    • Sustainability buzzword she’s tired of? ESG.
    • Most unexpected fun class? Introduction to Ancient Rome.
    • If AI were a classmate? Overachiever.

    If you care about:
    – Responsible AI
    – Sustainability beyond greenwashing
    – Leadership in high-performance environments
    – How Gen Z is actually thinking

    This episode is for you.

    Connect with Tianyi Chen:

    🔗 LinkedIn: https://www.linkedin.com/in/tchen06/

    Connect with Me:

    🔗 LinkedIn: https://www.linkedin.com/in/sachin-menon/
    🔔 Subscribe for more grounded conversations on AI, leadership & the future of work.

    This is 101 Talks — where we break big ideas into simple, honest conversations.

    See you in the next one.

    18 min