Episodes

  • The $3 Trillion Blue Economy! How AI & Robotics Are Unlocking the Ocean Now | Kendra MacDonald
    Feb 17 2026

    In this episode of the An Hour of Innovation podcast, Vit Lyoshin explores how the blue economy is rapidly becoming one of the most important industries of our time, powered by AI, robotics, ocean technology, and deep ocean data.

    Vit is joined by Kendra MacDonald, CEO of Canada’s Ocean Supercluster, one of the world’s leading ocean innovation ecosystems, driving commercialization, clean tech, and marine technology breakthroughs.

    They explore how the $3 trillion ocean economy has already surpassed global projections, and why AI, autonomous vessels, robotics, and advanced ocean data are transforming everything from aquaculture to climate change solutions. Kendra explains how the deep ocean remains 75% unmapped, why the ocean produces 50% of the oxygen we breathe, and how carbon removal, marine biotechnology, and ocean sustainability innovations could define the next decade. They also dive into how startups can enter this space and why the ocean is no longer “too big to fail.”

    Kendra MacDonald leads Canada’s Ocean Supercluster, an organization with over 150 funded projects and nearly 1,000 members accelerating ocean innovation across shipping, aquaculture, marine biotechnology, and clean tech. She works at the intersection of industry, technology, and sustainability, helping de-risk and scale ocean startups globally. With deep insight into autonomous vessels, AI-powered ocean data systems, and blue economy investment trends, she brings a rare economic and climate lens to the future of the deep ocean.

    Takeaways

    * The ocean economy has already reached $3 trillion, doubling in size since 2016 and outpacing projections.

    * The ocean produces 50% of the oxygen we breathe, making it critical to human survival.

    * Around 85–90% of global goods move by ship, making ocean infrastructure essential to supply chains.

    * Nearly 99% of international internet traffic runs through subsea cables on the ocean floor.

    * AI and computer vision now track fish movement without tagging, improving conservation and dam efficiency.

    * Autonomous vessels can operate 24/7 in extreme environments, reducing cost and safety risks.

    * eDNA genomics allows scientists to detect biodiversity from a single water sample.

    * Small efficiency gains in shipping routes can significantly reduce fuel use and emissions.

    * Seaweed farming can reduce methane emissions when added to livestock feed.

    Timestamps

    00:00 Introduction

    01:39 What is Canada’s Ocean Supercluster

    04:32 What Is the $3 Trillion Ocean Economy?

    08:22 Why the Ocean Economy Matters to Everyone

    10:09 AI, Robotics & Ocean Technology Breakthroughs

    14:19 Sustainable Ocean Tech & Climate Solutions

    16:28 Investment & Growth in the Blue Economy

    19:24 How Companies Can Enter Ocean Economy

    23:27 Aquaculture, Agriculture & Ocean Sustainability

    28:12 The Future of Ocean Data & Measurement

    30:57 Deep Ocean Challenges & Carbon Removal

    32:44 Startup Opportunities in Ocean Technology

    36:00 AI, Autonomous Vessels & Ocean Robotics

    39:11 The Power & Impact of the Ocean Economy

    44:09 Why the Blue Economy Is Rising

    44:20 Innovation Q&A

    Connect with Kendra

    * Website: https://oceansupercluster.ca/

    * LinkedIn: https://www.linkedin.com/in/kendra-macdonald-40b574/

    * Substack: https://substack.com/@saltwatersignals

    * Other: https://kendramacdonald.com/

    This Episode Is Supported By

    * Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH

    * Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe

    * MeetGeek: Record, transcribe, summarize, and share insights from every meeting - https://get.meetgeek.ai/yjteozr4m6ln

    For inquiries about sponsoring An Hour of Innovation, email iris@anhourofinnovation.com

    Connect with Vit

    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/

    * Substack: https://anhourofinnovation.substack.com/

    * X: https://x.com/vitlyoshin

    48 min
  • Own Your AI Agent: Security, OpenClaw, Data Ownership, and the Future of Work | Toufi Saliba
    Feb 11 2026

    What if the AI agent working for you today could quietly become a risk, or your greatest long-term advantage, depending on how well you secure and own it?

    In this episode of the An Hour of Innovation podcast, Vit Lyoshin sits down with Toufi Saliba to unpack one of the most urgent and misunderstood shifts in modern technology: AI agents with real autonomy, real agency, and real consequences for humans.

    Toufi Saliba is a seasoned AI and infrastructure leader, founder and CEO of Hypercycle, and a long-time voice in AI security, governance, and decentralized systems.

    Toufi explains what AI agents are and how they work, why tools like OpenClaw reveal serious security risks, and how giving AI full system access can expose users to data loss, manipulation, and loss of control. He breaks down the importance of AI governance, containerized AI environments, and why human agency must remain at the center as autonomous systems become more powerful. The discussion also reframes the future of work with AI agents, arguing that AI doesn’t eliminate human work, but multiplies it for those who take ownership early.

    Toufi Saliba is the CEO of Hypercycle and a vocal advocate for human agency in an AI-driven world. He has spent years working on infrastructure that allows AI agents to communicate securely without relying on centralized third parties. In this episode, his perspective matters because he frames AI not as something to fear, but as something humans must actively own, secure, and govern before that choice disappears.

    Takeaways

    * AI agents are not just tools; they have agency, meaning they can make decisions and act autonomously on a user’s behalf.

    * Giving an AI agent full system access turns it into a powerful assistant and a potential security liability.

    * A single vulnerability in an autonomous AI agent can expose emails, files, and credentials, and even allow malware to be installed.

    * Most current AI security solutions reduce risk by limiting capability, but that tradeoff may undermine AI’s real value.

    * Containerized and sandboxed AI environments are a practical way to preserve AI power while reducing attack surfaces.

    * If you don’t actively capture and secure your data, platforms and governments will do it for you by default.

    * AI governance is not about stopping AI; it’s about defining who owns, controls, and benefits from AI-generated intelligence.

    * The future of work isn’t humans vs. AI; it’s humans managing fleets of AI agents working 24/7 on their behalf.

    * The Internet of AI will create massive new wealth, but only those who own their agents will participate in it.

    * Saving more personal data isn’t the problem; saving it without security, encryption, and control is the real risk.

    Timestamps

    00:00 Introduction to OpenClaw and AI Agents

    10:33 Global Brain, Data Ownership, and Human Agency

    17:14 Mosaic Spot: AI Security for Everyone

    18:44 AI Agent Security Risks and Protection

    21:11 Human–AI Collaboration and AI Governance

    29:41 AI Wealth Creation and Ownership

    32:29 Mosaic Spot: Secure AI Interaction Layer

    35:15 Future of Work with AI Agents

    37:02 One Rule for Securing Your AI

    41:41 Innovation Q&A

    Connect with Toufi

    * Website: https://www.hypercycle.ai/

    * LinkedIn: https://www.linkedin.com/in/toufisaliba/

    * X: https://x.com/toouufii

    This Episode Is Supported By

    * Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH

    * Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe

    * Monkey Digital: Unbeatable SEO. Outrank your competitors - https://www.monkeydigital.org?ref=110260

    For inquiries about sponsoring An Hour of Innovation, email iris@anhourofinnovation.com

    Connect with Vit

    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/

    * Substack: https://anhourofinnovation.substack.com/

    * X: https://x.com/vitlyoshin

    * Podcast: https://www.anhourofinnovation.com/

    48 min
  • Why Smart Engineering Teams Fail: Alignment, Ownership, and Real Delivery | Prashanth Tondapu
    Feb 3 2026

    Why do smart engineering teams miss deadlines, struggle with alignment, and fail at real software delivery, even when everyone is talented and working hard?

    In this episode of the An Hour of Innovation podcast, Vit Lyoshin sits down with Prashanth Tondapu, the CEO of InnoStax, to unpack why intelligence alone doesn’t guarantee outcomes and how alignment, ownership, and engineering leadership are the real drivers of execution.

    They explore why Agile teams often fall into the trap of local optimization, where individuals optimize tasks but projects still fail at the system level. Prashanth explains how the tech lead role, clear ownership, and visible progress transform project management and software delivery. The episode dives into practical lessons on engineering leadership, team accountability, and why outcome ownership matters more than raw talent. You’ll also hear real examples of how startups can scale development teams without micromanaging while improving ROI.

    Prashanth Tondapu is the CEO of InnoStax, a software consulting company that works with startups and scale-ups across the US and Europe, helping engineering teams move from slow delivery to measurable results. He brings over 15 years of experience leading and observing hundreds of development teams across different industries. He is known for helping smart engineering teams fix execution gaps by focusing on alignment, clarity, and leadership instead of process-heavy rituals.

    Takeaways

    * Smart engineers often slow projects down by optimizing individual tasks instead of the whole system.

    * Alignment and clear ownership matter more than raw talent for consistent software delivery.

    * When everyone “owns” the outcome, accountability disappears, and execution suffers.

    * A dedicated tech lead acts as a system-level thinker, not just the best coder on the team.

    * Teams move faster when progress is demonstrable, not just explained in status updates.

    * Daily visible progress exposes blockers early and prevents engineers from rabbit-holing.

    * Agile rituals can hide delivery problems when they prioritize narrative over proof.

    * Developers are more likely to ask for help when transparency is built into the workflow.

    * Tech leads should reduce their own coding over time as the team becomes more effective.

    * Startup founders must delegate with checkpoints or risk becoming the execution bottleneck.

    Timestamps

    00:00 Introduction

    02:10 Why Team Alignment Matters More Than Talent

    04:14 Why Smart Engineering Teams Struggle to Deliver

    05:27 Owning Outcomes vs Task-Based Work

    06:56 The Tech Lead Role Explained

    11:23 Early Warning Signs of Failing Teams

    12:40 Daily Visible Progress for Faster Delivery

    16:52 How Daily Updates Expose Hidden Issues

    18:57 Building a Culture of Openness and Trust

    22:55 Why Teams Need a Single Tech Lead

    25:58 Avoiding Tech Lead Burnout and Micromanagement

    29:15 Startup Scaling Advice for Founders

    31:59 Ideal Team Structure for Software Delivery

    33:44 The One Thing That Guarantees Outcomes

    34:34 Innovation Q&A

    Connect with Prashanth

    * Website: https://innostax.com/

    * LinkedIn: https://www.linkedin.com/in/prashanth-tondapu/

    This Episode Is Supported By

    * Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH

    * Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe

    * MeetGeek: Record, transcribe, summarize, and share insights from every meeting - https://get.meetgeek.ai/yjteozr4m6ln

    For inquiries about sponsoring An Hour of Innovation, email iris@anhourofinnovation.com

    Connect with Vit

    * Substack: https://anhourofinnovation.substack.com/

    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/

    * X: https://x.com/vitlyoshin

    * Website: https://vitlyoshin.com/contact/

    * Podcast: https://www.anhourofinnovation.com/

    38 min
  • AI Video Analysis: How AI Is Changing Mental Health Care Between Doctor Visits | Loren Larsen
    Jan 27 2026

    Patients often hide how they’re really doing, but when AI listens between visits, the truth finally comes out, reshaping mental health care with empathy and precision.

    In this episode of the An Hour of Innovation podcast, host Vit Lyoshin sits down with Loren Larsen, founder and CEO of Videra Health, to explore how AI in healthcare is transforming behavioral health by capturing what patients actually say and feel outside the clinic, using human-in-the-loop AI to support better care decisions.

    They discuss why the most dangerous moments in mental health care often happen between doctor visits, how AI-based check-ins can surface real patient narratives, and why ethical, well-tested AI matters more than ever. The conversation breaks down the limits of score-based assessments, the risks of poorly built AI, and how technology can extend, not replace, clinical judgment. It’s a practical look at mental health technology that’s already being used in real clinical settings.

    Loren Larsen is a longtime builder at the intersection of AI, video, and human decision-making. Before founding Videra Health, he served as CTO of HireVue, deploying video AI at a massive scale. In this episode, his experience matters because he’s navigated bias, ethics, and real-world deployment, offering a grounded perspective on what responsible healthcare AI should look like today.

    Takeaways

    * The most dangerous moment in a mental health patient’s life is right after leaving inpatient care.

    * AI check-ins between visits restore visibility into patient wellbeing when clinicians cannot scale human outreach.

    * Patients often share more honestly with AI than with therapists because they feel less judged and less pressure to perform.

    * Mental health scores without narrative (like PHQ-9) miss the “why” behind patient distress.

    * AI should augment clinical judgment, not replace therapists, especially during high-risk treatment moments.

    * Generative AI is not ready to safely conduct therapy, particularly in crises.

    * Model drift can occur from unexpected factors, such as medications or cosmetic procedures, not just bad data.

    * Poorly built healthcare AI can look legitimate, making it hard for buyers to distinguish safe tools from risky ones.

    * Ethical healthcare AI requires clear consent, transparency, and human oversight, not just technical accuracy.

    * The biggest challenge in AI healthcare adoption is balancing speed, safety, and trust in a fast-moving market.

    Timestamps

    00:00 Introduction

    01:35 Videra Health Origin Story

    03:02 AI Patient Check-Ins Between Doctor Visits

    05:33 Why Human Judgment Still Matters in AI Care

    08:49 Gaps in Mental Health Patient Care

    12:07 AI vs Human Care in Mental Health

    13:23 Testing & Validating Healthcare AI Systems

    17:16 Edge Cases, Bias, and AI Model Failure

    19:29 Ethical AI in Healthcare

    23:33 Why Healthcare AI Adoption Is Hard

    25:43 Common Myths About AI in Healthcare

    30:02 Lessons from Building Video AI at Scale

    34:54 Early Warning Signs in AI Systems

    38:31 Advice for First-Time Video AI Builders

    42:05 Innovation Q&A

    Connect with Loren

    * Website: https://www.viderahealth.com/

    * LinkedIn: https://www.linkedin.com/in/loren-larsen/

    This Episode Is Supported By

    * Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH

    * Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe

    * Monkey Digital: Unbeatable SEO. Outrank your competitors - https://www.monkeydigital.org?ref=110260

    For inquiries about sponsoring An Hour of Innovation, email iris@anhourofinnovation.com

    Connect with Vit

    * Substack: https://substack.com/@vitlyoshin

    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/

    * X: https://x.com/vitlyoshin

    * Website: https://vitlyoshin.com/contact/

    * Podcast: https://www.anhourofinnovation.com/

    45 min
  • AI Isn’t the Problem! Why AI Adoption Fails at Work (95% Get Zero ROI) | Jay Kiew
    Jan 17 2026

    Most teams adopt AI, expecting a breakthrough, but end up frustrated, disappointed, and wondering what went wrong when productivity doesn’t improve.

    In this episode of the An Hour of Innovation podcast, Vit Lyoshin sits down with Jay Kiew, a globally recognized expert in organizational change and transformation, to unpack why so many AI initiatives fail to deliver value, even when the technology itself is powerful and widely available.

    They explore why AI alone does not create productivity or innovation, and why research shows that nearly 95% of companies see little to no ROI from their AI initiatives. Jay explains how broken processes, weak critical thinking, and low change readiness quietly sabotage even the best AI tools. Instead of chasing the next technology, this episode reframes AI adoption as a human and organizational challenge, one that requires mindset shifts before tools can deliver results.

    Jay Kiew is a change strategist and transformation leader who works with organizations navigating complex change at scale. He is known for helping leaders move beyond tool-driven thinking toward building adaptive, change-ready cultures. In this episode, Jay’s perspective matters because it challenges the assumption that AI failures are technical problems and shows why leadership, process discipline, and learning capability are the real differentiators.

    Takeaways

    * AI does not create productivity by itself; it only amplifies the quality of existing processes and decision-making.

    * Most AI initiatives fail not because of weak models, but because teams cannot clearly explain how their work actually gets done.

    * Research showing that 95% of companies see no AI ROI reflects organizational readiness gaps, not a lack of AI capability.

    * Poorly defined workflows become painfully visible the moment AI is introduced into a team.

    * Leaders often deploy AI as a solution before agreeing on what problem they are trying to solve.

    * Organizations that struggle with change management tend to struggle the most with AI adoption.

    * AI agents fail when humans cannot articulate rules, context, and success criteria for the work.

    * Critical thinking is becoming more valuable than technical AI skills as automation increases.

    * Change fluency, the ability to adapt continuously, is emerging as a core career skill for the next decade.

    * Teams that succeed with AI focus less on tools and more on learning, feedback loops, and behavior change.

    Timestamps

    00:00 Introduction

    01:48 Why Leaders Misunderstand AI

    03:22 How AI Reveals Organizational Dysfunction

    05:58 SOPs and Critical Thinking for AI Success

    08:41 AI Adoption and ROI Reality

    13:19 Learning and Integration Matter More Than Tools

    16:11 What AI Agents Really Are

    18:03 How AI Agents Change Roles

    22:42 Training Teams for AI Adoption

    23:59 Why Teaching AI Tools Is Hard

    25:49 Learning on the Job with AI

    28:01 Essential Skills for the AI Era

    29:03 Design Thinking and Influence

    32:16 Why Human Perception Matters

    33:17 Change Fluency as a Future Skill

    34:13 AI’s Real Impact on Productivity

    36:19 Asking Better Questions with AI

    37:55 Practical AI Use at Work

    39:38 Innovation Q&A

    Connect with Jay

    * Website: https://www.changefluency.com/

    * LinkedIn: https://www.linkedin.com/in/jaykiew-change-fluency/

    * Instagram: https://www.instagram.com/changefluency

    * Book: https://www.amazon.com/Change-Fluency-Principles-Uncertainty-Innovation/dp/1774586991

    Sponsors

    * Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH

    * Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe

    * MeetGeek: Record, transcribe, summarize, and share insights from every meeting - https://get.meetgeek.ai/yjteozr4m6ln

    Connect with Vit

    * Substack: https://substack.com/@vitlyoshin

    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/

    * X: https://x.com/vitlyoshin

    * Podcast: https://www.anhourofinnovation.com/

    45 min
  • Can AI Steal Your Book? The Alarming Plagiarism Problem! | US Publishing Expert
    Jan 10 2026

    What if your book could be copied, republished, and sold under someone else’s name, and you’d barely know it happened?

    In this episode of the An Hour of Innovation podcast, host Vit Lyoshin speaks with Julie Trelstad, a longtime publishing leader and one of the most thoughtful voices on copyright, metadata, and digital trust. Julie brings a rare insider’s view into how books are discovered, distributed, and increasingly misused in an AI-driven world.

    They explore a growing fear among writers, creators, and publishers: how AI is quietly reshaping plagiarism, authorship, and trust in the publishing ecosystem.

    They examine how AI-generated content is blurring the line between original work and imitation, why traditional copyright protections struggle in a machine-readable world, and how fake or derivative books can appear online within days. The episode breaks down the real risks authors face today, not hypothetical futures, and what structural changes may be required to protect creative work. It’s a practical, sober look at AI plagiarism.

    Julie Trelstad is a publishing executive and strategist known for her work at the intersection of technology and intellectual property. She has spent decades helping publishers, authors, and platforms navigate the identification, protection, and trust of content at scale. In this episode, her perspective matters because she explains not just that AI plagiarism is happening, but why the system makes it so hard to detect and stop, and what could actually help.

    Takeaways

    * AI can clone and resell a book in days, and most platforms struggle to reliably prove that the theft occurred.

    * AI-generated plagiarism often looks legitimate enough to fool retailers, reviewers, and buyers.

    * Authors lose sales and reputation when fake AI versions of their books appear at lower prices.

    * Traditional copyright law exists, but it was never designed for machine-scale copying and AI training.

    * There has been no machine-readable way for AI systems to recognize who owns content, until now.

    * Content fingerprinting can detect similarity across languages and paraphrased AI rewrites.

    * Time-stamped content registries can establish legal proof of who published first.

    * Most books already inside AI models were scraped without the author's consent or compensation.

    * AI lawsuits focus less on training itself and more on the use of pirated content.

    * Authors could earn micro-payments when AI systems use specific paragraphs or ideas from their work.

    Timestamps

    00:00 Introduction

    01:37 Why AI Plagiarism Is So Hard to Detect

    03:25 Amlet.ai and the Fight for Content Ownership

    05:32 How Copyright Worked Before Generative AI

    08:09 The Origin Story Behind Amlet.ai

    12:22 Building Machine-Readable Infrastructure for Copyright

    14:24 How Publishing Is Changing in the AI Era

    17:34 How Authors Can Protect Their Work with Amlet.ai

    20:38 Tools Publishers Use to Detect and Enforce Rights

    21:38 How Authors Can Monetize Content Through AI

    24:27 The Reality of AI Scraping and Plagiarism Today

    27:00 Publisher Rights, Digital Security, and Enforcement

    29:08 Evolving the Business Model for AI Licensing

    35:34 The Future of Digital Ownership and AI Rights

    38:37 Innovation Q&A

    Support This Podcast

    * To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

    Connect with Julie

    * Website: https://paperbacksandpixels.com/

    * LinkedIn: https://www.linkedin.com/in/julietrelstad/

    * Amlet AI: https://amlet.ai/

    Connect with Vit

    * Substack: https://substack.com/@vitlyoshin

    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/

    * X: https://x.com/vitlyoshin

    * Website: https://vitlyoshin.com/contact/

    * Podcast: https://www.anhourofinnovation.com/

    41 min
  • Functional Precision Medicine: How Cancer Drugs Are Tested Before Treatment | Jim Foote
    Dec 20 2025

    Cancer care still forces patients and doctors to guess! Learn how functional precision medicine is replacing that uncertainty by testing cancer drugs before treatment even begins.

    In this episode of the An Hour of Innovation podcast, host Vit Lyoshin speaks with Jim Foote, co-founder and CEO of First Ascent Biomedical, an innovator who is challenging one of the most uncomfortable truths in modern medicine: many cancer treatments are chosen without knowing if they will actually work.

    First Ascent Biomedical is a company focused on transforming personalized cancer treatment through functional precision medicine and data-driven decision support.

    In this conversation, they explore how functional precision medicine differs from traditional precision medicine and why testing drugs on patients’ live tumor cells changes everything. Jim explains how AI, robotics, and large-scale drug testing help doctors move from trial-and-error to a true test-and-treat approach. The discussion also covers the risks of ineffective or harmful treatments, the economic cost of cancer care, and what must change for this model to become part of standard oncology practice.

    Jim Foote is a former technology executive turned healthcare innovator whose work is deeply shaped by personal loss and firsthand experience with cancer care. He is best known for advancing functional precision medicine by combining genomics, live-cell drug testing, and AI-driven analysis to guide treatment decisions. His perspective matters because it connects real clinical outcomes with the technology needed to give doctors and patients clearer, faster, and more humane options.

    Takeaways

    * Cancer treatment still relies heavily on trial-and-error, even with modern medical technology.

    * Two biologically different patients often receive the same cancer treatment based on population averages.

    * Precision medicine based on DNA and RNA sequencing still cannot confirm if a drug will work before it’s given.

    * Functional precision medicine tests drugs directly on a patient’s live tumor cells before treatment begins.

    * Some FDA-approved cancer drugs can be completely ineffective or even make a patient’s cancer worse.

    * Testing drugs outside the body can prevent patients from being exposed to harmful or useless treatments.

    * AI and robotics enable hundreds of drug tests to be completed in days instead of weeks or months.

    * In a published study, 83% of refractory cancer patients did better when treatment was guided by this approach.

    * Knowing which drugs won’t work is just as important as knowing which ones will.

    * Personalized, test-and-treat cancer care has the potential to improve outcomes while reducing overall healthcare costs.

    Timestamps

    00:00 Introduction

    02:46 The Core Problem in Modern Cancer Care

    04:16 Functional Precision Medicine Explained

    06:42 How AI, Robotics, and Data Are Changing Cancer Treatment

    10:01 How Cancer Drugs Are Tested Before Treatment

    13:20 Personalized, Patient-Centric Cancer Care

    18:22 Cost, Access, and the Economics of Cancer Treatment

    22:19 The Future of Cancer Care and Patient Empowerment

    25:21 Real Patient Outcomes and Success Stories

    26:50 Why Functional Precision Medicine Is the Future

    31:18 Predicting, Detecting, and Preventing Cancer Earlier

    34:27 Where to Learn More About Functional Precision Medicine

    36:12 Transforming Healthcare Beyond Trial-and-Error

    37:27 Regulations, FDA Pathways, and Scaling Innovation

    40:09 Why Cancer Is Affecting Younger Patients

    41:17 Innovation Q&A

    Support This Podcast

    * To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

    Connect with Jim

    * Website: https://firstascentbiomedical.com/

    * LinkedIn: https://www.linkedin.com/in/jim-foote/

    * TEDx Talk: https://www.youtube.com/watch?v=CqLCgNxUhVc

    Connect with Vit

    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/

    * X: https://x.com/vitlyoshin

    * Website: https://vitlyoshin.com

    * Podcast: https://www.anhourofinnovation.com/

    46 min
  • The Future of Music Education: AI Tutors, Human Mentors, and Creativity
    Dec 13 2025

    Music education is quietly undergoing a massive shift, and most people haven’t noticed yet.

    AI tutors are no longer just tools; they’re starting to shape how musicians learn, practice, and improve. But here’s the real question: where does human creativity and mentorship still matter in an AI-driven world?

    In this episode of the An Hour of Innovation podcast, host Vit Lyoshin sits down with John von Seggern, a longtime musician, educator, and founder of Futureproof Music School, to unpack what’s actually changing, and what isn’t, in the future of music education. John has spent over a decade designing online music education programs and now works at the intersection of AI, creativity, and human mentorship.

    In this conversation, they explore how AI is personalizing music education in ways traditional schools struggle to scale. John explains how AI tutors can analyze music, guide students through complex production workflows, and surface the one or two things that matter most at each stage of learning. They also dig into why AI still falls short in mastery, taste, and creative judgment, and why human mentors remain essential. They discuss the hybrid model of AI tutors and human teachers, the future of music production learning, and what this shift means for creators trying to stay relevant in a fast-changing industry.

    John von Seggern is a musician, producer, educator, and music technologist who has worked with film composers and contributed sound design to Pixar’s WALL·E. He previously helped lead and design one of the world’s most respected electronic music programs before founding Futureproof Music School, where he’s building AI-powered, personalized music education systems. His work matters because it goes beyond hype, offering a practical, grounded view of how AI can support creativity without replacing the human elements that make music meaningful.

    Takeaways

    * AI tutors are most effective when they surface only one or two actionable fixes, not long reports that overwhelm learners.

    * Music education improves dramatically when AI can analyze your actual work (like mixes), not just answer theoretical questions.

    * The biggest limitation of AI in music is that elite, professional knowledge is often undocumented, so models can’t learn it.

    * Human mentors remain essential at advanced levels because taste, judgment, and creative intuition can’t be automated.

    * Personalized learning paths outperform one-size-fits-all programs, especially in creative and technical fields like music production.

    * Generative AI tools are fun, but most professionals prefer AI that assists the process, not tools that generate finished music.

    * AI acts best as an intelligence amplifier, helping creators move faster rather than replacing their role.

    * The future of music education isn’t AI-only, but a hybrid model where AI accelerates learning, and humans guide mastery.

    Timestamps

    00:00 Introduction

    03:02 How AI Is Transforming Music Education

    07:50 Why AI + Human Mentorship Works Better Than Music Schools

    11:43 Why Music Education Curricula Must Evolve Faster

    15:04 How AI Personalizes Music Learning for Every Student

    19:38 Building an AI-Powered Education Business

    24:22 What Students Really Say About AI Music Education

    26:20 Electronic Music vs Learning Traditional Instruments

    27:58 The Future of AI in Music and Creative Industries

    30:28 Why Artists Still Matter in AI-Generated Art

    32:21 Who Owns Music Created With AI?

    36:50 How Creators Can Survive and Thrive Using AI

    42:24 Innovation Q&A

    Support This Podcast

    * To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

    Connect with John

    * Website: https://futureproofmusicschool.com/

    * LinkedIn: https://www.linkedin.com/in/johnvon/

    Connect with Vit

    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/

    * X: https://x.com/vitlyoshin

    * Website: https://vitlyoshin.com/contact/

    * Podcast: https://www.anhourofinnovation.com/

    46 min