Episodes

  • What 14 Quantum Titans Revealed at GTC
    Apr 8 2025

    Deploy Your AI Agents 8x faster with LangWatch. Get a demo: https://langwatch.ai/?utm_source=louis-yt


    ► Master the most in-demand skill for building AI-powered solutions—from scratch: https://academy.towardsai.net/courses/python-for-genai?ref=1f9b29

    ► Master LLMs and Get Industry-ready Now: https://academy.towardsai.net/?ref=1f9b29

    ►Twitter: https://twitter.com/Whats_AI

    ►My Newsletter (My AI updates and news clearly explained): https://louisbouchard.substack.com/

    ►Join Our AI Discord: https://discord.gg/learnaitogether

    16 min
  • OpenAI's NEW Fine-Tuning Method Changes EVERYTHING (Reinforcement Fine-Tuning Explained)
    Mar 16 2025

    Have you ever wanted to take a language model and make it answer the way you want without needing a mountain of data?

    Well, OpenAI’s got something for us: Reinforcement Fine-Tuning, or RFT, and it changes how we customize AI models. Instead of retraining the model by feeding it examples of what we want and hoping it learns, as in the classical approach, we teach it by rewarding correct answers and penalizing wrong ones, just like training a dog, but, you know, with fewer treats and more math.

    Let’s break down reinforcement fine-tuning compared to supervised fine-tuning!

    Each has its use, and each can be summed up in one line:

    1. Supervised fine-tuning teaches the model new things it does not know yet, like a new language, which is powerful for small and less “intelligent” models.

    2. Reinforcement fine-tuning, on the other hand, steers an existing model toward what we actually want it to say. It essentially “aligns” the model to our needs, but it requires an already powerful model. This is why reasoning models are a perfect fit.
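    The reward idea behind RFT can be sketched in a few lines. This is a hedged illustration assuming a simple exact-match grader; the names below are made up and are not OpenAI's actual RFT API.

```python
# Illustrative sketch of the reward mechanism in reinforcement fine-tuning.
# `grade` and `rft_step` are hypothetical names, not OpenAI's API.

def grade(answer: str, reference: str) -> float:
    """Score the model's answer: 1.0 if correct, 0.0 otherwise.
    Real RFT graders can also assign partial credit."""
    return 1.0 if answer.strip().lower() == reference.strip().lower() else 0.0

def rft_step(sampled_answers: list[str], reference: str) -> list[float]:
    """Grade a batch of answers sampled from the model. In actual training,
    high-reward samples are reinforced and low-reward ones penalized in the
    weight update (the update itself is not shown here)."""
    return [grade(a, reference) for a in sampled_answers]

rewards = rft_step(["Paris", "Lyon", "paris"], "Paris")  # [1.0, 0.0, 1.0]
```

    The key difference from supervised fine-tuning: the model is scored on its own sampled outputs rather than shown gold examples to imitate.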

    I’ve already covered fine-tuning on the channel if you are interested in that. Today, let’s get into how RFT actually works!

    13 min
  • Learn to Code with AI Assistance
    Mar 6 2025

    ChatGPT is completely changing how we learn programming.


    Instead of getting bogged down by coding theory, even beginners can jump right into building projects from day one.


    Quite the difference compared to university!


    With tools as simple as ChatGPT, you can easily experiment with building real applications right from the start, even without understanding much.


    This hands-on approach lets you learn by doing, offering instant feedback and a way to explore coding in a practical, exciting way.


    But there's a right and a wrong way to approach this.


    Relying solely on copy-pasting code won’t make you a programmer.


    When ChatGPT gives you a code snippet—say, a script that processes data or handles user login—use it as a starting point.


    TAKE THE TIME to UNDERSTAND why the code works, experiment with modifications, and see how changes affect the outcome.
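    For example, here is the kind of snippet ChatGPT might hand you, with one small experiment to run on it. This is a hypothetical example; every name in it is made up for illustration.

```python
# A hypothetical ChatGPT-style snippet: clean a list of raw score strings.

def clean_scores(raw: list[str]) -> list[int]:
    """Parse numeric strings, dropping anything that isn't a number."""
    return [int(s) for s in raw if s.strip().isdigit()]

print(clean_scores(["10", "7", "n/a", "3"]))  # [10, 7, 3]

# Experiment: feed it "-2". It gets dropped, because isdigit() rejects
# the minus sign. Tweaking the filter and re-running teaches you *why*
# the snippet works, not just *that* it works.
```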


    True mastery comes from engaging with the code, troubleshooting errors, and making it your own.


    If you can't explain what your code does, then even if your app runs, it won't make you a better programmer or land you a good job. It also leaves you with a fragile app: one day you'll have too much code to follow what's happening, and ChatGPT will get stuck in an endless debugging loop.


    Yes, do embrace the power of AI to kickstart your projects, but keep in mind that real growth (and value) happens when you build things yourself and learn the logic behind every line.


    We've built a whole course about that principle to learn Python: https://academy.towardsai.net/courses/python-for-genai?ref=1f9b29

    17 min
  • How LLMs Will Impact Your Job (And How to Stay Ahead)
    Feb 11 2025

    Here's an overview of the impact of LLMs on human work, which is complex and varied across different job categories...

    13 min
  • The Future of AI Development: The Need for LLM Developers
    Feb 5 2025

    Software engineers vs. ML engineers vs. prompt engineers vs. LLM developers... all explained

    The rise of LLMs isn’t just about technology; it’s also about people. To unlock their full potential, we need a workforce with new skills and roles. This includes LLM Developers, who bridge the gap between software development, machine learning engineering, and prompt engineering.

    Let’s compare these roles...


    Master, Use and Build with LLMs in this Programming Language Agnostic Course: https://academy.towardsai.net/courses/8-hour-genai-primer?ref=1f9b29

    Master LLMs and Get Industry-ready Now: https://academy.towardsai.net/?ref=1f9b29

    Our ebook: https://academy.towardsai.net/courses/buildingllmsforproduction?ref=1f9b29


    Episode 2/6 of the "From Beginner to Advanced LLM Developer" course by Towards AI (linked above).


    This course is designed as a one-day bootcamp for software professionals (language agnostic) and is an efficient introduction to the generative AI field. We teach the core LLM skills and techniques together with practical tips. This will prepare you either to use LLMs via natural language or to explore the documentation for LLM platforms and frameworks in the programming language of your choice and start developing your own customised LLM projects.

    8 min
  • AI Agents vs. Workflows: How to Spot Hype from Real "Agents"?
    Feb 2 2025

    What most people call agents aren’t agents. I never really liked the term “agent” until I saw a recent article by Anthropic, which I fully agree with and which finally clarified when we can call something an agent. The vast majority of so-called agents are simply an API call to a language model. That’s it: a few lines of code and a prompt.

    Such a setup cannot act independently, make decisions, or do anything on its own. It simply replies to your users. Still, we call these agents. But this isn’t what we need. We need real agents. So what is a real agent?
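    To make the point concrete, here is a hedged sketch of what such an “agent” usually amounts to. The `fake_llm` function is a stub standing in for any chat-completion API; all names are illustrative.

```python
# What many products labeled "agents" actually are: one LLM API call
# wrapped in a prompt. `fake_llm` is a stub, not a real API.

def fake_llm(prompt: str) -> str:
    """Stand-in for a chat-completion API call."""
    return f"LLM reply to: {prompt}"

def so_called_agent(user_message: str) -> str:
    # A few lines of code and a prompt: it only replies; it cannot
    # plan, use tools, loop, or act on its own.
    prompt = "You are a helpful support agent.\nUser: " + user_message
    return fake_llm(prompt)
```

    In Anthropic’s framing, this is at best a workflow building block, not an agent: there is no loop where the model directs its own tool use.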

    That’s what we dive into in this episode...


    Links:

    Anthropic’s blog on agents: https://www.anthropic.com/research/building-effective-agents

    Anthropic’s computer use: https://www.anthropic.com/news/3-5-models-and-computer-use

    Hamel Husain’s blog post on Devin: https://www.answer.ai/posts/2025-01-08-devin.html

    12 min
  • CAG vs RAG: Which One is Right for You?
    Jan 29 2025

    In the early days of LLMs, context windows, the amount of text we can send a model, were small, often capped at just 4,000 tokens (roughly 3,000 words), making it impossible to load all the relevant context.


    This limitation gave rise to approaches like Retrieval-Augmented Generation (RAG) in 2023, which dynamically fetches the necessary context.


    As LLMs evolved to support much larger context windows, up to 100k or even millions of tokens, new approaches like Cache-Augmented Generation (CAG) began to emerge, offering a true alternative to RAG...
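    The contrast can be sketched minimally. The toy keyword retriever and all names below are illustrative, not a real implementation: production RAG uses embedding similarity search, and CAG relies on the model provider caching the long prompt.

```python
# Toy contrast between RAG and CAG context construction.

DOCS = {
    "doc1": "RAG retrieves only relevant chunks at query time.",
    "doc2": "CAG preloads the whole corpus into a large context window.",
}

def rag_context(query: str) -> str:
    """RAG: fetch only the documents matching the query (toy keyword
    match here; real systems use embedding similarity search)."""
    return "\n".join(t for t in DOCS.values() if query.lower() in t.lower())

def cag_context() -> str:
    """CAG: send the entire corpus up front and let the provider's
    prompt/KV cache reuse it across queries; no retrieval step."""
    return "\n".join(DOCS.values())
```

    In short, CAG trades retrieval-pipeline complexity for a larger, cached prompt, which only became viable once context windows grew.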



    ►Full article and references: https://www.louisbouchard.ai/cag-vs-rag/

    ►Build Your First Scalable Product with LLMs: https://academy.towardsai.net/courses/beginner-to-advanced-llm-dev?ref=1f9b29

    ►Master LLMs and Get Industry-ready Now: https://academy.towardsai.net/?ref=1f9b29

    ►Our ebook: https://academy.towardsai.net/courses/buildingllmsforproduction?ref=1f9b29

    ►Twitter: https://twitter.com/Whats_AI

    ►My Newsletter (My AI updates and news clearly explained): https://louisbouchard.substack.com/

    ►Join Our AI Discord: https://discord.gg/learnaitogether

    10 min
  • 7 Reasons Why Learning to Use LLMs Is a Game-Changer
    Jan 27 2025

    I think the first thought about LLMs and generative AI is often, “Cool tech buzzwords, but do I really need to know this?” YES. Here’s why diving into LLMs is practically essential...

    🚀 1. They transform how we work. Think about all the repetitive, boring tasks in your day. You can (almost) automate them, building tools that make you 10x more productive. That’s what LLMs can do. If you can’t, someone else can. If it’s too complex today, it will be possible soon.

    🧠 2. Reaching their full potential isn’t automatic. LLMs don’t come with a magic “win button,” even if ChatGPT by itself is fantastic. To use them effectively, you’ve got to understand what they’re good at, what they’re not, and how to make them work for you by adding features.

    ⚠️ 3. Misuse = trouble. Without the right skills, LLMs can mess up big time: wrong answers, misinformation, or just plain inefficiency. Learning how to avoid these pitfalls is critical.

    ✍️ 4. Prompts are everything. Crafting clear, precise instructions is half the battle. A well-thought-out prompt can turn mediocre results into game-changing insights. It’s just the basics of good, clear, concise communication.

    🎯 5. Knowing when to use them is key. Not every problem needs AI, but knowing where LLMs can deliver the biggest impact? That’s a game-changer. The right tool at the right time = massive efficiency gains.

    🔒 6. Privacy matters more than ever. LLMs can accidentally expose sensitive information if you’re not careful. Learning to use them responsibly isn’t optional; it’s a must. (Unless you want to be the person who accidentally leaks proprietary data.)

    ⏳ 7. Don’t get left behind. Those who embrace and learn these tools early are already gaining a competitive edge. The ones who resist? Well... let’s just say the AI train is moving fast, and you don’t want to be stuck at the station.

    I know LLMs can feel intimidating at first, but the rewards are worth it. Whether you’re a developer, a business leader, or just someone curious about the future, learning how to use these tools is a skill that’ll pay off in ways you can’t even imagine yet.

    9 min