
Humans + AI


By: Ross Dawson

About this title

Exploring and unlocking the potential of AI for individuals, organizations, and humanity
Categories: Economics, Management & Leadership, Leadership, Management
  • Michael Gebert on designing freedom, human self-determination, cognitive sovereignty, and systems of agency (AC Ep40)
    Apr 22 2026
    “Freedom no longer exists outside the systems, and it depends on the design. Coming back to the design, it’s about understanding that we need to distinguish between intelligent systems and agency.” –Dr Michael Gebert

    About Dr Michael Gebert
    Dr Michael Gebert is Chairman of the European Blockchain Association and co-founder of AI Expert Forum. He works at the intersection of artificial intelligence, digital sovereignty, and institutional responsibility. His book 2079 – Designing Freedom is just out.
    Website: 2079.life
    LinkedIn Profile: Dr Michael Gebert

    What you will learn
    • How the concept of freedom extends beyond politics and economics to personal agency in an AI-driven world
    • Why cognitive sovereignty is essential for maintaining individual responsibility and accountability as intelligent systems become more pervasive
    • The shift from making decisions ourselves to designing the frameworks and conditions for decision-making with AI involvement
    • How to distinguish optimization from true human empowerment when integrating AI tools into personal and organizational life
    • Practical routines and metacognitive strategies for individuals to retain agency when collaborating with large language models and intelligent systems
    • Why organizational leaders must prioritize cognitive sovereignty and human potential early in AI deployment, not just technical efficiency
    • Insights into the challenges and importance of embedding frameworks for freedom and cognitive sovereignty within corporate, governmental, and policy structures
    • The critical need for ambassadors of freedom within institutions to promote reflection, ongoing discussion, and the integration of responsible AI practices across all levels

    Episode Resources

    Transcript
    Ross Dawson: Michael. It is awesome to have you on the show.
    Michael Gebert: Hey, great to be on the show. Thanks for having me.
    Ross Dawson: So we connected first, probably around 15 years ago, and we were both involved in crowds, creating value from many people. And I think, you know, one of the interesting points now is, I guess, you know, we still live in a world of many people. We’re trying to create collective value. AI is laid over that. So it’s interesting to see that journey from where we’ve come to where we are today.
    Michael Gebert: Absolutely, and I really remember visually when we first had contact about this very exciting topic of crowdsourcing and empowerment of the crowd, and really making people believe, not only in themselves, but really in communities. And therefore, not only strengths in terms of crowdfunding, crowd investing, their financial gains, but also being empowered in what they do. And this is a very fundamental, I would say, even a right for humanity to reflect on and do that. I think the methodology and technology back then helped a lot. And to be honest, I’m still partly involved in some of those efforts. Even the big crowdfunding platforms, also here in Europe and in Germany, are vital and really active. Of course, not in that dramatic media shift hype that we experienced, but they’re still there, and it proves that it’s a concept that should stay.
    Ross Dawson: Yep, absolutely. You know, there’s obviously collective intelligence, amongst other facets. But this goes to, I think, the frame of your new book, 2079, Designing Freedom. So freedom is an interesting word, and something which I hope we all aspire to.
    Michael Gebert: Yeah, you know, freedom, of course, is one of those very multifaceted words, right? It could be translated in a political context. It could be translated in an economic concept, meaning monetary-wise. It could be translated—and this is my translation—in a very personal, one-to-one reflection about how do I as a human being see myself in that surrounding, bombarded not only by information but by intelligent systems, basically AI as we describe them, and all that is behind those systems.
    Ross Dawson: So there’s a few things I want to dig into here. And I guess there’s another word there: designing. Obviously, at a societal infrastructure layer, we want to be able to design the systems whereby we can all individually have that freedom of choice in how we live our lives.
    Michael Gebert: Yeah, and not always, I would say, looking at the world geopolitically, of course, there is sometimes no choice. And if you are able to generate those choices, first of all by understanding how to design them, that’s a very good first step. So when I wrote the book, the prior part was basically a research paper I did, a small research paper also on ResearchGate. This is the foundation where I started thinking and reflecting. Basically, the core there is about a question that I think is becoming unavoidable now and for the future. The question is: if more and more cognition or judgment and action are delegated to intelligent systems, what has to be true for human beings in order to remain genuinely free? So the ...
    38 min
  • Marshall Kirkpatrick on cognitive levers, combinatorial possibilities, symphonic thinking, and compound learning (AC Ep39)
    Apr 8 2026
    “The technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people.” –Marshall Kirkpatrick

    About Marshall Kirkpatrick
    Marshall Kirkpatrick is founder of the sustainability consultancy Earth Catalyst and the AI thinking tool What’s Up With That. His many previous roles include founder of the influence network analysis tool Little Bird, which was acquired by Sprinklr, where he was last Vice President of Market Research.
    Website: whatsupwiththat.app
    LinkedIn Profile: Marshall Kirkpatrick

    What you will learn
    • How generative AI transforms cognitive tools and lowers barriers to advanced thinking
    • Techniques to combine human and AI-powered sensemaking for richer insights
    • Practical strategies for filtering and extracting value from infinite information
    • The importance and application of diverse mental models in modern decision-making
    • Methods to balance manual cognitive work with AI assistance for optimal outcomes
    • The role of adaptive interfaces in enhancing individual cognitive capacity
    • Metacognitive approaches to networks and how AI can foster organizational awareness
    • Ethical and societal implications of democratizing access to AI-powered cognitive enhancements

    Episode Resources

    Transcript
    Ross Dawson: Marshall, it is awesome to have you back on the show.
    Marshall Kirkpatrick: Oh, thank you, Ross. It’s such a pleasure to be reconnecting with you here. Thanks for having me on.
    Ross Dawson: So you were on very, very early in the podcast, when it was Thriving on Overload and the interviews fed into the book, and some of the wonderful things you were doing got incorporated into Thriving on Overload. So I think today, in this world of generative AI, which has transformed everything, including the way in which we think, the Thriving on Overload themes are still super, super relevant, and in a way, we need to be talking about them more. That theme at the time was finite cognition, infinite information. How do we work well with it? I don’t know if our cognition has become more finite, but the information has become more infinite, and there’s just more and more. But also, it cuts two ways, as in, what is the source of all the information? AI is also a tool. So anyway, let’s segue from some of your cognitive thinking tools, technology-enabled cognitive thinking tools and so on, which we looked at. So where are we? 2026, what do you think about human cognition in our current universe?
    Marshall Kirkpatrick: Well, especially when you frame it up in Thriving on Overload terms. I mean, it was four, five long years ago that we last spoke, and the book that came out of it was just fantastic. I think it has some timeless qualities, and I think that the technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people. That’s what I hope. I think that, yeah, between individuals and organizations, there’s so much that, historically, someone like you or me or the people closest in our networks were willing and able to do and excited to do, that many other people said, “That sounds like a lot of work.” The bar is lower now, because a lot of just the raw cognitive processing can be outsourced into a technology that serves as a lever.
    Ross Dawson: Well, I mean, that idea of levers for these cognitive tools is interesting. I guess, the very crude way of saying it is, we’ve got inputs into our human brain, and then we are processing information. I’m just thinking out loud a bit here, but it’s like, okay, we have tools to be able to filter, to present, to find what is most relevant, to present it to us in the ways which are most useful—very obvious, like summarization, visualization. Then as we are processing it ourselves, we have dialog, or we can have interlocutors who we can engage with and be able to refine and help our thinking. Does that sort of make sense, or how would you flesh that out?
    Marshall Kirkpatrick: Yeah, I mean, when you put it that way, it makes me think about Harold Jarche and his Seek, Sense, Share model, right? I think that AI, especially when connected to things like search and syndication and other traditional technologies, can impact all three of those stages. It can hypercharge our search. I think the archetypal example of that, on some level, feels like the combinatorial drug research being done, where just an otherwise cognitively uncontainable quantity of combinatorial possibilities between molecules can be sought out and experimented with for a desirable reaction. And then that sensing, or the pattern recognition that AI is so good at, is something that we do as humans—some of us better than others—and it’s a lifelong muscle to build and what have you. But the AI is really, really good at it, and so it’s a ladder to climb up in some of that sensing....
    40 min
  • Nina Begus on artificial humanities, AI archetypes, limiting and productive metaphors, and human extension (AC Ep38)
    Apr 1 2026
    “Fiction has this unprecedented power in tech spaces. The more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer.” –Nina Begus

    About Nina Begus
    Nina Begus is a researcher at the University of California, Berkeley, leading a research group on artificial humanities, and the founder of InterpretAI. She is author of Artificial Humanities: A Fictional Perspective on Language in AI, which received an Artificiality Institute Award, and First Encounters with AI.
    Website: ninabegus.com
    LinkedIn Profile: Nina Begus
    Book: Artificial Humanities

    What you will learn
    • How ancient myths and archetypes influence our understanding and design of AI
    • Why the humanities—literature, philosophy, and the arts—are crucial for developing more thoughtful and innovative AI systems
    • The dangers of limiting AI concepts to human-centered metaphors and the need for new, more expansive imaginaries
    • How metaphors shape our interactions with AI products and the user experiences companies choose to enable
    • The challenges and possibilities of imagining forms of machine intelligence and language beyond human templates
    • Why collaboration between technical experts and humanists opens new frontiers for creativity and responsible technology
    • What makes writing and artistic creation uniquely human, and how AI amplifies—not replaces—these impulses
    • Practical ways artists, engineers, and thinkers can work together to explore new relationships and futures with AI

    Episode Resources

    Transcript
    Ross Dawson: Nina, it is wonderful to have you on the show.
    Nina Begus: Thank you for having me.
    Ross Dawson: You’ve written this very interesting book, Artificial Humanities, and I think there’s a lot to dig into. But what does that mean? What do you mean by artificial humanities?
    Nina Begus: Well, this was really a new framework that I’ve developed while I was working on the relationship between AI and fiction, and I started working on this about 15 years ago when I realized that fiction has this unprecedented power in tech spaces. So this is how it all started, but then the more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer in this collaborative, generative approach that I’ve developed. I would say that now, as the field stands, it’s really a way to explore and demonstrate how humanities—as broad as science and technology studies, literary studies, film, philosophy, rhetoric, history of technology—how all of these fields can help us address the most pressing issues in AI development and use. And it’s been important to me that this approach uses traditional humanistic methods, theory, conceptual work, history, ethical approaches, but also that it’s collaborative and exploratory and experimental in this way that you can look back into the past and at the present to make a more informed choice about the future. You can speculate about different possibilities with it.
    Ross Dawson: Well, art is an expression of the human psyche, or even more, it is the fullest expression of humanity, and that’s what art tries to do. Also, I’m a deep believer in archetypes, human archetypes, and things which are intrinsic to who we are, and that’s something which you can only really uncover through the arts. Now we have arguably seen all these archetypes play out in real time, these modern myths being created right now in the stories being told of how AI is being created. So I think it’s extraordinarily relevant to look back at how we have depicted machines through our history and our relationship to them.
    Nina Begus: Yes, this is the reason why I started exploring this topic, actually, because there were so many ancient myths, these archetypal narratives that I’ve seen at the same time, both in technological products that were coming to the market and in the way technologists were thinking about it, and also in fictional products and films and novels in the way we imagined AI. I framed my book around the Pygmalion myth, but there are many, many other myths—Prometheus, Narcissus, the Big Brother narrative, and so on—that are very much doing work in the AI space. The reason why I chose the Pygmalion myth is because it’s so bizarre in many ways: you have this myth where a man creates an artificial woman, and then in the process of creation, falls in love with her. So there’s the creation of the human-like, and there’s also this relationality with the human-like. You would think this would not be a common myth, but quite the opposite—I found it everywhere I looked. It wasn’t called the Pygmalion myth, but the motif was there. I found it on the Silk Road, in ancient folk tales, in Native American folk tales, North Africa, and so on. So I think this kind of story is actually telling us a lot about how humans are not rational, how we have some ...
    35 min
No reviews yet