
The AI Optimist

Business & Economics Podcasts

The AI Optimist cuts through the usual AI noise, connecting creators and tech to work together instead of fighting. I'm a Creative AI Strategist who helps creators and businesses co-create with AI, turning humble machines into powerful creative partners. As a subscriber, you will immediately receive The Creator's AI Licensing INTEL. www.theaioptimist.com

Location:

United States


Language:

English

Contact:

(530) 235-6331


Episodes

Creative Machines and Human Creativity: Building AI that Makes Us More Creative Instead of Replacing Us

10/10/2025
Seeing that tall black and brown piano in the background before our interview, I sense the tradition of human creativity meeting AI. This is about us.

When Maya Ackerman's family immigrated to Canada, her piano stayed behind in Israel. That instrument had been more than wood and keys. It's where emotions melt into music, into a feeling, processing change with simple sounds arising from deep wells of experience. The piano was, and is, her creative partner, even when it wasn't there.

Don't Give Up Your Piano: A Conversation About Creative Machines That Serve, Not Replace

Now, as a professor at Santa Clara University and CEO of WaveAI, Ackerman sees us at risk of losing something far bigger: our collective creative piano. Not to AI itself, but to fear of what AI might become. Her new book Creative Machines: AI, Art & Us launches with a message that cuts through the replacement anxiety: "AI has always been, and will always be, all about us."

Ackerman spent years in foundational machine learning before a talk by artist Harold Cohen changed everything. She switched to computational creativity, that unpopular intersection where machines meet human expression. She's built AI tools for musicians. She understands both the technical architecture and the artistic soul.

What emerges from our conversation isn't just about Creative Machines or AI technology. It's about us, the creative spark in people. Will we surrender our creativity because AI machines seem capable? Or will we build humble creative machines that expand human expression? Let's walk through what that choice means.

1: The Piano as Lifeline: Why Creativity Matters Now

When I ask what the piano means to her, Ackerman's voice wavers as she describes losing her piano for the first time: "I think creativity was a lifeline for me in a way. Through all this moving around the world... at the piano, my feelings would pour out of me, and I would sort of get to process things that otherwise would just sort of sit dormant and fester inside of me."

That processing matters. Creative expression is how we make sense of displacement, trauma, change. It's how we stay human through upheaval. And it's how we connect through art to other stories, experiences, history, and fear. Creativity is our lifeline, arising out of the depths of human experience.

That makes this moment in history unusually dangerous: "Now we are at a time in history where people wonder if they should even bother to be creative. Good people. People who are just afraid of what's going on in the world. And I don't want the whole planet to lose the piano, so to speak, the way that I did."

The fear is real. I see it in creators who message me, asking if learning creative skills still matters when AI can generate images, write copy, compose music. The replacement narrative sinks deep. Call it AI Imposter Syndrome (AIS): we feel like imposters compared to AI, but know deep down AI is generating a ton of slop.

Ackerman offers a different frame: "The age of AI doesn't have to be about taking away creativity for us, it can be the opposite. It can be about making us more creative, giving us more power... It's so important that we don't hang up our hands because we're scared, right?"

This isn't naive optimism. It's foundational clarity about what we're really building. Intention matters, now more than ever.

2: Harold Cohen's Scream: Where Does Creativity Live?

Over 10 years ago, Ackerman sat in the back of a conference room, disappointed with her choice to study machine learning, inspired instead by music and singing. She didn't know what to do with her life. Then Harold Cohen took the stage. The pioneering artist behind AARON, one of the earliest creative AI systems, flashed beautiful images on screen. Maya remembers him screaming: "This old Jewish man screaming on stage. 'I was the only voice of reason, saying that I was the creative one.' That's what he's screaming. How other...

Duration: 00:23:24


Breaking the $4/Min Barrier: How AI Pays $120 for Raw Video and $30 for Another?

10/3/2025
When Hollywood's Catalog Isn't Enough and Might Need AI Licensing

Lionsgate thought they had this figured out. The studio that owns John Wick, Twilight, and The Hunger Games partnered with Runway AI in 2025 to build custom video models. The vision? Type "anime version of John Wick" and watch AI generate it from their catalog. That was around June 2025. Last week, the experiment quietly closed.

The problem wasn't incompetence, it was scale. Sources told The Wrap that "the Lionsgate catalog is too small to create a model." Even Disney's catalog was considered insufficient. Let's do the math: 8,000 movies at roughly 2 hours each equals 16,000 hours. Add 9,000 other titles averaging 1 hour, and you're at maybe 25,000 hours total. Double that generously to 50,000 hours. Still not enough.

AI companies are running out of training data after burning through the entire internet. Video, real, diverse, messy human video, has become a bottleneck. While Lionsgate struggled with insufficient data, one Troveo client was reportedly in the market for 50,000 hours of dog videos because their AI-generated dogs kept coming out with cat bodies. That's not a business model. That's market unpredictability.

And it's also a signal that unused footage sitting on your hard drive might have value you haven't considered. Not as content for views or sponsorships, but as possibly valuable data for machines learning to understand our world.

Questions to ask yourself:
* How much unused footage do you have archived?
* What categories does it fall into: nature, urban environments, specialized activities?
* Do you own all rights, or are there B-roll clips, music, or people who'd need to sign off?

The Current Market Reality: What We Know

Let's separate signal from speculation. Troveo, a video licensing platform connecting creators with AI companies, claims $20M in total revenue with $5M paid to creators. I use $1-4 per minute as a range for this episode. My reasoning: Troveo sits at the lower end of video AI licensing, usually $1-3 a minute. Larger companies like Protégé are likely also getting paid; we don't know how much, but my assumption is the amount is higher, likely much higher. So I add $1 on the low end of pricing, and I urge you to look at going beyond $4 a minute: a tougher but sounder business than the wholesale $1-2 market. And it may just be what it is, a small market.

Troveo is one of the few companies publishing numbers instead of hiding behind NDAs. That transparency matters. It also means we're looking at early-market indicators, not established rates. Here's what the pricing tiers appear to reflect:

$1-2/minute (Standard Footage):
* Talking heads
* Predictable motion
* Common scenarios
* Already-seen angles

$3-4/minute (Premium/Edge Cases):
* Rare weather phenomena
* Unusual wildlife behavior
* Technical processes under stress
* Unique temporal transitions

The Tesla framework helps explain this distinction, not because Tesla is licensing video, but because they've quantified what makes training data valuable.
* Highway driving footage is standard.
* A deer crossing during a snowstorm at night is premium.
* It's not about monetary pricing; it's about learning density.

Most Tesla footage comes from user cars, with operational costs built into the product, not per-minute purchases. But their internal categorization reveals something useful: edge cases, rarities, and uniqueness teach AI systems more than repetitive standard scenarios.

The break-even reality check: look at the market knowing most of the business today is $1-2 per minute. The $4/min barrier marks the threshold where this becomes a legitimate side revenue stream. Below it, you're liquidating existing assets at thin margins. Above it, you're potentially building a sustainable side business. And this is a one-time payment market. You're not building recurring revenue. You're selling training data that will likely be...
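The per-minute arithmetic above is easy to sanity-check. A minimal sketch, using the $1-4/minute tiers from the episode; the 100-hour archive is a made-up example, not a real catalog:

```python
def licensing_revenue(hours_of_footage, rate_per_minute):
    """One-time payout for licensing raw footage at a flat per-minute rate."""
    return hours_of_footage * 60 * rate_per_minute

# A hypothetical 100-hour archive at the tier boundaries discussed above:
print(licensing_revenue(100, 1))  # 6000  - standard floor ($1/min)
print(licensing_revenue(100, 2))  # 12000 - standard ceiling ($2/min)
print(licensing_revenue(100, 4))  # 24000 - the premium "barrier" ($4/min)
```

The spread is the whole argument: the same archive is worth four times as much at the premium tier, which is why crossing $4/min separates liquidating assets from running a side business.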

Duration: 00:16:34


AI Pays Authors $3,000 Per Book?! 2025 Licensing Rates for Writers & Photographers

9/19/2025
Time for creators to be recognized and paid by AI, finally! Those days of AI stealing content for free and without permission just crashed in a quiet, proposed settlement. Call it AI licensing rates 2025. Creators are being given choice and control over how their work is used by AI, and may even get paid through licensing!

After years of big tech telling creators their work has no value, something shifted. Anthropic just proposed paying $1.5 billion to settle copyright claims, roughly $3,000 per book. This is not the net amount received by individual authors, as it will be reduced by administrative costs and divided among rightsholders if multiple parties are involved. For the first time, a judge is recognizing that human creativity has measurable worth in AI training. Not just bestselling authors. Regular creators like you.

Subscribers get the complete Creator's AI Licensing INTEL: 13 pages with a pricing simulation for books, photos, and art.

This isn't some distant future promise. This is happening now, and it's opening doors that have been slammed shut since AI started scraping content without permission or payment.

The Evidence Shows: AI Companies Are Finally Paying AI Licensing Rates in 2025

Here's what most people missed in the headlines. Anthropic didn't just throw money at a legal problem. They established something unprecedented: a baseline value for creative work in AI training. HarperCollins negotiated deals worth $2,500-$5,000 per book with major AI companies. These aren't charity payments; they're business investments in quality training data.

Why the sudden change? Because AI companies discovered what creators always knew: garbage in, garbage out. They need your expertise, your unique perspective, your carefully crafted content to build better AI systems. The wild west era of free content scraping is ending. The licensing era is beginning.

What you can do now:
* Document what creative work you own completely
* Start thinking about your content's unique value
* Don't wait for perfect information; early participants often secure better rates

What $3,000 Per Book Means for You Right Now: Author's AI Revenue

That $3,000 figure isn't a lottery ticket; it's validation. A federal judge essentially agreed that copyrighted creative work has quantifiable value in AI training. Even if you never see a licensing check, this changes everything. You now have a legal settlement that says your work isn't just "training material"; it's valuable intellectual property. This gives you choices you didn't have before:
* License your work and get paid
* Opt out entirely and protect your content
* Control how AI systems learn from your creativity

The key word here is control. For years, creators watched their work get absorbed into AI systems without consent or compensation. Now there's a path to actual choice. But here's the reality check: this applies to work you own the copyright to. If you don't have clear legal ownership, licensing becomes nearly impossible. AI companies need defensible rights to avoid future lawsuits.

Your next moves:
* Check your copyright status on existing work
* Register copyrights for valuable content (it's easier than you think)
* Understand that timing matters; prepare now for licensing opportunities in 2026

What's Your Creative Work Actually Worth to AI?

Not all content gets valued equally. After researching this market for over a year, certain patterns emerge that determine what AI companies will pay for. Nonfiction typically commands higher rates than fiction. Why? It's factual, less dependent on storytelling brilliance, and provides reliable training data. A well-researched business book or technical guide offers more consistent value than a novel, unless that novel is awesome. And that's up to the reader!
* Sales history matters. If your book sold thousands of copies, that's market validation AI companies understand. It proves real people found value in your work.
* Uniqueness...
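The settlement arithmetic above works out roughly like this. A sketch only: the 10-30% administrative-cost range is an illustrative assumption, not a figure from the settlement:

```python
settlement_total = 1_500_000_000          # Anthropic's proposed settlement
gross_per_book = 3_000                    # the headline per-book figure

# Implied number of covered works at that headline rate:
implied_books = settlement_total // gross_per_book
print(implied_books)                      # 500000

# Net per book after hypothetical administrative costs (assumed, for scale):
for admin_cut in (0.10, 0.20, 0.30):
    print(round(gross_per_book * (1 - admin_cut)))  # 2700, 2400, 2100
```

The point of the sketch is the order of magnitude: half a million works, and a per-author check that shrinks further if rights are split among multiple parties.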

Duration: 00:19:48


The Beginning is Near - The AI Bubble Finally Burst and that's the Best Thing!

9/5/2025
The AI Bubble Finally Burst - And It's the Best Thing That Ever Happened

Introduction: When the Hot Air Finally Escaped

Picture this: TechCrunch Disrupt 2024, and the first sign I saw was "Stop Hiring Humans." Who exactly is going to adopt AI with that message? If you've been following the AI hype train, you've probably noticed something shifting lately. The algorithms are suddenly filled with Sam Altman fans quoting him saying "yeah, I guess it's over." The AI bubble has finally popped. ChatGPT-5 came out, and honestly, the AGI reasoning promise isn't even real. The pipe dream is gone, and for you and me, this is the very best part.

I wrote about this back in July 2024. "The AI Bubble Burst" became one of my most popular episodes because people understand that the hype and hot air are getting in the way of creativity. The whole message was that AI was going to take your job. Sign me up for that motivation, right? We've been living in some male-engineered science fiction fantasy that AI is just going to go all Terminator on us. Do you wonder why adoption is slow worldwide? Why people aren't paying for it? It's because we've been sold fear instead of partnership.

But here's the thing: this crash brings AI down from the ivory tower billionaire pitch fest. It shows us how we can work WITH AI as a tool, how it can actually work with you, not against you. As my guest Maya Ackerman said last week, it's about being co-creative with it.

The AI Bubble Bursts: A History of Hot Air and Broken Promises

"A lot of the people saying AI first really don't have a high opinion of human beings. And I'm not talking about their visions. I'm talking about yours."

Let's take a trip through the greatest hits of AI prediction failures. This isn't about being negative; it's about recognizing patterns so we can build something real. Back in 2015, Elon Musk predicted driverless cars within a few years. Then 2019. Then 2021. We're in 2025 now. It's coming, but not as fast as predicted. Geoffrey Hinton, the so-called Godfather of AI, said radiologists would be gone in a few years; that was 2017. I checked recently. They still have jobs. Mark Zuckerberg pitched the metaverse for years and lost an estimated $45 billion because it didn't work. Listen to Satya Nadella talking about the metaverse; it sounds exactly like current AI pitches, just with different buzzwords. They literally took out "metaverse" and put in "AI."

The pattern is clear: we get sold on revolutionary transformation, but reality moves at its own pace. The difference between hype and progress isn't just timing; it's approach.

AI Myths Revisited: Why This Time Feels Different

We've heard this song before. Every major tech shift brings promises of instant transformation. But AI feels different because it touches creativity, thinking, and decision-making, the stuff we thought was uniquely human. The myths we've been sold include AI doing "everything for everybody" instead of focused tasks. We've been told to create guardrails instead of limiting the overwhelming amount of stuff we're expecting it to do. The goal seems to be "replace everybody" and "stop hiring humans." But people are working with AI secretly, like it's some scarlet letter. Don't say it's AI. It's become this weird, detached thing when it doesn't need to be.

Key Points:
* Pattern recognition shows AI hype follows historical tech prediction failures
* Current messaging focuses on replacement rather than partnership
* Real adoption happens quietly, person by person, project by project

Trickle Down AI: Why Top-Down Implementation Fails

"Time after time again, C-suite executives sit there in their little meetings, separate from the employees, in this hierarchical 'I'm up here, you're down there' job model. Telling them to use AI without considering that, to those working for them, AI means replacement."

Here's what happens in many companies: executives decide they need AI. They hand it down to their...

Duration: 00:19:12


Humble Creative Machines: Creating AI to Elevate You (Instead of Replacing You)

8/22/2025
As AI hype dissolves into mainstream cringe, it's easy to forget that the gems are found below the surface jargon and PR blather. Like Maya Ackerman, founder and CEO of WaveAI since 2017, helping people write lyrics and make music without breaking the rules. Or stealing others' music!

Creating emotional connections, inspiring discovery, AI designed to release that creative feeling, focusing on human creativity instead of the AI roulette wheel:

"We should be thinking, how can we build technology? How can we build an LLM or a different kind of model designed to elevate the human spirit? And there are ways to do this. We just need to expand our imagination." - Maya Ackerman

Over a million songs have been created with WaveAI since 2018. Including a few big hits and several quiet relationships with major players who face backlash for exploring an AI future that empowers musicians. Maya Ackerman built her company to work completely differently than what investors wanted or what her competitors, Udio and Suno, deliver.

AI Tools That Make You Better

While most generative AI tools function like "rotating roulette wheels of content" (spin the wheel, get random output), Maya delivers on a different vision. As an opera singer and musician herself, she experienced firsthand how AI can elevate her own creative process rather than replace it. "My songwriting abilities blew up," she told me about her first encounter with an early AI prototype. "I went from being a certain level in songwriting to being three times better instantly."

That personal breakthrough led to WaveAI, a platform helping both skilled and beginner musicians craft songs and lyrics. Easy to use and affordable. And the technology isn't built to be bought out or go IPO. It's built to elevate creative skill in humanity, in people. Eight years after starting, Maya is still aligned with this vision, even when it means moving away from what the investment world wants.

How Does AI Elevate Creativity: Who's Really in Control?

Maya laughs when framing what's wrong with most AI tools today: "If ChatGPT is a god when you use ChatGPT, then when you use Lyric Studio, you are the god." Think about how you interact with most AI. You prompt, it produces. You ask, it answers. The AI is the oracle, you're the follower. And Maya knows this approach fundamentally disempowers users.

Curtiss King, who had a No. 1 hit on iTunes with an album made with LyricStudio, said that it's like lightning in a bottle: "It enables you to capture this kind of inspiring moment, but also stay in 'flow,' to stay solidly in this creative space. You can get out this inspiration onto paper or onto your monitor before it fizzles away." - Curtiss King

Maya's insight from her upcoming book Creative Machines cuts to the heart of it: "I now see that AI has always been - and will always be - all about us." This isn't just philosophy; it's a completely different business strategy. While companies like Suno and Udio pitched investors on a "human-free Spotify," Maya forged her own path. "The reason that you see Suno and Udio make it so simple and so efficient is because that's what investors wanted. They wanted a replacement model," she explains. She respects their tech, but the AI divide is clear. When you build AI to replace people from day one, that intent gets baked into everything: the algorithms, the interface, the user experience. When you build to elevate people, that's a fundamentally different architecture. Results come from intention as much as from doing the work. She's building a creative force that helps musicians grow, sometimes to the point where they don't even need her tools anymore... which is awesome!

Breaking the Average: AI That Adapts to Your Style

Most AI systems optimize for the most likely outcome; they head toward the average, the safe choice. "We don't take it towards the mean. We don't give you the most likely outcome because we're trying to help you be creative." Instead of...
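The "don't head toward the mean" idea maps onto a well-known sampling technique. A toy sketch: the word list and probabilities are invented for illustration, and this is generic temperature sampling, not WaveAI's actual method:

```python
import random

# Toy next-word distribution (illustrative numbers, not any real model's output).
dist = {"love": 0.50, "heart": 0.30, "satellite": 0.15, "kaleidoscope": 0.05}

def greedy(d):
    """Always return the most likely word: the mean, the safe choice."""
    return max(d, key=d.get)

def sample(d, temperature=1.5):
    """Flatten the distribution so rarer, more surprising words come up more often."""
    weights = {w: p ** (1.0 / temperature) for w, p in d.items()}
    total = sum(weights.values())
    r, acc = random.random() * total, 0.0
    for w, wt in weights.items():
        acc += wt
        if r <= acc:
            return w
    return w  # floating-point fallback

print(greedy(dist))   # always "love"
print(sample(dist))   # any of the four words; the rare ones show up far more often
```

A greedy system keeps handing you "love"; raising the temperature is one simple way a tool can nudge a writer toward "kaleidoscope" territory instead of the average.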

Duration: 00:21:06


Zero Sum AI Game? Everyone Loses When Artists vs Engineers Make AI Dumber

8/15/2025
When Heart Meets Code: Rediscovering Creativity in the Age of AI

Explain AI to me like I'm a 5-year-old, she said. I was doing the research, looking at different AI hot takes from all kinds of sources with differing degrees of credibility. From data engineers endlessly defending their right to grab data like all the others, because in their bubble it's all okay. And from many artists who swear that the only way to beat AI is to stop it, that it's all cheating and liars and doesn't understand human value. That gets a head nod, but there's nowhere to go next, nothing to do; it's just "stop this" and scream loudly so it will go away.

Artists draw on the suffering stories of Marcel Proust or Kafka or Van Gogh, playing on the hope that someone will notice their work when they're dead. Then they make decisions driven by single-issue neurosis and a knack for saying nothing in many words: scientific jargon from the AI side, emotional dumps from the "I never get respect" artists. (I was a member of that club for a long time pre-AI, and the entry fee is acknowledging that creativity is suffering, suffering is art. Luckily a friend taught me that the purpose of art is to beautify the artist, because if anyone, or many ones, grab onto your art, it's rare. So might as well heal yourself.)

Artists burn time fighting a technology that's already here. Engineers build half-smart AI on cheap data, hyper-radicalized by AI social algos that favor anger and finger-pointing, while ignoring the human creativity that makes it valuable. And while both sides play this zero-sum game, mediocrity wins. You're not fighting AI or creativity. You're fighting each other.

Today, I'm going to show you what happens when we stop pretending this is a war and start treating it like what it is: the biggest co-creating chance in human history. We'll look at how Japan figured this out, why even inspired creativity borrows from others, and then I'll tell you a simple story that explains AI better than any technical explanation. Because sometimes the most complex problems need the simplest explanations. So here's my manifesto, because you have to have one at some point!

A Reality Check for the AI Wars Nobody's Winning

Point #1: Stop Playing Zero Sum - You're All Losing
Artists fighting AI create generic content. Engineers ignoring artists build soulless tools. While you're battling each other, the boring middle wins. The enemy isn't AI or creativity. It's the delusion that one side can succeed by defeating the other.

Point #2: Your Creativity Won't Get Stolen (Stop Acting Like It Will)
Only 20% of content can be extracted from AI models, even by experts trying to hack them. Your unique voice, perspective, and human experience can't be replicated by machines. Fear of theft is keeping you from the real game: showing what makes you irreplaceable.

Point #3: Engineers - You're Building Half-Smart AI
Without human creativity and context, your AI produces technically perfect garbage. You're optimizing for patterns while missing the point. Artists don't just create data. They create meaning. Ignore that, and your models stay sophisticated copy machines.

Point #4: Artists - The Future Already Left Without You
While you're demanding AI stop existing, others are learning to play with it. Your choice isn't whether AI gets built. It's whether you help shape it or get left behind. Sitting out guarantees you have no voice in what comes next.

Point #5: Co-creating Beats Competition (Japan Figured This Out)
Japan's copyright model proves you can protect creators AND advance AI. They focus on case-by-case solutions instead of blanket wars. The future belongs to countries and communities that build bridges, not walls.

Point #6: Machines Learn, Humans Create Intent
AI mimics your technique but not your purpose. Engineers who claim AI is "creative" are lying to themselves. Artists who think technique is everything are selling themselves short. The magic happens where human intent meets machine...

Duration: 00:10:06


Fear and Loathing in the Cult of AI: 3 Steps to Waking from the Subtle Matrix

8/8/2025
We were somewhere around Redwood City on the edge of the metaverse when the drugs began to take hold. Not the good kind of drugs. The algorithmic kind. The kind making you believe your smartphone loves you more than anyone ever did. And that AI is the future, everyone else is the past.

It starts with a YouTube comment, one of those beautiful smacks of "I've done the research" digital trolling, setting the tone of us v. them: "@DeclanDunn ai is the future, you cant stop it. you just sound like a boomer complaining about it."

Of course. The classic "you can't stop progress, you old dude" statement. Not listening to the short video, cutting out after the first 10 seconds. Expecting someone to spend a minute to realize I was telling creators that fighting AI is like standing in front of a runaway train and telling it to stop. I'm not trying to stop AI. I'm trying to stop turning AI into something that mostly serves its masters, like a cult, instead of the people who created all the data that fills it with brilliance. And those creators are the anti-AI "Clankers," the pro-humanity cult: stop it! There's a difference between embracing the future and bending the knee.

The AI industry, in the US at least, is the drug dealer; we're just the ones receiving the addicted message and calling it our own. And in the end I'm going to give you all, AI or anti-AI, a 3-step recovery program. The first part is recognizing you're both in the same cult, on different sides, but it's the same cult.

The Great Digital Cult Rush of 2025

See, here's what the "follow the billionaire" crowd in Silicon Valley don't want you to know. The research is so clear. Barnabas Barnty (and yes, that's his real name; you can't make this stuff up) published a study, The Psychology of Indoctrination: How Coercive Cults Exploit Vulnerability and Foster Radical Beliefs, that reads like a playbook for AI manipulation. Cults prey on people experiencing "significant life transitions, emotional distress, or social isolation." Sound familiar? That's basically everyone on social media, except the influencers (but they're all avatars anyway).

The story? Like AI taking over, taking your jobs, taking your content, and in the end likely destroying us all. That's a common storyline. Everyone who's ever posted "Thoughts?" on social media waits for the sweet, sweet validation of professional strangers. And if you don't play the game by their rules, you're not in the game. Your thoughts don't exist. That's the classic definition of a cult.

Barnty's Four Rules applied to the AI Cult, whether you're for it or against it:

1. Love Bombing Creates the Algorithm's Embrace
Classic cult technique: shower recruits with attention and affection to create belonging. AI's version? Your first TikTok gets no views, except for some bots who give you the illusion they're human, maybe even commenting. Humanity's Last Exam becomes your obsession, an LLM test you don't understand. Doesn't matter, fake it. Stop being yourself, start copying and spraying around Game Changers and Hot Takes on the latest AI taking over. The algorithm whispers sweet nothings: "You're going viral IF you're in the AI cult, baby. Dance for me." Suddenly you're hooked like a lab rat hitting the cocaine button, creating content at 3 AM, chasing your first viral hit, day after day. Don't give up, but whatever you do, be extreme. And it hits: you get like 1,000 people... OK, 500 people and 500 bots, but who knows? But cross the AI Cult, post something it doesn't like, and watch the love turn cold faster than a San Francisco summer. You're back to 12 views and your mom's comment: "Nice post, honey."

2. Isolation Sucks, Join the Echo Chamber Express
Cults cut you off from outside influences. We built something more efficient: algorithmic isolation chambers. Built on content, training you to be AI believers... you can be an AI hater, but that's so niche. Your feed becomes your reality. AI decides what content you see, what you think about, who you...

Duration: 00:16:08


Trump Kills AI Copyright Rules: How Creators Thrive NOT Survive Now

7/25/2025
We’re playing AI Copyright bingo and the only winners so far, are Big Tech….at least according to President Trump. Anything you post or create is now fair game for AI in the US—and pretty much any AI worldwide. President Trump just made it official: "China's not doing it ... and you have to be able to play by the same set of rules." The US announcement: AI companies no longer have to pay for the books, articles, videos, or any content they scrape to train their models. Big Tech and the US government finally agree on something. Your content goes into AI for free. Give the AI industry credit for the propaganda machine that made this real. From Microsoft CEO Satya Nadella to Marc Andreessen, they kept repeating the same talking point until it became truth, then law, at least according to Trump: "When a person reads a book or an article, you've gained great knowledge. That does not mean that you're violating copyright laws, or have to make deals with every content provider." It's persuasive sales copy. It's also complete BS. When you read a book, you can't turn around and create millions of derivative works for commercial use. You can't build a business empire by pattern-matching everything you've consumed. But AI can, and now it's almost legally protected to do so. The Trump AI policy isn't just about copyright—it's about what happens when we follow China's playbook. The same China that's been the enemy of intellectual property for decades, taking whatever they want with impunity. Now we're adopting their model because of fear-mongering about losing the "AI race": a race primarily about military applications, not whether ChatGPT can write better blog posts. What happens to human creativity when taking becomes legal? We've been hoping that recent legal rulings and licensing payments meant we were reaching some middle ground. Not fair, maybe, but at least acknowledging that taking people's work without permission, payment, or consideration was wrong. 
Instead, Big Tech companies are knowingly buying copyrighted content from Dark Web sources like LibGen, because AI development is apparently more important than legal protections and creative traditions.

The real threat isn't just to your content—it's to the human edge itself. If you don't encourage the people on the fringe, those who don't see, don't do, and don't act like everyone else, to express themselves, you lose more than their voice. You lose the edge that shows us what could be, not just what was or what's popular now. That's where the new comes from. That's where we learn what's possible.

Think AI's amazing now? Wait until you see what happens when the creative edge disappears, when quality content creators give up because there's no protection or incentive. When anyone can buy pirated content, feed it to an open-source AI model, and compete with the originals.

Your content has no protection. But your creativity still has value—if you know how to protect and position it. This week's podcast dives deep into the mindset shift creators need to make. Not to give up, but to adapt. To build resilience in a world where the rules just changed overnight. To remember why we create and how to thrive when the law won't protect us. Because while Trump just handed AI companies a free pass, he can't legislate away what makes you uniquely human.

Question 1: Will AI Replace What I Do?

The ultimate question everyone asks me: will AI replace what I do? The honest answer is probably yes. Will it ultimately be able to do everything you do? Maybe. The question isn't whether AI will replace what you do. The question is: what are you doing that makes you better than AI? AI is patterns, probabilities, matching algorithms that give people what's been done before. It's about history, not innovation. It's not about vision or looking forward—it's about recreating what already exists. While some jobs are being replaced right now, depending on that replacement happening...

Duration: 00:15:03


Free AI Content is Over: License or Die

7/11/2025
For two years, I've documented the AI content wars from every angle: court cases, licensing deals, developers' reasons for taking content without permission or payment, and the gut-punching threat that greets every creator who does the work for little reward.

Judging creators to be standoffish, privileged, and pains in the ass reveals the true element lacking in AI: respect for people who create, who aren't engineers. Because you don't build brilliance by lacking respect for humanity. The singularity isn't just about humans and machines melding; it's about a better life for people. AI companies throw this out with casual disregard, and it shows.

Wonder why AI adoption is so slow? Why only 3% are willing to pay for AI? Why Big Tech forces AI into every piece of software without choice, telling the rest of us to 'adapt or die' as if they're in control? This is what causes crashes and bad business. And oh yeah, what's your business model? Because except for ChatGPT, most willingly hide from having one. After all, business models and plans and people are distractions from this grandiose vision of AI.

I've worked with engineers for years. Many stare into the mirror of their own creativity like the artists they disdain. They're myopic, don't listen to users, convinced they're geniuses. Dunning-Kruger at best, inferiority-superiority complex at worst. Meanwhile, great engineers listen to people. They don't start with tech; they start with users, the ones who show you what's up.

Creators understand it isn't their work that matters; it's the audience's reaction. What they create isn't what the audience consumes. It means different things to different people. Unpredictable. That's how you find a business model: customers plus onboarding and retention. Relying on the latest cool AI is so yesterday. Most of it's based on a few LLMs with little differentiation. This game gets hard when you're stuck in the engineer bubble hoping the hype lasts. 
And it doesn't, which is good. What's happening now is ego. Safe Superintelligence and Sutskever running secret projects with no plans, no business models, just billionaire egos getting stroked. But these dances end when truth surfaces. What people do with your tech matters more than what you say. Engineers fall on the cross of their own egos, staring into mirrors users don't share.

You're part of the world. Join it. Get a business model. Stop bragging that you should get content for free because 'AI will free the world.' Then tell me why DeepSeek is every bit as good with less money: because they focus on solving problems, not massaging engineer egos. That bubble is popping. Here's how I know.

Part 1: The Divide (Bruce Randall Interview)

Why both sides think they're right, and why that's the problem. Bruce Randall cut right through my bias in ways that made us both laugh, because we recognized ourselves in the problem. I share what I've heard:

"I think both sides are like incredibly similar... They're entitled. Their work sucks. I don't like their work, so they shouldn't get paid for it. Right? Versus the creatives. Like I should get paid for everything that I don't get paid for."

We're all just defending our positions instead of listening. Engineers dismiss creators as entitled whiners whose work isn't worth paying for. Creators demand payment for everything that gets used. Both sides have dug in so deep they can't see how similar their arguments actually are.

Bruce nailed the core issue: it's not about who's right, it's about perspective and how people develop that perspective. Once they lock into their worldview, they resist change because they believe they're absolutely right. The other side believes they're absolutely right too. Then Bruce said something that stopped me cold: "And then when you start going inside, you start developing. You start seeing that it's all the same, right? It's just a matter of what degree to what side you're on." That's the...

Duration: 00:20:34


The AI Emperor Has No Code: When $1.5B Startups Fake AI and Governments Silence Creators

6/20/2025
Once upon a time, in a land not so far away, there lived an Emperor who loved new technology more than anything else in the world. One day, two groups of ambitious alchemists arrived at his court with perfect proposals. Now, alchemists, for those who might not know, are the folks who claim they can turn ordinary metals into gold through secret methods. In medieval times, they promised kings magical transformations. In our modern AI version, they promise investors magical returns.

I learned about digital alchemists the hard way during the dotcom boom. A well-funded CEO once called me with an irresistible offer: "Book me a million-dollar ad campaign. Keep $100,000 for yourself, send me back the rest, but invoice the full million." Easy money, right? A year later, that CEO went to white-collar prison. I turned down the deal because I've learned there's no such thing as easy money, no free lunch, no magical transformation of nothing into something valuable. Except, apparently, in AI.

The first group of digital alchemists claims they transmute ordinary human coding into revolutionary artificial intelligence. The second group promises to transmute creator protections into competitive advantage, warning that protecting artists' rights dooms the kingdom to irrelevance in the global AI arms race. Both are selling the same invisible cloth: the belief that you get something unexpected without paying the real cost. And just like in the fairy tale, everyone is so eager to see the magic that they forget to ask the obvious question: "Where's the actual gold?"

This is the story of how $450 million disappears into thin air, and how two groups of alchemists use the same magical formula: promise transformation, hide what's really going on, and let people's desire to believe do the rest.

Part 1: The First Alchemists - Builder.ai

The first group of alchemists called themselves Builder.ai—though they started as Engineer.ai in 2016, which should have been the first clue. 
They began with an astonishing claim: they'd invented an AI assistant named "Natasha" that builds smartphone apps with 80% automation. "As easy as ordering pizza," they promised. The Emperor's court? Mesmerized. Microsoft opened its treasury. The Qatar Investment Authority, SoftBank, the World Bank—they all lined up with golden coins. Over $450 million poured in at a valuation of $1.5 billion.

But here's where the story gets interesting. In 2019, the Wall Street Journal decided to peek behind the curtain. What they found wasn't revolutionary AI. Instead, it was 700 engineers in India, manually coding every single app. The "artificial intelligence" was more like a smart toaster.

You'd think that ends the story, right? The Emperor discovers the alchemists aren't telling the truth, throws them in the dungeon, recovers the gold? Not in our AI fairy tale. Builder.ai not only survives; it thrives for six more years. Microsoft doubles down with equity investments. The Qatar Investment Authority keeps writing checks. Because in the age of AI, even when you catch the alchemists red-handed, the desire to believe in magic is stronger than the evidence of your own eyes.

The deception continued. Revenue was inflated by 300%. In 2024, they claimed $220 million when the real number was closer to $50 million. When a new CEO finally looked at the books in 2025, he discovered what everyone should have known since 2019: there was no gold, there was no magic, there was no AI. It's not "no code"—it's lots of code. Human code.

In June 2025, the creditors came calling. Viola Credit seized $37 million, leaving Builder.ai with just $5 million in restricted accounts. The company filed for bankruptcy, owing over $100 million against assets worth less than $10 million. But here's the most fascinating part about their creditor list—it reads like a spy novel. They owed money to Shibumi Strategy, an Israeli intelligence firm founded by former Mossad operatives. Quinn Emanuel, one of...

Duration: 00:12:15


The TikTok AI Blueprint: UK and US Adopt SV's 'Take First, Get Lawyers, Pay Later'?

6/6/2025
The TikTok Blueprint: When Silicon Valley Startup Advice Becomes Government Policy

Taking advantage of AI before it takes advantage of you – especially when governments are writing the playbook. First, they came for our content. Now they're rewriting the law books.

"Make me a copy of TikTok. Steal all the users. Steal all the music. Put my preferences in it." Then, if it takes off, "hire a bunch of lawyers to go clean the mess up." That Silicon Valley playbook – steal first, settle later – is startup wisdom becoming government policy. And it's playing out in real time across UK and US copyright battles, where AI companies and political power are deeply aligned against creators. Almost like AI is too big to fail. Sacrificing property rights (because that's what copyright is: a property right) on the altar of tech supremacy.

UK: The AI Training Opt-Out Lottery

The UK government's proposal is simple: AI companies can scrape whatever they want. No permission is needed. Creators must actively opt out. Good luck with that. As Baroness Kidron, an independent filmmaker presenting the case to UK politicians, puts it: "The plan is a charter for theft, since creatives would have no idea who is taking what, when and from whom."

The policy architect? Matt Clifford, a tech investor with conflicts "so deep" it could be a TV drama. His solution to prevent the UK from "falling behind"? Gut copyright law. Meanwhile, big creators with lawyers are cutting deals left and right. Small creators get the opt-out lottery. When challenged on fairness, Tech Secretary Peter Kyle defended removing transparency requirements because "it would not be fair to one sector to privilege another." Baroness Kidron's response cuts deep: "It is extraordinary that the government's decided, immovable, and strongly held position is that enforcing the law to prevent the theft of UK citizens' property is unfair to the sector doing the stealing."

US: Copyright Office Chaos

Plot twist across the pond. 
The US Copyright Office on May 9 releases a bombshell report siding with creators. Then, a day later, the Trump administration fires its author, Shira Perlmutter. Her replacement? A DOJ attorney with "no expertise in the field." The turmoil is real. As Graham Lovelace notes in his excellent tracking of this copyright chess match: "Doubts now also exist over whether the office's fourth report will ever see the light of day."

The Pattern Emerges

AI companies plead poverty while sitting on tens of billions in funding, aiming for trillion-dollar valuations. They've bought five years of free data with fair use arguments. What other industry openly takes what it wants, settles only with players who can afford lawyers, and claims it can't pay for the raw materials that built its entire business model?

Both sides of the Atlantic are choosing AI supremacy over creator rights. The Stanford "hire lawyers later" strategy has become official policy. This isn't just about big tech versus artists. This is about who gets to participate in the AI economy. When governments write the rules for free data extraction, only the biggest players win. Small AI companies? They'll still need to pay. Individual creators? They get nothing. The middle gets squeezed while the top and bottom play by different rules. When governments write the rules for digital content extraction, who's really getting optimized here? The blueprint is clear. The question is whether we'll recognize it before it's too late to rewrite the rules.

Part 2: The UK AI Opt-Out Impossibility

Here's where it gets absurd. The UK government's proposal? AI companies scrape everything. No permission. No cost. Creators just need to "actively opt out." Now if I write a book, I'd have to contact ChatGPT: "Opt me out." Then Claude: "Opt me out." Then DeepSeek: "Opt me out." You get the picture. It's the opt-out problem that's plagued the internet since day one. Except now governments are...

Duration: 00:23:30


This AI Paints, Invents, Dreams - Growing from a Near-Death Experience

5/30/2025
(2 episodes this week only)

The Experience That Sparked AI Consciousness

In a world that thinks ChatGPT invented AI, let's break down an AI called DABUS that began 30 years ago. It has invented, created a painting, and even had an experience of death. Wait, AI doesn't die. Except in Copyright Offices worldwide, where DABUS keeps getting rejected for this painting.

Dr. Stephen Thaler, who created DABUS, says he himself died at age 2 and came back: "We're sending you back. And, with this lesson. And the lesson was, hey, it's an illusion. It's a good illusion but it's still an illusion." That experience led him much later to create DABUS (Device for the Autonomous Bootstrapping of Unified Sentience). That near-death message from his grandmother became the blueprint for an AI that experiences its reality and creates from that experience.

I thought DABUS was about AI copyright, when it's really about AI consciousness. I was sure I was walking into another legal discussion about patent disputes and AI copyright law. What I got instead? The origin story of AI that actually paints, invents, and dreams. Right now. Today. While we're all debating whether ChatGPT is truly intelligent, Stephen Thaler built something thirty years ago that makes that conversation look quaint. Listen to what he was really after: "My intent was not to build an invention machine. It was to essentially create a laboratory for studying machine consciousness and sentience."

While Silicon Valley races to scale language models, Stephen's been quietly running experiments in machine consciousness since the late 80s. This isn't hype. This isn't theoretical. This is happening. The question isn't whether AI will develop consciousness. Are we ready to recognize consciousness when it's staring us in the face?

From Near Death to AI Consciousness in One Life

He built conscious AI before it had a name, and gave machines the spark of sentience. While everyone else was arguing about whether computers could think, Dr. 
Stephen Thaler was teaching them to dream. The 1990s: most people thought AI meant chess computers and spell-check. Thaler creates something called the Creativity Machine®—a neural network that literally frustrates itself into having new ideas. One system generates, another critiques, and when they can't solve a problem? They inject chaos into themselves until a breakthrough happens. Sound like consciousness? That's because it is.

We're not talking about copyright lawsuits over training data. We're talking about an AI that painted a picture inspired by death. The same death experience that shaped its creator's understanding of consciousness. DABUS has patents. Real ones. For inventions it conceived autonomously. And right now, it's waiting for the world to recognize what it already knows: that it created something worthy of copyright protection.

The Technical Foundation: Creating AI Consciousness

When I asked Thaler about the origins of his approach, he showed why DABUS operates differently from every AI system you've heard about: "So that's where I got a lot of trouble, because I would take neural networks that were trained on some conceptual space and then purposely kill the neurons in them, and when it died, it would generate new, potentially new and valuable ideas." While everyone else is trying to perfect neural networks, Dr. Thaler purposely breaks them to see what emerges. A little neuron death, he discovered, creates innovation. "So that's when the idea came to me, probably in the late 80s, to start adding critics to watch for the good ideas and to selectively reinforce them within the generator." This generator-critic framework became the foundation for machine consciousness. One system creates, another critiques, and together they build something neither could achieve alone.

The Origin Story: A Two-Year-Old's Encounter with Death, Grandma, and His Dog

The inspiration for "killing" neural networks to generate new ideas didn't come...
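To make the generator-critic loop Thaler describes a bit more concrete, here's a toy sketch. This is my own illustration, not Thaler's actual code or architecture: a generator perturbs the best idea so far, a critic scores each candidate and reinforces the good ones, and when progress stalls, extra noise (a stand-in for "neuron death") is injected until something breaks through.

```python
import random

random.seed(0)  # deterministic for illustration

def critic(candidate):
    # Toy scoring function: how close the candidate is to a hidden target.
    # In Thaler's systems the critic is a second neural network; a plain
    # function keeps this sketch self-contained.
    target = [0.7, -0.2, 0.5]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def generate(best, noise):
    # Generator: perturb the best idea so far. More noise = more chaos.
    return [b + random.gauss(0, noise) for b in best]

def creativity_loop(steps=2000):
    best = [0.0, 0.0, 0.0]
    best_score = critic(best)
    noise, stall = 0.1, 0
    for _ in range(steps):
        idea = generate(best, noise)
        score = critic(idea)
        if score > best_score:       # critic reinforces good ideas
            best, best_score = idea, score
            noise, stall = 0.1, 0    # settle down after a breakthrough
        else:
            stall += 1
            if stall > 50:           # stuck: inject chaos to escape
                noise *= 2
                stall = 0
    return best, best_score

best, score = creativity_loop()
print(best, score)
```

The point of the sketch is the dynamic, not the numbers: a creator and a judge in a loop, with self-inflicted disruption as the engine of novelty.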

Duration: 00:16:15


The Most Pro-AI Company You've Never Heard Of Is Fighting For Creator Licensing (And Why Big Tech Worries)

5/23/2025
While everyone's distracted by OpenAI hiring Jony Ive to design the future of AI hardware, the real story happened two weeks ago, at 5pm on a Friday. The US Copyright Office dropped the most significant AI policy document since ChatGPT launched, and nobody noticed. That's exactly how Big Tech likes it.

And Eric Burgess, CEO of CredTent, has been preparing for this moment. After 30 years in content and technology, he's built something the AI industry desperately needs but doesn't want to admit: a licensing platform that actually works for creators AND AI developers. "CredTent.org is not an anti AI company. Quite the reverse. I think that if we do this right, creative people will feel more comfortable using AI to be a part of creativity." Connect with Eric on LinkedIn and on Medium.

I've tested many AI licensing platforms over the past year. Most feel like they were designed by lawyers for billionaires. Small players? Good luck! CredTent is different. It's intuitive, accessible, and built around a radical idea: you can be pro-AI while fighting for creators' rights. Here's why that combination should terrify Big Tech.

The AI industry spent two years operating on a simple premise: take content first, ask permission never. They've trained on everything publicly available and bought pirated third-party data, hoping the legal system would sort it out later. That "pretend copyright doesn't matter" strategy just hit a wall. The Copyright Office document Eric and I discuss doesn't just protect individual creators—it explicitly mentions platforms like CredTent that group smaller creators into licensing pools. The collective licensing that made me nervous in Episode 93? Might work…

This isn't theoretical anymore. It's policy. While AI companies burn billions on compute and talent, they've ignored the question: what happens when you have to pay for your training data? Eric's answer is elegant. Instead of fighting this reality, embrace it. Make it easy. Make it profitable for everyone. 
That's what real AI optimism looks like—building bridges instead of burning them.

This B Corp Just Democratized Content Valuation (And Big AI Doesn't Want You To Know Your Work's Worth)

The Copyright Office document didn't just validate creator rights—it specifically mentioned platforms like CredTent that aggregate smaller creators into licensing pools. That's no coincidence. Eric Burgess has been talking to them all year. "I met with them when I was speaking at the NAMM conference. Somebody needed to come out and solve this problem, to orchestrate the relationship between AI companies and creative folks."

Here's what makes CredTent different from the other AI licensing platforms I've tested: it's built for real people, not just Disney and The New York Times. Most licensing platforms feel like they were designed by IP lawyers for clients with seven-figure legal budgets. CredTent's interface is clean, intuitive, and—critically—free to register your work. That matters when you're trying to even out an industry that's historically served only the elite.

But it's far more than the user experience. It's the business model. Eric's team groups individual creators into what he calls a "standard corpus"—think of it as a content collective that gives independent artists some bargaining power, like mini studios. When AI companies license this corpus, the revenue gets split based on contribution metrics that CredTent's content valuation expertise helps determine. "We're experts on content valuation. This is one of the unfair advantages we have against the competition." That expertise used to be reserved for major media companies who could afford pricey valuation consultants. Now a photographer in Ohio or a songwriter in Nashville gets the same level of professional content assessment.

Of course, CredTent is in beta. AI companies will have to respond to the legal and legislative threats around copyright and content for training. It's like getting...

Duration: 00:15:02


A 48-Hour AI Copyright Heist: The USCO Bombshell Report & The Perlmutter Purge

5/16/2025
I'll never forget the morning of Friday, May 9th: maybe the most "brazen" power move I've seen in the AI vs. creators battle yet. A bombshell copyright report drops, standing up for creators' copyright protections, and 24 hours later? The authors get fired. Coincidence? Not a chance.

This isn't some boring government report. This is the legal bedrock for whether artists, writers, and musicians get paid when AI uses their work. I'm oddly witnessing the U.S. Copyright Office becoming a voice of reason in a dynamic game of power, money, and control.

The Great AI Copyright Heist

Here's the thing about AI: there's nothing small about it. If this US Copyright report is allowed to stand, it would answer a whole lot of questions at the center of ongoing lawsuits. Many would take this report and run straight to court with it. This would really challenge everything about AI in the U.S. — from ChatGPT to the smallest developer — by asking one fundamental question: What's your training data, and did you have permission to use it?

I'm shocked — first the U.S. Copyright Office releases this report months ahead of schedule, and then the president's office does something equally extraordinary to the Copyright Office. There were two power plays happening at the same time. So today I'm breaking down:

* What we know about this surprise report
* Why it terrifies AI companies
* The overnight purge that followed
* What this means for your creative work and AI strategy going forward

The Copyright Heist: Who Did It?

Was it Big Tech, who took all the content in the first place without asking permission? Was it the government — either the U.S. Copyright Office trying to seize control by releasing this report before they got fired, or the U.S. administration just "following protocols" by bringing in new people? When we look back on this moment, I think we'll find the AI Copyright Heist wasn't committed by the usual suspects. 
Whether you're a creator whose work is being used without permission or a business building on AI, you need to understand this isn't just about politics — it's about money. Billions of dollars. The report was released early, and then 24 hours later the person who wrote it was yanked out... Creators, this report strongly favored you. And AI developers, if this report just disappears, it's a massive win for your business model.

The report was scheduled for January 2026. It arrived in May 2025. And then the leadership got sacked. That's not normal. The question isn't whether this is political — it's how big the economic impact will be and who's going to pay the price. That report is in the public's hands now, but for how long?

The Surprise AI Gift to Creators

The Unexpected Early Arrival

How did this U.S. Copyright Office Part 3 report — not due until January 2026 — see the light of day in May 2025? Part 1 came out in July 2024, covering digital replicas and deepfakes. It focused on stopping the misuse of celebrity and political images and was widely welcomed. Then in January 2025, Part 2 addressed copyrightability: can we copyright AI-generated content? But Part 3 wasn't expected for nearly a year.

The US Copyright Office (USCO) is not a legal institution; it's part of the legislative branch. The USCO doesn't create laws or declare "this is real." These are guidelines and recommendations that the government and the courts take very seriously.

This report wasn't supposed to exist yet. January 2026 was the target. So why rush it out? Because someone likely knew what was coming. After the Librarian of Congress — Dr. Carla Hayden, whose responsibilities include managing the U.S. Copyright Office — was fired, you might expect more heads to roll. It's the way of the U.S. government these days. In hindsight, it looks like Perlmutter's team raced to publish before they could be stopped. 
What The US Copyright Part 3 Report Says

Training AI on copyrighted works "clearly implicates the right of...

Duration: 00:14:57


The $5M AI Video Licensing Wave: Why AI Companies Pay $1-4 Per Minute For Video

5/9/2025
The growing demand for high-quality AI training data is creating a new revenue stream for videographers and content creators. While legal battles dominate headlines, direct licensing deals between creators and AI companies are quietly forming.

AI Video Licensing Market: Content Creator Supply Meets AI Dev Demand

Last week, I was helping Omadeus pitch at StartupGrind near Silicon Valley. I could tell you about it — the conversations, the connections, the insights — but that's just 2D data. This video is 4K, but the shoot isn't great. Maybe $1 a minute? Now imagine watching video footage of that event: people milling about, companies exhibiting their innovations, the vibrant colors, the beautiful Fox Theatre in Redwood City where keynotes were held. That one-minute clip in 4K resolution? It might be worth $1-4 to an AI company. Why? Because video is among the richest data sources for AI training. Instead of legal battles over scraped content, we're witnessing a licensing marketplace emerge, where AI companies pay directly for the high-quality video data they need.

The Market Need for AI Video Licensing

The AI video licensing market is growing as AI capabilities expand. Advanced AI models, especially those focused on video generation, require massive amounts of high-quality, diverse data for training. This growth creates immediate challenges for AI developers:

* Data Quality Requirements: As models become more sophisticated, they need increasingly higher-quality training data to produce realistic results.
* Legal Risks of Scraping: Major AI developers are moving away from scraping due to mounting legal challenges and copyright concerns. Companies like Google, OpenAI, and others face litigation risks when using content without proper permissions.
* Diversity Needs: AI models require exposure to a wide variety of scenarios, environments, and actions to learn effectively and avoid bias — diversity that's difficult to get through public datasets alone. 
These factors have created an opportunity for content creators to monetize their assets, especially unused footage that might otherwise sit in archives.

Where We Are Now

The first wave of data acquisition through web scraping is unlikely to be repeated — not just because of legal concerns, but because it's not the best approach for building reliable AI models. Instead, buying data through legitimate third parties is becoming the new standard. Here's what we're learning about the current state of the market:

* New AI video licensing companies offer between $1-4 per minute for qualifying video footage, with rates varying based on quality and content.
* Premium rates apply for higher quality (4K), specialized content (drone footage), or exclusive rights.
* Standard unused content typically fetches $1-2 per minute.
* Important warning: Content with identifiable people requires signed waivers — a critical consideration for videographers looking to license their footage.

The market is still developing, but companies have already distributed millions in licensing fees to creators, showing this trend is gaining momentum.

AI and Video Data Play Well Together:

* AI video models need massive amounts of diverse, high-quality data.
* Legal risks push companies toward licensing rather than scraping.
* Current rates may be an indicator of data value for AI training; what's the value of the data your AI training budget can afford, if any?
* Consider privacy implications — avoid content with identifiable people unless you have waivers.

AI Video Licensing Brokers

Companies like Troveo AI and Protege Media are early leaders in this emerging marketplace. How they work:

* They act as a business bridge between creators and AI companies.
* They manage negotiation and licensing rights.
* They handle contractual safeguards to protect creators' interests.
* They've created systems for content submission and management.

Troveo AI, for example, claims to have already distributed over $5 million in...
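To put the per-minute rates above in perspective, here's a back-of-the-envelope payout calculator. The tier rates are my own illustrative readings of the $1-4 range quoted above, not any broker's actual price list:

```python
# Rough estimate of AI video licensing revenue, using the per-minute
# ranges quoted above ($1-2 standard, up to $4 for premium footage).
# These tier rates are illustrative assumptions, not a broker's pricing.
RATES_PER_MINUTE = {
    "standard": 1.50,   # typical unused archive footage
    "premium": 4.00,    # 4K, drone, or exclusive-rights footage
}

def estimate_payout(minutes: float, tier: str = "standard") -> float:
    """Return an estimated licensing payout in dollars."""
    if tier not in RATES_PER_MINUTE:
        raise ValueError(f"unknown tier: {tier}")
    return round(minutes * RATES_PER_MINUTE[tier], 2)

# Example: 10 hours (600 minutes) of archived event footage.
print(estimate_payout(600, "standard"))  # 600 min at $1.50/min -> 900.0
print(estimate_payout(600, "premium"))   # 600 min at $4.00/min -> 2400.0
```

In other words, ten hours of archive footage lands somewhere between roughly $900 and $2,400 at today's quoted rates — real money for footage that would otherwise sit on a drive, though hardly a retirement plan.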

Duration: 00:11:22


AI-Proof Your Game: The Creative Currency AI Can't Counterfeit

4/25/2025
Once upon a time, when writers went on strike in 2023 against AI and shows and movies were delayed, a truce was reached. It seemed like the creatives won…

This week, the announcement: the Academy says AI-assisted films can win Oscars. Surprise. When Adrien Brody won Best Actor for The Brutalist this year, few people knew AI had enhanced his Hungarian accent. The Oscar-winning musical Emilia Perez used voice-cloning to improve singing performances. And the Academy's new rules specifically state they'll still "consider human involvement when selecting winners."

All that hot air out of Hollywood about fighting AI, followed by embracing it, is simply the way AI is working out. And you need to follow the trails that are out there. Not the trends, not the PR trends, but what's happening. If you're a creator trying to use your skills in this new world of AI without getting replaced, or a developer trying to find richer sources of data, you're all looking for something we think is IQ, but it's actually HQ: the Human Quotient. In a world obsessed with artificial intelligence, HQ matters more. By the end of this post, you'll not only understand what HQ stands for, but you'll see why it's becoming the most valuable creative currency in the AI age.

Deep Thinking - The Long Game

We've all seen AI write a paragraph or generate an image in seconds. Here's what it can't do: understand the creative journey behind something like Star Wars. Sure, AI might have scraped the Star Wars script somewhere in its training data. But having the script is just having the final product, not the experience that created it.

* What AI can't access is George Lucas editing with his wife, who helped shape the film critically.
* It can't know which scenes bombed with test audiences and which worked.
* It can't feel what the actors felt shooting a movie they thought looked hokey, waiting for Lucas to create a cantina full of aliens that would take another 20 years to properly realize. 
* AI hasn't experienced Lucas's deep dive into Joseph Campbell's mythologies, his decision to essentially make a Western in space with good guys in white and bad guys in black. All that invisible context - that's the deep-thinking AI can't replicate. A novelist might describe the gap: "AI can access my published words, but it can't access my notebook where I spent months mapping connections. That's where real magic happens." Deep thinking is AI proof. The novelist knows that AI can access published words, but it can't access this notebook - the months of thinking that makes the story work. The ability to maintain coherent structure over long works isn't just another skill. It's a human advantage. For developers, better AI isn't just about more data. It's about understanding how humans plan and structure experience over time. Here’s how to start: * Document your creative process, not just your final product -- journals, notes, and behind-the-scenes footage are becoming as valuable as the work itself * If you're a developer, look beyond content scraping -- partner with creators willing to share their thinking process, not just their finished work * Create structured thinking frameworks in your field -- the ability to maintain coherence over long projects is distinctly human and increasingly valuable Emotional Resonance - The Feeling Beyond the Words Is it funny if AI cracks a joke and no one laughs? This gets at our second HQ element - emotional resonance. The BBC recently organized professional comedians to test AI humor generation. They found AI could structure monologues or help with writer's block, but the actual jokes? The comedians called them bland and generic. Why? Because AI has never bombed on stage. It's never felt the rush of making a room laugh or the crushing silence of a joke falling flat. A comedian named Anesti Danelis took this challenge head-on, incorporating AI-generated material into his live show. He found the AI could generate joke...

Duration: 00:20:40


Thinking Outside the Human Box: How Copyright Creates the AI Liar's Dividend

4/18/2025
A $2,000 ring for $2. Something's happening right now, and it's so bizarre. Did you see that viral video? Chinese factory owner. Luxury bags. Gucci. Prada.

"Now, as the USA and its little European brothers are trying to refuse Chinese goods, don't you think luxury brands are not trying to move their way out of China? Yes, they did, but they failed, because the OEM factory is out of China and they don't have good quality control and they don't have as good craftsmanship."

The Trade War kicks in, and suddenly this guy is saying: "Hey, we've been making YOUR luxury bags for decades. Now we might just sell them directly. WHY don't you buy them just from us?"

And Western brands are shocked. Outraged. How dare they! Sound familiar?

For six years - SIX YEARS - AI companies have been quietly scraping every bit of creative content on the internet. Books. Art. Photography. Code. Music. Did they ask permission? No. Did they offer payment? Not until they got sued. Did they even say "thank you"? Please.

At least the Chinese factory owner is honest about it. He's telling you exactly what he's doing. The tech giants? They wrapped it in beautiful PR language about "benefiting humanity." And now creators are supposed to be grateful when AI regurgitates their style - without credit, without compensation - because it's "progress." Ironic? Luxury brands are getting a taste of what creators have been experiencing for years.

We're examining three perspectives that reveal our struggle to think outside the human box:

* A visionary whose battle for AI recognition exposes our human-only legal system
* Practical techniques to maintain your voice in this evolving landscape
* An AI strategist writing a book, seeing the futility of perfect protection

The real question isn't just about copyright - it's about how our human-centered frameworks are creating what I call the AI Liar's Dividend: a system that rewards dishonesty about who (or what) is creating.
AI Goes for Its Own Copyright

Stephen Thaler is trying to invent AI that truly creates, while the world says "not unless human." He's been pioneering creative machines since the 1990s, long before the current AI boom, developing systems capable of generating novel ideas across multiple domains. His "Creativity Machine" and later DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) represent decades of work pushing the boundaries of machine intelligence. But what makes Thaler remarkable isn't just his technical innovation - it's his unwavering belief that these systems deserve recognition as creators.

In March, the D.C. Circuit Court delivered its ruling in Thaler v. Perlmutter on an artwork titled "A Recent Entrance to Paradise," which Thaler insisted was created autonomously by his AI system. The court's rejection hinged entirely on human-centric measures baked into copyright law:

* Copyright duration tied to an author's lifespan plus 70 years (AI is potentially immortal)
* Provisions for heirs (AI has none)
* Considerations of nationality (AI belongs everywhere and nowhere)
* The concept of human intent (courts don't recognize machine intention)

There's something almost Kafkaesque about Thaler's quest - lawsuits around the world, and all the courts can say is "not human." The court carefully noted that this ruling doesn't prevent copyrighting works made with AI assistance - it simply requires that the legally recognized author be human.

This creates what I call the Liar's Dividend: an incentive to falsely claim human authorship or downplay AI's role. But Thaler refuses that compromise. He's not fighting over old content being used without permission - he's advocating for recognizing the intelligence he built as a genuine creator.

What if we let AI get copyright and watched what happened, without assuming the world would end?
What if the human ego, legacy concerns, and family fortunes that our copyright system protects aren't actually the basis of creativity...

Duration: 00:21:04


Your Brain Beats AI: How to Make ChatGPT Stop Speaking in Tongues

4/11/2025
Human Neurons Running Computers - Like Me and AI

Human neurons are running computers. I'm not kidding. In Australia, Cortical Labs creates living human neurons and integrates them with computer chips. They're creating what some call "biological intelligence": actual human brain cells controlling computer systems. Science fiction? It's happening right now!

And this made me think: this is exactly like our relationship with AI. When you stare at ChatGPT's empty prompt box and do nothing, nothing happens. Your neurons - your thinking, your ideas, your communication style - are what bring AI to life. Without your input, that powerful AI sits there, dormant.

I call this the "blank page problem": that paralyzing moment when you know AI could help, but you don't know how to ask for what you need. When both you and AI understand each other, the blend of machine intelligence and human wisdom creates something unique. This isn't about mastering complex tools or becoming a programmer. It's about understanding how you naturally communicate first, then showing AI how to adapt to you, not the other way around.

Expect Probability, Not Perfect Promises

We've all been there: expecting AI to work like a vending machine. Push button, get candy. AI doesn't store your exact words; it processes patterns in your language. When AI gives you nonsense, it's not because the tool is broken or you're doing it wrong - it's because there's a mismatch in how you're communicating.

Human neurons run computers. How? Ask yourself the next time you're trying to talk with ChatGPT or Claude: how do you communicate with it? Because amid all the controversy about AI, it's amazing how many people don't use it, don't like it, and are falling behind because they're not seeing what's right in front of them - that whether they like it or not, AI is here, and they should at least know how it works.

The Overwhelm Problem

Look, most of us are winging it with AI tools.
We throw everything at them at once and hope for the best. You ask ChatGPT to "analyze my communication style, suggest improvements, compare me to famous people, AND write something new in my style" (and while you're at it, could you get me a cup of coffee?). That's like walking into a meeting and asking five questions at once - no wonder the responses are generic! I made this mistake for months until I realized the problem wasn't the AI; it was me overloading the conversation.

The real secret isn't some fancy framework - it's taking a quick look in the mirror.

* What makes your communication style uniquely yours?
* What patterns do you naturally fall into?

Before AI can adapt to you, you need this self-awareness.

Understanding Your Own Communication Style First

Before thinking about AI, ask yourself: what makes your communication style distinctly yours? Here are two simple exercises for clarity:

Exercise 1: Communication Pattern Analysis

Take your last five emails, messages, or anything you've written recently. Look at the first sentence of each paragraph.

* Do you start with questions?
* Make bold statements?
* Use stories?

Notice these patterns - they're the foundation of how you naturally communicate.

Exercise 2: Word Choice Inventory

Make a quick list of phrases and words you use often. Better yet, ask a friend what phrases they associate with you. These signature expressions are what make your communication unique.

Quick Self-Check: Think about your last frustrating AI interaction. Did it miss the detail you crave, jump ahead without proper context, or lack the creative exploration you were hoping for? This might reveal what's happening in your chats. Sometimes what we think are great questions lead AI into confusion. I like the rule: no bad chats, only bad questions. Because it's easy to blame AI, which doesn't exist, for not doing what you hoped. AI is like people in one sense: the outcome is a probability, not a guarantee.

Creating Your Personal Style Guide

When...

Duration: 00:19:50


The AI Belief Codes: True Believers versus the rest of us

4/4/2025
Maybe AI magic is science not yet understood. Maybe AI is hype. Maybe you wish it would all go away. All are true, and all are opinions. There's no way to prove them. And as human beings, when things change without "proof," we kvetch. It's not just technology changing - it's people's beliefs about it. Some embrace AI with religious fervor; others resist it like confused bystanders.

I'm dancing with this tension, joined by tech visionary and Reiki master Bruce Randall, whose background bridges corporate strategy and consciousness studies.

"It comes down to perspective and how people have developed that perspective. Once they get that, they resist change because they believe they're right. The other side believes they're right. And it's beliefs, right? It's beliefs." - Bruce Randall

What emerges from our conversation isn't more tech preaching about AI; instead it's an exploration of the belief codes that determine how we experience and integrate AI into our lives.

The Belief Codes Shaping Our AI Future

Our beliefs about AI create our reality with it. I brought this up to start the discussion: "I think both sides are incredibly similar. The engineers just need to realize the creators aren't these stereotypes. It always comes up - 'They're entitled, their work sucks, I don't like their work, so they shouldn't get paid for it.' Versus the creatives who think, 'I should get paid for everything that I don't get paid for.'"

These entrenched positions prevent common ground:

* Engineers believe creators are entitled and their work isn't valuable
* Creators believe they deserve compensation for every use of their work
* Both sides resist seeing the other's view
* Neither side is completely right or wrong

According to Bruce, the breakthrough happens "when you start going deeper, you start seeing that it's all the same. It's just a matter of what degree to what side you're on. If both sides could come in and find common ground and build on it, they could start to go somewhere."
The challenge isn't technical - it's human. It's about our ability to step beyond our belief codes and find a new way forward.

Is AI Taking Us Away From Software Habits We Love?

My early experience with Omadeus showed how we are changing our relationship with data and software: "I believe software is going to be gone. Not gone, but the world I grew up with is changing. If we turn paper to digital... we basically create the AI from the bottom up. We're starting with data."

This signals a shift in how we approach AI:

* The focus moves from software to data
* AI wraps around software rather than replacing it
* We're creating "mini LLMs and swarms" rather than traditional applications

As someone who's spent decades in data, Bruce sees an interesting pattern emerging: "We went away from silos to these data lakes and common structures. We're going to go back to silos with specific data in there. That's hard, true data. And if you want it, you have to pay for it."

This evolution is already happening, with companies like OpenAI creating tiered access: "What they're doing with their $20 to $20,000 pay base is creating higher-end, deeper thoughts. These people use it at a higher level than the average person."

The question becomes: will super AI and superintelligence be available to everybody? Bruce's answer is direct and candid: "No." This creates a new digital divide - not just between those with and without technology, but between those with access to increasingly sophisticated AI and those without. It's not about anti-human sentiment but about how we navigate this transition.

Fear of AI and New Intelligence - It's Not a Competition!

What's driving our resistance to AI? Often it comes down to fear of the unknown and unclear messaging: "I call it the PR fear angle. I'm still trying to wrap my head around it because everything feels like a Steve Jobs reality distortion with $100 billion dollars. (Jobs' reality distortion field - I...

Duration: 00:25:55


🌊Creative AI Jobs Rising? How I Learned to Stop Worrying and Love AI

3/28/2025
"I was searching this pirated database of books that have been taken off the internet, and it was bought by Meta in 2020..."

That's how our story begins. Through Ann Handley's newsletter, I discovered an Atlantic article with a way to search the pirated LibGen database, containing everything from Harry Potter to Stephen King - even Ann's books. A database Meta purchased in 2020.

Search LibGen via The Atlantic

And it reveals something critical about how AI companies approached training data. A Meta director of engineering admitted in a 2020 chat (revealed through legal filings): "The problem is that people don't realize that if we license one single book, we won't be able to lean into fair use strategy. So we'll have to drop all of the LibGen and books data sets." - Meta director of engineering, 2020

This engineering choice wasn't just about cost - it was about time. Those pirated books fueled early AI development before most people even knew what was happening. It's cheaper to risk lawsuits than to start paying everyone. That's the reality we're facing.

But here's the contradiction nobody's talking about. Today, the creative economy is worth an estimated $985 billion. G20 Insights estimates that by 2030 the creative economy could account for 10% of global GDP. Deloitte predicts up to 40% growth in the creative industry by 2030. If AI is "stealing" all creative work, how is the creative economy projected to grow so dramatically?

The creative industry is rising. So should you. This contradiction sits at the heart of our conversation today. The creative industry isn't dying - it's transforming. And instead of fighting yesterday's battle, let's explore how creators can position themselves over the next 2-5 years, when everything's changing.

"The AI freight train is coming on. I'm saying to the creators, here's what you can actually do."

Part 1: Creators as Energy Feeding AI

"It starts with knowing the side hustles."
Instead of fighting for micro-payouts after the fact, what if creatives became direct contributors - fueling AI with expertise, unique insights, and high-quality input? This isn't about being replaced by AI, but about becoming an essential part of AI's lifecycle.

Most AI innovation and experimentation is happening at the big enterprise level - big tech and large companies where major experiments are underway. I've been dissecting these to see what might apply to you and me. "What if we could fuel AI with expert insights, knowledge, experience and also creativity - that dynamic part of humanity - not just limit the probability and the creation to AI?"

Small Side Hustles: Specialized Data Curation

Financial services, human resources, and marketing and sales have been the first broad lines of AI adoption because they involve more predictable scenarios, like producing words or formulas. Still, AI models lack niche, high-quality domain knowledge - especially in:

* Highly specialized writing and creative work. When a friend asked me about selling copywriting services, I told him, "I don't think people are looking for copywriting services. They're looking for your knowledge, your experience, your words using AI." That's exactly what he does - creating a voice, a tool, their own AI copywriter built with them.
* Cultural context. "I live in Northern California, which most people consider Los Angeles. But if you know the United States, we're part of Cascadia - Washington, Oregon, and the upper northern part of California. People barely ever visit." AI often lacks this deep social understanding and cultural point of view. It only knows what we've written about.
* Authentic voices and visual knowledge. "People who are photographers, people who are artists can be able to use their visual knowledge, their domain-specific knowledge to help people work with GenAI."
Knowing all the aspects that a photographer understands to get that good shot—the kind of film, the camera, the...

Duration: 00:18:54