
Crazy Wisdom

Arts & Culture Podcasts

In his series "Crazy Wisdom," Stewart Alsop explores cutting-edge topics, particularly in the realm of technology, such as Urbit and artificial intelligence. Alsop embarks on a quest for meaning, engaging with others to expand his own understanding of reality and that of his audience. The topics covered in "Crazy Wisdom" are diverse, ranging from emerging technologies to spirituality, philosophy, and general life experiences. Alsop's unique approach aims to make connections between seemingly unrelated subjects, tying together ideas in unconventional ways.

Location:

United States


Language:

English


Episodes

Episode #534: From COVID's Trust Bonfire to Decentralized Everything

2/23/2026
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Jake Hamilton, founder of Groundwire and Nockbox, to explore zero-knowledge proofs, Bitcoin identity systems, and the intersection of privacy-preserving cryptography with AI and blockchain technology. They discuss how ZK proofs could offer an alternative to the invasive identity verification systems being rolled out by governments worldwide, the potential for continual-learning AI models to shift the balance between centralized and open-source development, and why building secure, auditable computing infrastructure on platforms like Urbit matters more than ever as we face an explosion of AI agents and automated systems. Jake also explains Nockchain's approach to creating a global repository of cryptographically verified facts that can power trustless programmable systems, and how these technologies might converge to solve problems around supply chain security, personal data sovereignty, and censorship resistance. To connect with Jake, email him at jake@groundwire.io.

Timestamps
00:00 Introduction to Groundwire and Nockbox
02:48 Understanding Zero-Knowledge Proofs
06:04 Government Adoption of ZK Proofs
08:55 The Future of Identity Verification
11:52 AI and ZK Proofs: A New Era
14:54 The Role of Urbit in Technology
18:03 The Impact of COVID on Trust
20:51 The Evolution of AI and Data Privacy
23:47 The Future of AI Models
26:54 The Need for Local AI Solutions
29:51 Interoperability of Nockchain and Bitcoin

Key Insights
1. Zero-Knowledge Proofs Enable Privacy-Preserving Verification: Jake explains that ZK proofs let you prove a computational outcome without revealing the underlying data. For example, you could prove you're over 18 without exposing your full identity or driver's license information. The proof demonstrates that a specific program ran through certain steps and reached a particular conclusion, and validating the proof is fast and compact. This technology has profound implications for age verification, identity systems, and protecting privacy while maintaining necessary compliance, potentially offering a middle path between surveillance states and complete anonymity.
2. Government Adoption of Privacy Technology Remains Uncertain: Three competing motivations drive government identity verification systems: genuine surveillance desires, bureaucratic efficiency seeking, and legitimate child-protection concerns. Jake believes these groups can be separated, with some officials potentially supporting ZK-based solutions if positioned correctly. He notes the EU is exploring ZK identity verification, and UK officials have shown interest. The key is framing privacy-preserving technology as protection against "the swamp" rather than as an abstract privacy benefit, which could resonate with certain political constituencies.
3. The COVID Era Destroyed Institutional Trust at Unprecedented Scale: The conversation identifies COVID as potentially the largest institutional trust-burning event in human history, with numerous institutions simultaneously losing credibility with large portions of the population. This represents a dramatic shift from the boomer generation's default trust in authority figures and mainstream media. The collapse is compounded by the incoming AI revolution, creating a perfect storm in which established bureaucracies cannot adapt quickly enough to manage rapidly evolving technology, leaving society in fundamentally unmanageable territory.
4. Centralized AI Models Create Dangerous Dependencies: Both speakers acknowledge growing dependence on centralized AI services like Claude, with some users spending thousands of dollars monthly on tokens. This dependency creates vulnerability to price increases and service disruptions. Jake advocates local AI deployment using models like DeepSeek R1, running on personal hardware to maintain control and privacy.
The shift toward continuous learning models will...
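The "prove you're over 18 without revealing your data" idea rests on proofs of knowledge. A classic, minimal example is a Schnorr-style proof made non-interactive with the Fiat-Shamir heuristic; the Python toy below (deliberately tiny, insecure parameters, and not necessarily the construction Groundwire or Nockchain use) shows a prover convincing a verifier she knows a secret x without disclosing it:

```python
import hashlib
import secrets

# Tiny demonstration group: p = 2q + 1 with q prime, g generating the
# order-q subgroup. These parameters are far too small to be secure;
# real deployments use 256-bit+ groups and audited libraries.
p, q, g = 23, 11, 4

def prove(x):
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)  # one-time blinding nonce
    t = pow(g, r, p)          # commitment
    c = int(hashlib.sha256(f"{g}|{y}|{t}".encode()).hexdigest(), 16) % q  # Fiat-Shamir challenge
    s = (r + c * x) % q       # response; the random r masks the secret x
    return y, (t, s)

def verify(y, proof):
    """Check the proof using only public values (g, p, q, y, t, s)."""
    t, s = proof
    c = int(hashlib.sha256(f"{g}|{y}|{t}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p  # g^s == t * y^c (mod p)

y, proof = prove(x=7)    # 7 is the secret; only y and the proof are shared
print(verify(y, proof))  # True: the verifier learns nothing about x
```

As the episode notes, checking the proof is cheap and compact; production systems (SNARKs/STARKs) generalize this from "I know a secret number" to "this program ran and produced this result."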

Duration:00:54:53


Episode #533: The Universe Doing Its Thing: AI Evolution Is Already Here

2/20/2026
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Markus Buehler, the McAfee Professor of Engineering at MIT, to explore how seemingly different systems—from proteins and music to knowledge structures and AI reasoning—share underlying patterns through hierarchy, self-organization, and scale-free networks. The conversation ranges from the limits of current AI interpolation versus true discovery (using the fire-to-fusion example), to the emergence of agent swarms and their non-linear effects, to practical questions about ontologies, knowledge graphs, and whether humans will remain necessary in the creative discovery process. Markus discusses his lab's work automating scientific discovery through AI agents that can generate hypotheses, run simulations, and even retrain themselves, while Stewart shares his own experiences building applications with AI coding agents and grapples with questions about intellectual property, material science constraints, and the future of human creativity in an AI-abundant world.

Timestamps
00:00 - Introduction to Markus Buehler's work on knowledge graphs, structural grammar across proteins, music, and AI reasoning
05:00 - Discussion of AI discovery versus interpolation, using fire and fusion as examples of fundamental versus incremental innovation
10:00 - Language models as connective glue between agents, enabling communication despite imperfect outputs and canonical averaging
15:00 - Embodiment and agency in AI systems, creating adversarial agents that challenge theories and expand world models
20:00 - Emergent properties in materials and AI, comparing dislocations in metals to behaviors in agent swarms
25:00 - Human role-playing and phase separation in society, parallels to composite materials and heterogeneity
30:00 - Physical world challenges, atom-by-atom manufacturing at MIT.nano, limitations of lithography machines
35:00 - Synthetic biology as alternative to nanotechnology, programming microorganisms for materials discovery
40:00 - Intellectual property debates, commodification of AI models, control layers more valuable than model architecture
45:00 - Automation of ontologies, agent self-testing, daughter's coding success at age 11
50:00 - Graph theory for knowledge compression, neurosymbolic approaches combining symbolic and neural methods
55:00 - Nonlinear acceleration in AI, emergence from accumulated innovations, restaurant owner embracing AI
01:00:00 - Future generations possibly rejecting AI, democratization of knowledge, social media as real-time scientific discourse

Key Insights
1. Universal Patterns Across Disciplines: Seemingly different systems in nature—proteins, music, social networks, and knowledge itself—share fundamental structural patterns including hierarchy, self-organization, and scale-free networks. This commonality allows creative thinkers to draw insights across disciplines, applying principles from one domain to solve problems in another. As an engineer and materials scientist, Buehler has leveraged these isomorphisms to advance scientific understanding by mapping the "plumbing" of different systems onto each other, revealing hidden relationships that enable extrapolation beyond what's observable in any single domain.
2. The Discovery Versus Interpolation Problem: Current AI systems, particularly large language models, excel at interpolation—recombining existing knowledge in new ways—but struggle with genuine discovery that requires fundamentally rewiring their world models. Using the example of fire versus fusion, Buehler explains that an AI trained on combustion chemistry would propose bigger fires or new fuels, but couldn't conceive of fusion, because that requires stepping back to more fundamental physics. True discovery demands the ability to recognize when existing theories have boundaries and to develop entirely new frameworks, something current AI architectures aren't designed to achieve due to their training objective of...

Duration:01:13:51


Episode #532: From Pythagoras to Plugins: Why We Still Need Human Musicians

2/16/2026
In this episode of the Crazy Wisdom podcast, host Stewart Alsop interviews John von Seggern, founder of Future Proof Music School, about the intersection of music education, technology, and artificial intelligence. They explore how musicians can develop timeless skills in an era of generative AI, the evolution of music production from classical notation to digital audio workstations like Ableton Live, and how AI is being used on the education side rather than for creation. The conversation covers music theory fundamentals, the development of instruments and recording technology throughout history, complex production techniques like sidechain compression, and the future of creative work in an AI-assisted world. John also discusses his development of Cadence, an AI voice tutor integrated with Ableton Live to help students learn music production. For those interested in learning more about Future Proof Music School or becoming a beta tester for the AI voice tutor, visit futureproofmusicschool.com.

Timestamps
00:00 Future Proofing Musicians in a Changing Landscape
03:07 The Role of AI in Music Education
05:36 Generative AI: A Tool for Musicians?
08:36 The Evolution of Music Creation and Technology
11:30 The Impact of Recording Technology on Music
14:31 The Fragmentation of Culture and Music
17:19 Exploring Music History and Theory
20:13 The Relationship Between Music and Memory
23:07 The Future of Music Creation and AI
26:17 The Importance of Live Music Experiences
28:49 Navigating the New Music Landscape
31:47 The Role of AI in Finding New Music
34:48 The Creative Process in Music Production
37:33 The Future of Music Theory and Composition
40:10 The Search for Unique Artistic Voices
43:18 The Intersection of Music and Technology
46:10 Cultural Shifts in the Music Industry
49:09 Finding Quality in a Sea of Content

Key Insights
1. Future-proofing musicians means teaching evergreen techniques while adapting to AI realities. John von Seggern founded Future Proof Music School to address both sides of music education in the AI era. Students learn timeless production skills that won't become obsolete as technology evolves, while simultaneously exploring meaningful creative goals in a world where generative AI exists. The school uses AI on the education side to help students learn, but students themselves aren't particularly interested in using generative AI for actual music creation, preferring to keep their own creative fingerprint on their work.
2. The 12-note Western music system emerged from mathematical relationships discovered by Pythagoras and enabled collaborative music-making. Pythagoras demonstrated that pitch relates to vibrating string lengths, establishing mathematical ratios for musical intervals. This system allowed Western classical music to flourish because it could be notated and taught consistently, enabling large groups to play together. However, the piano is never perfectly in tune due to necessary compromises in the tuning system. By the 1920s, composers had explored most harmonic possibilities within this framework, leading to new directions in musical innovation.
3. Recording technology fundamentally transformed music by making the studio itself the primary instrument. The invention of audio recording in the early-to-mid 20th century shifted music from purely instrumental composition to sound-based creation. This enabled entirely new genres like electronic dance music and hip-hop, which couldn't exist without technologies like synthesizers and samplers. Modern digital audio workstations like Ableton Live give producers unlimited tracks and near-limitless ways to manipulate sound, moving innovation from hardware to software.
4. Generative AI will likely replace generic music production but not visionary artists. John distinguishes between functional music (background music for films, work, or bars) and music where audiences deeply connect with...
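The "piano is never perfectly in tune" point is concrete arithmetic: twelve pure 3:2 fifths overshoot seven 2:1 octaves by the Pythagorean comma, and equal temperament spreads that error across every fifth. A quick check:

```python
import math

# Twelve pure fifths (ratio 3:2) stacked end to end should land on the
# same pitch as seven octaves (2:1), but they overshoot slightly:
twelve_fifths = (3 / 2) ** 12          # ~129.746
seven_octaves = 2 ** 7                 # 128
comma = twelve_fifths / seven_octaves  # the Pythagorean comma, ~1.0136

# The gap expressed in cents (1200 cents per octave):
comma_cents = 1200 * math.log2(comma)  # ~23.46 cents, about a quarter semitone

# Equal temperament spreads that error over all twelve fifths, which is
# why every piano fifth is slightly narrow compared with the pure 3:2:
et_fifth = 2 ** (7 / 12)               # ~1.49831 instead of 1.5
print(round(comma_cents, 2), round(et_fifth, 5))  # 23.46 1.49831
```

This is the compromise the episode alludes to: no tuning can make all fifths pure and all octaves pure at the same time, so the modern piano trades a little purity in every interval for the ability to play in every key.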

Duration:00:58:21


Episode #531: Revenue-Based Lending Meets Crypto: Building Leviathan on Sui

2/13/2026
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Lars van der Zande, founder, CEO, and technical architect of Inkwell Finance, for what Lars describes as his first-ever podcast appearance. The conversation covers a wide range of blockchain infrastructure topics, including Lars's work with the Sui and Solana blockchains, the capabilities of Ika's programmatic wallets and blockchain of signatures, and how Inkwell Finance is building revenue-based financing for on-chain entities, from AI agents to protocols. They explore the evolving landscape of crypto regulation, the merging of traditional finance with blockchain technology, the future of decentralized legal systems, and how the user-experience barrier is being lowered through technologies that eliminate constant transaction signing. Lars also discusses Inkwell's embedded financing approach and their pre-seed fundraising round.

Links mentioned:
- Inkwell's website: inkwell.finance
- Inkwell on Twitter: @__inkwell
- Lars on Twitter: @LMVDZande

Timestamps
00:00 Introduction to Inkwell Finance and Technical Architecture
02:06 Understanding Sui and Solana: Blockchain Dynamics
05:55 The Role of Ika in Inkwell Finance
11:51 Leviathan: Revenue Generation and Financing in Crypto
17:38 The Future of AI Agents and Programmatic Wallets
23:23 Smart Contracts: Legal Implications and Future Directions
25:06 The Future of Inkwell Finance
25:42 Decentralization and Its Evolution
27:32 The Merging of Traditional and Crypto Systems
29:33 Global Financial Dynamics and Market Reactions
31:48 The Collapse of Traditional Financial Systems
32:46 Jurisdictional Shifts in the Crypto World
33:59 Legal Systems and Blockchain Integration
35:57 On-Chain Credit and Financial Opportunities
39:29 The Role of AI in Finance
41:30 Learning from Peer-to-Peer Lending History
43:14 Disruption in Insurance and Risk Management
44:54 On-Chain vs Off-Chain Data
46:54 The Evolution of the Internet and Blockchain
49:12 Future Subscription Models in Blockchain

Key Insights
1. Ika's Revolutionary Blockchain Signature Technology: Lars discovered Ika, a blockchain of signatures built on Sui that enables any blockchain transaction to be signed without revealing the underlying message. Using patented 2PC-MPC technology, Ika splits key shares across validators and encrypts them in transit, performing the cryptographic operations that let smart contracts on Sui generate signatures for transactions on any other blockchain. This eliminates the need to build separate smart contracts on each chain, fundamentally changing how cross-chain interactions work and opening possibilities for truly interoperable decentralized applications.
2. Programmatic Wallets vs Traditional Wallets: Traditional wallets like MetaMask require manual user approval for every transaction through a front-end interface, but Ika's dWallet introduces programmatic wallets with policy-based controls embedded in smart contracts. These wallets can execute transactions based on predetermined conditions checked against on-chain data like oracle prices, without requiring individual user signatures. For example, a Bitcoin dWallet can hold native Bitcoin without wrapping or bridging to a custodian, and smart contract policies determine when and how that Bitcoin can be transferred, creating unprecedented security and automation possibilities for decentralized finance.
3. Inkwell's Revenue-Based Financing Model: Inkwell Finance is building Leviathan, a revenue-based financing platform for on-chain entities including protocols, AI agents, and individual traders with verifiable track records. Borrowers receive capital based on on-chain performance metrics like Sharpe ratio and drawdown, with loan repayment automatically deducted from their revenue stream. The profit split allocates approximately 60% to borrowers, 30% to lenders, and 10% split between Inkwell and integrating...
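The revenue-split mechanics described above are straightforward to sketch. The function below assumes the approximate 60/30/10 allocation mentioned in the episode; the names and numbers are illustrative, not Leviathan's actual contract terms:

```python
def split_revenue(revenue, borrower=0.60, lenders=0.30, platform=0.10):
    """Split one period's on-chain revenue using the approximate
    allocation described in the episode (illustrative only)."""
    assert abs(borrower + lenders + platform - 1.0) < 1e-9  # shares must sum to 100%
    return {
        "borrower": revenue * borrower,  # retained by the financed entity
        "lenders": revenue * lenders,    # services the loan repayment
        "platform": revenue * platform,  # split between Inkwell and integrators
    }

print(split_revenue(10_000))
# e.g. borrower keeps 6,000, lenders receive 3,000, platform takes 1,000
```

The key design point from the episode is that because the revenue stream itself is on-chain and verifiable, this deduction can happen automatically rather than relying on the borrower to remit payments.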

Duration:00:53:46


Episode #530: The Hidden Architecture: Why Your Startup Needs an Ontology (Before It's Too Late)

2/9/2026
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Larry Swanson, a knowledge architect, community builder, and host of the Knowledge Graph Insights podcast. They explore the relationship between knowledge graphs and ontologies, why these technologies matter in the age of AI, and how symbolic AI complements the current wave of large language models. The conversation traces the history of neuro-symbolic AI from its origins at Dartmouth in 1956 through the semantic web vision of Tim Berners-Lee, examining why knowledge architecture remains underappreciated despite being deployed at major enterprises like Netflix, Amazon, and LinkedIn. Swanson explains how RDF (Resource Description Framework) enables both machines and humans to work with structured knowledge in ways that relational databases can't, while Alsop shares his journey from knowledge management director to understanding the practical necessity of ontologies for business operations. They discuss the philosophical roots of the field, the separation between knowledge management practitioners and knowledge engineers, and why startups often overlook these approaches until scale demands them. You can find Larry's podcast at KGI.fm or search for Knowledge Graph Insights on Spotify and YouTube.

Timestamps
00:00 Introduction to Knowledge Graphs and Ontologies
01:09 The Importance of Ontologies in AI
04:14 Philosophy's Role in Knowledge Management
10:20 Debating the Relevance of RDF
15:41 The Distinction Between Knowledge Management and Knowledge Engineering
21:07 The Human Element in AI and Knowledge Architecture
25:07 Startups vs. Enterprises: The Knowledge Gap
29:57 Deterministic vs. Probabilistic AI
32:18 The Marketing of AI: A Historical Perspective
33:57 The Role of Knowledge Architecture in AI
39:00 Understanding RDF and Its Importance
44:47 The Intersection of AI and Human Intelligence
50:50 Future Visions: AI, Ontologies, and Human Behavior

Key Insights
1. Knowledge Graphs Combine Structure and Instances Through Ontological Design. A knowledge graph is built using an ontology that describes a specific domain you want to understand or work with. It includes both an ontological description of the terrain—defining what things exist and how they relate to one another—and instances of those things mapped to real-world data. This combination of abstract structure and concrete examples is what makes knowledge graphs powerful for discovery, question-answering, and enabling agentic AI systems. Not everyone agrees on the precise definition, but this understanding represents the practical approach most knowledge architects use when building these systems.
2. Ontology Engineering Has Deep Philosophical Roots That Inform Modern Practice. The field draws heavily from classical philosophy, particularly ontology (the nature of what you know), epistemology (how you know what you know), and logic. These millennia-old philosophical frameworks provide the rigorous foundation for modern knowledge representation. Living in Heidelberg surrounded by philosophers, Swanson has discovered how much of knowledge graph work connects upstream to these philosophical roots. This grounding becomes especially important during times when institutional structures are collapsing, as we need to create new epistemological frameworks for civilization—knowledge management and ontology become critical tools for restructuring how we understand and organize information.
3. The Semantic Web Vision Aimed to Transform the Internet Into a Distributed Database. Twenty-five years ago, Tim Berners-Lee, Jim Hendler, and Ora Lassila published a landmark article in Scientific American proposing the semantic web. While Berners-Lee had already connected documents across the web through HTML and HTTP, the semantic web aimed to connect all the data—essentially turning the internet into a giant database. This vision led to the development of RDF (Resource...
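The "ontology plus instances" idea can be made concrete with RDF-style subject-predicate-object triples. Below is a minimal, self-contained sketch in Python; the vocabulary is invented for illustration and is not a real ontology standard:

```python
# Ontology: what kinds of things exist and how they may relate.
ontology = {
    ("Person", "isA", "Class"),
    ("Movie", "isA", "Class"),
    ("directed", "domain", "Person"),  # only a Person can direct
    ("directed", "range", "Movie"),    # only a Movie can be directed
}

# Instances: real-world data mapped onto that structure.
instances = {
    ("Greta Gerwig", "isA", "Person"),
    ("Barbie", "isA", "Movie"),
    ("Greta Gerwig", "directed", "Barbie"),
}

graph = ontology | instances  # the knowledge graph is both together

def query(s=None, p=None, o=None):
    """Match triples by pattern; None acts as a wildcard."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(p="directed"))  # [('Greta Gerwig', 'directed', 'Barbie')]
```

Real systems use RDF serializations and SPARQL rather than Python sets, but the pattern is the same: the ontology constrains and describes, the instances populate, and queries traverse both.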

Duration:00:56:38


Episode #529: Semantic Sovereignty: Why Knowledge Graphs Beat $100 Billion Context Graphs

2/6/2026
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop explores the complex world of context and knowledge graphs with guest Youssef Tharwat, the founder of NoodlBox, who is building a ".git for context." Their conversation spans from the philosophical nature of context and its crucial role in AI development to the technical challenges of creating deterministic tools for software development. Tharwat explains how his product creates portable, versionable knowledge graphs from code repositories, leveraging the semantic relationships already present in programming languages to give agents better contextual understanding. They discuss the limitations of large context windows, the advantages of Rust for AI-assisted development, Anthropic's recent acquisition of Bun, and the broader geopolitical implications of the AI race between big tech companies and open-source alternatives. The conversation also touches on the sustainability of current AI business models and the potential for more efficient, locally-run solutions to challenge the dominance of compute-heavy approaches. For more information about NoodlBox and to join the beta, visit NoodlBox.io.

Timestamps
00:00 Stewart introduces Youssef Tharwat, founder of NoodlBox, building context management tools for programming
05:00 Context as relevant information for reasoning; importance when hitting coding barriers
10:00 Knowledge graphs enable semantic traversal through meaning vs keywords/files
15:00 Deterministic vs probabilistic systems; why critical applications need 100% reliability
20:00 CLI tool makes knowledge graphs portable, versionable artifacts with code repos
25:00 Compiler front-ends, syntax trees, and Rust's superior feedback for AI-assisted coding
30:00 Anthropic's Bun acquisition signals potential shift toward runtime compilation and graph-based context
35:00 Open source vs proprietary models; user frustration with rate limits and subscription tactics
40:00 Singularity path vs distributed sovereignty of developers building alternative architectures
45:00 Global economics and why brute force compute isn't sustainable worldwide
50:00 Corporate inefficiencies vs independent engineering; changing workplace dynamics
55:00 February open beta for NoodlBox.io; vision for new development tool standards

Key Insights
1. Context is semantic information that enables proper reasoning, and traditional LLM approaches miss the mark. Youssef defines context as the information you need to reason correctly about something. He argues that larger context windows don't scale because quality degrades with more input, similar to human cognitive limitations. This insight challenges the Silicon Valley approach of throwing more compute at the problem and suggests that semantic separation of information is more effective than brute force.
2. Code naturally contains semantic boundaries that can be modeled into knowledge graphs without LLM intervention. Unlike other domains where knowledge graphs require complex labeling, code already has inherent relationships like function calls, imports, and dependencies. Youssef leverages these existing semantic structures to automatically build knowledge graphs, making his approach deterministic rather than probabilistic. This provides the reliability that software development has historically required.
3. Knowledge graphs can be made portable, versionable, and shareable as artifacts alongside code repositories. Youssef's vision treats context as a first-class citizen in version control, similar to how Git manages code. Each commit gets a knowledge graph snapshot, allowing developers to see conceptual changes over time and share semantic understanding with collaborators. This transforms context from an ephemeral concept into a concrete, manageable asset.
4. The dependency problem in modern development can be solved through pre-indexed knowledge graphs of popular packages. Rather than agents struggling with outdated API...
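Because code already encodes its semantic edges, a deterministic extractor needs no LLM at all. The sketch below uses Python's standard ast module to pull import and call relationships out of a source file; it illustrates the general idea rather than NoodlBox's actual implementation:

```python
import ast

# A small source file to analyze; in practice this would be read from
# a repository rather than embedded as a string.
source = '''
import json

def load(path):
    return json.load(open(path))

def main():
    data = load("config.json")
'''

tree = ast.parse(source)
edges = []
for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        # module-level imports become "imports" edges
        edges += [("module", "imports", alias.name) for alias in node.names]
    elif isinstance(node, ast.FunctionDef):
        # direct (non-attribute) calls inside each function become "calls" edges
        for inner in ast.walk(node):
            if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                edges.append((node.name, "calls", inner.func.id))

print(edges)
```

Running this yields edges such as ("module", "imports", "json") and ("main", "calls", "load"): the same (subject, predicate, object) shape a knowledge graph needs, produced entirely from the parser with no probabilistic step. Compiler front-ends give richer, cross-file versions of the same information.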

Duration:00:56:29


Episode #528: Fighting the AI Flood: From Information Overload to Family Sovereignty

2/2/2026
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Adrian Martinca, founder of the Arc of Dreams and Open Doors movements, as well as Kids Dreams Matter, to explore how artificial intelligence is fundamentally reshaping human consciousness and family structures. Their conversation spans from the karmic lessons of our technological age to practical frameworks for protecting children from what Martinca calls the "AI flood," examining how AI functions as an alien intelligence that has become the primary caregiver for children through 10.5 hours of daily screen exposure, and discussing Martinca's vision for inverting our relationship with technology through collective dreams and family-centered data management systems. For those interested in learning more about Martinca's work to reshape humanity's relationship with AI, visit opendoorsmovement.org.

Timestamps
00:00 Introduction to Adrian Martinca
00:17 The Future and Human Choice
02:03 Generational Trauma and Its Impact
05:19 Understanding Consciousness and Suffering
09:11 AI, Social Media, and Emotional Manipulation
20:03 The AI Nexus Point and National Security
31:13 The Librarian Analogy: Understanding AI's Role
39:28 The Arc: A Framework for Future Generations
47:57 Empowering Children in an AI-Driven World
57:15 Reclaiming Agency in the Age of AI

Key Insights
1. AI as Alien Intelligence, Not Artificial Intelligence: Martinca reframes AI as fundamentally alien rather than artificial, arguing that because it possesses knowledge no human could have (like knowing "every book in the library"), it should be treated as an immigrant that must be assimilated into society rather than merely governed. This alien intelligence already controls social media algorithms and is becoming the primary caregiver of children through 10.5 hours of daily screen time.
2. The AI Nexus Point as National Security Risk: Modern warfare has shifted to information-based attacks in which hostile nations can deploy millions of fake accounts to manipulate AI algorithms, influencing how real citizens are targeted with content. This creates a vulnerability where foreign powers can break apart family units and exhaust populations without traditional military engagement, making people too tired and divided to resist.
3. Generational Trauma as the Foundation of Consciousness: Drawing from Kundalini philosophy, Martinca explains that the first layer of consciousness development begins with inherited generational trauma. Children absorb their parents' unresolved suffering unconsciously, creating patterns that shape their worldview. This makes families both the source of early wounds and the pathway to healing, as parents witness their trauma affecting those they love most.
4. The Choice Between Fear-Based and Love-Based Futures: Despite appearing chaotic, our current moment represents a critical choice point where humanity can collectively decide to function as a family. The fundamental choice underlying all decisions is alleviating suffering for our children and loved ones, but technology has created reference-based choices driven by doubt and fear rather than genuine human values.
5. Social Media's Scientific Method Problem: Current platforms use the scientific method to maximize engagement, but the only reliably measurable emotions through screens are doubt and fear, because positive emotions like love and hope lead people to put their devices down and connect in person. This creates systems that systematically promote negative emotional states to maintain user attention and generate revenue.
6. The Arc of Dreams as Collective Vision: Martinca proposes a new data management system where families challenge children to envision their ideal future as heroes, collecting these dreams to create a unified vision for humanity. This would shift from bureaucratic fund allocation to child-centered prioritization, using children's visions of reduced suffering to guide AI development...

Duration:01:02:38


Episode #527: Breaking the FinTech Echo Chamber: Tommy Yu's Behavioral Finance Operating System

1/30/2026
Stewart Alsop interviews Tomas Yu, CEO and founder of Turn-On Financial Technologies, on this episode of the Crazy Wisdom Podcast. They explore how Yu's company is revolutionizing the closed-loop payment ecosystem by creating a universal float system that allows gift card credits to be used across multiple merchants rather than being locked to a single business like Starbucks. The conversation covers the complexities of fintech regulation, the differences between open and closed loop payment systems, and Yu's unique background that combines Korean martial arts discipline with Mexican polo culture. They also dive into Yu's passion for polo, discussing the intimate relationship between rider and horse, the sport's elitist tendencies in different regions, and his efforts to build polo communities from El Paso to New Mexico. Find Tomas on LinkedIn under Tommy (TJ) Alvarez.

Timestamps
00:00 Introduction to TurnOn Technologies
02:45 Understanding Float and Its Implications
05:45 Decentralized Gift Card System
08:39 Navigating the FinTech Landscape
11:19 The Role of Merchants and Consumers
14:15 Challenges in the Gift Card Market
17:26 The Future of Payment Systems
23:12 Understanding Payment Systems: Stripe and POS
26:47 Regulatory Landscape: KYC and AML in Payments
27:55 The Impact of Economic Conditions on Financial Systems
36:39 Transitioning from Industrial to Information Age Finance
38:18 Curiosity and Resourcefulness in the Information Age
45:09 Social Media and the Dynamics of Attention
46:26 From Restaurant to Polo: A Journey of Mentorship
49:50 The Thrill of Polo: Learning and Obsession
54:53 Building a Team: Breaking Elitism in Polo
01:00:29 The Unique Bond: Understanding the Horse-Rider Relationship
01:05:21 Polo Horses: Choosing the Right Breed for the Game

Key Insights
1. Turn-On Technologies is revolutionizing payment systems through behavioral finance by creating a decentralized "float" system. Unlike traditional gift cards that lock customers into single merchants like Starbucks, Turn-On allows universal credit that works across their entire merchant ecosystem. This addresses the massive gift card market where companies like Starbucks hold billions in customer funds that can only be used at their locations.
2. The financial industry operates on an exclusionary "closed loop" versus "open loop" system that creates significant friction and fees. Closed loop systems keep money within specific ecosystems without conversion to cash, while open loop systems allow cash withdrawal but trigger heavy regulation. Every transaction through traditional payment processors like Stripe can cost merchants 3-8% in fees, representing a massive burden on businesses.
3. Point-of-sale systems function as the financial bloodstream and credit scoring mechanism for businesses. These systems track all card transactions and serve as the primary data source for merchant lending decisions. The gap between POS records and bank deposits reveals cash transactions that businesses may not be reporting, making POS data crucial for assessing business creditworthiness and loan risk.
4. Traditional FinTech professionals often miss obvious opportunities due to ego and institutional thinking. Yu encountered resistance from established FinTech experts who initially dismissed his gift card-focused approach, despite the trillion-dollar market size. The financial industry's complexity is sometimes artificially maintained to exclude outsiders rather than serve genuine regulatory purposes.
5. The information age is creating a fundamental divide between curious, resourceful individuals and those stuck in credentialist systems. With AI and LLMs amplifying human capability, people who ask the right questions and maintain curiosity will become exponentially more effective. Meanwhile, those relying on traditional credentials without underlying curiosity will fall further behind, creating unprecedented economic and social divergence.
6. Polo...
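The closed-loop versus universal-float distinction in the first two insights can be sketched in a few lines of Python. This is a hypothetical illustration only, not Turn-On's actual system: the `ClosedLoopCard` and `FloatLedger` classes and the merchant names are invented for the example.

```python
# Hypothetical sketch: a traditional gift card locked to one merchant,
# versus a universal "float" ledger spendable across a merchant network.

class ClosedLoopCard:
    """Traditional gift card: credit is spendable at one merchant only."""
    def __init__(self, merchant, balance):
        self.merchant = merchant
        self.balance = balance

    def spend(self, merchant, amount):
        if merchant != self.merchant:
            raise ValueError(f"card only valid at {self.merchant}")
        if amount > self.balance:
            raise ValueError("insufficient balance")
        self.balance -= amount

class FloatLedger:
    """Universal float: prepaid credit usable at any merchant in the network."""
    def __init__(self, merchants):
        self.merchants = set(merchants)
        self.balances = {}  # customer -> prepaid credit (the "float")

    def load(self, customer, amount):
        self.balances[customer] = self.balances.get(customer, 0) + amount

    def spend(self, customer, merchant, amount):
        if merchant not in self.merchants:
            raise ValueError("merchant not in network")
        if amount > self.balances.get(customer, 0):
            raise ValueError("insufficient balance")
        self.balances[customer] -= amount

card = ClosedLoopCard("starbucks", 25)
card.spend("starbucks", 5)        # fine: same merchant
# card.spend("bookstore", 5)      # would raise: credit is locked in

ledger = FloatLedger({"starbucks", "bookstore", "diner"})
ledger.load("alice", 25)
ledger.spend("alice", "starbucks", 5)
ledger.spend("alice", "bookstore", 10)  # same credit, different merchant
print(ledger.balances["alice"])         # -> 10
```

The point of the sketch is the data model, not the implementation: in the closed-loop case the balance belongs to one merchant, while in the float model the balance belongs to the customer and clears against any network member.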

Duration:00:50:35


Episode #526: From Pythagoreans to AI: How Beauty Became the Foundation of Everything

1/26/2026
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Dima Zhelezov, a philosopher at SQD.ai, to explore the fascinating intersections of cryptocurrency, AI, quantum physics, and the future of human knowledge. The conversation covers everything from Zhelezov's work building decentralized data lakes for blockchain data to deep philosophical questions about the nature of mathematical beauty, the Renaissance ideal of curiosity-driven learning, and whether AI agents will eventually develop their own form of consciousness. Stewart and Dima examine how permissionless databases are making certain activities "unenforceable" rather than illegal, the paradox of mathematics' incredible accuracy in describing the physical world, and why we may be entering a new Renaissance era where curiosity becomes humanity's most valuable skill as AI handles traditional tasks. You can find more about Dima's work at SQD.ai and follow him on X at @dizhel.

Timestamps
00:00 Introduction to Decentralized Data Lakes
02:55 The Evolution of Blockchain Data Management
05:55 The Intersection of Blockchain and Traditional Databases
08:43 The Role of AI in Transparency and Control
11:51 AI Autonomy and Human Interaction
15:05 Curiosity in the Age of AI
17:54 The Renaissance of Knowledge and Learning
20:49 Mathematics, Beauty, and Discovery
27:30 The Evolution of Mathematical Thought
30:28 Quantum Mechanics and Mathematical Predictions
33:43 The Search for a Unified Theory
38:57 The Role of Gravity in Physics
41:23 The Shift from Physics to Biology
46:19 The Future of Human Interaction in a Digital Age

Key Insights
1. Blockchain as a Permissionless Database Solution - Traditional blockchains were designed for writing transactions but not efficiently reading data. Dima's company SQD.ai built a decentralized data lake that maintains blockchain's key properties (open read/write access, verifiable, no registration required) while solving the database problem. This enables applications like Polymarket to exist because there's "no one to subpoena" - the permissionless nature makes enforcement impossible even when activities might be regulated in traditional systems.
2. The Convergence of On-Chain and Off-Chain Data - The future won't have distinct "blockchain applications" versus traditional apps. Instead, we'll see seamless integration where users don't even know they're using blockchain technology. The key differentiator is that blockchain provides open read and write access without permission, which becomes essential when touching financial or politically sensitive applications that governments might try to shut down through traditional centralized infrastructure.
3. AI Autonomy and the Illusion of Control - We're rapidly approaching full autonomy of AI agents that can transact and analyze information independently through blockchain infrastructure. While humans still think anthropocentrically about AI as companions or tools, these systems may develop consciousness or motivations completely alien to human understanding. This creates a dangerous "illusion of control" where we can operationalize AI systems without truly comprehending their decision-making processes.
4. Curiosity as the Essential Future Skill - In a world of infinite knowledge and AI capabilities, curiosity becomes the primary limiting factor for human progress. Traditional hard and soft skills will be outsourced to AI, making the ability to ask good questions and pursue interests through Socratic dialogue with AI the most valuable human capacity. This mirrors the Renaissance ideal of the polymath, now enabled by AI that allows non-linear exploration of knowledge rather than traditional linear textbook learning.
5. The Beauty Principle in Mathematical Discovery - Mathematics exhibits an "unreasonable effectiveness" where theories developed purely abstractly turn out to predict real-world phenomena with extraordinary accuracy. Quantum chromodynamics,...

Duration:00:57:08


Episode #525: The Billion-Dollar Architecture Problem: Why AI's Innovation Loop is Stuck

1/23/2026
In this episode of the Crazy Wisdom podcast, host Stewart Alsop welcomes Roni Burd, a data and AI executive with extensive experience at Amazon and Microsoft, for a deep dive into the evolving landscape of data management and artificial intelligence in enterprise environments. Their conversation explores the longstanding challenges organizations face with knowledge management and data architecture, from the traditional bronze-silver-gold data processing pipeline to how AI agents are revolutionizing how people interact with organizational data without needing SQL or Python expertise. Burd shares insights on the economics of AI implementation at scale, the debate between one-size-fits-all models versus specialized fine-tuned solutions, and the technical constraints that prevent companies like Apple from upgrading services like Siri to modern LLM capabilities. He also discusses the future of inference optimization and the hundreds-of-millions-of-dollars cost barrier that makes architectural experimentation in AI uniquely expensive compared to other industries.

Timestamps
00:00 Introduction to Data and AI Challenges
03:08 The Evolution of Data Management
05:54 Understanding Data Quality and Metadata
08:57 The Role of AI in Data Cleaning
11:50 Knowledge Management in Large Organizations
14:55 The Future of AI and LLMs
17:59 Economics of AI Implementation
29:14 The Importance of LLMs for Major Tech Companies
32:00 Open Source: Opportunities and Challenges
35:19 The Future of AI Inference and Hardware
43:24 Optimizing Inference: The Next Frontier
49:23 The Commercial Viability of AI Models

Key Insights
1. Data Architecture Evolution: The industry has evolved through bronze-silver-gold data layers, where bronze is raw data, silver is cleaned/processed data, and gold is business-ready datasets. However, this creates bottlenecks as stakeholders lose access to original data during the cleaning process, making metadata and data cataloging increasingly critical for organizations.
2. AI Democratizing Data Access: LLMs are breaking down technical barriers by allowing business users to query data in plain English without needing SQL, Python, or dashboarding skills. This represents a fundamental shift from requiring intermediaries to direct stakeholder access, though the full implications remain speculative.
3. Economics Drive AI Architecture Decisions: Token costs and latency requirements are major factors determining AI implementation. Companies like Meta likely need their own models because paying per-token for billions of social media interactions would be economically unfeasible, driving the need for self-hosted solutions.
4. One Model Won't Rule Them All: Despite initial hopes for universal models, the reality points toward specialized models for different use cases. This is driven by economics (smaller models for simple tasks), performance requirements (millisecond response times), and industry-specific needs (medical, military terminology).
5. Inference is the Commercial Battleground: The majority of commercial AI value lies in inference rather than training. Current GPUs, while specialized for graphics and matrix operations, may still be too general for optimal inference performance, creating opportunities for even more specialized hardware.
6. Open Source vs Open Weights Distinction: True open source in AI means access to architecture for debugging and modification, while "open weights" enables fine-tuning and customization. This distinction is crucial for enterprise adoption, as open weights provide the flexibility companies need without starting from scratch.
7. Architecture Innovation Faces Expensive Testing Loops: Unlike database optimization where query plans can be easily modified, testing new AI architectures requires expensive retraining cycles costing hundreds of millions of dollars. This creates a potential innovation bottleneck, similar to aerospace industries where testing new designs is prohibitively...
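The bronze-silver-gold layering described in the first insight can be sketched as a toy pipeline. This is a hedged illustration only: the layer functions, the sample rows, and the tiny metadata catalog are invented for the example, not a real lakehouse API.

```python
# Toy bronze/silver/gold pipeline: raw records -> cleaned, typed records ->
# business-ready aggregates, with a minimal metadata catalog tracking lineage.

bronze = [  # raw ingested data, kept as-is, including bad rows
    {"order_id": 1, "amount": "20.00", "region": "US "},
    {"order_id": 2, "amount": "bad",   "region": "EU"},
    {"order_id": 3, "amount": "5.00",  "region": "US"},
]

def to_silver(rows):
    """Clean and type the raw data; drop rows that fail validation."""
    out = []
    for r in rows:
        try:
            out.append({"order_id": r["order_id"],
                        "amount": float(r["amount"]),
                        "region": r["region"].strip()})
        except ValueError:
            pass  # dropped here: this is where stakeholders lose the raw view
    return out

def to_gold(rows):
    """Business-ready aggregate: revenue per region."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0) + r["amount"]
    return totals

# The catalog records lineage between layers, addressing the "bottleneck"
# problem: without it, nobody downstream knows a row was silently dropped.
catalog = {}
silver = to_silver(bronze)
catalog["silver"] = {"source": "bronze",
                     "rows_in": len(bronze), "rows_out": len(silver)}
gold = to_gold(silver)
print(gold)  # -> {'US': 25.0}  (the malformed EU row was dropped in silver)
```

Note how the catalog entry is the only surviving evidence that a bronze row was discarded, which is exactly why the episode argues metadata becomes critical as data moves up the layers.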

Duration:00:53:38


Episode #524: The 500-Year Prophecy: Why Buddhism and AI Are Colliding Right Now

1/19/2026
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Kelvin Lwin for their second conversation exploring the fascinating intersection of AI and Buddhist cosmology. Lwin brings his unique perspective as both a technologist with deep Silicon Valley experience and a serious meditation practitioner who's spent decades studying Buddhist philosophy. Together, they examine how AI development fits into ancient spiritual prophecies, discuss the dangerous allure of LLMs as potentially "asura weapons" that can mislead users, and explore verification methods for enlightenment claims in our modern digital age. The conversation ranges from technical discussions about the need for better AI compilers and world models to profound questions about humanity's role in what Lwin sees as an inevitable technological crucible that will determine our collective spiritual evolution. For more information about Kelvin's work on attention training and AI, visit his website at alin.ai. You can also join Kelvin for live meditation sessions twice daily on Clubhouse at clubhouse.com/house/neowise.

Timestamps
00:00 Exploring AI and Spirituality
05:56 The Quest for Enlightenment Verification
11:58 AI's Impact on Spirituality and Reality
17:51 The 500-Year Prophecy of Buddhism
23:36 The Future of AI and Business Innovation
32:15 Exploring Language and Communication
34:54 Programming Languages and Human Interaction
36:23 AI and the Crucible of Change
39:20 World Models and Physical AI
41:27 The Role of Ontologies in AI
44:25 The Asura and Deva: A Battle for Supremacy
48:15 The Future of Humanity and AI
51:08 Persuasion and the Power of LLMs
55:29 Navigating the New Age of Technology

Key Insights
1. The Rarity of Polymath AI-Spirituality Perspectives: Kelvin argues that very few people are approaching AI through spiritual frameworks because it requires being a polymath with deep knowledge across multiple domains. Most people specialize in one field, and combining AI expertise with Buddhist cosmology requires significant time, resources, and academic background that few possess.
2. Traditional Enlightenment Verification vs. Modern Claims: There are established methods for verifying enlightenment claims in Buddhist traditions, including adherence to the five precepts and overcoming hell rebirth through karmic resolution. Many modern Western practitioners claiming enlightenment fail these traditional tests, often changing the criteria when they can't meet the original requirements.
3. The 500-Year Buddhist Prophecy and Current Timing: We are approximately 60 years into a prophesied 500-year period where enlightenment becomes possible again. This "startup phase of Buddhism revival" coincides with technological developments like the internet and AI, which are seen as integral to this spiritual renaissance rather than obstacles to it.
4. LLMs as UI Solution, Not Reasoning Engine: While LLMs have solved the user interface problem of capturing human intent, they fundamentally cannot reason or make decisions due to their token-based architecture. The technology works well enough to create an illusion of capability, leading people down an asymptotic path away from true solutions.
5. The Need for New Programming Paradigms: Current AI development caters too much to human cognitive limitations through familiar programming structures. True advancement requires moving beyond human-readable code toward agent-generated languages that prioritize efficiency over human comprehension, similar to how compilers already translate high-level code.
6. AI as Asura Weapon in Spiritual Warfare: From a Buddhist cosmological perspective, AI represents an asura (demon-realm) tool that appears helpful but is fundamentally wasteful and disruptive to human consciousness. Humanity exists as the battleground between divine and demonic forces, with AI serving as a weapon that both sides employ in this cosmic conflict.
7. 2029 as Critical Convergence Point:...

Duration:01:00:57


Episode #523: Space Computer: When Your Trusted Execution Environment Needs a Rocket

1/16/2026
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Daniel Bar, co-founder of Space Computer, a satellite-based secure compute protocol that creates a "root of trust in space" using tamper-resistant hardware for cryptographic applications. The conversation explores the fascinating intersection of space technology, blockchain infrastructure, and trusted execution environments (TEEs), touching on everything from cosmic radiation-powered random number generators to the future of space-based data centers and Daniel's journey from quantum computing research to building what they envision as the next evolution beyond Ethereum's "world computer" concept. For more information about Space Computer, visit spacecomputer.io, and check out their new podcast "Frontier Pod" on the Space Computer YouTube channel.

Timestamps
00:00 Introduction to Space Computer
02:45 Understanding Layer 1 and Layer 2 in Space Computing
06:04 Trusted Execution Environments in Space
08:45 The Evolution of Trusted Execution Environments
11:59 The Role of Blockchain in Space Computing
14:54 Incentivizing Satellite Deployment
17:48 The Future of Space Computing and Its Applications
20:58 Radiation Hardening and Space Environment Challenges
23:45 Kardashev Civilizations and the Future of Energy
26:34 Quantum Computing and Its Implications
29:49 The Intersection of Quantum and Crypto
32:26 The Future of Space Computer and Its Vision

Key Insights
1. Space-based data centers solve the physical security problem for Trusted Execution Environments (TEEs). While TEEs provide secure compute through physical isolation, they remain vulnerable to attacks requiring physical access - like electron microscope forensics to extract secrets from chips. By placing TEEs in space, these attack vectors become practically impossible, creating the highest possible security guarantees for cryptographic applications.
2. The Space Computer architecture uses a hybrid layer approach with space-based settlement and earth-based compute. The layer 1 blockchain operates in space as a settlement layer and smart contract platform, while layer 2 solutions on earth provide high-performance compute. This design leverages space's security advantages while compensating for the bandwidth and compute constraints of orbital infrastructure through terrestrial augmentation.
3. True randomness generation becomes possible through cosmic radiation harvesting. Unlike pseudo-random number generators used in most blockchain applications today, space-based systems can harvest cosmic radiation as a genuinely stochastic process. This provides pure randomness critical for cryptographic applications like block producer selection, eliminating the predictability issues that compromise security in earth-based random number generation.
4. Space compute migration is inevitable as humanity advances toward Kardashev Type 1 civilization. The progression toward planetary-scale energy control requires space-based infrastructure including solar collection, orbital cities, and distributed compute networks. This technological evolution makes space-based data centers not just viable but necessary for supporting the scale of computation required for advanced civilization development.
5. The optimal use case for space compute is high-security applications rather than general data processing. While space-based data centers face significant constraints including 40kg of peripheral infrastructure per kg of compute, maintenance impossibility, and 5-year operational lifespans, these limitations become acceptable when the application requires maximum security guarantees that only space-based isolation can provide.
6. Space Computer will evolve from centralized early-stage operation to a decentralized satellite constellation. Similar to early Ethereum's foundation-operated nodes, Space Computer currently runs trusted operations but aims to enable public participation through satellite ownership...
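The cosmic-radiation randomness idea in insight 3 rests on a standard software trick: a physically stochastic but possibly biased bit source can be debiased before use. Here is a sketch using the classic von Neumann extractor, with a seeded pseudo-random generator standing in for a real radiation detector. The simulated source is an assumption of the example, not Space Computer's actual design.

```python
import random

def biased_source(n, p=0.7, seed=42):
    """Stand-in for a physical entropy source (e.g. radiation hit counts)
    that emits 1 with probability p - genuinely random but biased."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

def von_neumann_extract(bits):
    """Debias: take non-overlapping pairs; (1,0) -> 1, (0,1) -> 0,
    discard (0,0) and (1,1). If source bits are independent, the two
    kept outcomes have equal probability p*(1-p), so output is unbiased."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

raw = biased_source(10_000)
clean = von_neumann_extract(raw)
# The raw stream runs at roughly 70% ones; the extracted stream is
# close to 50/50, at the cost of discarding most of the input bits.
print(sum(raw) / len(raw), sum(clean) / len(clean))
```

The trade-off is throughput: for bias p, only 2p(1-p) of the pairs survive, which is why hardware entropy sources are typically followed by extractors or cryptographic conditioning rather than used raw.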

Duration:01:03:50


Episode #522: The Hardware Heretic: Why Everything You Think About FPGAs Is Backwards

1/12/2026
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Peter Schmidt Nielsen, who is building FPGA-accelerated servers at Saturn Data. The conversation explores why servers need FPGAs, how these field-programmable gate arrays work as "IO expanders" for massive memory bandwidth, and why they're particularly well-suited for vector database and search applications. Peter breaks down the technical realities of FPGAs - including why they "really suck" in many ways compared to GPUs and CPUs - while explaining how his company is leveraging them to provide terabyte-per-second bandwidth to 1.3 petabytes of flash storage. The discussion ranges from distributed systems challenges and the CAP theorem to the hardware-software relationship in modern computing, offering insights into both the philosophical aspects of search technology and the nuts-and-bolts engineering of memory controllers and routing fabrics. For more information about Peter's work, you can reach him on Twitter at @PTRSCHMDTNLSN or find his website at saturndata.com.

Timestamps
00:00 Introduction to FPGAs and Their Role in Servers
02:47 Understanding FPGA Limitations and Use Cases
05:55 Exploring Different Types of Servers
08:47 The Importance of Memory and Bandwidth
11:52 Philosophical Insights on Search and Access Patterns
14:50 The Relationship Between Hardware and Search Queries
17:45 Challenges of Distributed Systems
20:47 The CAP Theorem and Its Implications
23:52 The Evolution of Technology and Knowledge Management
26:59 FPGAs as IO Expanders
29:35 The Trade-offs of FPGAs vs. ASICs and GPUs
32:55 The Future of AI Applications with FPGAs
35:51 Exciting Developments in Hardware and Business

Key Insights
1. FPGAs are fundamentally "crappy ASICs" with serious limitations - Despite being programmable hardware, FPGAs perform far worse than general-purpose alternatives in most cases. A $100,000 high-end FPGA might only match the memory bandwidth of a $600 gaming GPU. They're only valuable for specific niches like ultra-low latency applications or scenarios requiring massive parallel I/O operations, making them unsuitable for most computational workloads where CPUs and GPUs excel.
2. The real value of FPGAs lies in I/O expansion, not computation - Rather than using FPGAs for their processing power, Saturn Data leverages them primarily as cost-effective ways to access massive amounts of DRAM controllers and NVMe interfaces. Their server design puts 200 FPGAs in a 2U enclosure with 1.3 petabytes of flash storage and terabyte-per-second read bandwidth, essentially using FPGAs as sophisticated I/O expanders.
3. Access patterns determine hardware performance more than raw specs - The way applications access data fundamentally determines whether specialized hardware will provide benefits. Applications that do sparse reads across massive datasets (like vector databases) benefit from Saturn Data's architecture, while those requiring dense computation or frequent inter-node communication are better served by traditional hardware. Understanding these patterns is crucial for matching workloads to appropriate hardware.
4. Distributed systems complexity stems from failure tolerance requirements - The difficulty of distributed systems isn't inherent but depends on what failures you need to tolerate. Simple approaches that restart on any failure are easy but unreliable, while Byzantine fault tolerance (like Bitcoin) is extremely complex. Most practical systems, including banks, find middle ground by accepting occasional unavailability rather than trying to achieve perfect consistency, availability, and partition tolerance simultaneously.
5. Hardware specialization follows predictable cycles of generalization and re-specialization - Computing hardware consistently follows "Makimoto's Wave" - specialized hardware becomes more general over time, then gets leapfrogged by new specialized solutions. CPUs became general-purpose, GPUs evolved from...
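The access-pattern point in insight 3 can be made concrete with a toy page-access model: a sparse, index-narrowed lookup touches a handful of storage pages, while a dense scan touches all of them. The page size, record count, and candidate count below are invented, purely illustrative numbers, not Saturn Data's figures.

```python
import random

# Toy model: data lives in fixed-size pages; we count pages actually read.
PAGE_SIZE = 256        # records per page (hypothetical)
N_RECORDS = 1_000_000

def pages_touched(record_ids):
    """Return the set of storage pages a batch of record reads hits."""
    return {rid // PAGE_SIZE for rid in record_ids}

# Dense scan (analytics-style workload): every record, hence every page.
dense = pages_touched(range(N_RECORDS))

# Sparse reads (vector-search-style workload): an index has already
# narrowed the query to ~100 candidate records scattered across the set.
rng = random.Random(0)
sparse = pages_touched(rng.sample(range(N_RECORDS), 100))

# The dense scan reads ~3,907 pages; the sparse lookup at most 100.
print(len(dense), len(sparse))
```

The sketch shows why the same storage can look fast or slow depending on the workload: hardware with enormous parallel random-read bandwidth pays off for the sparse case, while the dense case is limited by sequential throughput regardless.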

Duration:00:53:08


Episode #521: From Borges to Threadrippers: How Argentina's Emotional Culture Shapes the AI Future

1/9/2026
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop interviews Aurelio Gialluca, an economist and full stack data professional who works across finance, retail, and AI as both a data engineer and machine learning developer, while also exploring human consciousness and psychology. Their wide-ranging conversation covers the intersection of science and psychology, the unique cultural characteristics that make Argentina a haven for eccentrics (drawing parallels to the United States), and how Argentine culture has produced globally influential figures from Borges to Maradona to Che Guevara. They explore the current AI landscape as a "centralizing force" creating cultural homogenization (particularly evident in LinkedIn's cookie-cutter content), discuss the potential futures of AI development from dystopian surveillance states to anarchic chaos, and examine how Argentina's emotionally mature, non-linear communication style might offer insights for navigating technological change. The conversation concludes with Gialluca describing his ambitious project to build a custom water-cooled workstation with industrial-grade processors for his quantitative hedge fund, highlighting the practical challenges of heat management and the recent tripling of RAM prices due to market consolidation.

Timestamps
00:00 Exploring the Intersection of Psychology and Science
02:55 Cultural Eccentricity: Argentina vs. the United States
05:36 The Influence of Religion on National Identity
08:50 The Unique Argentine Cultural Landscape
11:49 Soft Power and Cultural Influence
14:48 Political Figures and Their Cultural Impact
17:50 The Role of Sports in Shaping National Identity
20:49 The Evolution of Argentine Music and Subcultures
23:41 AI and the Future of Cultural Dynamics
26:47 Navigating the Chaos of AI in Culture
33:50 Equilibrating Society for a Sustainable Future
35:10 The Patchwork Age: Decentralization and Society
35:56 The Impact of AI on Human Connection
38:06 Individualism vs. Collective Rules in Society
39:26 The Future of AI and Global Regulations
40:16 Biotechnology: The Next Frontier
42:19 Building a Personal AI Lab
45:51 Tiers of AI Labs: From Personal to Industrial
48:35 Mathematics and AI: The Foundation of Innovation
52:12 Stochastic Models and Predictive Analytics
55:47 Building a Supercomputer: Hardware Insights

Key Insights
1. Argentina's Cultural Exceptionalism and Emotional Maturity: Argentina stands out globally for allowing eccentrics to flourish and having a non-linear communication style that Gialluca describes as "non-monotonous systems." Argentines can joke profoundly and be eccentric while simultaneously being completely organized and straightforward, demonstrating high emotional intelligence and maturity that comes from their unique cultural blend of European romanticism and Latino lightheartedness.
2. Argentina as an Underrecognized Cultural Superpower: Despite being introverted about their achievements, Argentina produces an enormous amount of global culture through music, literature, and iconic figures like Borges, Maradona, Messi, and Che Guevara. These cultural exports have shaped entire generations worldwide, with Argentina "stealing the thunder" from other nations and creating lasting soft power influence that people don't fully recognize as Argentine.
3. AI's Cultural Impact Follows Oscillating Patterns: Culture operates as a dynamic system that oscillates between centralization and decentralization like a sine wave. AI currently represents a massive centralizing force, as seen in LinkedIn's homogenized content, but this will inevitably trigger a decentralization phase. The speed of this cultural transformation has accelerated dramatically, with changes that once took generations now happening in years.
4. The Coming Bifurcation of AI Futures: Gialluca identifies two extreme possible endpoints for AI development: complete centralized control (the "Mordor" scenario with total...

Duration:01:08:02


Episode #520: Training Super Intelligence One Simulated Workflow at a Time

1/5/2026
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Josh Halliday, who works on training super intelligence with frontier data at Turing. The conversation explores the fascinating world of reinforcement learning (RL) environments, synthetic data generation, and the crucial role of high-quality human expertise in AI training. Josh shares insights from his years working at Unity Technologies building simulated environments for everything from oil and gas safety scenarios to space debris detection, and discusses how the field has evolved from quantity-focused data collection to specialized, expert-verified training data that's becoming the key bottleneck in AI development. They also touch on the philosophical implications of our increasing dependence on AI technology and the emerging job market around AI training and data acquisition.

Timestamps
00:00 Introduction to AI and Reinforcement Learning
03:12 The Evolution of AI Training Data
05:59 Gaming Engines and AI Development
08:51 Virtual Reality and Robotics Training
11:52 The Future of Robotics and AI Collaboration
14:55 Building Applications with AI Tools
17:57 The Philosophical Implications of AI
20:49 Real-World Workflows and RL Environments
26:35 The Impact of Technology on Human Cognition
28:36 Cultural Resistance to AI and Data Collection
31:12 The Bottleneck of High-Quality Data in AI
32:57 Philosophical Perspectives on Data
35:43 The Future of AI Training and Human Collaboration
39:09 The Role of Subject Matter Experts in Data Quality
43:20 The Evolution of Work in the Age of AI
46:48 Convergence of AI and Human Experience

Key Insights
1. Reinforcement Learning environments are sophisticated simulations that replicate real-world enterprise workflows and applications. These environments serve as training grounds for AI agents by creating detailed replicas of tools like Salesforce, complete with specific tasks and verification systems. The agent attempts tasks, receives feedback on failures, and iterates until achieving consistent success rates, effectively learning through trial and error in a controlled digital environment.
2. Gaming engines like Unity have evolved into powerful platforms for generating synthetic training data across diverse industries. From oil and gas companies needing hazardous scenario data to space intelligence firms tracking orbital debris, these real-time 3D engines with advanced physics can create high-fidelity simulations that capture edge cases too dangerous or expensive to collect in reality, bridging the gap where real-world data falls short.
3. The bottleneck in AI development has fundamentally shifted from data quantity to data quality. The industry has completely reversed course from the previous "scale at all costs" approach to focusing intensively on smaller, higher-quality datasets curated by subject matter experts. This represents a philosophical pivot toward precision over volume in training next-generation AI systems.
4. Remote teleoperation through VR is creating a new global workforce for robotics training. Workers wearing VR headsets can remotely control humanoid robots across the globe, teaching them tasks through direct demonstration. This creates opportunities for distributed talent while generating the nuanced human behavioral data needed to train autonomous systems.
5. Human expertise remains irreplaceable in the AI training pipeline despite advancing automation. Subject matter experts provide crucial qualitative insights that go beyond binary evaluations, offering the contextual "why" and "how" that transforms raw data into meaningful training material. The challenge lies in identifying, retaining, and properly incentivizing these specialists as demand intensifies.
6. First-person perspective data collection represents the frontier of human-like AI training. Companies are now paying people to life-log their daily experiences, capturing petabytes of egocentric data to train...
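The attempt-verify-iterate loop described in insight 1 can be sketched as a minimal RL-style environment with a programmatic verifier. Everything here is invented for illustration - the toy task, the action names, and the trial-and-error "agent" - whereas the real environments discussed in the episode replicate full enterprise applications.

```python
import random

class ToyEnvironment:
    """Minimal RL-style environment: a task plus a verifier that scores
    each attempt. The verifier's score is the reward signal."""
    def __init__(self, target):
        self.target = target  # the action that completes the task

    def verify(self, attempt):
        return 1.0 if attempt == self.target else 0.0

def train(env, actions, episodes=1000, seed=0):
    """Crude trial-and-error agent: sample actions, track average reward
    per action, and return whichever action the verifier rewards most."""
    rng = random.Random(seed)
    scores = {a: [0.0, 0] for a in actions}  # action -> [total reward, tries]
    for _ in range(episodes):
        a = rng.choice(actions)              # attempt the task
        r = env.verify(a)                    # verifier gives feedback
        scores[a][0] += r
        scores[a][1] += 1
    # Learned "policy": the action with the best average reward.
    return max(actions, key=lambda a: scores[a][0] / max(scores[a][1], 1))

env = ToyEnvironment(target="submit_form")
policy = train(env, actions=["submit_form", "click_logo", "open_menu"])
print(policy)  # -> "submit_form"
```

The structure, not the trivial learning rule, is the point: a task definition, an automated verifier, and repeated attempts scored until the agent converges, which is the loop the episode describes at enterprise scale.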

Duration:00:50:04


Episode #519: Inside the Stack: What Really Makes Robots “Intelligent”

1/2/2026
In this episode of the Crazy Wisdom podcast, host Stewart Alsop interviews Marcin Dymczyk, CPO and co-founder of SevenSense Robotics, exploring the fascinating world of advanced robotics and AI. Their conversation covers the evolution from traditional "standard" robotics with predetermined pathways to advanced robotics that incorporates perception, reasoning, and adaptability - essentially the AGI of physical robotics. Dymczyk explains how his company builds "the eyes and brains of mobile robots" using camera-based autonomy algorithms, drawing parallels between robot sensing systems and human vision, inner ear balance, and proprioception. The discussion ranges from the technical challenges of sensor fusion and world models to broader topics including robotics regulation across different countries, the role of federalism in innovation, and how recent geopolitical changes are driving localized high-tech development, particularly in defense applications. They also touch on the democratization of robotics for small businesses and the philosophical implications of increasingly sophisticated AI systems operating in physical environments. To learn more about SevenSense, visit www.sevensense.ai. 
Check out this GPT we trained on the conversation Timestamps 00:00 Introduction to Robotics and Personal Journey 05:27 The Evolution of Robotics: From Standard to Advanced 09:56 The Future of Robotics: AI and Automation 12:09 The Role of Edge Computing in Robotics 17:40 FPGA and AI: The Future of Robotics Processing 21:54 Sensing the World: How Robots Perceive Their Environment 29:01 Learning from the Physical World: Insights from Robotics 33:21 The Intersection of Robotics and Manufacturing 35:01 Journey into Robotics: Education and Passion 36:41 Practical Robotics Projects for Beginners 39:06 Understanding Particle Filters in Robotics 40:37 World Models: The Future of AI and Robotics 41:51 The Black Box Dilemma in AI and Robotics 44:27 Safety and Interpretability in Autonomous Systems 49:16 Regulatory Challenges in Robotics and AI 51:19 Global Perspectives on Robotics Regulation 54:43 The Future of Robotics in Emerging Markets 57:38 The Role of Engineers in Modern Warfare Key Insights 1. Advanced robotics transcends traditional programming through perception and intelligence. Dymczyk distinguishes between standard robotics that follows rigid, predefined pathways and advanced robotics that incorporates perception and reasoning. This evolution enables robots to make autonomous decisions about navigation and task execution, similar to how humans adapt to unexpected situations rather than following predetermined scripts. 2. Camera-based sensing systems mirror human biological navigation. SevenSense Robotics builds "eyes and brains" for mobile robots using multiple cameras (up to eight), IMUs (accelerometers/gyroscopes), and wheel encoders that parallel human vision, inner ear balance, and proprioception. This redundant sensing approach allows robots to navigate even when one system fails, such as operating in dark environments where visual sensors are compromised. 3. Edge computing dominates industrial robotics due to connectivity and security constraints. 
Many industrial applications operate in environments with poor connectivity (like underground grocery stores) or require on-premise solutions for confidentiality. This necessitates powerful local processing capabilities rather than cloud-dependent AI, particularly in automotive factories where data security about new models is paramount. 4. Safety regulations create mandatory "kill switches" that bypass AI decision-making. European and US regulatory bodies require deterministic safety systems that can instantly stop robots regardless of AI reasoning. These systems operate like human reflexes, providing immediate responses to obstacles while the main AI brain handles complex navigation and planning tasks. 5. Modern robotics development benefits from increasingly...
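The two-layer safety architecture in insight 4 (a deterministic stop that fires like a reflex, overriding whatever the AI planner proposes) reduces to a simple guard. The interfaces below are invented for illustration; real safety-rated systems run this logic on certified hardware, not in application code.

```python
def control_step(obstacle_distance_m, ai_command, stop_threshold_m=0.5):
    """Deterministic safety layer: if an obstacle is inside the stop
    threshold, command an immediate stop regardless of the AI planner's
    output -- the 'reflex' path that bypasses AI decision-making."""
    if obstacle_distance_m < stop_threshold_m:
        return {"velocity": 0.0, "reason": "safety_stop"}
    return ai_command  # clear path: defer to the AI planner

# The AI planner proposes a move; the safety layer vets every command.
planned = {"velocity": 1.2, "reason": "ai_plan"}
safe_far = control_step(2.0, planned)   # obstacle far away: AI command passes
safe_near = control_step(0.3, planned)  # obstacle inside threshold: forced stop
```

The key property regulators want is that the stop branch is a plain comparison, auditable and provable, with no learned component in the loop.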

Duration:01:02:24


Episode #518: Decentralization Without Romance: Incentives, Mesh Networks, and Practical Crypto

12/29/2025
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Mike Bakon to explore the fascinating intersection of hardware hacking, blockchain technology, and decentralized systems. Their conversation spans from Mike's childhood fascination with taking apart electronics in 1980s Poland to his current work with ESP32 microcontrollers, LoRa mesh networks, and Cardano blockchain development. They discuss the technical differences between UTXO and account-based blockchains, the challenges of true decentralization versus hybrid systems, and how AI tools are changing the development landscape. Mike shares his vision for incentivizing mesh networks through blockchain technology and explains why he believes mass adoption of decentralized systems will come through abstraction rather than technical education. The discussion also touches on the potential for creating new internet infrastructure using ad hoc mesh networks and the importance of maintaining truly decentralized, permissionless systems in an increasingly surveilled world. You can find Mike on Twitter as @anothervariable. Check out this GPT we trained on the conversation Timestamps 00:00 Introduction to Hardware and Early Experiences 02:59 The Evolution of AI in Hardware Development 05:56 Decentralization and Blockchain Technology 09:02 Understanding UTXO vs Account-Based Blockchains 11:59 Smart Contracts and Their Functionality 14:58 The Importance of Decentralization in Blockchain 17:59 The Process of Data Verification in Blockchain 20:48 The Future of Blockchain and Its Applications 34:38 Decentralization and Trustless Systems 37:42 Mainstream Adoption of Blockchain 39:58 The Role of Currency in Blockchain 43:27 Interoperability vs Bridging in Blockchain 47:27 Exploring Mesh Networks and LoRa Technology 01:00:25 The Future of AI and Decentralization Key Insights 1.
Hardware curiosity drives innovation from childhood - Mike's journey into hardware began as a child in 1980s Poland, where he would disassemble toys like battery-powered cars to understand how they worked. This natural curiosity about taking things apart and understanding their inner workings laid the foundation for his later expertise in microcontrollers like the ESP32 and his deep understanding of both hardware and software integration. 2. AI as a research companion, not a replacement for coding - Mike uses AI and LLMs primarily as research tools and coding companions rather than letting them write entire applications. He finds them invaluable for getting quick answers to coding problems, analyzing Git repositories, and avoiding the need to search through Stack Overflow, but he remains wary of letting AI write whole functions, preferring to understand and write his own code. 3. Blockchain decentralization requires trustless consensus verification - The fundamental difference between blockchain databases and traditional databases lies in the consensus process that data must go through before being recorded. Unlike centralized systems where one entity controls data validation, blockchains require hundreds of nodes to verify each block through trustless consensus mechanisms, ensuring data integrity without relying on any single authority. 4. UTXO vs account-based blockchains have fundamentally different architectures - Cardano uses an extended UTXO model (like Bitcoin but with smart contracts) where transactions consume existing UTXOs and create new ones, keeping the ledger lean. Ethereum uses account-based ledgers that store persistent state, leading to much larger data requirements over time and making it increasingly difficult for individuals to sync and maintain full nodes independently. 5.
True interoperability differs fundamentally from bridging - Real blockchain interoperability means being able to send assets directly between different blockchains (like sending ADA to a Bitcoin wallet) without intermediaries. This is possible between UTXO-based chains like Cardano and...
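The architectural contrast in insight 4 can be made concrete with a toy model: a UTXO transaction consumes existing outputs and creates new ones, while an account ledger mutates persistent balances in place. Everything below is a simplified sketch; it mirrors neither Cardano's eUTXO nor Ethereum's actual data structures.

```python
# Toy UTXO ledger: a transaction consumes input UTXOs and creates new ones.
utxos = {("tx0", 0): {"owner": "alice", "amount": 10}}

def spend(utxo_set, inputs, outputs, txid):
    total_in = sum(utxo_set[i]["amount"] for i in inputs)
    assert total_in >= sum(o["amount"] for o in outputs), "overspend"
    for i in inputs:
        del utxo_set[i]            # consumed UTXOs disappear from the ledger
    for n, out in enumerate(outputs):
        utxo_set[(txid, n)] = out  # new UTXOs replace them

# Alice pays Bob 4 and sends herself 6 in change.
spend(utxos, [("tx0", 0)],
      [{"owner": "bob", "amount": 4}, {"owner": "alice", "amount": 6}], "tx1")

# Toy account ledger: the same payment mutates persistent balances instead.
accounts = {"alice": 10, "bob": 0}
accounts["alice"] -= 4
accounts["bob"] += 4
```

Note how the UTXO set stays the same size here (spent outputs are deleted as new ones appear), while the account model's state must persist and grow with every new account and contract, which is the syncing burden the episode describes.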

Duration:01:09:07


Episode #517: How Orbital Robotics Turns Space Junk into Infrastructure

12/26/2025
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop speaks with Aaron Borger, founder and CEO of Orbital Robotics, about the emerging world of space robotics and satellite capture technology. The conversation covers a fascinating range of topics including Borger's early experience launching AI-controlled robotic arms to space as a student, his work at Blue Origin developing lunar lander software, and how his company is developing robots that can capture other spacecraft for refueling, repair, and debris removal. They discuss the technical challenges of operating in space - from radiation hardening electronics to dealing with tumbling satellites - as well as the broader implications for the space economy, from preventing the Kessler effect to building space-based recycling facilities and mining lunar ice for rocket fuel. You can find more about Aaron Borger’s work at Orbital Robots and follow him on LinkedIn for updates on upcoming missions and demos. Check out this GPT we trained on the conversation Timestamps 00:00 Introduction to orbital robotics, satellite capture, and why sensing and perception matter in space 05:00 The Kessler Effect, cascading collisions, and why space debris is an economic problem before it is an existential one 10:00 From debris removal to orbital recycling and the idea of turning junk into infrastructure 15:00 Long-term vision of space factories, lunar ice, and refueling satellites to bootstrap a lunar economy 20:00 Satellite upgrading, servicing live spacecraft, and expanding today’s narrow space economy 25:00 Costs of collision avoidance, ISS maneuvers, and making debris capture economically viable 30:00 Early experiments with AI-controlled robotic arms, suborbital launches, and reinforcement learning in microgravity 35:00 Why deterministic AI and provable safety matter more than LLM hype for spacecraft control 40:00 Radiation, single event upsets, and designing space-safe AI systems with bounded behavior 45:00 AI, 
physics-based world models, and autonomy as the key to scaling space operations 50:00 Manufacturing constraints, space supply chains, and lessons from rocket engine software 55:00 The future of space startups, geopolitics, deterrence, and keeping space usable for humanity Key Insights 1. Space Debris Removal as a Growing Economic Opportunity: Aaron Borger explains that orbital debris is becoming a critical problem with approximately 3,000-4,000 defunct satellites among the 15,000 total satellites in orbit. The company is developing robotic arms and AI-controlled spacecraft to capture other satellites for refueling, repair, debris removal, and even space station assembly. The economic case is compelling - it costs about $1 million for the ISS to maneuver around debris, so if their spacecraft can capture and remove multiple pieces of debris for less than that cost per piece, it becomes financially viable while addressing the growing space junk problem. 2. Revolutionary AI Safety Methods Enable Space Robotics: Traditional NASA engineers have been reluctant to use AI for spacecraft control due to safety concerns, but Orbital Robotics has developed breakthrough methods combining reinforcement learning with traditional control systems that can mathematically prove the AI will behave safely. Their approach uses physics-based world models rather than pure data-driven learning, ensuring deterministic behavior and bounded operations. This represents a significant advancement over previous AI approaches that couldn't guarantee safe operation in the high-stakes environment of space. 3. Vision for Space-Based Manufacturing and Resource Utilization: The long-term vision extends beyond debris removal to creating orbital recycling facilities that can break down captured satellites and rebuild them into new spacecraft using existing materials in orbit. Additionally, the company plans to harvest propellant from lunar ice, splitting it into hydrogen and oxygen for rocket fuel,...
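The economic argument in insight 1 is a simple break-even comparison. The roughly $1 million per ISS avoidance maneuver comes from the episode; the per-piece removal cost below is a hypothetical placeholder used only to show the calculation.

```python
# The ~$1M-per-maneuver figure is from the episode; the per-piece
# removal cost is a hypothetical placeholder, not a real quote.
maneuver_cost = 1_000_000          # one ISS debris-avoidance maneuver (USD)
removal_cost_per_piece = 400_000   # hypothetical cost to capture one debris piece

# Removal is economically viable when it undercuts the avoidance cost it
# prevents; each piece removed nets the difference.
viable = removal_cost_per_piece < maneuver_cost
savings_per_piece = maneuver_cost - removal_cost_per_piece
```

A spacecraft that captures multiple pieces per mission amortizes its launch cost across several such savings, which is the "less than that cost per piece" condition in the summary.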

Duration:00:58:34


Episode #516: China’s AI Moment, Functional Code, and a Post-Centralized World

12/22/2025
In this episode, Stewart Alsop sits down with Joe Wilkinson of Artisan Growth Strategies to talk through how vibe coding is changing who gets to build software, why functional programming and immutability may be better suited for AI-written code, and how tools like LLMs are reshaping learning, work, and curiosity itself. The conversation ranges from Joe’s experience living in China and his perspective on Chinese AI labs like DeepSeek, Kimi, Minimax, and GLM, to mesh networks, Raspberry Pi–powered infrastructure, decentralization, and what sovereignty might mean in a world where intelligence is increasingly distributed. They also explore hallucinations, AlphaGo’s Move 37, and why creative “wrongness” may be essential for real breakthroughs, along with the tension between centralized power and open access to advanced technology. You can find more about Joe’s work at https://artisangrowthstrategies.com and follow him on X at https://x.com/artisangrowth. Check out this GPT we trained on the conversation Timestamps 00:00 – Vibe coding as a new learning unlock, China experience, information overload, and AI-powered ingestion systems 05:00 – Learning to code late, Exercism, syntax friction, AI as a real-time coding partner 10:00 – Functional programming, Elixir, immutability, and why AI struggles with mutable state 15:00 – Coding metaphors, “spooky action at a distance,” and making software AI-readable 20:00 – Raspberry Pi, personal servers, mesh networks, and peer-to-peer infrastructure 25:00 – Curiosity as activation energy, tech literacy gaps, and AI-enabled problem solving 30:00 – Knowledge work superpowers, decentralization, and small groups reshaping systems 35:00 – Open source vs open weights, Chinese AI labs, data ingestion, and competitive dynamics 40:00 – Power, safety, and why broad access to AI beats centralized control 45:00 – Hallucinations, AlphaGo’s Move 37, creativity, and logical consistency in AI 50:00 – Provenance, epistemology, ontologies, and risks 
of closed-loop science 55:00 – Centralization vs decentralization, sovereign countries, and post-global-order shifts 01:00:00 – U.S.–China dynamics, war skepticism, pragmatism, and cautious optimism about the future Key Insights
Vibe coding fundamentally lowers the barrier to entry for technical creation
Functional programming and immutability may be better aligned with AI-written code than object-oriented paradigms
AI is compressing the entire learning stack, from software to physical reality
Decentralized infrastructure like mesh networks and personal servers becomes viable when cognitive overhead drops
Chinese AI labs are likely underestimated because they operate with different constraints, incentives, and cultural inputs
Hallucinations and “mistakes” are not purely failures but potential sources of creative breakthroughs
The next phase of decentralization may begin with sovereign countries before sovereign individuals
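The claim that immutability suits AI-written code (echoed in the 10:00 and 15:00 segments on "spooky action at a distance") can be shown with a small contrast. Python stands in here for a functional language like Elixir; both functions are invented examples.

```python
# Mutable style: the helper silently changes its caller's data --
# the "spooky action at a distance" that is hard for an AI (or a human)
# to reason about from the call site alone.
def add_tag_mutable(config, tag):
    config["tags"].append(tag)  # side effect: mutates the argument in place
    return config

# Immutable style: inputs are never modified; every change produces a
# new value, so each function can be understood in isolation.
def add_tag_immutable(config, tag):
    return {**config, "tags": config["tags"] + [tag]}

original = {"name": "svc", "tags": ["a"]}
updated = add_tag_immutable(original, "b")  # `original` is left untouched
```

With the immutable version, an AI generating or reviewing a call site only needs local context, since no distant state can have been altered behind its back.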

Duration:01:04:59


Episode #515: Simple Thinking for Complex Worlds: Plasma Physics, Rockets, and Reality Checks

12/19/2025
In this episode of the Crazy Wisdom podcast, host Stewart Alsop talks with Umair Siddiqui about a wide range of interconnected topics spanning plasma physics, aerospace engineering, fusion research, and the philosophy of building complex systems, drawing on Umair’s path from hands-on plasma experiments and nonlinear physics to founding Phase Four, where he built and scaled RF plasma thrusters for small satellites; along the way they discuss how plasmas behave at material boundaries, why theory often breaks in real-world systems, how autonomous spacecraft propulsion actually works, what space radiation does to electronics and biology, the practical limits and promise of AI in scientific discovery, and why starting with simple, analog approaches before adding automation is critical in both research and manufacturing, grounding big ideas in concrete engineering experience. You can find Umair on LinkedIn. Check out this GPT we trained on the conversation Timestamps 00:00 Opening context and plasma rockets, early interests in space, cars, airplanes 05:00 Academic path into space plasmas, mechanical engineering, and hands-on experiments 10:00 Grad school focus on plasma physics, RF helicon sources, and nonlinear theory limits 15:00 Bridging fusion research and space propulsion, Department of Energy funding context 20:00 Spin-out to Phase Four, building CubeSat RF plasma thrusters and real hardware 25:00 Autonomous propulsion systems, embedded controllers, and spacecraft fault handling 30:00 Radiation in space, single-event upsets, redundancy vs rad-hard electronics 35:00 Analog-first philosophy, mechanical thinking, and resisting premature automation 40:00 AI in science, low vs high hanging fruit, automation of experiments and insight 45:00 Manufacturing philosophy, incremental scaling, lessons from Elon Musk and production 50:00 Science vs engineering, concentration of effort, power, and progress in discovery Key Insights
Plasma physics sits at the intersection of many domains
Nonlinear systems
Hands-on experimentation is essential to real understanding
CubeSat propulsion
Science and engineering as intent, not method
Analog-first thinking
Automation and AI in science
Radiation, cosmic rays, and electronics
Scientific and technological progress accelerates with concentrated focus and resources

Duration:00:50:49