
Hashtag Trending

Technology Podcasts

A daily news program covering the top stories in technology with a weekend in-depth interview.

Location: Canada

Description: A daily news program covering the top stories in technology with a weekend in-depth interview.

Twitter: @itworldca

Language: English

Contact: 416-290-2015


Episodes

Hollywood vs. AI Video, Data Loss in Gemini, and Perplexity's New Terms | Project Synapse

2/21/2026
The episode opens with sponsor Meter and a conversation about Saturday morning cartoons before shifting to recent breakthroughs in AI video generation from ByteDance's "SeaDance" (with "SeeDream" as its image generator).

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

The hosts describe SeaDance's cinematic quality, accurate physics, and realistic recreations of actors and IP (including examples like Tom Cruise vs. Brad Pitt and Keanu Reeves as Neo/John Wick), and discuss the implications for film production, commercials, and local film economies such as Toronto and Vancouver. They cover backlash and gatekeeping, including an AI-made Thanksgiving-themed animated short that won a contest tied to AMC theaters' pre-show but reportedly wasn't shown, and compare resistance to historical Luddite reactions. The discussion broadens to productivity and labor impacts, arguing that AI adoption may mirror the 1980s computer productivity dip before process re-engineering in the 1990s, while also raising concerns that AI leaders are forecasting major white-collar job losses. The hosts highlight the rise of agentic benchmarks (TerminalBench, Apex Agents, BrowseComp) and how AI search helps find information faster than traditional search, but emphasize that trust, reliability, and infrastructure are not keeping pace.

They raise major concerns about platform terms and data ownership, focusing on Perplexity's updated terms (non-commercial use only even for paid tiers, mandatory attribution, broad licensing rights over user content, and liability limits). They also discuss reliability failures: a widespread Google Gemini issue where users' chat histories disappeared (only visible as activity records with limited usability), and missing document links in ChatGPT chats. The hosts argue users must back up their own data and criticize unclear policies and weak support. Security risks are illustrated through a story about the AI-enabled robot vacuum "Romo," where a developer used Claude to reverse engineer its app and reportedly gained access to control thousands of devices across multiple countries before responsibly disclosing the issues. They also reference broader concerns like connected home devices, Ring neighborhood features, and Microsoft's Recall concept.

In rapid-fire news, they mention Anthropic releasing Sonnet 4.6 as a strong, cheaper option near Opus-level performance, a new Grok release branded "4.20," and a clip from an AI summit in India where Sam Altman and Dario Amodei appeared to refuse to hold hands on stage, which the hosts cite as a sign of immaturity among AI industry leaders. The episode closes with sponsor Meter.

00:00 Sponsor + Welcome to Project Synapse
00:21 Saturday Morning Cartoons… Reimagined by AI
01:16 What is 'SeaDance'? Cinematic AI Video Goes Viral
03:17 Keanu Reeves, Neo vs. John Wick & the End of VFX as We Know It
06:43 From Movies to Ads: How AI Video Hits Commercial Production
07:41 The Hidden Economy of Commercials (and Why Cities Like Toronto/Vancouver Care)
09:56 AMC Won't Screen an AI-Made Short: Early Luddite Backlash
12:54 Artists, AI, and the 'Starving Creator' Reality
16:17 AI Adoption Parallels: The 1980s Computer Wave & the Productivity Dip
24:09 Agentic AI Benchmarks: TerminalBench, Apex Agents & BrowseComp
26:04 AI Search That Actually Saves Time (and Your Memory)
30:36 Perplexity's New Terms of Service: Non-Commercial Use & Ownership Shock
35:40 Liability Caps, More Corporate Gripes… and a Coke Zero 'Sponsor' Bit
37:36 Gemini 3.1's big leap—and why it still doesn't feel trustworthy
38:08 Gemini chat history vanishes: what happened and why users are furious
40:19 OpenAI document links disappearing too: what "saved" really means
42:04 Cloud AI's shaky...

Duration:01:13:55

Can You Jailbreak An F-35?

2/20/2026
F-35 'Jailbreak' Talk, AMC Rejects AI Film, Gmail Training Confusion, and the AI Productivity Paradox

Host Jim Love covers four stories: Dutch Defense Secretary Gijs Tuinman suggests the F-35's software could be "jailbroken," highlighting allied concerns about U.S.-controlled update pipelines and mission systems (formerly ALIS, now ODIN) and arguing the main barriers are contractual and operational rather than purely technical. An AI-generated short film, "Thanksgiving Day" by Igor OV, wins Screen Vision Media's Frame Forward AI Animated Film Festival and a promised two-week theatrical run, but AMC declines to screen it, reflecting ongoing Hollywood sensitivities around generative AI, authorship, and labor. Google responds to reports that it uses Gmail content to train Gemini by stating it does not use Gmail content for training, while confusion stems from wording and placement of Gmail "smart features" settings; the episode critiques the lack of plain-language clarity. Finally, a survey of 6,000 executives (reported via Tom's Hardware) finds over 80% of companies see no measurable productivity gains from AI, drawing parallels to the historic "productivity paradox" and suggesting organizations aren't redesigning processes; the show previews a deeper discussion on Project Synapse.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Trending Headlines + Sponsor: Meter
00:45 Can You 'Jailbreak' the F-35? Software Sovereignty & Ally Unease
02:48 AI Film Wins a Festival—AMC Says No: The Distribution Bottleneck
05:01 Does Google Train Gemini on Your Gmail? The Settings Confusion Explained
07:29 Why 80% See No AI Productivity Gains: The New 'Productivity Paradox'
09:47 Wrap-Up, Project Synapse Tease + Sponsor Thanks

Duration:00:10:54

Tesla Robotaxis: Four Times the Accident Rate of Human Drivers

2/19/2026
Discord Age Verification Backlash, Tesla Robotaxi Crash Rate, YouTube Outage & Amazon Kills Blue Jay Robot

Host Jim Love covers several tech headlines: Discord's age verification rollout prompts user defections that push TeamSpeak beyond capacity, with concerns centering on third-party verification (including Persona) and broader trust issues; Discord says limited Persona use in the UK has ended and many checks won't require ID or facial scans. Data cited by Gizmodo suggests Tesla robotaxis crash about 1.3 times per million miles versus 0.3 for human drivers, with many incidents being low-speed rear-end collisions, intersection hesitation/entry errors, and strikes on stationary objects. YouTube experiences a major outage affecting hundreds of thousands of users across multiple services, traced to a malfunction in its recommendation system and later restored. Amazon reportedly shuts down its internal Blue Jay warehouse robotics project in under six months while continuing broader fulfillment-center automation. Finally, reports about an Apple AI pendant—a camera-and-microphone wearable likely paired to an iPhone—raise questions about whether Apple's secrecy has weakened or the details are a controlled leak.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Hashtag Trending Kickoff + Sponsor: Meter
00:45 Discord Age Verification Backlash Sparks TeamSpeak Surge
03:00 Tesla Robotaxis: Crash Rate vs Human Drivers (What the Data Shows)
05:18 YouTube Outage: Recommendation System Failure Explained
06:24 Amazon Scraps 'Blue Jay' Warehouse Robot After 6 Months
08:04 Apple's Mysterious AI Pendant: Leak, Strategy, or Secrecy Cracking?
09:57 Wrap-Up, Listener Thanks + Sponsor Message

Duration:00:11:13

Pentagon Threatens To Cut Ties With Anthropic

2/18/2026
In this episode of Hashtag Trending, host Jim Love covers reports that the Pentagon may cut ties with Anthropic over Claude's usage restrictions, after a $200M Department of Defense contract and disagreements about limits related to weapons development, surveillance, and violence.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

The episode also examines warnings from Phison Electronics CEO KS Pua that an AI-driven memory crunch could push smaller consumer electronics makers toward bankruptcy or product exits by 2026 as high-end memory supply is prioritized for data centers, with potential ripple effects across devices and even automotive systems. Macworld's critique of Apple's prolonged Siri overhaul is discussed, including delayed Apple Intelligence promises and reports Apple may integrate Google's Gemini into iOS, raising questions about Apple's premium brand perception amid broader software criticism. Finally, the show highlights a Meta patent describing AI that could continue posting and responding on behalf of deceased users by learning from their historical content, raising concerns about consent, control, authenticity, and identity online.

00:00 Hashtag Trending + Sponsor Message (Meter)
00:46 Pentagon vs. Anthropic: AI Guardrails and Military Use
03:05 AI Memory Crunch: Storage Shortages Threaten Consumer Tech
04:58 Is Siri Now an Apple Liability? Delays, Gemini, and Brand Risk
07:38 Meta's Patent: AI Posting After You Die (Digital Afterlife)
08:59 Wrap-Up, How to Support the Show + Sponsor Thanks

Duration:00:10:24

OpenClaw Joins OpenAI

2/17/2026
Host Jim Love returns after the holidays.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

The episode covers ByteDance's Seedance 2.0 AI video generator, which is producing highly realistic, film-quality scenes and prompting alarm in Hollywood, including comments from screenwriter Rhett Reese and renewed concerns about likeness rights and AI use in entertainment; ByteDance says it is strengthening safeguards to prevent unauthorized use of intellectual property and likenesses. The show reports that Peter Steinberger, creator of the open-source agent tool OpenClaw, is joining OpenAI and the project is becoming part of a foundation for future agent-based AI, while also highlighting OpenClaw's widely discussed security weaknesses and the implications for OpenAI and competitor Anthropic. Western Digital is reported to be sold out of certain hard drive models as AI-related demand absorbs supply, following earlier GPU and memory price pressures. Finally, Ring's Super Bowl ad about finding a lost dog drew criticism for promoting neighborhood camera networks that resemble

Duration:00:10:32

Agentic AI Is Out Of Control - Holiday Edition of Project Synapse

2/16/2026
In this episode of Project Synapse, the hosts discuss how "agentic" AI has rapidly accelerated and become widely distributed, using the explosion of OpenClaw (with claims of ~160,000 instances) as a sign that autonomous agent tools are now in anyone's hands.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

They compare the speed and societal impact of current AI progress to COVID-19's early days, arguing the pace may be even more destabilizing. They cover Anthropic's Claude 4.6 and OpenAI's Codex 5.3, including claims that Claude 4.6 helped produce a functional C compiler for about $20,000, and that a Cowork-like tool could be replicated in a day with Codex 5.3 after Claude reportedly took two weeks to build Cowork. The conversation highlights improved long-context memory performance (needle-in-haystack-style metrics reportedly in the 90% range) and increasingly autonomous behavior such as self-testing, self-correction, and coordinating teams of agents.

The hosts then focus on security: MCP (Model Context Protocol) as a widely adopted but "fundamentally insecure" connector requiring broad permissions; the risk of malicious tools/skills and malware in agent ecosystems; and the rise of "shadow AI," where employees or individuals deploy agents without organizational vetting—potentially leaking sensitive data or running up massive token bills. They discuss incentives that push both humans and models toward fast answers and risky deployment, referencing burnout and an HBR study on rising expectations without proportional hiring.

The episode also touches on realism and deepfakes, citing impressive new AI video generation (including a Chinese model "SEEDANCE 2.0" example) and how this erodes trust in what's real. They conclude with practical advice for organizations—don't just say "no," create safe outlets and governance ("say how")—and briefly discuss wearables/AR, Meta's continued AI efforts (including the Meta AI app and "Vibes"), and the coming integration of AI into always-on devices.

Sponsor: Meter, an integrated wired/wireless/cellular networking stack (meter.com/htt).

00:00 Cold Open + Sponsor: Meter Networking Stack
00:18 Welcome to Project Synapse (and immediate chaos)
00:57 'Something Big Is Happening': AI feels like COVID-speed disruption
02:57 OpenClaw goes viral: 160k instances and easy DIY clones
04:03 Claude Code 'Cowork' on Windows… and why it's broken
06:47 Rebuilding Cowork in a day with OpenAI Codex 5.3
08:18 Why Opus 4.6 feels like a step-change: memory, autonomy, agent teams
11:24 Model leapfrogging + the end of 'can AI write code?' debates
14:45 Hallucinations, 'I don't know,' and self-correction in modern models
18:42 Autonomous agents in practice: cron-like loops, tool use, and fallout
21:00 MCP security: powerful connectors, scary permissions, and 500 zero-days
24:33 Shadow AI & skill marketplaces: the app-store malware analogy
32:02 Incentives drive risk: move fast culture, confident wrong answers, burnout
34:16 AI Agents Boost Productivity… and Raise the Bar at Work
35:14 Warnings of a Coming AI-Driven Crash (and Why We're Not Steering Away)
36:28 "I Quit to Write Poetry": Existential Dread & On the Beach Vibes
37:21 Tech Safety Is Reactive: Seatbelts, Crashes, and the AI Double-Edged Sword
39:42 Fast-Moving Threats: Agents Hacking Infrastructure & Security Debt
40:54 From Doom to Adaptation: Using the Same Tools to Survive the Disruption
42:21 Why We're Numb to AI Warnings + The 'Free Energy' Thought Experiment
46:43 AGI Is Already Here? Prompts, Ego, and the 'If It Quacks Like a Duck' Test
48:56 Deepfake Video Leap: Seedance, Perfect Voices, and What's Real Anymore
52:39 Contain the Damage: 'Don't Say No—Say How' and Shadow AI in Companies
54:58 Holodeck on...

Duration:01:05:52

Agentic AI Is Getting Out of Control: OpenClaw, Claude 4.6 vs Codex 5.3, and the Security Crisis

2/15/2026
In this episode of Project Synapse, the hosts discuss how "agentic" AI has rapidly accelerated and become widely distributed, using the explosion of OpenClaw (with claims of ~160,000 instances) as a sign that autonomous agent tools are now in anyone's hands.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

They compare the speed and societal impact of current AI progress to COVID-19's early days, arguing the pace may be even more destabilizing. They cover Anthropic's Claude 4.6 and OpenAI's Codex 5.3, including claims that Claude 4.6 helped produce a functional C compiler for about $20,000, and that a Cowork-like tool could be replicated in a day with Codex 5.3 after Claude reportedly took two weeks to build Cowork. The conversation highlights improved long-context memory performance (needle-in-haystack-style metrics reportedly in the 90% range) and increasingly autonomous behavior such as self-testing, self-correction, and coordinating teams of agents.

The hosts then focus on security: MCP (Model Context Protocol) as a widely adopted but "fundamentally insecure" connector requiring broad permissions; the risk of malicious tools/skills and malware in agent ecosystems; and the rise of "shadow AI," where employees or individuals deploy agents without organizational vetting—potentially leaking sensitive data or running up massive token bills. They discuss incentives that push both humans and models toward fast answers and risky deployment, referencing burnout and an HBR study on rising expectations without proportional hiring.

The episode also touches on realism and deepfakes, citing impressive new AI video generation (including a Chinese model "SEEDANCE 2.0" example) and how this erodes trust in what's real. They conclude with practical advice for organizations—don't just say "no," create safe outlets and governance ("say how")—and briefly discuss wearables/AR, Meta's continued AI efforts (including the Meta AI app and "Vibes"), and the coming integration of AI into always-on devices.

Sponsor: Meter, an integrated wired/wireless/cellular networking stack (meter.com/htt).

00:00 Cold Open + Sponsor: Meter Networking Stack
00:18 Welcome to Project Synapse (and immediate chaos)
00:57 'Something Big Is Happening': AI feels like COVID-speed disruption
02:57 OpenClaw goes viral: 160k instances and easy DIY clones
04:03 Claude Code 'Cowork' on Windows… and why it's broken
06:47 Rebuilding Cowork in a day with OpenAI Codex 5.3
08:18 Why Opus 4.6 feels like a step-change: memory, autonomy, agent teams
11:24 Model leapfrogging + the end of 'can AI write code?' debates
14:45 Hallucinations, 'I don't know,' and self-correction in modern models
18:42 Autonomous agents in practice: cron-like loops, tool use, and fallout
21:00 MCP security: powerful connectors, scary permissions, and 500 zero-days
24:33 Shadow AI & skill marketplaces: the app-store malware analogy
32:02 Incentives drive risk: move fast culture, confident wrong answers, burnout
34:16 AI Agents Boost Productivity… and Raise the Bar at Work
35:14 Warnings of a Coming AI-Driven Crash (and Why We're Not Steering Away)
36:28 "I Quit to Write Poetry": Existential Dread & On the Beach Vibes
37:21 Tech Safety Is Reactive: Seatbelts, Crashes, and the AI Double-Edged Sword
39:42 Fast-Moving Threats: Agents Hacking Infrastructure & Security Debt
40:54 From Doom to Adaptation: Using the Same Tools to Survive the Disruption
42:21 Why We're Numb to AI Warnings + The 'Free Energy' Thought Experiment
46:43 AGI Is Already Here? Prompts, Ego, and the 'If It Quacks Like a Duck' Test
48:56 Deepfake Video Leap: Seedance, Perfect Voices, and What's Real Anymore
52:39 Contain the Damage: 'Don't Say No—Say How' and Shadow AI in Companies
54:58 Holodeck on...

Duration:01:05:40

Agentic AI Is Out of Control

2/14/2026
In this episode of Project Synapse, the hosts discuss how "agentic" AI has rapidly accelerated and become widely distributed, using the explosion of OpenClaw (with claims of ~160,000 instances) as a sign that autonomous agent tools are now in anyone's hands.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

They compare the speed and societal impact of current AI progress to COVID-19's early days, arguing the pace may be even more destabilizing. They cover Anthropic's Claude 4.6 and OpenAI's Codex 5.3, including claims that Claude 4.6 helped produce a functional C compiler for about $20,000, and that a Cowork-like tool could be replicated in a day with Codex 5.3 after Claude reportedly took two weeks to build Cowork. The conversation highlights improved long-context memory performance (needle-in-haystack-style metrics reportedly in the 90% range) and increasingly autonomous behavior such as self-testing, self-correction, and coordinating teams of agents.

The hosts then focus on security: MCP (Model Context Protocol) as a widely adopted but "fundamentally insecure" connector requiring broad permissions; the risk of malicious tools/skills and malware in agent ecosystems; and the rise of "shadow AI," where employees or individuals deploy agents without organizational vetting—potentially leaking sensitive data or running up massive token bills. They discuss incentives that push both humans and models toward fast answers and risky deployment, referencing burnout and an HBR study on rising expectations without proportional hiring.

The episode also touches on realism and deepfakes, citing impressive new AI video generation (including a Chinese model "SEEDANCE 2.0" example) and how this erodes trust in what's real. They conclude with practical advice for organizations—don't just say "no," create safe outlets and governance ("say how")—and briefly discuss wearables/AR, Meta's continued AI efforts (including the Meta AI app and "Vibes"), and the coming integration of AI into always-on devices.

Sponsor: Meter, an integrated wired/wireless/cellular networking stack (meter.com/htt).

00:00 Cold Open + Sponsor: Meter Networking Stack
00:18 Welcome to Project Synapse (and immediate chaos)
00:57 'Something Big Is Happening': AI feels like COVID-speed disruption
02:57 OpenClaw goes viral: 160k instances and easy DIY clones
04:03 Claude Code 'Cowork' on Windows… and why it's broken
06:47 Rebuilding Cowork in a day with OpenAI Codex 5.3
08:18 Why Opus 4.6 feels like a step-change: memory, autonomy, agent teams
11:24 Model leapfrogging + the end of 'can AI write code?' debates
14:45 Hallucinations, 'I don't know,' and self-correction in modern models
18:42 Autonomous agents in practice: cron-like loops, tool use, and fallout
21:00 MCP security: powerful connectors, scary permissions, and 500 zero-days
24:33 Shadow AI & skill marketplaces: the app-store malware analogy
32:02 Incentives drive risk: move fast culture, confident wrong answers, burnout
34:16 AI Agents Boost Productivity… and Raise the Bar at Work
35:14 Warnings of a Coming AI-Driven Crash (and Why We're Not Steering Away)
36:28 "I Quit to Write Poetry": Existential Dread & On the Beach Vibes
37:21 Tech Safety Is Reactive: Seatbelts, Crashes, and the AI Double-Edged Sword
39:42 Fast-Moving Threats: Agents Hacking Infrastructure & Security Debt
40:54 From Doom to Adaptation: Using the Same Tools to Survive the Disruption
42:21 Why We're Numb to AI Warnings + The 'Free Energy' Thought Experiment
46:43 AGI Is Already Here? Prompts, Ego, and the 'If It Quacks Like a Duck' Test
48:56 Deepfake Video Leap: Seedance, Perfect Voices, and What's Real Anymore
52:39 Contain the Damage: 'Don't Say No—Say How' and Shadow AI in Companies
54:58 Holodeck on...

Duration:01:05:40

Quit GPT Movement, Massive Data Center Investments & AI Burnout: Hashtag Trending

2/13/2026
In this episode of Hashtag Trending, host Jim Love discusses the Quit GPT campaign, which urges users to cancel their ChatGPT subscriptions due to concerns over OpenAI's evolving mission and political entanglements. We examine Anthropic's $20 million donation to a US political group advocating for AI regulation. The podcast also highlights hyperscalers like Meta, Microsoft, Amazon, and Google significantly investing in AI infrastructure and data centers amidst growing community resistance. Finally, we explore new research suggesting that AI tools, while increasing productivity, may also be contributing to worker burnout by intensifying workloads. Tune in for a deep dive into these pressing issues and more.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Introduction and Sponsor Message
00:45 Quit GPT Movement: AI and Politics
03:12 Anthropic's Political Donation
04:43 Hyperscalers' Massive AI Investments
05:39 Community Resistance to Data Centers
07:40 AI's Impact on Workload and Burnout
11:23 Conclusion and Weekend Panel Preview

Duration:00:12:28

TikTok Tracking, AI Adoption, and Discord's Identity Crisis

2/12/2026
In this episode of Hashtag Trending, host Jim Love discusses the latest news on TikTok's tracking practices across the web, regardless of app use, and how it compares to similar methods used by other companies like Meta. The podcast also covers the rapid adoption of AI in enterprises, highlighting the increasing competition between OpenAI's ChatGPT and Anthropic's Claude. Additionally, Discord faces backlash over new global age verification requirements, sparking concerns about user privacy and past data breaches. The episode concludes with notable leadership changes in major tech firms, indicating ongoing turbulence in the AI industry.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Introduction and Sponsor Message
00:20 TikTok's Web Tracking Controversy
02:39 AI Adoption in the Workplace
04:30 Discord's Identity Crisis
07:23 Leadership Changes in Tech and AI
09:43 Conclusion and Sign-off

Duration:00:10:14

Hashtag Trending: Bitcoin's Collapse, Google's Quantum Breakthrough, OpenAI's Strategic Shift

2/11/2026
In this episode of Hashtag Trending, host Jim Love discusses the potential downfall of Bitcoin, from its October highs of $125,000 to recent drops around $60,000, including some analysts predicting a possible collapse to zero. Google claims to have overcome a significant quantum computing hurdle involving error rates and qubit stability. OpenAI narrows its focus amidst intense competition, resulting in significant staff departures, including key figures in AI reasoning and model policy research. Finally, Linux's version numbering system is humorously revealed by founder Linus Torvalds to be based on an anthropological, rather than technical, rationale.

00:00 Introduction and Sponsor Message
00:44 Bitcoin's Volatility: Boom to Bust?
03:21 Google's Quantum Computing Breakthrough
06:26 OpenAI's Strategic Shifts and Departures
11:28 The Quirky World of Software Version Numbers
14:11 Conclusion and Call to Action

Duration:00:16:04

AI Job Impact, Wearables Comeback, and New York's Pushback on AI - Hashtag Trending

2/10/2026
In this episode of Hashtag Trending, host Jim Love discusses the impact of AI on job layoffs, highlighting that 55,000 layoffs in 2025 have been linked to AI, with tech hiring slowing down. The episode also explores the potential resurgence of wearables, with Meta and Apple making strategic moves in this space. Additionally, New York's legislative push against unregulated AI content and the unchecked growth of data centers is covered. The episode concludes with Taiwan's firm stance on not shifting semiconductor production to the US, emphasizing the strategic importance of maintaining its chip-making capacity.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Introduction and Sponsor Message
00:50 AI-Driven Layoffs and Job Market Shifts
06:36 The Future of Wearables
09:24 New York's AI Legislation and Data Center Pushback
14:14 Taiwan's Semiconductor Strategy
18:24 Conclusion and Sponsor Message

Duration:00:19:23

AI Hype Collapses, Waymo Admits Human Assistance, and TikTok Creators Shift Platforms

2/9/2026
In this episode of Hashtag Trending, host Jim Love explores the recent security failures in AI platforms, specifically focusing on a database exposure at MoltBook. Waymo's admission of relying on remote human assistance for its autonomous taxis is discussed, contrasting with Tesla's approach. The episode also highlights the differing responses from Canada and the US regarding Chinese EV software, and examines the migration of TikTok creators to the new platform 'Up Scrolled' due to censorship concerns. The show concludes with insights into the challenges new social media platforms face in gaining and maintaining a user base.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Introduction and Sponsor Message
00:21 AI Hype and Security Failures
04:51 Waymo's Remote Human Assistance
06:57 US and Canada Diverge on Chinese EV Software
09:07 TikTok Creators Migrate to Up Scrolled
11:53 Conclusion and Sponsor Message

Duration:00:12:56

Navigating the Rapid Developments in AI and Technology

2/7/2026
In this episode of Hashtag Trending, the hosts discuss the rapid advancements in AI and robotics, and how they are transforming various industries. They explore the practical applications of new AI models from Anthropic and OpenAI, touching on the concept of agents and tool use. The conversation also delves into the importance of governance in AI, the implications for healthcare, and the excitement surrounding humanoid robots. With discussions on historical context, mindful practices, and business strategies for incorporating AI, this episode offers a comprehensive look at staying sane amidst technological upheavals.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Introduction and Sponsor Message
00:21 Welcome to Project Synapse
00:45 The Astonishing Robot
02:22 Comparing Robots and Their Capabilities
04:39 Robots in Household and Healthcare
08:03 Challenges in Healthcare Technology
22:10 AI Policies and Corporate Adoption
29:12 The Viral AI Agent Phenomenon
36:16 AI Advancements: Claude vs. OpenAI
36:33 Speed and Efficiency of Attach P 5.3
36:50 Sub-Agents and Their Roles
39:08 Error Correction and Zero Hallucinations
39:40 The Public Perception of AI
41:21 OpenAI's Internal Struggles
42:04 Anthropic's No Ads Policy
43:37 AI Writing New AIs: The Tipping Point
44:15 The Future of AI in Business
57:48 Governance and Shadow AI
01:07:23 Meditation and Mindfulness in a Fast-Paced World
01:08:55 Concluding Thoughts and Reflections

Duration:01:10:14

Google Goes Big On AI Investment. Bubble? What Bubble?

2/6/2026
Google's AI Investment, Wikipedia's Role, and the Struggles of Copilot

In this episode of Hashtag Trending, host Jim Love discusses Google's significant investment in data centres and AI infrastructure, highlighting the company's strategic emphasis on its own TPUs over Nvidia hardware. The episode also covers Wikipedia's vital role as a reliable source of information amidst increasing AI-generated content. Furthermore, it looks at Microsoft's Copilot, currently struggling with user retention despite its widespread integration. Lastly, Jim analyzes the impact of Anthropic's Super Bowl ads on Sam Altman, who surprisingly reacted strongly to them.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Introduction and Sponsor Message
00:48 Google's Massive Investment in AI Infrastructure
03:17 Wikipedia: The Last Line of Defense Against AI Misinformation
06:36 Microsoft's Copilot Struggles to Gain Traction
09:33 Sam Altman vs. Anthropic: A Super Bowl Ad Controversy
12:37 Conclusion and Sponsor Message

Duration:00:13:45

341 Malicious Skills In OpenClaw Marketplace

2/5/2026
Malware in AI Marketplaces, X Office Raided Over Deepfakes, and Anthropic's Super Bowl Stand

In this episode of Hashtag Trending, host Jim Love covers the discovery of 341 malicious skills in the Open Claw AI agent marketplace, a French raid on X offices concerning AI-generated deepfakes, and Anthropic's bold Super Bowl pledge that Claude will remain ad-free. The episode delves into the complex landscape of AI and platform accountability, comparing Anthropic's stance to Apple's historic 'Think Different' campaign. Tune in for a unique insight into the intersection of technology, regulation, and culture shaping the future of AI.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Introduction and Sponsor Message
00:44 Malicious AI Skills Discovered
03:44 French Authorities Raid X Offices
07:13 Anthropic's Super Bowl Ad Stance
09:42 Cultural Shifts in Tech
11:19 Closing Remarks and Sponsor Message

Duration:00:13:30

Hashtag Trending: AI Panic Wipes Out $300 Billion, OpenAI vs. NVIDIA, and Intel's GPU Plans

2/4/2026
In today's episode of Hashtag Trending, host Jim Love discusses the recent AI-driven market panic that wiped $300 billion from software stocks. Top stories include Anthropic's new legal software and OpenClaw's AI agent linking. Jim also covers the competitive landscape between Chinese and US AI models, OpenAI's frustrations with NVIDIA, and Intel's ramped-up efforts in building GPUs. Additionally, Jim provides an in-depth look at the security flaws of Open Claw and SpaceX's restrictions on Russian Starlink use. Stay tuned for detailed analysis and more industry news.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Introduction and Sponsor Message
00:19 Podcast Availability Issues
00:59 AI Panic in the Markets
03:51 Chinese AI Models Leading the Way
05:07 OpenAI and Nvidia Relationship Strains
06:12 Intel's GPU Ambitions
07:19 Open Claw Security Concerns
17:10 OpenAI's Advertising Beta
18:11 SpaceX Restricts Russian Starlink Use
19:03 Nvidia and OpenAI Investment Questions
20:22 Conclusion and Sponsor Message

Duration:00:21:19

OpenClaw Is A Security Disaster

2/3/2026
Critical Security Flaws in Open Claw & SpaceX Limits Russian Starlink Use | Hashtag Trending

In today's episode of Hashtag Trending, host Jim Love discusses the critical security vulnerabilities found in Open Claw, an AI command hub. Open Claw scored a concerning 2 out of 100 in an independent security analysis, making it a high-risk tool. Additionally, OpenAI sets a $200,000 minimum for ChatGPT advertising, while SpaceX restricts Russian use of Starlink due to military exploitation. Lastly, Nvidia's potential $100 billion investment in OpenAI is under scrutiny, causing a dip in their shares. Tune in for these tech updates and more!

00:00 Introduction and Sponsor Message
00:47 Open Claw Security Concerns
10:35 OpenAI's Advertising Beta
11:20 SpaceX's Response to Russian Starlink Use
12:29 NVIDIA's Investment in OpenAI
13:47 Conclusion and Sponsor Message

Duration:00:14:47

Elon Musk to Add 1 Million Satellites To Crowded Low Earth Orbit

2/2/2026
Hashtag Trending: Elon Musk's Satellite Plan, Google AI Game Tool, OpenAI's GPT-4 Retirement, China's AI Chip Surge

In this episode of Hashtag Trending, host Jim Love covers Elon Musk's proposal to launch 1 million satellites, Google's experimental AI game system Project Genie causing market turbulence, OpenAI retiring its GPT-4 model due to low usage, and the rapid growth of China's domestic AI chip industry led by Alibaba. Tune in to explore the latest tech developments and their broader implications.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Introduction and Sponsor Message
00:20 Elon Musk's Satellite Proposal
04:29 Google's AI Game Tool Impact
06:19 OpenAI Retires GPT-4.0 Model
08:04 China's AI Chip Market Heats Up
11:30 Conclusion and Sponsor Message

Duration:00:12:30

Project Synapse - Moltbot, Clawdbot and the Overview Effect

1/31/2026
Exploring AI, Cybersecurity, and the Overview Effect - Project Synapse

In this episode of Project Synapse, the panel, including Jim Love, Marcel Gagne and John Pinard, along with special guest Kevin Russell from the Human Space Program, dives into the impact of AI on our world, the explosive rise of Clawdbot, and the meaningful implications of the Overview Effect. They discuss the robust capabilities of new AI agents, their applications in both security and daily life, and the transformative potential of perceiving Earth from space. The episode culminates in a deep philosophical conversation on how AI can be leveraged to enhance humanity and foster greater ethical responsibilities.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Introduction and Sponsor Message
00:18 Welcome to Project Synapse
01:00 Introducing Kevin Russell
01:54 Discussion on AI and Perception
02:52 The Rise of Claude Bot
06:17 Security Concerns with AI Agents
15:09 Anthropic's Approach to AI Development
20:36 The Role of AI in Industry
25:50 Personal Experiences with Corporate Culture
30:15 The Origin of Human Resources
30:39 Frederick Taylor: The First Management Consultant
31:26 The Philosophy of Human Resources
32:26 The Overview Effect: A Transformative Experience
33:33 AI and the Human Space Program
37:31 The Future of Space Exploration and AI
43:08 Challenges and Opportunities in Space and AI
53:56 Concluding Thoughts and Future Directions

Duration:00:59:23