The New Stack Podcast

Technology Podcasts

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack

Location:

United States

Twitter:

@thenewstack

Language:

English


Episodes

Why Linear Built an API For Agents

9/19/2025
Cursor, the AI code editor, recently integrated with Linear, a project management tool, enabling developers to assign tasks directly to Cursor's background coding agent within Linear. The collaboration felt natural, as Cursor already used Linear internally. Linear's new agent-specific API played a key role in enabling this integration, providing agents like Cursor with context-aware sessions to interact efficiently with the platform. Developers can now offload tasks such as fixing issues, updating documentation, or managing dependencies to the Cursor agent. However, both Linear’s Tom Moor and Cursor’s Andrew Milich emphasized the importance of giving agents clear, thoughtful input. Simply assigning vague tasks like “@cursor, fix this” isn’t effective—developers still need to guide the agent with relevant context, such as links to similar pull requests. Milich and Moor also discussed the growing value and adoption of autonomous agents, and hinted at a future where more companies build agent-specific APIs to support these tools. The full interview is available via podcast or YouTube. Learn more from The New Stack about the latest in AI and development in Cursor AI and Linear: “Install Cursor and Learn Programming With AI Help,” “Using Cursor AI as Part of Your Development Workflow,” and “Anti-Agile Project Tracker Linear the Latest to Take on Jira.” Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:48:11

Pat Casey

9/12/2025
In this episode of The New Stack Agents, ServiceNow CTO and co-founder Pat Casey discusses why the company runs 90% of its workloads—including AI infrastructure—on its own physical servers rather than the public cloud. ServiceNow maintains GPU hubs across global data centers, enabling efficient, low-latency AI operations. Casey downplays the complexity of running AI models on-prem, noting his team’s strong Kubernetes and Triton expertise. The company recently switched from GitHub Copilot to Windsurf as its AI coding assistant, yielding a 10% productivity boost among 7,000 engineers. However, use of such tools isn’t mandatory—performance remains the main metric. Casey also addresses the impact of AI on junior developers, acknowledging that AI tools often handle tasks traditionally assigned to them. While ServiceNow still hires many interns, he sees the entry-level tech job market as increasingly vulnerable. Despite these concerns, Casey remains optimistic, viewing the AI revolution as transformative and ultimately beneficial, though not without disruption or risk. Learn more from The New Stack about the latest in AI and development in ServiceNow: “ServiceNow Launches a Control Tower for AI Agents” and “ServiceNow Acquires Data.World To Expand Its AI Data Strategy.”

Duration:00:57:39

How the EU’s Cyber Act Burdens Lone Open Source Developers

9/11/2025
The European Union’s upcoming Cyber Resilience Act (CRA), set to take effect in October, introduces significant cybersecurity compliance requirements for software vendors, including those who rely heavily on open source components. At the Open Source Summit Europe, Christopher "Crob" Robinson of the Open Source Security Foundation highlighted concerns about how these regulations could impact open source maintainers. Many open source projects begin as personal solutions to shared problems and grow in popularity, often ending up embedded in critical systems across industries like automotive and energy. Despite this widespread use—Robinson noted up to 97% of commercial software contains open source—these projects are frequently maintained by individuals or small teams with limited resources. Developers often have no visibility into how their code is used, yet they’re increasingly burdened by legal and compliance demands from downstream users, such as requests for Software Bills of Materials (SBOMs) and conformity assessments. The CRA raises the stakes, with potential penalties in the billions for noncompliance, putting immense pressure on the open source ecosystem. Learn more from The New Stack about open source security: “Open Source Propels the Fall of Security by Obscurity” and “There Is Just One Way To Do Open Source Security: Together.”

Duration:00:19:30

How Warp Went From Terminal To Agentic Development Environment

9/5/2025
In this week’s The New Stack Agents, Zach Lloyd, founder and CEO of Warp, discussed the launch of Warp Code, the latest evolution of the Warp terminal into a full agentic development environment. Originally launched in 2022 to modernize the terminal, Warp now integrates powerful AI agents to help developers write, debug, and ship code. Key new features include a built-in file editor, project-structuring tools, agent-driven code review, and WARP.md files that guide agent behavior. Recognizing developers’ hesitation to trust AI-generated code, Warp emphasizes transparency and control, enabling users to inspect and steer the agent’s work in real time through "persistent input" and task list updates. While Warp supports terminal workflows, Lloyd says it’s now better viewed as an AI coding platform. Interestingly, the launch announcement was delivered from horseback in a Western-themed ad, reflecting Warp’s desire to stand out in a crowded field of conventional tech product rollouts. The quirky “Code on Warp” (C.O.W.) branding captured attention and embodied the company’s unique approach. Learn more from The New Stack about the latest in AI and Warp: “Warp Goes Agentic: A Developer Walk-Through of Warp 2.0,” “Developer Review of Warp for Windows, an AI Terminal App,” and “How AI Can Help You Learn the Art of Programming.”

Duration:00:53:24

The Linux Foundation In The Age Of AI

9/2/2025
In a recent episode of The New Stack Agents from the Open Source Summit in Amsterdam, Jim Zemlin, executive director of the Linux Foundation, discussed the evolving landscape of open source AI. While the Linux Foundation has helped build ecosystems like the CNCF for cloud-native computing, there's no unified umbrella foundation yet for open source AI. Existing efforts include the PyTorch Foundation and LF AI & Data, but AI development is still fragmented across models, tooling, and standards. Zemlin highlighted the industry's shift from foundational models to open-weight models and now toward inference stacks and agentic AI. He suggested a collective effort may eventually form but cautioned against forcing structure too early, stressing the importance of not hindering innovation. Foundations, he said, must balance scale with agility. On the debate over what qualifies as "open source" in AI, Zemlin adopted a pragmatic view, acknowledging the costs of creating frontier models. He supports open-weight models and believes fully open models, from data to deployment, may emerge over time. Learn more from The New Stack about the latest in AI and open source, AI in China, Europe's AI and security regulations, and more: “Open Source Is Not Local Source, and the Case for Global Cooperation,” “US Blocks Open Source ‘Help’ From These Countries,” and “Open Source Is Worth Defending.”

Duration:00:29:04

Is Your Data Strategy Ready for the Agentic AI Era?

8/28/2025
Enterprise AI is still in its infancy, with less than 1% of enterprise data currently used to fuel AI, according to Raj Verma, CEO of SingleStore. While consumer AI is slightly more advanced, most organizations are only beginning to understand the scale of infrastructure needed for true AI adoption. Verma predicts AI will evolve in three phases: first, the easy tasks will be automated; next, complex tasks will become easier; and finally, the seemingly impossible will become achievable—likely within three years. However, to reach that point, enterprises must align their data strategies with their AI ambitions. Many have rushed into AI fearing obsolescence, but without preparing their data infrastructure, they're at risk of failure. Current legacy systems are not designed for the massive concurrency demands of agentic AI, potentially leading to underperformance. Verma emphasizes the need to move beyond siloed or "swim lane" databases toward unified, high-performance data platforms tailored for the scale and complexity of the AI era. Learn more from The New Stack about the latest evolution in AI infrastructure: “How To Use AI To Design Intelligent, Adaptable Infrastructure” and “How to Support Developers in Building AI Workloads.”

Duration:00:27:58

MCP Security Risks Multiply With Each New Agent Connection

8/22/2025
Anthropic's Model Context Protocol (MCP) has become the standard for connecting AI agents to tools and data, but its security has lagged behind. In The New Stack Agents podcast, Tzvika Shneider, CEO of API security startup Pynt, discussed the growing risks MCP introduces. Shneider sees MCP as a natural evolution from traditional APIs to LLMs and now to AI agents. However, MCP adds complexity and vulnerability, especially as agents interact across multiple servers. Pynt’s research found that 72% of MCP plugins expose high-risk operations, like code execution or accessing privileged APIs, often without proper approval or validation. The danger compounds when untrusted inputs from one agent influence another with elevated permissions. Unlike traditional APIs, MCP calls are made by non-deterministic agents, making it harder to enforce security guardrails. While MCP exploits remain rare for now, most companies lack mature security strategies for it. Shneider believes MCP merely highlights existing API vulnerabilities, and organizations are only beginning to address these risks. Learn more from The New Stack about the latest in Model Context Protocol: “Model Context Protocol: A Primer for the Developers,” “Building With MCP? Mind the Security Gaps,” and “MCP-UI Creators on Why AI Agents Need Rich User Interfaces.”
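The guardrail problem Shneider describes — non-deterministic agents choosing which tools to call — is commonly handled by making the authorization step deterministic even when the agent's behavior is not. A minimal sketch of that idea (the tool names and per-agent allowlists are invented for illustration; this is not Pynt's product or the actual MCP API):

```python
# Toy guardrail: an agent's tool call is checked against a fixed,
# per-agent allowlist before it runs, regardless of what the model decides.
# All names here are hypothetical.

HIGH_RISK = {"execute_code", "delete_records"}

# Untrusted agents are never granted high-risk tools.
ALLOWLISTS = {
    "summarizer-agent": {"read_document", "search"},
    "ops-agent": {"read_document", "execute_code"},
}

def authorize(agent: str, tool: str) -> bool:
    """Deterministic check: the tool must be explicitly granted."""
    return tool in ALLOWLISTS.get(agent, set())

def call_tool(agent: str, tool: str) -> str:
    """Gate every invocation, then record its risk class."""
    if not authorize(agent, tool):
        raise PermissionError(f"{agent} may not call {tool}")
    risk = "high-risk" if tool in HIGH_RISK else "low-risk"
    return f"{agent} ran {risk} tool {tool}"
```

The point of the sketch is that even though the agent's choice of tool is probabilistic, the gate in front of it need not be — which is one way to contain the cross-agent privilege escalation the episode warns about.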

Duration:00:47:25

From Old AI to Agentic AI

8/21/2025
Rahul Auradkar, executive VP and GM at Salesforce, grew up in India with a deep passion for cricket, where his love for the game sparked an early interest in data. This fascination with statistics laid the foundation for his current work leading Salesforce’s Data Cloud and Einstein (Unified Data Services) team. Auradkar reflects on how structured data has evolved—from relational databases in enterprise applications to data warehouses, data lakes, and lakehouses. He explains how initial efforts focused on analyzing structured data, which later fed back into business processes. Eventually, businesses realized that the byproducts of data—what he calls "data exhaust"—were themselves valuable. The rise of "old AI," or predictive AI, shifted perceptions, showing that data exhaust could define the application itself. As varied systems emerged with distinct protocols and SQL variants, data silos formed, trapping valuable insights. Auradkar emphasizes that the ongoing challenge is unifying these silos to enable seamless, meaningful business interactions—something Salesforce aims to solve with its Data Cloud and agentic AI platform. Learn more from The New Stack about the evolution of structured data and agentic AI: “How Enterprises and Startups Can Master AI With Smarter Data Practices” and “Enterprise AI Success Demands Real-Time Data Platforms.”

Duration:00:30:42

Scott Carey Podcast

8/15/2025
In this week’s episode of The New Stack Agents, Scott Carey, editor-in-chief of LeadDev, discussed their first AI Impact Report, which explores how engineering teams are adopting AI tools. The report shows that two-thirds of developers are actively using AI, with another 20% in pilot stages and only 2% having no plans to use AI — a group Carey finds particularly intriguing. Popular tools include Cursor (43%) and GitHub Copilot (37%), with others like OpenAI, Gemini, and Claude following, while Amazon Q and Replit lag behind. Most developers use AI for code generation, documentation, and research, but usage for DevOps tasks like testing, deployment, and IT automation remains low. Carey finds this underutilization frustrating, given AI's potential impact in these areas. The report also highlights concern for junior developers, with 54% of respondents expecting fewer future hires at that level. While many believe AI boosts productivity, some remain unsure — a sign that organizations still struggle to measure developer performance effectively. Learn more from The New Stack about the latest insights on AI tool adoption: “AI Adoption: Why Businesses Struggle to Move from Development to Production,” “3 Strategies for Speeding Up AI Adoption Among Developers,” and “AI Everywhere: Overcoming Barriers to Adoption.”

Duration:00:36:47

Confronting AI’s Next Big Challenge: Inference Compute

8/6/2025
While AI training garners most of the spotlight — and investment — the demands of AI inference are shaping up to be an even bigger challenge. In this episode of The New Stack Makers, Sid Sheth, founder and CEO of d-Matrix, argues that inference is anything but one-size-fits-all. Different use cases — from low-cost to high-interactivity or throughput-optimized — require tailored hardware, and existing GPU architectures aren’t built to address all these needs simultaneously. “The world of inference is going to be truly heterogeneous,” Sheth said, meaning specialized hardware will be required to meet diverse performance profiles. A major bottleneck? The distance between memory and compute. Inference, especially in generative AI and agentic workflows, requires constant memory access, so minimizing the distance data must travel is key to improving performance and reducing cost. To address this, d-Matrix developed Corsair, a modular platform where memory and compute are vertically stacked — “like pancakes” — enabling faster, more efficient inference. The result is scalable, flexible AI infrastructure purpose-built for inference at scale. Learn more from The New Stack about inference compute and AI: “Scaling AI Inference at the Edge with Distributed PostgreSQL” and “Deep Infra Is Building an AI Inference Cloud for Developers.”

Duration:00:24:14

Databricks VP: Don’t Try to Speed AI Evolution through Brute Force

8/4/2025
In the latest episode of The New Stack Agents, Naveen Rao, VP of AI at Databricks and a former neuroscientist, reflects on the evolution of AI, neural networks, and the energy constraints that define both biological and artificial intelligence. Rao, who once built circuit systems as a child and later studied the brain’s 20-watt efficiency at Duke and Brown, argues that current AI development—relying on massive energy-intensive data centers—is unsustainable. He believes true intelligence should emerge from low-power, efficient systems, more aligned with biological computing. Rao warns that the industry is headed toward “model collapse,” where large language models (LLMs) begin training on AI-generated content instead of real-world data, leading to compounding inaccuracies and hallucinations. He stresses the importance of grounding AI in reality and moving beyond brute-force scaling. Rao sees intelligence not just as a function of computing power, but as a distributed, observational system—“life is a learning machine,” he says—hinting at a need to fundamentally rethink how we build AI. Learn more from The New Stack about the latest insights on the evolution of AI and neural networks: “The 50-Year Story of the Rise, Fall, and Rebirth of Neural Networks” and “The Evolution of the AI Stack: From Foundation to Agents.”

Duration:00:38:39

How Fal.ai Went From Inference Optimization to Hosting Image and Video Models

7/25/2025
Fal.ai, once focused on machine learning infrastructure, has evolved into a major player in generative media. In this episode of The New Stack Agents, hosts speak with Fal.ai CEO Burkay Gur and investor Glenn Solomon of Notable Capital. Originally aiming to optimize Python runtimes, Fal.ai shifted direction as generative AI exploded, driven by tools like DALL·E and ChatGPT. Today, Fal.ai hosts hundreds of models—from image to audio and video—and emphasizes fast, optimized inference to meet growing demand. Speed became Fal.ai’s competitive edge, especially as newer generative models require GPU power not just for training but also for inference. Solomon noted that while optimization alone isn't a sustainable business model, Fal’s value lies in speed and developer experience. Fal.ai offers both an easy-to-use web interface and developer-focused APIs, appealing to both technical and non-technical users. Gur also addressed generative AI’s impact on creatives, arguing that while the cost of creation has plummeted, the cost of creativity remains—and may even increase as content becomes easier to produce. Learn more from The New Stack about AI’s impact on creatives: “AI Will Steal Developer Jobs (But Not How You Think)” and “How AI Agents Will Change the Web for Users and Developers.”

Duration:00:52:41

Why AI Agents Need a New Kind of Browser

7/18/2025
Traditional headless browsers weren’t built for AI agents, often breaking when web elements shift even slightly. Paul Klein IV, founder of Browserbase and its open-source tool Stagehand, is tackling this by creating browser infrastructure designed specifically for AI control. On The New Stack Agents podcast, Klein explained that Stagehand enables AI agents to interpret vague, natural-language instructions and still function reliably—even when web pages change. This flexibility contrasts with brittle legacy tools built for deterministic testing. Instead of writing 100 scripts for 100 websites, one AI-powered script can now handle thousands. Klein’s broader vision is a world where AI can fully operate the web on behalf of users—automating tasks like filing taxes without human input. He acknowledges the technical challenges, from running browsers on servers to handling edge cases like time zones and emojis. The episode also touches on Klein’s concerns with AWS, which he says held a “partnership” meeting that felt more like corporate espionage. Still, Klein remains confident in Browserbase’s community-driven edge. Learn more from The New Stack about the latest insights in AI browser-based tools: “Why Headless Browsers Are a Key Technology for AI Agents” and “Ladybird: That Rare Breed of Browser Based on Web Standards.”

Duration:00:48:56

Antje Barth Podcast

7/11/2025
In a recent episode of The New Stack Agents livestream, Antje Barth, AWS Developer Advocate for Generative AI, discussed the growing developer interest in building agentic and multi-agent systems. While foundational model knowledge is now common, Barth noted that developers are increasingly focused on tools, frameworks, and protocols for scaling agent-based applications. She emphasized the complexity of deploying such systems, particularly around navigating human-centric interfaces and minimizing latency in multi-agent communication. Barth highlighted AWS’s support for developers through tools like Amazon Q CLI and the newly launched open-source Strands SDK, which AWS used internally to accelerate development cycles. Strands enables faster, flexible agentic system development, while services like Bedrock Agents offer a managed, enterprise-ready solution. Security was another key theme. Barth stressed that safety must be a “day one” priority, with built-in support for authentication, secure communication, and observability. She encouraged developers to leverage AWS’s GenAI Innovation Center and active open-source communities to build robust, scalable, and secure agentic systems. Learn more from The New Stack about AWS' support for developers through tools that support multiple agents: “Code in Your Native Tongue: Amazon Q Developer Goes Global,” “AWS Launches Its Take on an Open Source AI Agents SDK,” and “Amazon's Bedrock Can Now 'Check' AI for Hallucinations.”

Duration:00:40:49

How Shortwave Wants To Reinvent Email With AI

7/3/2025
In this episode of The New Stack Agents, Andrew Lee, co-founder of Shortwave and Firebase, discusses the evolution of his Gmail-centric email client into an AI-first platform. Initially launched in 2020 with traditional improvements like better threading and search, Shortwave pivoted to agentic AI after the rise of large language models (LLMs). Early features like summarization and translation garnered hype but lacked deep utility. However, as models improved—especially with Anthropic’s Claude 3.5 Sonnet—Shortwave leaned heavily into tool-calling agents that could execute complex, multi-step tasks autonomously. Lee notes Anthropic’s lead in this area, especially in chaining tools intelligently, unlike earlier models from OpenAI. Still, challenges remain with managing large numbers of tools without breaking model reasoning. Looking ahead, Lee envisions AI that can take proactive actions—like responding to emails—and dynamically generate interfaces tailored to tasks in real time. This shift could fundamentally reshape how productivity apps work, with Shortwave aiming to be at the forefront of that transformation. Learn more from The New Stack about the latest insights on the power of AI at scale: “Why Streaming Is the Power Grid for AI-Native Data Platforms” and “Companies Must Embrace BeSpoke AI Designed for IT Workflows.”
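Tool-calling agents of the kind Lee describes follow a simple loop: the model proposes a tool call, the host application executes it, and the result is fed back until the model declares the task done. A toy sketch of that loop, with the model stubbed out (the tool names and the stub are invented for illustration; this is not Shortwave's actual code or any LLM provider's API):

```python
# Toy tool-calling loop: a stub "model" chains two hypothetical email
# tools before producing a final answer. Real agents would call an LLM
# API at each step instead of stub_model.

TOOLS = {
    "search_inbox": lambda query: [f"email about {query}"],
    "draft_reply": lambda msg: f"Draft reply to: {msg}",
}

def stub_model(history):
    """Stand-in for an LLM: pick the next tool call based on history."""
    if not any(step[0] == "search_inbox" for step in history):
        return ("search_inbox", "travel plans")
    if not any(step[0] == "draft_reply" for step in history):
        search_results = history[-1][1]
        return ("draft_reply", search_results[0])
    return ("final", history[-1][1])  # done: return the last result

def run_agent():
    """Execute proposed tool calls until the model signals completion."""
    history = []
    while True:
        tool, arg = stub_model(history)
        if tool == "final":
            return arg
        history.append((tool, TOOLS[tool](arg)))
```

The challenge Lee raises — large tool inventories breaking model reasoning — lives entirely inside the `stub_model` step here: the loop itself stays trivial, while choosing the right tool from hundreds is where real agents struggle.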

Duration:00:36:17

Cracking the Complexity: Teleport CEO Pushes Identity-First Security

6/18/2025
In this on-the-road episode of The New Stack Makers, Editor in Chief Heather Joslyn speaks with Ev Kontsevoy, CEO and co-founder of Teleport, from the floor of KubeCon + CloudNativeCon Europe in London. The discussion centers on infrastructure security and the growing need for robust identity management. Citing alarming cybersecurity statistics—such as the $5 million average cost of a breach and rising attack frequency—Kontsevoy stresses that complexity is the root challenge in securing infrastructure. Today’s environments involve countless layers and technologies, each with its own identity and access controls, increasing the risk of human error and breaches. Kontsevoy argues for treating all entities—humans, laptops, servers, AI agents—as identities managed under a unified framework. Teleport provides a zero-trust access platform that enforces strong, cryptographically backed identity across systems. He also highlights Teleport’s version 17 release, which boosts support for non-human identities and integrates deeply with AWS. Looking ahead, Teleport is exploring support for emerging AI agent protocols like MCP to extend its identity-first approach. Learn more from The New Stack about the latest insights about Teleport: “Removing the Complexity to Securely Access the Infrastructure” and “Why AI Can’t Protect You from AI-Generated Attacks.”

Duration:00:21:07

No SSH? What is Talos, this Linux Distro for Kubernetes?

6/12/2025
Container-based Linux distributions are gaining traction, especially for edge deployments that demand lightweight and secure operating systems. Talos Linux, developed by Sidero Labs, is purpose-built for Kubernetes with security-first features like a fully immutable file system and disabled SSH access. In a demo, Sidero CTO Andrew Rynhard and Head of Product Justin Garrison explained Talos’s design philosophy, highlighting its minimalism and focus on automation. Inspired by CoreOS, Talos removes traditional tools like systemd and Bash, replacing them with machined, a custom process manager written in Go. Talos emphasizes API-driven management rather than SSH, making Kubernetes cluster operations more scalable and consistent. Its design supports cloud, bare metal, Docker, and edge devices like Raspberry Pi. Kernel immutability is reinforced by ephemeral signing keys. Through Sidero's Omni SaaS, Talos nodes connect securely via WireGuard. The operating system handles all certificates and network connectivity internally, streamlining security and deployment. As Garrison notes, Talos delivers a portable API for “big iron, small iron—no matter what.” Learn more from The New Stack about Sidero Labs: “Is Cluster API Really the Future of Kubernetes Deployment?” and “Choosing a Linux Distribution.”

Duration:00:19:23

Aptori Is Building an Agentic AI Security Engineer

6/3/2025
AI agents hold the promise of continuously testing, scanning, and fixing code for security vulnerabilities, but we're still progressing toward that vision. Startups like Aptori are helping bridge the gap by building AI-powered security engineers for enterprises. Aptori maps an organization’s codebase, APIs, and cloud infrastructure in real time to understand data flows and authorization logic, allowing it to detect and eventually remediate security issues. At Google Cloud Next, Aptori CEO Sumeet Singh discussed how earlier tools merely alerted developers to issues—often overwhelming them—but newer models like Gemini 2.5 Flash and Claude Sonnet 4 are improving automated code fixes, making them more practical. Singh and co-founder Travis Newhouse previously built AppFormix, which automated OpenStack cloud operations before being acquired by Juniper Networks. Their experiences with slow release cycles due to security bottlenecks inspired Aptori’s focus. While the goal is autonomous agents, Singh emphasizes the need for transparency and deterministic elements in AI tools to ensure trust and reliability in enterprise security workflows. Learn more from The New Stack about the latest insights in AI application security: “AI Is Changing Cybersecurity Fast and Most Analysts Aren’t Ready,” “AI Security Agents Combat AI-Generated Code Risks,” and “Developers Are Embracing AI To Streamline Threat Detection and Stay Ahead.”

Duration:00:18:01

The AI Code Generation Problem Nobody's Talking About

5/29/2025
In this episode of The New Stack Makers, Nitric CEO Steve Demchuk discusses how the frustration of building frontend apps within rigid FinTech environments led to the creation of the Nitric framework — a tool designed to eliminate the friction between developers and cloud infrastructure. Unlike traditional Infrastructure as Code (IaC), where developers must manage both app logic and infrastructure definitions separately, Nitric introduces “Infrastructure from Code.” This approach allows developers to focus solely on application logic while the platform infers and automates infrastructure needs using SDKs and CLI tools across multiple languages and cloud providers. Demchuk emphasizes that Nitric doesn't remove platform team control but enforces it consistently. Guardrails defined by platform teams guide infrastructure provisioning, ensuring security and compliance — even as developers use AI tools to rapidly generate code. The result is a streamlined workflow where developers move faster, AI enhances productivity, and platform teams retain oversight. This episode offers engineering leaders insight into a paradigm shift in how cloud infrastructure is managed in the AI era. Learn more from The New Stack about the latest insights about Nitric: “Building a Serverless Meme Generator With Nitric and OpenAI” and “Why Most Companies Are Struggling With Infrastructure as Code.”

Duration:00:19:28

The New Bottleneck: AI That Codes Faster Than Humans Can Review

5/27/2025
CodeRabbit, led by founder Harjot Gill, is tackling one of software development's biggest bottlenecks: the human code review process. While AI coding tools like GitHub Copilot have sped up code generation, they’ve inadvertently slowed down shipping due to increased complexity in code reviews. Developers now often review AI-generated code they didn’t write, leading to misunderstandings, bugs, and security risks. In an episode of The New Stack Makers, Gill discusses how CodeRabbit leverages advanced reasoning models—OpenAI’s o1, o3 mini, and Anthropic’s Claude series—to automate and enhance code reviews. Unlike rigid, rule-based static analysis tools, CodeRabbit builds rich context at scale by spinning up sandbox environments for pull requests and allowing AI agents to navigate codebases like human reviewers. These agents can run CLI commands, analyze syntax trees, and pull in external context from Jira or vulnerability databases. Gill envisions a hybrid future where AI handles the grunt work of code review, empowering humans to focus on architecture and intent—ultimately reducing bugs, delays, and development costs. Learn more from The New Stack about the latest insights about AI code reviews: “CodeRabbit's AI Code Reviews Now Live Free in VS Code, Cursor,” “AI Coding Agents Level Up from Helpers to Team Players,” and “Augment Code: An AI Coding Tool for 'Real' Development Work.”

Duration:00:20:17