The New Stack Podcast

Technology Podcasts

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack

Location:

United States

Twitter:

@thenewstack

Language:

English


Episodes

Nvidia’s Superchips for AI: ‘Radical,’ but a Work in Progress

3/14/2024
In this New Stack Makers podcast, co-hosts Alex Williams, TNS founder and publisher, and Adrian Cockcroft, partner and analyst at OrionX.net, discuss Nvidia's GH200 Grace Hopper superchip. Industry expert Sunil Mallya, co-founder and CTO of Flip AI, weighs in on how it is revolutionizing the hardware industry for AI workloads by centralizing GPU communication, reducing networking overhead, and creating a more efficient system. Mallya noted that despite its innovative design, challenges remain in adoption due to interface issues and the need for software to catch up with hardware advancements. However, optimism persists for the future of AI-focused chips, with Nvidia leading the charge in creating large-scale coherent memory systems. Meanwhile, Flip AI's DevOps large language model aims to interpret observability data to troubleshoot incidents effectively across various cloud platforms. While discussing the latest chip innovations and the challenges in training large language models, the episode sheds light on the evolving landscape of AI hardware and software integration.
Learn more from The New Stack about Nvidia and the future of chip design:
Nvidia Wants to Rewrite the Software Development Stack
Nvidia GPU Dominance at a Crossroads
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:39:45

Is GitHub Copilot Dependable? These Demos Aren’t Promising

3/7/2024
In this New Stack Makers podcast, co-hosts Alex Williams, TNS founder and publisher, and Joan Westenberg, founder and writer of Joan’s Index, discuss GitHub Copilot. Westenberg highlighted its integration with Microsoft 365 and its role as a coding assistant, showcasing its potential to streamline various tasks. However, she also revealed its limitations, particularly in reliability. Despite being designed to assist with tasks across Microsoft 365, Copilot's performance fell short during Westenberg's tests, failing to retrieve necessary information from her email and Microsoft Teams meetings. While Copilot proves useful for coding, providing helpful code snippets, its effectiveness diminishes for more complex projects. Westenberg's demonstrations underscored both the strengths and weaknesses of Copilot, emphasizing the need for improvement, especially in reliability, to fulfill its promise as a versatile work companion.
Learn more from The New Stack about Copilot:
Microsoft One-ups Google with Copilot Stack for Developers
Copilot Enterprises Introduces Search and Customized Best Practices
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:29:34

The New Monitoring for Services That Feed from LLMs

2/28/2024
In this New Stack Makers podcast, co-hosts Adrian Cockcroft, analyst at OrionX.net, and Alex Williams, TNS founder and publisher, discuss the importance of monitoring services that use Large Language Models (LLMs) and the emergence of tools like LangChain and LangSmith to address this need. Cockcroft, formerly of Netflix and now working with The New Stack, highlights the significance of monitoring AI apps built on LLMs and the challenges posed by slow and expensive API calls to LLMs. LangChain acts as middleware, connecting LLMs with services, much as JDBC (Java Database Connectivity) connects applications with databases. LangChain's monitoring capabilities led to the development of LangSmith, a dedicated monitoring tool. Another tool, LangKit by WhyLabs, offers similar functionality but is less tightly integrated. This reflects the typical evolution of open source projects into commercial products. LangChain recently secured funding, indicating growing interest in such monitoring solutions. Cockcroft emphasizes the importance of enterprise-level support and tooling for integrating these solutions into commercial environments. The discussion underscores the evolving landscape of monitoring services powered by LLMs and the emergence of specialized tools to address the associated challenges.
Learn more from The New Stack about LangChain:
LangChain: The Trendiest Web Framework of 2023, Thanks to AI
How Retool AI Differs from LangChain (Hint: It's Automation)
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
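The core monitoring concern raised here, that LLM API calls are slow and expensive compared to ordinary service calls, can be sketched in a few lines. This is not LangSmith's or LangKit's API; the `fake_llm` stand-in and the metrics structure are hypothetical, shown only to illustrate what such middleware records per call.

```python
import time
from dataclasses import dataclass, field

@dataclass
class LLMMetrics:
    """Aggregate stats a monitoring layer might keep per model endpoint."""
    calls: int = 0
    total_seconds: float = 0.0
    total_tokens: int = 0
    latencies: list = field(default_factory=list)

def monitored(llm_fn, metrics):
    """Wrap an LLM call so every invocation records latency and token usage."""
    def wrapper(prompt):
        start = time.perf_counter()
        text, tokens = llm_fn(prompt)          # the expensive remote call
        elapsed = time.perf_counter() - start
        metrics.calls += 1
        metrics.total_seconds += elapsed
        metrics.total_tokens += tokens
        metrics.latencies.append(elapsed)
        return text
    return wrapper

# A stand-in for a real model endpoint: returns (text, tokens_used).
def fake_llm(prompt):
    return f"echo: {prompt}", len(prompt.split())

metrics = LLMMetrics()
llm = monitored(fake_llm, metrics)
llm("why is the build failing")
llm("summarize the incident")
print(metrics.calls, metrics.total_tokens)  # → 2 8
```

In a real deployment the wrapper would also tag each record with model name and prompt hash and ship it to a tracing backend, which is essentially the niche tools like LangSmith fill.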

Duration:00:27:03

How Platform Engineering Supports SRE

2/7/2024
In this New Stack Makers podcast, Martin Parker, a solutions architect for UST, spoke with Heather Joslyn, TNS editor-in-chief, about the significance of internal developer platforms (IDPs), emphasizing benefits that extend beyond frontend developers to backend engineers and site reliability engineers (SREs). Parker highlighted the role of IDPs in automating repetitive tasks, allowing SREs to focus on optimizing application performance. Standardization is key, ensuring observability and monitoring solutions align with best practices and cater to SRE needs. By providing standardized service level indicators (SLIs) and key performance indicators (KPIs), IDPs enable SREs to maintain reliability efficiently. Parker stresses the importance of avoiding siloed solutions by establishing standardized practices and tools for effective monitoring and incident response. Overall, the deployment of IDPs aims to streamline operations, reduce incidents, and enhance organizational value by empowering SREs to concentrate on system maintenance and improvements.
Learn more from The New Stack about UST:
Cloud Cost-Unit Economics: A Modern Profitability Model
Cloud Native Users Struggle to Achieve Benefits, Report Says
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:18:52

Internal Developer Platforms: Helping Teams Limit Scope

1/31/2024
In this New Stack Makers podcast, Ben Wilcock, a senior technical marketing architect for Tanzu, spoke with Heather Joslyn, TNS editor-in-chief, at KubeCon + CloudNativeCon North America about the challenges organizations face when building internal developer platforms, particularly the issue of scope. He emphasized how difficult it is for platform engineering teams to select and integrate various Kubernetes projects amid a plethora of options. Wilcock highlights the complexity of tracking software updates, new features, and dependencies once choices are made. He underscores the advantage of having a standardized approach to software deployment, preventing errors caused by diverse mechanisms. Tanzu aims to simplify the adoption of platform engineering and internal developer platforms, offering a turnkey approach with the Tanzu Application Platform. This platform is designed to be flexible, malleable, and functional out of the box. Additionally, Tanzu has introduced the Tanzu Developer Portal, providing a focal point for developers to share information and facilitating faster progress in platform engineering without the need to integrate numerous open source projects.
Learn more from The New Stack about Tanzu and internal developer platforms:
VMware Unveils a Pile of New Data Services for Its Cloud
VMware Expands Tanzu into a Full Platform Engineering Environment
VMware Targets the Platform Engineer
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:15:23

How the Kubernetes Gateway API Beats Network Ingress

1/23/2024
In this New Stack Makers podcast, Mike Stefaniak, senior product manager at NGINX, and Kate Osborn, a software engineer at NGINX, discuss challenges associated with network ingress in Kubernetes clusters and introduce the Kubernetes Gateway API as a solution. Stefaniak highlights the issues that arise when multiple teams work on the same ingress, leading to friction and incidents. NGINX has also introduced NGINX Gateway Fabric, which implements the Kubernetes Gateway API as an alternative to network ingress. The Kubernetes Gateway API, proposed four years ago and recently made generally available, offers advantages such as extensibility. It allows referencing policies with custom resource definitions for better validation, avoiding the need for annotations. Each resource has an associated role, enabling clean application of role-based access control policies for enhanced security. While network ingress is prevalent and mature, the Kubernetes Gateway API is expected to find adoption in greenfield projects first. It has the potential to unite North-South and East-West traffic, offering a role-oriented API for comprehensive control over cluster traffic. The episode encourages exploring the Kubernetes Gateway API and engaging with the community to contribute to its development.
Learn more from The New Stack about NGINX and the open source Kubernetes Gateway API:
Kubernetes API Gateway 1.0 Goes Live, as Maintainers Plan for the Future
API Gateway, Ingress Controller or Service Mesh: When to Use What and Why
Ingress Controllers or the Kubernetes Gateway API? Which Is Right for You?
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:15:03

What You Can Do with Vector Search

1/17/2024
TNS publisher Alex Williams spoke with Ben Kramer, co-founder and CTO of Monterey.ai, and Cole Hoffer, senior software engineer at Monterey.ai, to discuss how the company uses vector search to analyze user voices, feedback, reviews, bug reports, and support tickets from various channels and turn them into product development recommendations. Monterey.ai connects customer feedback to the development process, bridging customer support and leadership to align with user needs. Figma and Comcast are among the companies using this approach. In this interview, Kramer discussed the challenges of building products on Large Language Models (LLMs), the importance of diverse skills in AI web companies, and how Monterey employs Zilliz for vector search, leveraging Milvus, an open source vector database. Kramer highlighted Zilliz's flexibility, the underlying Milvus technology, and its choice of algorithms for semantic search. The decision to choose Zilliz was influenced by its performance in the company's use case, its privacy and security features, and its ease of integration into their private network. The cloud-managed solution and Zilliz's ability to meet their needs were crucial factors for Monterey AI, given its small team and preference to avoid managing infrastructure.
Learn more from The New Stack about Zilliz and vector database search:
Improving ChatGPT’s Ability to Understand Ambiguous Prompts
Create a Movie Recommendation Engine with Milvus and Python
Using a Vector Database to Search White House Speeches
Join our community of newsletter subscribers to stay on top of the news and at the top of your game: https://thenewstack.io/newsletter/
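At its core, the vector search described here ranks feedback items by embedding similarity rather than keyword overlap. A toy sketch follows, with hand-made three-dimensional "embeddings"; a real pipeline like Monterey.ai's would obtain vectors from an embedding model and store them in a vector database such as Milvus, and the feedback strings here are invented examples.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" for feedback items; semantically similar items
# (the two login complaints) sit close together in the space.
feedback = {
    "app crashes on login": [0.9, 0.1, 0.0],
    "please add dark mode": [0.1, 0.9, 0.1],
    "login screen freezes": [0.8, 0.2, 0.1],
}

def search(query_vec, top_k=2):
    """Return the top_k feedback items most similar to the query vector."""
    ranked = sorted(feedback.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

# A query embedded near the "login problems" region of the space
# surfaces both login complaints, despite their differing wording.
print(search([0.85, 0.15, 0.05]))
```

Vector databases exist because this brute-force scan does not scale: they replace the `sorted` call with approximate nearest-neighbor indexes so the same ranking works over millions of vectors.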

Duration:00:25:28

How Ethical Hacking Tricks Can Protect Your APIs and Apps

1/10/2024
TNS host Heather Joslyn sits down with Ron Masas to discuss trade-offs when it comes to creating fast, secure applications and APIs. He notes a common issue of neglecting documentation and validation, leading to vulnerabilities. Weak authorization is a recurring problem, with instances where changing an invoice ID could expose another user's data. Masas, an ethical hacker, highlights the risk posed by "zombie" APIs—applications that have become disused but remain potential targets. He suggests investigating frameworks, checking default configurations, and maintaining robust logging to enhance security. Collaboration between developers and security teams is crucial, with "security champions" in development teams and nuanced communication about vulnerabilities from security teams being essential elements for robust cybersecurity. For further details, the podcast discusses case studies involving TikTok and Digital Ocean, Masas's views on AI and development, and anticipated security challenges.
Learn more from The New Stack about Imperva and API security:
What Developers Need to Know about Business Logic Attacks
Why Your APIs Aren’t Safe — and What to Do about It
The Limits of Shift-Left: What’s Next for Developer Security
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
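The invoice-ID flaw Masas describes is the classic broken object-level authorization (IDOR) bug. A minimal sketch of the vulnerable pattern and its fix follows; the handler names and the in-memory data are hypothetical, standing in for a real database and session layer.

```python
# In-memory stand-ins for a database and authenticated sessions.
INVOICES = {
    101: {"owner": "alice", "amount": 420},
    102: {"owner": "bob", "amount": 99},
}

def get_invoice_insecure(user, invoice_id):
    # Vulnerable: trusts the caller-supplied ID with no ownership check,
    # so simply incrementing the ID exposes another user's invoice.
    return INVOICES[invoice_id]

def get_invoice(user, invoice_id):
    # Fixed: verify the record belongs to the authenticated user before
    # returning it; respond identically for "missing" and "not yours"
    # so the API does not leak which IDs exist.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != user:
        raise PermissionError("invoice not found")
    return invoice

print(get_invoice("alice", 101)["amount"])  # alice reads her own invoice: 420
```

The check must live server-side on every object access; validating only at login, or relying on IDs being hard to guess, is exactly what ethical hackers probe for first.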

Duration:00:16:20

2023 Top Episodes - What’s Platform Engineering?

1/3/2024
Platform engineering “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said Kaspar Von Grünberg, founder and CEO of Humanitec, in this episode of The New Stack Makers podcast. This structure is important for individual contributors, Von Grünberg said, as well as backend engineers: “if you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users." This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept.
Learn more from The New Stack about Platform Engineering and Humanitec:
Platform Engineering Overview, News, and Trends
The Hype Train Is Over. Platform Engineering Is Here to Stay
9 Steps to Platform Engineering Hell

Duration:00:23:44

2023 Top Episodes - The End of Programming is Nigh

12/27/2023
Is the end of programming nigh? That's the big question posed in this episode recorded earlier in 2023. It was very popular among listeners, and with the topic being as relevant as ever, we wanted to wrap up the year by highlighting this conversation again. If you ask Matt Welsh, he'd say yes, the end of programming is upon us. As Richard MacManus wrote on The New Stack, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the Chicago Association for Computing Machinery (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming. Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more. Welsh is now the founder of Fixie.ai, a platform he is building to let companies develop applications on top of large language models and extend them with different capabilities. For 40 to 50 years, programming language design has had one goal: make it easier to write programs, Welsh said in the interview. Still, programming languages are complex, Welsh said, and no amount of work is going to make them simple.
Learn more from The New Stack about AI and the future of software development:
Top 5 Large Language Models and How to Use Them Effectively
30 Non-Trivial Ways for Developers to Use GPT-4
Developer Tips in AI Prompt Engineering

Duration:00:31:59

The New Age of Virtualization

12/21/2023
KubeVirt, a relatively new capability within Kubernetes, signifies a shift in the virtualization landscape, allowing operations teams to run KVM virtual machines nested in containers behind the Kubernetes API. This integration means that the Kubernetes API now encompasses the concept of virtual machines, enabling VM-based workloads to operate seamlessly within a cluster behind the API. This development addresses the challenge of transitioning traditional virtualized environments into cloud native settings, where certain applications may resist containerization or require substantial investments for adaptation. The emerging era of virtualization simplifies the execution of virtual machines without concern for the underlying infrastructure, presenting various opportunities and use cases. Noteworthy advantages include simplified migration of legacy applications without the need for containerization, thereby reducing associated costs. KubeVirt 1.1, discussed at KubeCon in Chicago by Red Hat's Vladik Romanovsky and Nvidia's Ryan Hallisey, introduces features like memory hotplug and vCPU hotplug, emphasizing the stability of KubeVirt. The platform's stability now allows for the implementation of features that were previously constrained.
Learn more from The New Stack about KubeVirt and the Cloud Native Computing Foundation:
The Future of VMs on Kubernetes: Building on KubeVirt
A Platform for Kubernetes
Scaling Open Source Community by Getting Closer to Users

Duration:00:16:23

Kubernetes Goes Mainstream? With Calico, Yes

12/13/2023
The Kubernetes landscape is evolving, shifting from the domain of visionaries and early adopters to a more mainstream audience. Tigera, represented by CEO Ratan Tipirneni at KubeCon North America in Chicago, recognizes the changing dynamics and the demand for simplified Kubernetes solutions. Tigera's open source Calico security platform has been updated with a focus on mainstream users, presenting a cohesive and user-friendly solution. This update encompasses five key capabilities: vulnerability scoring, configuration hardening, runtime security, network security, and observability. The aim is to provide users with a comprehensive view of their cluster's security through a zero-to-100 scoring system, tracked over time. Tigera's recommendation engine suggests actions to enhance overall security based on the risk profile, evaluating factors such as egress traffic controls and workload isolation within dynamic Kubernetes environments. Tigera emphasizes the importance of understanding the actual flow of data across the network, using empirical data and observed behavior to build accurate security measures rather than relying on projections. This approach addresses the evolving needs of customers who seek not just vulnerability scores but insights into runtime behavior for a more robust security profile.
Learn more from The New Stack about Tigera and cloud native security:
Cloud Native Network Security: Who’s Responsible?
Turbocharging Host Workloads with Calico eBPF and XDP
3 Observability Best Practices for Cloud Native App Security

Duration:00:20:08

Hello, GitOps -- Boeing's Open Source Push

12/12/2023
Boeing, with around 6,000 engineers, is emphasizing open source engagement by focusing on three main themes, according to Damani Corbin, who heads Boeing's Open Source office. He joined our host, Alex Williams, for a discussion at KubeCon + CloudNativeCon in Chicago. The first priority Corbin talks about is simplifying the consumption of open source software for developers. Second, Boeing aims to facilitate developer contributions to open source projects, fostering involvement in communities like the Cloud Native Computing Foundation and the Linux Foundation. The third theme involves identifying opportunities for "inner sourcing" to share internally developed solutions across different groups. Boeing is actively working to break down barriers and encourage code reuse across the organization, promoting participation in open source initiatives. Corbin highlights the importance of separating business-critical components from those that can be shared with the community, prioritizing security and extending efforts to enhance open source security practices. The organization is consolidating its open source strategy by collaborating with legal and information security teams. Corbin emphasizes the goal of making open source involvement accessible and attractive, with a phased approach to encourage meaningful contributions and ultimately enabling the compensation of engineers for open source work in the future.
Learn more from The New Stack about Boeing and CNCF open source projects:
How Boeing Uses Cloud Native
How Open Source Has Turned the Tables on Enterprise Software
Scaling Open Source Community by Getting Closer to Users
Mercedes-Benz: 4 Reasons to Sponsor Open Source Projects

Duration:00:19:14

How AWS Supports Open Source Work in the Kubernetes Universe

12/7/2023
At KubeCon + CloudNativeCon North America 2022, Amazon Web Services (AWS) revealed plans to mirror Kubernetes assets hosted on Google Cloud, addressing the Cloud Native Computing Foundation's (CNCF) egress costs. A year later, the project, led by AWS's Davanum Srinivas, redirects image requests to the nearest cloud provider, reducing egress costs for users. AWS's Todd Neal and Jonathan Innis discussed this on The New Stack Makers podcast, recorded at KubeCon North America 2023. Neal explained the registry's functionality, which allows users to pull images directly from their own cloud provider, avoiding egress costs. The discussion also highlighted AWS's recent open source contributions, including beta features in Kubectl, the prerelease of Containerd 2.0, and Microsoft's support for Karpenter on Azure. Karpenter, an AWS-developed Kubernetes cluster autoscaler, simplifies node group configuration, dynamically selecting instance types and availability zones based on running pods. The AWS team encouraged developers to contribute to Kubernetes ecosystem projects and join the sig-node CI subproject to enhance kubelet reliability. The conversation in this episode emphasized the benefits of open development for rapid feedback and community collaboration.
Learn more from The New Stack about AWS and open source:
Powertools for AWS Lambda Grows with Help of Volunteers
Amazon Web Services Open Sources a KVM-Based Fuzzing Framework
AWS: Why We Support Sustainable Open Source

Duration:00:17:45

2024 Forecast: What Can Developers Expect in the New Year?

12/6/2023
In the past year, developers have faced both promise and uncertainty, particularly in the realm of generative AI. Heath Newburn, global field CTO for PagerDuty, joins TNS host Heather Joslyn to talk about the impact AI and other topics will have on developers in 2024. Newburn anticipates a growing emphasis on DevSecOps in response to high-profile cyber incidents, noting a shift in executive attitudes toward security spending. The rise of automation-centric tools like Backstage signals a changing landscape in the link between development and operations tools. Notably, there's a move from focusing on efficiency gains to achieving new outcomes, with organizations seeking innovative products rather than marginal coding speed improvements. Newburn highlights the importance of experimentation, encouraging organizations to identify areas for trial and error, learning swiftly from failures. The upcoming year is predicted to favor organizations capable of rapid experimentation and information gathering over perfection in code writing. Listen to the full podcast episode as Newburn further discusses his predictions related to platform engineering, remote work, and the continued impact of generative AI.
Learn more from The New Stack about PagerDuty and trends in software development:
How AI and Automation Can Improve Operational Resiliency
Why Infrastructure as Code Is Vital for Modern DevOps
Operationalizing AI: Accelerating Automation, DataOps, AIOps

Duration:00:22:16

How to Know If You’re Building the Right Internal Tools

12/5/2023
In this episode of The New Stack Makers, Rob Skillington, co-founder and CTO of Chronosphere, discusses the challenges engineers face in building tools for their organizations. Skillington emphasizes that the "build or buy" decision oversimplifies the issue of tooling and suggests that understanding the abstractions of a project is crucial. Engineers should consider where to build and where to buy, creating solutions that address the entire problem. Skillington advises against short-term thinking, urging innovators to consider the long-term landscape. Drawing from his experience at Uber, Skillington highlights the importance of knowing the audience and customer base, even when they are colleagues. He shares a lesson learned when building a visualization platform for engineers at Uber, where understanding user adoption as a key performance indicator upfront could have improved the project's outcome. Skillington also addresses the "not invented here" syndrome, noting its prevalence in organizations like Microsoft and its potential impact on tool adoption. He suggests that younger companies, like Uber, may be more inclined to explore external solutions rather than building everything in-house. The conversation provides insights into Skillington's experiences and the considerations involved in developing internal tools and platforms.
Learn more from The New Stack about software engineering, observability, and Chronosphere:
Cloud Native Observability: Fighting Rising Costs, Incidents
A Guide to Measuring Developer Productivity
4 Key Observability Best Practices

Duration:00:20:07

Hey Programming Language Developer -- Get Over Yourself

11/30/2023
Jean Yang, founder of API observability company Akita Software, emphasizes that programming languages should be shaped by software development needs and data, rather than philosophical ideals. Yang, a former assistant professor at Carnegie Mellon University, believes that programming tools and processes should be influenced by actual use and data, prioritizing the developer experience over the language creator's beliefs. With a background in programming languages, Yang advocates for a shift away from the outdated notion that language developers are building solely for themselves. In this discussion on The New Stack Makers, Yang underscores the importance of understanding the reality of developers' needs, especially as developer tools have evolved into a full-time industry. She argues for a focus on UX design and product fundamentals in developing tools, moving beyond the traditional mindset where developer tools were considered side projects. Yang founded Akita to address the challenges of building reliable software systems in a world dominated by APIs and microservices. The company transitioned to API observability, recognizing the crucial role APIs play in enhancing the understandability of complex systems. Yang's commitment to improving software correctness and her belief in APIs as key to abstraction and ease of monitoring align with Postman's direction after acquiring Akita. Postman aims to serve developers worldwide, emphasizing the significance of APIs in complex systems.
Check out more episodes from The Tech Founder Odyssey series:
How Byteboard’s CEO Decided to Fix the Broken Tech Interview
A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem
How Teleport’s Leader Transitioned from Engineer to CEO

Duration:00:26:10

Docker CTO Explains How Docker Can Support AI Efforts

11/28/2023
Docker CTO Justin Cormack reveals that Docker has been a go-to tool for data scientists in AI and machine learning for years, primarily in specialized areas like image processing and prediction models. However, the release of OpenAI's ChatGPT last year sparked a significant surge in Docker's popularity within the AI community. The focus shifted to large language models (LLMs), with a growing interest in the retrieval-augmented generation (RAG) stack. Docker's collaboration with Ollama enables developers to run Llama 2 and Code Llama locally, simplifying the process of starting and experimenting with AI applications. Additionally, partnerships with Neo4j and LangChain allow for enhanced support in storing and retrieving data for LLMs. Cormack emphasizes the simplicity of getting started locally, addressing challenges related to GPU shortages in the cloud. Docker's efforts also include building an AI solution using its own data, aiming to assist users in Dockerizing applications through an interactive notebook in Visual Studio Code. This tool leverages LLMs to analyze applications, suggest improvements, and generate Docker files tailored to specific languages and applications. Docker's integration with AI technologies demonstrates a commitment to making AI and Docker more accessible and user-friendly.
Learn more from The New Stack about AI and Docker:
Artificial Intelligence News, Analysis, and Resources
Will GenAI Take Jobs? No, Says Docker CEO
Debugging Containers in Kubernetes — It’s Complicated

Duration:00:12:28

What Does Open Mean in AI?

11/22/2023
In this episode, Stefano Maffulli, executive director of the Open Source Initiative, discusses the need for a new definition, as AI differs significantly from open source software. The complexity arises from the unique nature of AI, particularly large language models and transformers, which challenge traditional copyright frameworks. Maffulli emphasizes the urgency of establishing a definition for open source AI and discusses an ongoing effort to release a set of principles by the year's end. The concept of "open" in the context of AI is undergoing a significant transformation, reminiscent of the early days of open source. The recent upheaval at OpenAI, resulting in the removal of CEO Sam Altman, reflects a profound shift in the technology community, prompting a reconsideration of the definition of "open" in the realm of AI. The conversation highlights the parallels between the current AI debate and the early days of software development, emphasizing the necessity of a cohesive approach to navigate the evolving landscape. Altman's ousting underscores a clash of belief systems within OpenAI, with a "safetyist" community advocating caution and transparency, while Altman leans toward experimentation. The historical significance of open source, with a focus on trust preservation over technical superiority, serves as a guide for defining "open" and "AI" in a rapidly changing environment.
Learn more from The New Stack about AI and open source:
Artificial Intelligence News, Analysis, and Resources
Open Source Development Threatened in Europe
The AI Engineer Foundation: Open Source for the Future of AI

Duration:00:22:39

Debugging Containers in Kubernetes

11/21/2023
DockerCon showcased a commitment to enhancing the developer experience, with a particular focus on addressing the challenge of debugging containers in Kubernetes. The newly launched Docker Debug offers a language-independent toolbox for debugging both local and remote containerized applications. By abstracting Kubernetes concepts like pods and namespaces, Docker aims to simplify debugging processes and shift the focus from container layers to the application itself. Our guest, Docker principal engineer Ivan Pedrazas, emphasized the need to eliminate unnecessary complexities in debugging, especially in the context of Kubernetes, where developers grapple with unfamiliar concerns exposed by the API. Another Docker project, Tape, simplifies deployment by consolidating Kubernetes artifacts into a single package, streamlining the process for developers. The ultimate goal is to facilitate debugging of slim containers with minimal dependencies, optimizing security and user experience in Kubernetes development. While progress is being made, bridging the gap between developer practices and platform engineering expectations remains an ongoing challenge.
Learn more from The New Stack about Kubernetes and Docker:
Kubernetes Overview, News, and Trends
Docker Rolls out 3 Tools to Speed and Ease Development
Will GenAI Take Jobs? No, Says Docker CEO

Duration:00:15:49