
The New Stack Podcast

Technology Podcasts

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack

Location:

United States


Twitter:

@thenewstack

Language:

English


Episodes

How OpenTofu Happened — and What’s Next?

7/25/2024
In August 2023, the open source community rallied to create OpenTofu, an alternative to Terraform, after HashiCorp, now owned by IBM, adopted a restrictive Business Source License for Terraform. Ohad Maislish, co-founder and CEO of env0, explained on The New Stack Makers how this move sparked the initiative. A few hours after HashiCorp's license change, Maislish secured the domain opentf.org and began developing the new project, eventually named OpenTofu, which was donated to The Linux Foundation to ensure its license couldn't be altered. Maislish highlighted the importance of distinguishing between vendor-backed and foundation-backed open source projects to avoid sudden licensing changes. Before coding, the community created a manifesto, gathering significant support and pledges, but received no response from HashiCorp. Consequently, they proceeded with the fork and development of OpenTofu. Despite accusations of intellectual property theft from HashiCorp, OpenTofu gained traction and was adopted by organizations like Oracle. The community continues to prioritize user feedback through GitHub. Learn more from The New Stack about OpenTofu: OpenTofu vs. HashiCorp Takes Center Stage at Open Source Summit OpenTofu Amiable to a Terraform Reconciliation OpenTofu 1.6 General Availability: Open Source Infrastructure as Code Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:29:30


The Fediverse: What It Is, Why It’s Promising, What’s Next

7/18/2024
In the early days, the internet was a decentralized space created by enthusiasts. However, it has since transformed into a centralized, commerce-driven entity dominated by a few major players. The promise of the fediverse, a decentralized social networking concept, offers a refreshing alternative. Evan Prodromou, OpenEarth Foundation's director of open technology, has been advocating for decentralized social networks since 2008, starting with his creation, Identi.ca. Unlike Twitter, Identi.ca was open source and federated, allowing independent networks to interconnect. Prodromou, a co-author of ActivityPub—the W3C standard for decentralized networking used by platforms like Mastodon—discusses the evolution of the fediverse on The New Stack Makers podcast. He notes that small social networks dwindled to a few giants, such as Twitter and Facebook, which rarely interconnected. The acquisition of Twitter by Elon Musk disrupted the established norms, prompting users to reconsider their dependence on centralized platforms. The fediverse aims to address these issues by allowing users to maintain relationships across different instances, ensuring a smoother transition between networks. This decentralization fosters community management and better control over social interactions. Check out the full podcast episode to explore how tech giants like Meta are engaging with the fediverse and how to join decentralized social networks. Learn more from The New Stack about fediverse: FediForum Showcases New Fediverse Apps and Developer Network One Login: Towards a Single Fediverse Identity on ActivityPub Web Dev 2024: Fediverse Ramps Up, More AI, Less JavaScript Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/

Duration:00:40:38


Why Framework’s ‘Right to Repair’ Ethos Is Gaining Fans

7/11/2024
In a recent episode of The New Stack Makers, recorded at the Open Source Summit North America, Matt Hartley, Linux support lead at Framework, discusses the importance of the "right to repair" movement. This initiative seeks to allow consumers to repair and upgrade their own electronic devices, countering the trend of disposable electronics that contribute to environmental damage. Framework, a company offering modular and customizable laptops, embodies this philosophy by enabling users to replace outdated components easily. Hartley, interviewed by Chris Pirillo, highlights how Framework’s approach helps reduce electronic waste, likening obsolete electronics to a form of "technical debt." He shares his personal struggle with old devices, like an ASUS Eee, illustrating the need for repairable technology. Hartley also describes his role in fostering a DIY community, collaborating closely with Fedora Linux maintainers and creating user-friendly support scripts. Framework’s community is actively contributing to the platform, developing new features and hardware integrations. The episode underscores the growing momentum of the right to repair movement, advocating for consumer empowerment and environmental sustainability. Learn more from The New Stack about repairing and upgrading devices: New Linux Laptops Come with Right-to-Repair and More Troubling Tech Trends: The Dark Side of CES 2024 Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:18:56


What’s the Future of Distributed Ledgers?

7/2/2024
Blockchain technology continues to drive innovation despite declining hype, with distributed ledger technologies (DLTs) offering secure, decentralized digital asset transactions. In an On the Road episode of The New Stack Makers recorded at Open Source Summit North America, Andrew Aitken of Hedera and Dr. Leemon Baird of Swirlds Labs discussed DLTs with Alex Williams. Baird highlighted the Hashgraph Consensus Algorithm, an efficient, secure distributed consensus mechanism he created, leveraging a hashgraph data structure and gossip protocol for rapid, robust transaction sharing among network nodes. This algorithm, which has been open source under the Apache 2.0 license for nine months, aims to maintain decentralization by involving 32 global organizations in its governance. Aitken emphasized building an ecosystem of DLT contributors, adhering to open source best practices, and developing cross-chain applications and more wallets to enhance exchange capabilities. This collaborative approach seeks to ensure transparency in both governance and software development. For more insights into the DLT 2.0 era, listen to the full episode. Learn more from The New Stack about distributed ledger technologies (DLTs): IOTA Distributed Ledger: Beyond Blockchain for Supply Chains Why I Changed My Mind About Blockchain Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:23:33


Linux xz and the Great Flaws in Open Source

6/27/2024
The Linux xz utils backdoor exploit, discussed in an interview at the Open Source Summit 2024 on The New Stack Makers with John Kjell, director of open source at TestifySec, highlights critical vulnerabilities in the open-source ecosystem. This exploit involved a maintainer of the Linux xz utils project adding malicious code to a new release, discovered by a Microsoft engineer. This breach demonstrates the high trust placed in maintainers and how this trust can be exploited. Kjell explains that the backdoor allowed remote code execution or unauthorized server access through SSH connections. The exploit reveals a significant flaw: the human element in open source. Maintainers, often under pressure from company executives to quickly address vulnerabilities and updates, can become targets for social engineering. Attackers built trust within the community by contributing to projects over time, eventually gaining maintainer status and inserting malicious code. This scenario underscores the economic pressures on open source, where maintainers work unpaid and face demands from large organizations, exposing the fragility of the open-source supply chain. Despite these challenges, the community's resilience is also evident in their rapid response to such threats. Learn more from The New Stack about Linux xz utils: Linux xz Backdoor Damage Could Be Greater Than Feared Unzipping the XZ Backdoor and Its Lessons for Open Source The Linux xz Backdoor Episode: An Open Source Mystery Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:12:44


How Amazon Bedrock Helps Build GenAI Apps in Python

6/20/2024
Suman Debnath, principal developer advocate for machine learning at Amazon Web Services, emphasized the advantages of using Python in machine learning during a New Stack Makers episode recorded at PyCon US. He noted Python's ease of use and its foundational role in the data science ecosystem as key reasons for its popularity. However, Debnath highlighted that building generative AI applications doesn't necessarily require deep data science expertise or Python. Amazon Bedrock, AWS's generative AI service introduced in September, exemplifies this flexibility by allowing developers to use any programming language via an API-based service. Bedrock supports various languages like Python, C, C++, and Java, enabling developers to leverage large language models without intricate knowledge of machine learning. It also integrates well with open-source libraries such as LangChain and LlamaIndex. Debnath recommends visiting the AWS community platform and GitHub for resources on getting started with Bedrock. The episode includes a demonstration of Bedrock's capabilities and its benefits for Python users. Learn more from The New Stack on Amazon Bedrock: Amazon Bedrock Expands Palette of Large Language Models Build a Q&A Application with Amazon Bedrock and Amazon Titan 10 Key Products for Building LLM-Based Apps on AWS Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
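Because Bedrock is exposed as a plain API, calling it from Python is a few lines of boto3. A minimal sketch, assuming the Amazon Titan text model ID and request-body shape shown below (verify both against the current Bedrock documentation); only the request-building part runs without AWS credentials:

```python
import json

MODEL_ID = "amazon.titan-text-express-v1"  # assumed model identifier

def build_request(prompt: str, max_tokens: int = 256) -> str:
    """Serialize an InvokeModel request body in the Titan text format."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens, "temperature": 0.5},
    })

def invoke(prompt: str):
    """Send the prompt to Bedrock; requires boto3 and AWS credentials."""
    import boto3  # imported here so the offline parts above need no AWS SDK
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=MODEL_ID, body=build_request(prompt))
    return json.loads(response["body"].read())
```

The same pattern works from any language with an AWS SDK, which is the point Debnath makes: the model sits behind an API, not behind a data science toolchain.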

Duration:00:06:02


How to Start Building in Python with Amazon Q Developer

6/13/2024
Nathan Peck, a senior developer advocate for generative AI at Amazon Web Services (AWS), shares his experiences working with Python in a recent episode of The New Stack Makers, recorded at PyCon US. Although not a Python expert, Peck frequently deals with Python scripts in his role, often assisting colleagues in running scripts as cron jobs. He highlights the challenge of being a T-shaped developer, possessing broad knowledge across multiple languages and frameworks but deep expertise in only a few. Peck introduces Amazon Q, a generative AI coding assistant launched by AWS in November, and demonstrates its capabilities. The assistant plugs into an integrated development environment (IDE) such as VS Code, where it assists in explaining, refactoring, fixing, and even developing new features for Python codebases. Peck emphasizes Amazon Q's ability to surface best practices from extensive AWS documentation, making it easier for developers to navigate and apply. Amazon Q Developer is available for free to users with an AWS Builder ID, without requiring an AWS cloud account. Peck's demo showcases how this tool can simplify and enhance the coding experience, especially for those handling complex or unfamiliar codebases. Learn more from The New Stack about Amazon Q and Amazon's Generative AI strategy: Amazon Q, a GenAI to Understand AWS (and Your Business Docs) Decoding Amazon's Generative AI Strategy Responsible AI at Amazon Web Services: Q&A with Diya Wynn Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:09:42


Who’s Keeping the Python Ecosystem Safe?

6/6/2024
Mike Fiedler, a PyPI safety and security engineer at the Python Software Foundation, prefers the title “code gardener,” reflecting his role in maintaining and securing open source projects. Recorded at PyCon US, Fiedler explains his task of “pulling the weeds” in code—handling unglamorous but crucial aspects of open source contributions. Since August, funded by Amazon Web Services, Fiedler has focused on enhancing the security of the Python Package Index (PyPI). His efforts include ensuring that both packages and the pipeline are secure, emphasizing the importance of vetting third-party modules before deployment. One of Fiedler’s significant initiatives was enforcing mandatory two-factor authentication (2FA) for all PyPI user accounts by January 1, following a community awareness campaign. This transition was smooth, thanks to proactive outreach. Additionally, the foundation collaborates with security researchers and the public to report and address malicious packages. In late 2023, a security audit by Trail of Bits, funded by the Open Technology Fund, identified and quickly resolved medium-sized vulnerabilities, increasing PyPI's overall security. More details on Fiedler's work are available in the full interview video. Learn more from The New Stack about PyPI: PyPI Strives to Pull Itself Out of Trouble How Python Is Evolving Poisoned Lolip0p PyPI Packages Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
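The 2FA requirement Fiedler describes is built on the standard one-time-password algorithms that authenticator apps implement. A minimal TOTP (RFC 6238) sketch in stdlib Python, for illustration only — real deployments should use a maintained library and constant-time comparison:

```python
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226): HMAC-SHA1 over a counter,
    dynamically truncated to a short decimal code."""
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second counter."""
    now = time.time() if at is None else at
    return hotp(key, int(now // step), digits)
```

With the RFC 6238 test secret `b"12345678901234567890"`, `totp(key, at=59)` reproduces the published test vector.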

Duration:00:18:09


How Training Data Differentiates Falcon, the LLM from the UAE

5/30/2024
The name "Falcon" for the UAE’s large language model (LLM) symbolizes the national bird's qualities of courage and perseverance, reflecting the vision of the Technology Innovation Institute (TII) in Abu Dhabi. TII, launched in 2020, addresses AI’s rapid advancements and unintended consequences by fostering an open-source approach to enhance community understanding and control of AI. In this episode of The New Stack Makers, Dr. Hakim Hacid, Executive Director and Acting Chief Researcher at the Technology Innovation Institute, emphasized the importance of perseverance and innovation in overcoming challenges. Falcon gained attention for being the first truly open model with capabilities matching many closed-source models, opening new possibilities for practitioners and industry. Last June, Falcon introduced a 40-billion parameter model, outperforming the LLaMA-65B, with smaller models enabling local inference without the cloud. The latest 180-billion parameter model, trained on 3.5 trillion tokens, illustrates Falcon’s commitment to quality and efficiency over sheer size. Falcon’s distinctiveness lies in its data quality, utilizing over 80% RefinedWeb data, based on CommonCrawl, which ensures cleaner and deduplicated data, resulting in high-quality outcomes. This data-centric approach, combined with powerful computational resources, sets Falcon apart in the AI landscape. Learn more from The New Stack about open source AI: Open Source Initiative Hits the Road to Define Open Source AI Linus Torvalds on Security, AI, Open Source and Trust Transparency and Community: An Open Source Vision for AI Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
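The deduplication step credited for Falcon's data quality can be sketched at toy scale: exact dedup by hashing normalized text. This is an illustration of the idea only — web-scale pipelines such as RefinedWeb also apply fuzzy, MinHash-style matching to catch near-duplicates, which this omits:

```python
import hashlib

def dedupe_exact(docs):
    """Keep the first occurrence of each document, comparing by a hash of
    whitespace- and case-normalized text."""
    seen = set()
    unique = []
    for doc in docs:
        normalized = " ".join(doc.lower().split())
        digest = hashlib.sha256(normalized.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique
```

Hashing keeps memory proportional to the number of distinct documents rather than their total size, which is what makes this pass feasible over CommonCrawl-scale corpora.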

Duration:00:23:27


Out with C and C++, In with Memory Safety

5/22/2024
Crash-level bugs continue to pose a significant challenge due to the lack of memory safety in programming languages, an issue persisting since the punch card era. This enduring problem, described as "the Joker to the Batman" by Anil Dash, VP of developer experience at Fastly, is highlighted in a recent episode of The New Stack Makers. The White House has emphasized memory safety, advocating for the adoption of memory-safe programming languages and better software measurability. The Office of the National Cyber Director (ONCD) noted that languages like C and C++ lack memory safety traits and are prevalent in critical systems. They recommend using memory-safe languages, such as Java, C#, and Rust, to develop secure software. Memory safety is particularly crucial for the US government due to the high stakes, especially in space exploration, where reliability standards are exceptionally stringent. Dash underscores the importance of resilience and predictability in missions that may outlast their creators, necessitating rigorous memory safety practices. Learn more from The New Stack about memory safety: White House Warns Against Using Memory-Unsafe Languages Can C++ Be Saved? Bjarne Stroustrup on Ensuring Memory Safety Bjarne Stroustrup's Plan for Bringing Safety to C++ Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:36:19


How Open Source and Time Series Data Fit Together

5/16/2024
In the push to integrate data into development, time series databases have gained significant importance. These databases capture time-stamped data from servers and sensors, enabling the collection and storage of valuable information. InfluxDB, a leading open-source time series database technology by InfluxData, has partnered with Amazon Web Services (AWS) to offer a managed open-source service for time series databases. Brad Bebee, General Manager of Amazon Neptune and Amazon Timestream, highlighted the challenges faced by customers managing open-source InfluxDB instances, despite appreciating its API and performance. To address this, AWS initiated a private beta offering a managed service tailored to customer needs. Paul Dix, Co-founder and CTO of InfluxData, joined Bebee and highlighted Influx's prized utility in tracking measurements, metrics, and sensor data in real time. AWS's Timestream complements this by providing managed time series database services, including Timestream for LiveAnalytics and Timestream for InfluxDB. Bebee emphasized the growing relevance of time series data and customers' preference for managed open-source databases, aligning with AWS's strategy of offering such services. This partnership aims to simplify database management and enhance performance for customers utilizing time series databases. Learn more from The New Stack about time series databases: What Are Time Series Databases, and Why Do You Need Them? Amazon Timestream: Managed InfluxDB for Time Series Data Install the InfluxDB Time-Series Database on Ubuntu Server 22.04 Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
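The workload described here boils down to time-ordered writes plus range queries and aggregation. A toy in-memory sketch of that access pattern — the core thing InfluxDB and Timestream optimize for at scale, with very different storage engines than this naive list:

```python
from bisect import insort
from statistics import fmean

class TinyTimeSeries:
    """Keeps (timestamp, value) points sorted by time so that time-range
    queries and aggregations stay cheap."""

    def __init__(self):
        self._points = []  # sorted list of (timestamp, value) pairs

    def write(self, ts, value):
        insort(self._points, (ts, value))  # stays time-ordered on insert

    def range(self, start, end):
        """All points with start <= timestamp <= end, in time order."""
        return [(t, v) for t, v in self._points if start <= t <= end]

    def mean(self, start, end):
        """Average value over a time window, or None if it is empty."""
        values = [v for _, v in self.range(start, end)]
        return fmean(values) if values else None
```

A real engine would replace the linear scan in `range` with a binary search and partition points into time-bucketed, compressed blocks, but the query surface is the same idea.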

Duration:00:21:13


Postgres is Now a Vector Database, Too

5/9/2024
Amazon Web Services (AWS) has added support for pgvector, an open-source extension that brings generative AI and vector search capabilities to PostgreSQL databases. Sirish Chandrasekaran, General Manager of Amazon Relational Database Services, explained at Open Source Summit 2024 in Seattle that pgvector allows users to store vector types in Postgres and perform similarity searches, a key feature for generative AI applications. The extension, developed by Andrew Kane and offered by AWS in services like Aurora and RDS, originally used an indexing scheme called IVFFlat but has since adopted Hierarchical Navigable Small World (HNSW) indexes for improved query performance. HNSW offers a graph-based approach, enhancing the ability to find nearest neighbors efficiently, which is crucial for generative AI tasks. AWS emphasizes customer feedback and continuous innovation in the rapidly evolving field of generative AI, aiming to stay responsive and adaptive to customer needs. Learn more from The New Stack about vector databases: Top 5 Vector Database Solutions for Your AI Project Vector Databases Are Having a Moment – A Chat with Pinecone Why Vector Size Matters Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/
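Similarity search means ordering rows by the distance between embedding vectors. A brute-force Python sketch of the operation pgvector performs in SQL (the SQL in the comment reflects pgvector's documented `<->` operator, but check it against the current pgvector README; an HNSW index approximates the same ordering without scanning every row):

```python
import math

def l2_distance(a, b):
    """Euclidean distance between two embeddings, i.e. what a query like
      SELECT id FROM items ORDER BY embedding <-> '[0.9, 0.9]' LIMIT 2;
    sorts by when using pgvector's L2 operator."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(query, rows, k=2):
    """Exact k-nearest-neighbor search over (id, embedding) rows by full scan.
    IVFFlat and HNSW indexes trade a little recall for far less work."""
    return sorted(rows, key=lambda row: l2_distance(row[1], query))[:k]
```

For example, `nearest([0.9, 0.9], [("a", [0, 0]), ("b", [1, 1]), ("c", [5, 5])])` ranks `"b"` first, then `"a"`.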

Duration:00:17:56


Valkey: A Redis Fork with a Future

5/2/2024
Valkey, a Redis fork supported by the Linux Foundation, challenges Redis' new license. In this episode, Madelyn Olson, a lead contributor to the Valkey project and former Redis core contributor, along with Ping Xie, Staff Software Engineer at Google, and Dmitry Polyakovsky, Consulting Member of Technical Staff at Oracle, highlight concerns about the shift to a more restrictive license at Open Source Summit 2024 in Seattle. Despite Redis' free license for end users, many contributors may not support it. Valkey, with significant industry backing, prioritizes continuity and a smooth transition for Redis users. AWS, along with Google and Oracle maintainers, emphasizes the importance of open, permissive licenses for large tech companies. Valkey plans incremental updates and module development in Rust to enhance functionality and attract more engineers. The focus remains on compatibility, continuity, and consolidating client behaviors for a robust ecosystem. Learn more from The New Stack about the Valkey project and changes to open source licensing: Linux Foundation Backs 'Valkey' Open Source Fork of Redis Redis Pulls Back on Open Source Licensing, Citing Stingy Cloud Services HashiCorp's Licensing Change is only the Latest Challenge to Open Source Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:17:37


Kubernetes Gets Back to Scaling with Virtual Clusters

4/25/2024
A virtual cluster, described by Loft Labs CEO Lukas Gentele at KubeCon + CloudNativeCon Paris, is a Kubernetes control plane running inside a container within another Kubernetes cluster. In this New Stack Makers episode, Gentele explained that this approach eliminates the need for numerous separate full clusters, running each control plane in a lightweight, quickly deployable container rather than on dedicated VMs. Loft Labs' open-sourced vcluster technology enables virtual clusters to spin up in about six seconds, significantly faster than traditional Kubernetes clusters, which can take over 30 minutes to start in services like Amazon EKS or Google GKE. The integration of vCluster into Rancher at KubeCon Paris enables users to manage virtual clusters alongside real clusters seamlessly. This innovation addresses challenges faced by companies managing multiple applications and clusters, advocating for a multi-tenant cluster approach for improved sharing and security, contrary to the trend of isolated single-tenant clusters that emerged due to complexities in cluster sharing within Kubernetes. Learn more from The New Stack about virtual clusters: Vcluster to the Rescue Navigating the Trade-Offs of Scaling Kubernetes Dev Environments Managing Kubernetes Clusters for Platform Engineers Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/

Duration:00:23:29


How Giant Swarm Is Helping to Support the Future of Flux

4/22/2024
When Weaveworks, the company that pioneered "GitOps," shut down, concerns arose about the future of Flux, a critical open-source project it had stewarded. However, in this episode of The New Stack Makers podcast, Puja Abbassi, Giant Swarm's VP of Product, reassured Alex Williams, Founder and Publisher of The New Stack, at Open Source Summit in Paris that Flux's maintenance is secure. Major players like Microsoft Azure and GitLab have pledged support. Giant Swarm, an avid Flux user, also contributes to its development, ensuring its vitality alongside related projects like infrastructure code plugins and UI improvements. Abbassi highlighted the importance of considering a project's sustainability and integration capabilities when choosing open-source tools. He noted Argo CD's advantage in UI, emphasizing that projects like Flux must evolve to meet user expectations and avoid being overshadowed. This underscores the crucial role of community support, diversity, and compatibility within the Cloud Native Computing Foundation's ecosystem for long-term tool adoption. Learn more from The New Stack about Flux: End of an Era: Weaveworks Closes Shop Amid Cloud Native Turbulence Why Flux Isn't Dying after Weaveworks Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:28:39


AI, LLMs and Security: How to Deal with the New Threats

4/11/2024
The use of large language models (LLMs) has become widespread, but there are significant security risks associated with them. LLMs with millions or billions of parameters are complex and challenging to fully scrutinize, making them susceptible to exploitation by attackers who can find loopholes or vulnerabilities. On an episode of The New Stack Makers, Chris Pirillo, Tech Evangelist, and Lance Seidman, Backend Engineer at Atomic Form, discussed these security challenges, emphasizing the need for human oversight to protect AI systems. One example highlighted was malicious AI models on Hugging Face, which exploited the Python pickle module to execute arbitrary commands on users' machines. To mitigate such risks, Hugging Face implemented security scanners to check every file for security threats. However, human vigilance remains crucial in identifying and addressing potential exploits. Seidman also stressed the importance of technical safeguards and a culture of security awareness within the AI community. Developers should prioritize security throughout the development life cycle to stay ahead of evolving threats. Overall, the message is clear: while AI offers remarkable capabilities, it requires careful management and oversight to prevent misuse and protect against security breaches. Learn more from The New Stack about AI and security: Artificial Intelligence: Stopping the Big Unknown in Application, Data Security Cyberattacks, AI and Multicloud Hit Cybersecurity in 2023 Will Generative AI Kill DevSecOps? Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
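The Hugging Face incident described above hinges on pickle's core hazard: deserializing a pickle can invoke attacker-chosen callables. A harmless demonstration via the `__reduce__` hook — the payload here only appends to a list, but a malicious model file could substitute something like `os.system`:

```python
import pickle

executed = []

def record(msg):
    executed.append(msg)

class Payload:
    """__reduce__ tells pickle to call record(...) when the blob is LOADED,
    not when it is created. That is the hook malicious pickles abuse."""
    def __reduce__(self):
        return (record, ("code ran during pickle.loads",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # merely deserializing the bytes triggers the call
```

This is why model formats that carry only tensors (such as safetensors) and scanners like Hugging Face's exist: `pickle.loads` on untrusted bytes is equivalent to running untrusted code.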

Duration:00:37:31


How Kubernetes Faces a New Reality with the AI Engineer

4/4/2024
The Kubernetes community primarily focuses on improving the development and operations experience for applications and infrastructure, emphasizing DevOps and developer-centric approaches. In contrast, the data science community historically moved at a slower pace. However, with the emergence of the AI engineer persona, the pace of advancement in data science has accelerated significantly. Alex Williams, founder and publisher of The New Stack, co-hosted a discussion with Sanjeev Mohan, an independent analyst, which highlighted the challenges faced by data-related tasks on Kubernetes due to the stateful nature of data. Unlike applications, restarting a database node after a failure may lead to inconsistent states and data loss. This discrepancy in pace and needs between developers and data scientists led Kubernetes and the Cloud Native Computing Foundation to initially overlook data science. Nevertheless, Mohan noted that the pace of data engineers has increased as they explore new AI applications and workloads. Kubernetes now plays a crucial role in supporting these advancements by helping manage resources efficiently, especially considering the high cost of training large language models (LLMs) and using GPUs for AI workloads. Mohan also discussed the evolving landscape of AI frameworks and the importance of aligning business use cases with AI strategies.
Learn more from The New Stack about data development and DevOps: AI Will Drive Streaming Data Use — But Not Yet, Report Says https://thenewstack.io/ai-will-drive-streaming-data-adoption-says-redpanda-survey/ The Paradigm Shift from Model-Centric to Data-Centric AI https://thenewstack.io/the-paradigm-shift-from-model-centric-to-data-centric-ai/ AI Development Needs to Focus More on Data, Less on Models https://thenewstack.io/ai-development-needs-to-focus-more-on-data-less-on-models/ Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/

Duration:00:29:29


LLM Observability: The Breakdown

3/28/2024
LLM observability focuses on maximizing the utility of large language models (LLMs) by monitoring key metrics and signals. Alex Williams, Founder and Publisher of The New Stack, and Janakiram MSV, Principal of Janakiram & Associates and an analyst and writer for The New Stack, discuss the emergence of the LLM stack, which encompasses various components like LLMs, vector databases, embedding models, retrieval systems, reranking models, and more. The objective of LLM observability is to ensure that users can extract desired outcomes effectively from this complex ecosystem. Similar to infrastructure observability in DevOps and SRE practices, LLM observability aims to provide insights into the LLM stack's performance. This includes monitoring metrics specific to LLMs, such as GPU/CPU usage, storage, model serving, change agents in applications, hallucinations, span traces, relevance, retrieval models, latency, monitoring, and user feedback. MSV emphasizes the importance of monitoring resource usage, model catalog synchronization with external providers like Hugging Face, vector database availability, and the inference engine's functionality. He also mentions peer companies in the LLM observability space like Datadog, New Relic, SigNoz, Dynatrace, LangChain (LangSmith), Arize.ai (Phoenix), and Truera, hinting at a deeper exploration in a future episode of The New Stack Makers. Learn more from The New Stack about LLMs and observability: Observability in 2024: More OpenTelemetry, Less Confusion How AI Can Supercharge Observability Next-Gen Observability: Monitoring and Analytics in Platform Engineering Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
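Two of the signals listed above, latency and token usage, can be captured with a few lines of bookkeeping. A toy collector for illustration — the class and method names are invented for this sketch, and production platforms such as Datadog or LangSmith add traces, relevance scoring, and user feedback on top:

```python
import statistics

class LLMMetrics:
    """Records per-call latency and token counts and reports p95 latency,
    two of the most basic signals an LLM observability stack tracks."""

    def __init__(self):
        self.latencies_ms = []
        self.tokens = []

    def record(self, latency_ms, token_count):
        self.latencies_ms.append(latency_ms)
        self.tokens.append(token_count)

    def p95_latency(self):
        # 19 cut points at 5% steps; the last one is the 95th percentile
        return statistics.quantiles(self.latencies_ms, n=20, method="inclusive")[-1]

    def avg_tokens(self):
        return statistics.fmean(self.tokens)
```

Tail latency (p95/p99) matters more than the mean here because a single slow model call or retrieval hop dominates the user-visible experience.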

Duration:00:25:51


Why Software Developers Should Be Thinking About the Climate

3/21/2024
On The New Stack Makers, Alex Williams, TNS founder and publisher, and co-host Charles Humble, an industry veteran who has served as a software engineer, architect and CTO and is now a podcaster, author and consultant at Conissaunce Ltd., discussed why software developers and engineers should care about their impact on climate change. Humble emphasized that building software sustainably starts with better operations, leading to cost savings and improved security. He cited past successes in combating environmental issues like acid rain and the ozone hole through international agreements and emissions reduction strategies. Despite modest growth since 2010, data centers remain significant electricity consumers, comparable to countries like Brazil. The power-intensive nature of AI models exacerbates these challenges and may lead to scarcity issues. Humble mentioned the Green Software Foundation's Maturity Matrix, with its goals of carbon-free data centers and longer device lifespans, discussing their validity and the role of regulation in achieving them. Overall, software development's environmental impact, primarily carbon emissions, necessitates proactive measures and industry-wide collaboration. Learn more from The New Stack about sustainability: What is GreenOps? Putting a Sustainable Focus on FinOps Unraveling the Costs of Bad Code in Software Development Can Reducing Cloud Waste Help Save the Planet? How to Build Open Source Sustainability Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:38:55


Nvidia’s Superchips for AI: ‘Radical,’ but a Work in Progress

3/14/2024
In this New Stack Makers podcast, co-hosts Alex Williams, TNS founder and publisher, and Adrian Cockcroft, Partner and Analyst at OrionX.net, discussed Nvidia's GH200 Grace Hopper superchip. Industry expert Sunil Mallya, Co-founder and CTO of Flip AI, weighed in on how it is revolutionizing the hardware industry for AI workloads by centralizing GPU communication, reducing networking overhead, and creating a more efficient system. Mallya noted that despite its innovative design, challenges remain in adoption due to interface issues and the need for software to catch up with hardware advancements. However, optimism persists for the future of AI-focused chips, with Nvidia leading the charge in creating large-scale coherent memory systems. Meanwhile, Flip AI's DevOps large language model aims to interpret observability data to troubleshoot incidents effectively across various cloud platforms. While discussing the latest chip innovations and challenges in training large language models, the episode sheds light on the evolving landscape of AI hardware and software integration. Learn more from The New Stack about Nvidia and the future of chip design: Nvidia Wants to Rewrite the Software Development Stack Nvidia GPU Dominance at a Crossroads Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Duration:00:39:45