
Confluent Developer ft. Tim Berglund, Adi Polak & Viktor Gamov

Technology Podcasts

Hi, we’re Tim Berglund, Adi Polak, and Viktor Gamov, and we’re excited to bring you the Confluent Developer podcast (formerly “Streaming Audio”). Our hand-crafted weekly episodes feature in-depth interviews with our community of software developers (actual human beings, not AI) talking about some of the most interesting challenges they’ve faced in their careers. We aim to explore the conditions that gave rise to each person’s technical hurdles, as well as how their experiences transformed their understanding of and approach to building systems. Whether you’re a seasoned open source data streaming engineer or just someone who’s interested in learning more about Apache Kafka®, Apache Flink®, and real-time data, we hope you’ll appreciate the stories, the discussion, and our effort to bring you a high-quality show worth your time.

Location:

United States


Language:

English


Episodes

Reimagining Stream Processing with Matthias J. Sax | Ep. 9

11/17/2025
Viktor Gamov talks to Matthias J. Sax (Confluent) about his career in stream processing and, specifically, Kafka Streams. Matthias’s first job: an electrician-in-training on BMW’s assembly lines. His challenge: building Kafka Streams at Confluent with a focus on API design, backward compatibility, and a library-first approach that also fits microservices. Season 2. Hosted by Tim Berglund, Adi Polak, and Viktor Gamov. Produced and edited by Noelle Gallagher, Peter Furia, and Nurie Mohamed. Music by Coastal Kites. Artwork by Phil Vo. Subscribe | Subscribe on YouTube | Life Is But A Stream.

Duration:00:36:42
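Kafka Streams itself is a Java library, but the library-first approach Matthias describes (stream processing running as plain application code rather than on a separate processing cluster) can be sketched in a few lines of Python. This is an illustrative stand-in, not the Kafka Streams API: the event stream is a plain list and the state store is a dict.

```python
from collections import defaultdict

def word_count(events):
    """Library-first stream processing in miniature: the host
    application drives the loop, and the operator keeps local state
    (a running count per word), emitting an updated record for each
    input event, much like a changelog of updates."""
    state = defaultdict(int)  # local state store
    for event in events:
        for word in event.lower().split():
            state[word] += 1
            yield word, state[word]

updates = list(word_count(["hello streams", "hello again"]))
```

Because the processing is just a function call, it deploys wherever the application does, which is the microservices fit discussed in the episode.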


How Time Kills All Deals in Pre-Sales with Rachel Pedreschi | Ep. 8

11/5/2025
Listen: https://confluent.buzzsprout.com | In this episode, Tim Berglund talks to his guest, Rachel Pedreschi (DeltaStream), about her career in pre-sales engineering. Her first job: rectory office assistant at her local parish. Her challenge: working at early-stage startups to bridge sales, marketing, and engineering on the way to product-market fit. Check out Tim and Rachel's previous podcast, Keyboard and Quill: https://youtube.com/playlist?list=PLihIrF0tCXdeJxpAJgbOsY48B9lD_w24v&si=5NjdA-Rss9Rsmyy1

Duration:00:27:40


Scaling AI in Engineering with Peter Bell | Ep. 7

11/3/2025
Listen: https://confluent.buzzsprout.com | Today, Adi Polak talks to her guest, Peter Bell (gather.dev), about his career in software engineering leadership, CTO community building, and AI-driven development. Peter’s first job: electronics lab technician at his school (alongside shifts at Tesco). His challenge: driving AI adoption and change management at scale. Check out gather.dev: https://www.gather.dev/

Duration:00:27:16


How Kafka Expert Robin Moffatt Tackles Open Source Problems | Ep. 6

10/27/2025
Today, Viktor Gamov talks to his colleague Robin Moffatt (Confluent) about his career in data engineering. His first job: paperboy. His challenge: working at a retailer with Oracle materialized views, as well as teaching others how to productively approach Kafka’s internal systems.

Duration:00:24:50


Building Parquet into Apache Pinot ft. Neha Pawar | Ep. 5

10/20/2025
Today, Tim Berglund talks to Neha Pawar (StarTree) about her career in real-time analytics and open source database engineering. Her first job: a year-long internship at NVIDIA. Her challenge: leading the technical effort to add native Parquet support into Apache Pinot.

Duration:00:26:07


The Fix That Secured 1000s of Credit Cards ft. Brian Sletten | Ep. 4

10/13/2025
In this episode, Tim talks to Brian Sletten (Bosatsu Consulting) about his career in software development. His first job: working at a small communications company that built network matrix switch interfaces. His challenge: overhauling credit card storage and security at a major hospitality company.

Duration:00:29:37


How Viktor Gamov Stays Curious as Tech Rapidly Evolves | Ep. 3

10/6/2025
Adi Polak interviews her co-host, Viktor Gamov, about his career’s evolution from distributed systems to streaming technology. Viktor’s first job: apple picking. His challenge: staying curious and non-judgmental in the ever-changing landscape of tech.

Duration:00:30:11


How Tim Berglund Found His Calling | Ep. 2

9/29/2025
Viktor Gamov interviews his co-host, Tim Berglund, about his career in the world of streaming data. Tim’s first job: Burger King broiler steamer. His challenge: pivoting from hardware and firmware to his calling in enterprise software and developer relations.

Duration:00:30:36


Real-time Threat Detection ft. Adi Polak | Ep. 1

9/22/2025
The Confluent Developer Podcast is here! For this first episode, Tim Berglund talks to his co-host, Adi Polak (Confluent), about her career in distributed data systems. Her first job: neighborhood dog walker. Her challenge: working at Akamai with Hadoop on data optimization and real-time threat detection, and the power of collaboration.

Duration:00:32:53


We're back! Welcome to the Confluent Developer Podcast.

9/2/2025
Hi, I'm Tim Berglund. It's been about four years since I last podcasted at Confluent, and "Streaming Audio" has been on hiatus for a little more than two, but I've got great news: we are back! We're back with a new name, a new format, and new hosts. Welcome to the Confluent Developer Podcast, where we talk to software developers of all stripes about some of the most interesting problems they've solved in their careers. I'll be joined by my co-hosts, Adi Polak and Viktor Gamov. And hey, you know, we're all basically Kafka people, so of course we're going to gravitate toward experts in data streaming and the technologies relevant in that space. But you know what? We're not limited to that. Really, we want to talk to developers of all kinds about the toughest problems they've solved and how that process changed them and changed the environment around them. So join us. We're launching in September with weekly episodes on the Confluent Developer YouTube channel or wherever it is you get your podcasts. We'll see you soon.

Duration:00:01:20


Apache Kafka 3.5 - Kafka Core, Connect, Streams, & Client Updates

6/15/2023
Apache Kafka® 3.5 is here, with the ability to preview migrations from ZooKeeper to KRaft mode. Follow along as Danica Fine highlights key release updates across Kafka Core, Kafka Connect, Kafka Streams, and the Kafka clients. Episode links: release notes; blog; download; get started; watch the video.

Duration:00:11:25


A Special Announcement from Streaming Audio

4/13/2023
After recording 64 episodes and featuring 58 amazing guests, the Streaming Audio podcast series has amassed over 130,000 plays on YouTube in the last year. We're extremely proud of these achievements and feel that it's time to take a well-deserved break. Streaming Audio will be taking a vacation! We want to express our gratitude to you, our valued listeners, for spending 10,000 hours with us on this incredible journey. Rest assured, we will be back with more episodes! In the meantime, feel free to revisit some of our previous episodes. For instance, you can listen to Anna McDonald share her stories about the worst Apache Kafka® bugs she’s ever seen, or listen to Jun Rao offer his expert advice on running Kafka in production. And who could forget the charming backstory behind Mitch Seymour's Kafka storybook, Gently Down the Stream? These memorable episodes brought us joy, and we're thrilled to have shared them with you. As we reflect on our accomplishments with pride, we also look forward to an exciting future. Until we meet again, happy listening! Episode links: Top 6 Worst Apache Kafka JIRA Bugs; Running Apache Kafka in Production; Learn How Stream-Processing Works The Simplest Way Possible; watch the video version of this podcast; Streaming Audio playlist. Join the Confluent Community; learn more with Kafka tutorials, resources, and guides at Confluent Developer; live demo: Intro to Event-Driven Microservices with Confluent; use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details).

Duration:00:01:18


How to use Data Contracts for Long-Term Schema Management

3/21/2023
Have you ever struggled with managing data long term, especially as the schema changes over time? In order to manage and leverage data across an organization, it’s essential to have well-defined guidelines and standards in place around data quality, enforcement, and data transfer. To get started, Abraham Leal (Customer Success Technical Architect, Confluent) suggests that organizations associate their Apache Kafka® data with a data contract (schema). A data contract is an agreement between a service provider and data consumers. It defines the management and intended usage of data within an organization. In this episode, Abraham talks to Kris about how to use data contracts and schema enforcement to ensure long-term data management. When an organization sends and stores critical and valuable data in Kafka, more often than not it would like to leverage that data in various valuable ways for multiple business units. Kafka is particularly suited for this use case, but it can be problematic later on if the governance rules aren’t established up front. With Schema Registry, evolution is easy thanks to its robust compatibility guarantees. When managing data pipelines, you can also use GitOps automation features for an extra control layer. It allows you to be creative with topic versioning, upcasting/downcasting the data collected, and adding quality assurance steps at the end of each run to ensure your project remains reliable. Abraham explains that Protobuf and Avro are better formats to use than XML or JSON because they are built to handle schema evolution. In addition, they have much lower per-record overhead, so you can save bandwidth and data storage costs by adopting them. There’s so much more to consider, but if you are thinking about implementing this or integrating with your data quality team, Abraham suggests that you use Schema Registry heavily from the beginning. If you have more questions, Kris invites you to join the conversation.
You can also watch the KOR Financial Current talk Abraham mentions or take Danica Fine’s free course on how to use Schema Registry on Confluent Developer. Episode links: OS project; KOR Financial Current Talk; The Key Concepts of Schema Registry; Schema Evolution and Compatibility; Schema Registry Made Simple by Confluent Cloud ft. Magesh Nandakumar; Kris Jenkins’ Twitter; watch the video version of this podcast; Streaming Audio playlist. Join the Confluent Community; learn more with Kafka tutorials, resources, and guides at Confluent Developer; live demo: Intro to Event-Driven Microservices with Confluent; use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details).

Duration:00:57:28
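The backward-compatibility rule at the heart of schema enforcement can be sketched in a few lines. This is a simplified stand-in, not Schema Registry’s actual checker: schemas here are plain dicts, a toy model of the Avro-style rules that newly added fields need defaults and existing fields keep their types.

```python
def is_backward_compatible(new_schema, old_schema):
    """Toy backward-compatibility check: a consumer using new_schema
    must still be able to read records written with old_schema.
    Each schema maps field name -> {"type": ..., "default": optional}."""
    for name, spec in new_schema.items():
        if name not in old_schema:
            # Added fields must carry a default, or old records can't be decoded.
            if "default" not in spec:
                return False
        elif spec["type"] != old_schema[name]["type"]:
            # Changing a field's type breaks decoding of old records.
            return False
    return True

v1 = {"id": {"type": "long"}, "email": {"type": "string"}}
v2 = {"id": {"type": "long"},
      "email": {"type": "string"},
      "plan": {"type": "string", "default": "free"}}   # safe addition
v3 = {"id": {"type": "string"}, "email": {"type": "string"}}  # type change
```

With this rule, registering v2 after v1 would be allowed, while v3 would be rejected, which is exactly the kind of guardrail the episode recommends putting in place from the beginning.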


How to use Python with Apache Kafka

3/14/2023
Can you use Apache Kafka® and Python together? What’s the current state of Python support? And what are the best options to get started? In this episode, Dave Klein joins Kris to talk about all things Kafka and Python: the libraries, the tools, and the pros and cons. He also talks about the new course he just launched to support Python programmers entering the event-streaming world. Dave has been an active member of the Kafka community for many years and noticed that there were a lot of Kafka resources for Java but few for Python. So he decided to create a course to help people get started using Python and Kafka together. Historically, Java has had the most documentation, and people have often missed how good the Python support is for Kafka users. Python and Kafka are an ideal fit for machine learning applications and data engineering in general, and there are plenty of use cases for building streaming and machine learning pipelines. In fact, a survey of the most popular languages in the Kafka community put Python second after Java. That’s how Dave got the idea to create a course for newcomers. In this course, Dave combines video lectures with code-heavy exercises to give developers a feel for what the code looks like and how to structure it, from the shape of the classes and functions through hands-on practice using the library. He also covers building a producer and a consumer and using the admin client. And, of course, there is a module that covers working with the schemas supported by the Kafka library. Dave explains that Python opens up a world of opportunity and is ripe for expansion. So if you are ready to dive in, head over to developer.confluent.io to learn more about Dave’s course.
Episode links: Getting Started with Python for Apache Kafka; Introduction to Apache Kafka for Python Developers; Building a Python client application for Kafka; Coding in Motion; Building and Designing Events and Event Streams with Apache Kafka; watch the video version of this podcast; Kris Jenkins’ Twitter; Streaming Audio playlist. Join the Confluent Community; learn more with Kafka tutorials, resources, and guides at Confluent Developer; live demo: Intro to Event-Driven Microservices with Confluent; use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details).

Duration:00:31:57
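A defining trait of the Python client covered in courses like Dave’s (confluent-kafka) is that produce() is asynchronous: it only enqueues the record, and delivery callbacks fire when poll() or flush() services the queue. The class below is an invented in-memory stand-in that mimics that lifecycle so the pattern can be seen without a broker; it is not the real client.

```python
class MockProducer:
    """In-memory stand-in mimicking an asynchronous producer lifecycle:
    produce() only enqueues; delivery callbacks fire when poll()
    (or flush()) services the pending queue."""
    def __init__(self):
        self._pending = []
        self.delivered = []

    def produce(self, topic, value, callback=None):
        self._pending.append((topic, value, callback))  # non-blocking enqueue

    def poll(self, timeout=0):
        # Service pending deliveries and fire their callbacks.
        while self._pending:
            topic, value, cb = self._pending.pop(0)
            self.delivered.append((topic, value))
            if cb:
                cb(None, (topic, value))  # (err, msg); err is None on success

    def flush(self):
        self.poll()  # block until every queued message is delivered

acked = []
p = MockProducer()
for word in ["one", "two"]:
    p.produce("demo-topic", word, callback=lambda err, msg: acked.append(msg))
p.flush()
```

The practical takeaway is the same as with the real client: forgetting to call poll() or flush() means your delivery callbacks never run.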


Next-Gen Data Modeling, Integrity, and Governance with YODA

3/7/2023
In this episode, Kris interviews Doron Porat, Director of Infrastructure at Yotpo, and Liran Yogev, Director of Engineering at ZipRecruiter (formerly at Yotpo), about their experiences and strategies in dealing with data modeling at scale. Yotpo has a vast and active data lake, comprising thousands of datasets that are processed by different engines, primarily Apache Spark™. They wanted to provide users with self-service tools for generating and utilizing data with maximum flexibility, but encountered difficulties, including poor standardization, low data reusability, limited data lineage, and unreliable datasets. The team realized that Yotpo's modeling layer, which defines the structure and relationships of the data, needed to be separated from the execution layer, which defines and processes operations on the data. This separation would give programmers better visibility into data pipelines across all execution engines, storage methods, and formats, as well as more governance control for exploration and automation. To address these issues, they developed YODA, an internal tool that combines excellent developer experience, dbt, Databricks, Airflow, Looker, and more, with a strong CI/CD and orchestration layer. Yotpo is a B2B SaaS e-commerce marketing platform that provides businesses with the necessary tools for accurate customer analytics, remarketing, support messaging, and more. ZipRecruiter is a job site that utilizes AI matching to help businesses find the right candidates for their open roles. Episode links: Next Gen Data Modeling in the Open Data Platform; Data Mesh 101; Data Mesh Architecture: A Modern Distributed Data Model; watch the video version of this podcast; Kris Jenkins’ Twitter; Streaming Audio playlist. Join the Confluent Community; learn more with Kafka tutorials, resources, and guides at Confluent Developer; live demo: Intro to Event-Driven Microservices with Confluent; use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details).

Duration:00:55:55


Migrate Your Kafka Cluster with Minimal Downtime

3/1/2023
Migrating Apache Kafka® clusters can be challenging, especially when moving large amounts of data while minimizing downtime. Michael Dunn (Solutions Architect, Confluent) has worked in the data space for many years, designing and managing systems to support high-volume applications. He has helped many organizations strategize, design, and implement successful Kafka cluster migrations between different environments. In this episode, Michael shares some tips about Kafka cluster migration with Kris, including the pros and cons of the different tools he recommends. Michael explains that there are many reasons why companies migrate their Kafka clusters. For example, they may want to modernize their platforms, move to a self-hosted cloud server, or consolidate clusters. He tells Kris that creating a plan and selecting the right tool before getting started is critical for reducing downtime and minimizing migration risks. The good news is that a few tools can facilitate moving large amounts of data, topics, schemas, applications, connectors, and everything else from one Apache Kafka cluster to another. Kafka MirrorMaker/MirrorMaker2 (MM2) is a stand-alone tool for copying data between two Kafka clusters. It uses source and sink connectors to replicate topics from a source cluster into the destination cluster. Confluent Replicator allows you to replicate data from one Kafka cluster to another. Replicator is similar to MM2, but the difference is that it’s been battle-tested. Cluster Linking is a powerful tool offered by Confluent that allows you to mirror topics from an Apache Kafka 2.4/Confluent Platform 5.4 source cluster to a Confluent Platform 7+ cluster in a read-only state, and is available as a fully-managed service in Confluent Cloud. At the end of the day, Michael stresses that coupled with a well-thought-out strategy and the right tool, Kafka cluster migration can be relatively painless. 
Following his advice, you should be able to keep your system healthy and stable before and after the migration is complete. Episode links: MirrorMaker 2; Replicator; Cluster Linking; Schema Migration; Multi-Cluster Apache Kafka with Cluster Linking; watch the video version of this podcast; Kris Jenkins’ Twitter; Streaming Audio playlist. Join the Confluent Community; learn more with Kafka tutorials, resources, and guides at Confluent Developer; live demo: Intro to Event-Driven Microservices with Confluent; use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details).

Duration:01:01:30
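One reason cluster migrations are tricky is that offsets differ between clusters, so consumers cannot simply resume at their old positions. The sketch below is a toy model (lists standing in for topics, a dict for the checkpoint) of the replicate-and-translate bookkeeping that tools like MirrorMaker 2 handle via checkpoints; it is not any tool’s real API.

```python
def mirror(source, destination, offset_map):
    """Toy replication pass: copy records from a source topic
    (a list of (offset, value) pairs) into the destination topic
    and record the source -> destination offset translation, so
    consumers can resume at the right position after cutover."""
    for src_offset, value in source:
        if src_offset in offset_map:
            continue  # already mirrored in an earlier pass
        dst_offset = len(destination)
        destination.append(value)
        offset_map[src_offset] = dst_offset

src = [(0, "a"), (1, "b"), (2, "c")]
dst, offsets = [], {}
mirror(src, dst, offsets)
mirror(src, dst, offsets)  # a repeat pass is a no-op: replication stays idempotent
```

The second call illustrates why the checkpoint matters: without it, re-running the replicator would duplicate every record in the destination.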


Real-Time Data Transformation and Analytics with dbt Labs

2/22/2023
dbt is known as part of the Modern Data Stack for ELT processes. Being in the MDS, dbt Labs believes in having the best of breed for every part of the stack. Oftentimes folks are using an EL tool like Fivetran to pull data from the database into the warehouse, then using dbt to manage the transformations in the warehouse. Analysts can then build dashboards on top of that data, or execute tests. It’s possible for an analyst to adapt this process for use with a microservice application using Apache Kafka® and the same method to pull batch data out of each and every database; however, in this episode, Amy Chen (Partner Engineering Manager, dbt Labs) tells Kris about a better way forward for analysts willing to adopt the streaming mindset: reusable pipelines using dbt models that immediately pull events into the warehouse and materialize as materialized views by default. dbt Labs is the company that makes and maintains dbt. dbt Core is the open-source data transformation framework that allows data teams to operate with software engineering’s best practices. dbt Cloud is the fastest and most reliable way to deploy dbt. Inside the world of event streaming, there is a push to expand data access beyond the programmers writing the code, and toward everyone involved in the business. Over at dbt Labs they’re attempting something of the reverse: getting data analysts to adopt the best practices of software engineers, and more recently, of streaming programmers. They’re improving the process of building data pipelines while empowering businesses to bring more contributors into the analytics process, with an easy-to-deploy, easy-to-maintain platform. It offers version control to analysts who traditionally don’t have access to Git, along with the ability to easily automate testing, all in the same place.
Episode links: dbt Labs; An Analytics Engineer’s Guide to Streaming; If Streaming Is the Answer, Why Are We Still Doing Batch?; All Current 2022 sessions and slides; watch the video version of this podcast; Kris Jenkins’ Twitter; Streaming Audio playlist. Join the Confluent Community; learn more with Kafka tutorials, resources, and guides at Confluent Developer; live demo: Intro to Event-Driven Microservices with Confluent; use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details).

Duration:00:43:41
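The streaming-friendly idea behind the dbt models discussed here, doing work proportional to new data rather than rescanning everything, can be illustrated with a toy incremental refresh: only rows past a stored cursor are transformed on each run. The function and field names below are invented for illustration and are not dbt’s API.

```python
def refresh_incremental(events, view, cursor):
    """Toy incremental model: transform and fold in only the rows
    newer than the stored cursor, so each refresh touches just the
    new data. Events are (timestamp, amount) pairs; 'view' keeps a
    running revenue total."""
    for ts, amt in events:
        if ts > cursor:  # skip rows already materialized
            view["revenue"] = view.get("revenue", 0) + amt
            cursor = max(cursor, ts)
    return cursor

view = {}
cursor = 0
cursor = refresh_incremental([(1, 10), (2, 5)], view, cursor)
cursor = refresh_incremental([(1, 10), (2, 5), (3, 7)], view, cursor)  # only ts=3 is new
```

The second refresh sees the full source again but only folds in the one new row, which is the cost model that makes near-real-time refreshes practical.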


What is the Future of Streaming Data?

2/15/2023
What’s the next big thing in the future of streaming data? In this episode, Greg DeMichillie (VP of Product and Solutions Marketing, Confluent) talks to Kris about the future of stream processing in environments where the value of data lies in the ability to intercept and interpret it. Greg explains that organizations typically focus on the infrastructure containers themselves, and not on the thousands of data connections that form within. When they finally realize that they don't have a way to manage the complexity of these connections, a new problem arises: how do they approach managing such complexity? That’s where Confluent and Apache Kafka® come into play: they offer a consistent way to organize this seemingly endless web of data, so organizations don't have to face the daunting task of figuring out how to connect their shopping portals or jump through hoops trying different ETL tools on various systems. As more companies seek ways to manage this data, they are asking some basic questions. The next question for companies who have already adopted Kafka is a bit more complex: "What about my partners?” For example, companies with inventory management systems use supply chain systems to track product creation and shipping. As a result, they need to decide which emails to update, whether they need to write custom REST APIs to sit in front of Kafka topics, and so on. Advanced use cases like this raise additional questions about data governance, security, data policy, and PII, forcing companies to think differently about data. Greg predicts this is the next big frontier as more companies adopt Kafka internally. And because they will have to think less about where the data is stored and more about how data moves, they will have to solve problems that make managing all that data easier.
If you're an enthusiast of real-time data streaming, Greg invites you to attend Kafka Summit (London) in May and Current (Austin, TX) for a deeper dive into the world of Apache Kafka-related topics now and beyond. Episode links: What’s Ahead of the Future of Data Streaming?; If Streaming Is the Answer, Why Are We Still Doing Batch?; All Current 2022 sessions and slides; Kafka Summit London 2023; Current 2023; watch the video version of this podcast; Kris Jenkins’ Twitter; Streaming Audio playlist. Join the Confluent Community; learn more with Kafka tutorials, resources, and guides at Confluent Developer; live demo: Intro to Event-Driven Microservices with Confluent; use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details).

Duration:00:41:29


What can Apache Kafka Developers learn from Online Gaming?

2/8/2023
What can online gaming teach us about making large-scale event management more collaborative in real time? Ben Gamble (Developer Relations Manager, Aiven) has come to the world of real-time event streaming from an unusual source: the video games industry. And if you stop to think about it, modern online games are complex, distributed real-time data systems with decades of innovative techniques to teach us. In this episode, Ben talks with Kris about integrating gaming concepts with Apache Kafka®. Using Kafka’s state management and stream processing, Ben has built systems that can handle real-time event processing at massive scale, including interesting approaches to conflict resolution and collaboration. Building latency into a system is one way to mask data processing time. Ben says that you can efficiently hide latency issues and prioritize performance improvements by setting an initial target and then optimizing from there. If you measure before optimizing, you can add an extra layer to manage user expectations better. Tricks like adding a visual progress bar give the appearance of progress but actually hide latency and improve the overall user experience. To effectively handle challenging activities, like resolving conflicts and atomic edits, Ben suggests “slicing” (or nano-batching) to break down tasks into small, related chunks. Slicing allows each task to be evaluated separately, thus producing timely outcomes that resolve potential background conflicts without the user knowing. Ben also explains how he uses pooling to make collaboration seamless. Pooling is a process that links open requests with potential matches. Similar to booking seats on an airplane, seats are assigned when requests are made. As these types of connections are handled through a Kafka event stream, the initial open requests are eventually fulfilled when seats become available. According to Ben, real-world tools that facilitate collaboration (such as Google Docs and Slack) work similarly.
Just like multiplayer gaming systems, multiple users can comment or chat in real time, and users perceive instant responses because of the techniques ported over from the gaming world. As Ben sees it, the proliferation of these types of concepts across disciplines will also benefit a more significant number of collaborative systems. Despite being long established for gamers, these patterns can be implemented in more business applications to significantly improve the user experience. Episode links: Going Multiplayer With Kafka; Building a Dependable Real-Time Betting App with Confluent Cloud and Ably; Event Streaming Patterns; watch the video version of this podcast; Kris Jenkins’ Twitter; Streaming Audio playlist. Join the Confluent Community; learn more with Kafka tutorials, resources, and guides at Confluent Developer; live demo: Intro to Event-Driven Microservices with Confluent; use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details).

Duration:00:55:32
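The slicing (nano-batching) technique Ben describes can be sketched as splitting an edit stream into small batches and resolving each batch independently, with later writes to the same key winning. A minimal sketch with invented names, not a real gaming or Kafka API:

```python
def slice_edits(edits, max_batch=2):
    """Nano-batching ("slicing"): group a stream of (key, value) edits
    into small batches so each slice can be evaluated on its own, then
    resolve conflicts last-write-wins across slices in order."""
    batch, slices = [], []
    for edit in edits:
        batch.append(edit)
        if len(batch) == max_batch:  # slice is full; seal it
            slices.append(batch)
            batch = []
    if batch:
        slices.append(batch)  # trailing partial slice
    # Resolve: apply slices in order, later edits to a key overwrite earlier ones.
    state = {}
    for s in slices:
        for key, value in s:
            state[key] = value
    return slices, state

slices, state = slice_edits([("a", 1), ("b", 2), ("a", 3)])
```

Each slice is small enough to resolve quickly, so users see timely results while conflicting edits (here, the two writes to "a") reconcile in the background.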


Apache Kafka 3.4 - New Features & Improvements

2/7/2023
Apache Kafka® 3.4 is released! In this special episode, Danica Fine (Senior Developer Advocate, Confluent) shares highlights of the Apache Kafka 3.4 release. This release introduces new KIPs in Kafka Core, Kafka Streams, and Kafka Connect. In Kafka Core: KIP-792, KIP-830, KIP-854, KIP-866 (early access), KIP-876, and KIP-881. In Kafka Streams: KIP-770 and KIP-837. And finally, for Kafka Connect: KIP-787. Tune in to learn more about the Apache Kafka 3.4 release! Episode links: release notes for Apache Kafka 3.4; read the blog to learn more; download Apache Kafka 3.4; get started; watch the video version of this podcast; join the community.

Duration:00:05:13