
Learning Bayesian Statistics

Technology Podcasts


Location:

France

Description:

Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date or simply want to understand what Bayesian inference is? Then this podcast is for you! You'll hear from researchers and practitioners of all fields about how they use Bayesian statistics, and how in turn YOU can apply these methods in your modeling workflow. When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible. So I created "Learning Bayesian Statistics", where you'll get to hear how Bayesian statistics are used to detect dark matter in outer space, forecast elections or understand how diseases spread and can ultimately be stopped. But this show is not only about successes -- it's also about failures, because that's how we learn best. So you'll often hear the guests talking about what *didn't* work in their projects, why, and how they overcame these challenges. Because, in the end, we're all lifelong learners! My name is Alex Andorra by the way. By day, I'm a senior data scientist. By night, I don't (yet) fight crime, but I'm an open-source enthusiast and core contributor to the Python packages PyMC and ArviZ. I also love Nutella, but I don't like talking about it – I prefer eating it. So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you -- just subscribe! You can also support the show and unlock exclusive Bayesian swag on Patreon!

Language:

English

Contact:

+33660447360


Episodes

#155 Probabilistic Programming for the Real World, with Andreas Munk

4/8/2026
Support & Resources:
→ Support the show on Patreon
→ Bayesian Modeling Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

Takeaways:

Q: Why is bridging deep learning and probabilistic programming so important?
A: Deep learning is extraordinarily good at fitting complex functions, but it throws away uncertainty. Probabilistic programming keeps uncertainty explicit throughout. Combining the two – as in inference compilation – lets you get the expressiveness of neural networks while still doing proper Bayesian inference.

Q: What is inference compilation and how does it relate to amortized inference?
A: Amortized inference is the general idea of training a model upfront so you don't have to run expensive inference from scratch every single time. Inference compilation is a specific form of amortized inference where a neural network is trained to propose good posterior samples for a given probabilistic program – essentially learning to do inference rather than computing it fresh each query.

Q: What is PyProb and what problems does it solve?
A: PyProb is a probabilistic programming library designed specifically to support amortized inference workflows. It lets you write probabilistic models in Python and then train inference networks on top of them, making methods like inference compilation practical for real-world simulators and scientific models.

Q: What are probabilistic surrogate networks and why do they matter?
A: A probabilistic surrogate network is a learned approximation of a complex, expensive simulator that preserves uncertainty. Instead of running a costly simulation thousands of times, you train a surrogate that can answer probabilistic queries much faster – crucial for applications like risk modeling where speed and uncertainty quantification both matter.
Chapters: 00:00:00 Introduction to Bayesian Inference and Its Barriers 00:03:51 Andreas Munk's Journey into Statistics 00:10:09 Bridging the Gap: Bayesian Inference in Real-World Applications 00:15:56 Deep Learning Meets Probabilistic Programming 00:22:05 Understanding Inference Compilation and Amortized Inference 00:28:14 Exploring PyProb: A Tool for Amortized Inference 00:33:55 Probabilistic Surrogate Networks and Their Applications 00:38:10 Building Surrogate Models for Probabilistic Programming 00:45:44 The Challenge of Bayesian Inference in Enterprises 00:52:57 Communicating Uncertainty to Stakeholders 01:01:09 Democratizing Bayesian Inference with Evara 01:06:27 Insurance Pricing and Latent Variables 01:16:41 Modeling Uncertainty in Predictions 01:20:29 Dynamic Inference and Decision-Making 01:23:17 Updating Models with Actual Data 01:26:11 The Future of Bayesian Sampling in Excel 01:31:54 Navigating Business Challenges and Growth 01:36:40 Exploring Language Models and Their Applications 01:38:35 The Quest for Better Inference Algorithms 01:41:01 Dinner with Great Minds: A Thought Experiment Thank you to my Patrons for making this episode possible!
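The amortized-inference idea in the takeaways can be illustrated with a toy sketch. This is not PyProb's actual API; it is a NumPy stand-in where the "inference network" is just a least-squares regression trained on simulated (parameter, data-summary) pairs from a conjugate normal model, so its output can be checked against the exact posterior mean. All numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: theta ~ N(0, 1), x_j | theta ~ N(theta, 1), n = 5 observations.
# The exact posterior mean given the sample mean xbar is n * xbar / (n + 1).
n = 5

# "Compilation" phase: simulate (theta, data summary) pairs from the joint...
thetas = rng.normal(0.0, 1.0, size=20_000)
xbars = rng.normal(thetas, 1.0 / np.sqrt(n))   # sample mean of n observations

# ...and train a tiny "inference network" (here: linear least squares)
# to predict theta from the data summary.
A = np.column_stack([xbars, np.ones_like(xbars)])
coef, *_ = np.linalg.lstsq(A, thetas, rcond=None)

# Amortized phase: for any new dataset, inference is a cheap forward pass
# instead of a fresh run of an expensive sampler.
xbar_new = 1.2
amortized_mean = coef[0] * xbar_new + coef[1]
exact_mean = n * xbar_new / (n + 1)
print(amortized_mean, exact_mean)   # the two agree closely
```

In real inference compilation the regression is replaced by a neural network that outputs full proposal distributions, but the economics are the same: pay the training cost once, then answer each new query almost for free.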

Duration:01:54:07


Bitesize | "What Would Have Happened?" - Bayesian Synthetic Control Explained

4/2/2026
Today's clip is from Episode 154 of the podcast, with Thomas Pinder. In this conversation, Thomas Pinder explains how Bayesian methods naturally lend themselves to causal modeling, and why that matters for real-world business decisions. The key insight is that causal questions in industry are rarely black and white: instead of a single treatment effect, you get a full posterior distribution, credible intervals, and the ability to communicate the probability that an effect is positive, which is far more useful to stakeholders than a p-value. Thomas then dives into Bayesian Synthetic Control, a reframing of the classic synthetic control method from a constrained optimization problem into a Bayesian regression problem. Rather than optimizing weights on a simplex, you place a Dirichlet prior on the regression coefficients, which turns out to be not just mathematically elegant but practically richer: you can express prior beliefs about how many control units are informative, set the concentration parameter accordingly, or place a gamma hyperprior on that parameter and let the data decide. The result is a more flexible, less fragile counterfactual, implemented cleanly in PyMC or NumPyro. Get the full discussion here. Support & Resources → Support the show on Patreon: https://www.patreon.com/c/learnbayesstats → Bayesian Modeling Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122 Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/!
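As a rough illustration of the reframing Thomas describes (not his code, and using plain importance sampling in NumPy rather than a PyMC/NumPyro model with NUTS), here is a toy Bayesian synthetic control with a Dirichlet prior on the unit weights; all data and parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up example: 30 pre-treatment periods, 3 control units, and a treated
# unit that is (by construction) a 0.6/0.3/0.1 mix of the controls plus noise.
T_pre, sigma = 30, 0.3
controls = rng.normal(0.0, 1.0, size=(T_pre, 3)) + np.array([5.0, 3.0, 1.0])
true_w = np.array([0.6, 0.3, 0.1])
treated = controls @ true_w + rng.normal(0.0, sigma, size=T_pre)

# Synthetic control as Bayesian regression: a Dirichlet prior keeps the
# weights on the simplex, replacing the classic constrained optimization.
# Inference here is simple importance sampling from the prior; a real
# implementation would use NUTS instead.
n_draws = 50_000
w_draws = rng.dirichlet(np.ones(3), size=n_draws)      # flat Dirichlet prior
resid = treated[None, :] - w_draws @ controls.T        # (n_draws, T_pre)
log_lik = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
imp = np.exp(log_lik - log_lik.max())
imp /= imp.sum()

post_mean_w = imp @ w_draws
print(post_mean_w)   # posterior mean weights, close to true_w
```

Because every posterior draw of the weights lies on the simplex, the implied counterfactual is itself a distribution, which is what gives you credible intervals on the treatment effect instead of a single number.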

Duration:00:05:23


#154 Bayesian Causal Inference at Scale, with Thomas Pinder

3/25/2026
• Support & get perks!
• Bayesian Modeling course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

Takeaways:

Q: Why was GPJax created and how does it benefit researchers?
A: GPJax was developed to provide a high-performance, flexible framework for Gaussian processes (GPs) within the JAX ecosystem. It allows researchers to move beyond black-box implementations and easily experiment with custom kernels and model structures while leveraging JAX’s automatic differentiation and GPU acceleration.

Q: What are the primary advantages of using Gaussian processes for data modeling?
A: Gaussian processes are highly effective at modeling complex, nonlinear relationships in data. Unlike many machine learning methods that only provide a point estimate, GPs offer built-in uncertainty quantification, which is essential for understanding the reliability of predictions in research and industry.

Q: How does the GPJax and NumPyro integration enhance probabilistic modeling?
A: The integration allows users to treat GPJax models as components within a larger NumPyro probabilistic program. This combination enables the use of advanced sampling techniques like NUTS (No-U-Turn Sampler), making it easier to build and fit complex hierarchical models that include Gaussian processes.

Q: What are the main challenges when applying Gaussian processes to high-dimensional data?
A: High-dimensional data significantly complicates GP modeling due to the curse of dimensionality and the cubic scaling of computational costs. In high dimensions, defining meaningful distance metrics for kernels becomes harder, often requiring specialized techniques like sparse GPs or dimensionality reduction to remain tractable.

Full Takeaways at: COMING UP SOON

Chapters: 11:40 What is GPJax and how does it simplify Gaussian Process modeling? 15:48 How are Bayesian methods used for experimentation and causal inference in industry?
18:40 How do you implement Bayesian Synthetic Control? 32:17 What is Bayesian Synthetic Difference-in-Differences? 39:44 What are the research applications and supported methods for the GPJax library? 45:47 What are the primary software and computational bottlenecks when scaling Gaussian Processes? 49:02 What are the real-world industrial applications of Gaussian Process models? 54:36 How is Bayesian modeling applied to soccer and sports analytics? 58:43 What is the future development roadmap for the GPJax ecosystem? 01:05:37 What is Impulso and how does it integrate into a Bayesian modeling workflow? 01:13:42 How do you balance Bayesian computational overhead with industrial latency requirements? 01:20:26 Why is there optimism that scalable Bayesian methods for causal inference are now within reach? Thank you to my Patrons for making this episode possible! Links from the show at: COMING UP SOON
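The uncertainty-quantification point in the takeaways can be made concrete with a minimal GP regression sketch. This is plain NumPy rather than GPJax, and the kernel, data, and test points are invented for illustration: near the data the posterior variance shrinks, far from the data the GP reverts to its prior.

```python
import numpy as np

# An RBF kernel and the exact GP posterior mean/variance on a 1D toy problem.
def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(2)
X = np.linspace(0.0, 2.0 * np.pi, 15)          # training inputs
y = np.sin(X) + rng.normal(0.0, 0.1, size=X.shape)
noise = 0.1 ** 2

Xs = np.array([np.pi / 2, 15.0])               # one point near data, one far away
K = rbf(X, X) + noise * np.eye(len(X))
Ks = rbf(X, Xs)

L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks.T @ alpha                            # posterior mean
v = np.linalg.solve(L, Ks)
var = np.diag(rbf(Xs, Xs) - v.T @ v)           # posterior variance

# Near the data the GP recovers sin(pi/2) = 1 with small variance;
# far away the mean reverts to 0 and the variance back to the prior's 1.
print(mean, var)
```

The `O(n^3)` Cholesky factorization here is exactly the cubic scaling bottleneck mentioned in the takeaways, which is what sparse GP approximations attack.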

Duration:01:26:18


#153 The Neuroscience of Philanthropy, with Cherian Koshy

3/11/2026
• Support & get perks!
• Bayesian Modeling course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

Takeaways:

Q: Is generosity a natural human trait?
A: Yes, generosity is hardwired in our brains and is essential for social interaction.

Q: Why do people say they care about causes but not act on it?
A: There is often a disconnect between stated care for causes and actual action. Understanding the conditions under which generosity aligns with a person's identity is crucial for bridging this gap.

Q: How should fundraising efforts be approached?
A: Fundraising should primarily focus on belief updating rather than mere persuasion.

Q: What are the benefits of being generous?
A: Generosity has significant mental and physical health benefits, as the brain's reward systems activate when we give, making us feel good.

Q: How do our beliefs relate to our actions?
A: Our beliefs about ourselves strongly influence our actions and decisions, including our decision to be generous.

Q: Can generosity impact a community?
A: Yes, generosity can be a powerful tool for improving community dynamics.

Q: How can technology like AI assist institutions with donors?
A: AI could help institutions remember donors better, improving the donor-institution relationship.

Chapters: 00:00 What's the role of Behavioral Science in Philanthropy? 19:57 What is The Neuroscience of Generosity? 24:40 How can we best understand Donor Decision-Making? 32:14 How can we reframe Beliefs and Actions? 35:39 What is the role of Identity in Habit Formation? 38:06 What is the Generosity Gap in Philanthropy? 45:06 How can we reduce Friction in Donation Processes? 48:27 What is the role of AI and Trust in Nonprofits? 52:11 How can we build Predictive Models for Donor Behavior? 55:41 What is the role of Empathy in Sales and Stakeholder Engagement? 01:00:46 How can we best align ideas with Stakeholder Beliefs?
01:02:06 How can we explore Generosity and Memory? Thank you to my Patrons for making this episode possible! Links from the show: https://www.fieldofplay.co.uk/; Bayesian workflow agent skill; Neurogiving website; press kit; Unlocking the Science of Exercise, Nutrition & Weight Management

Duration:01:09:12


Bitesize | How To Model Risk Aversion In Pricing?

3/4/2026
Today's clip is from Episode 152 of the podcast, with Daniel Saunders. In this conversation, Daniel Saunders explains how to incorporate risk aversion into Bayesian price optimization. The key insight is that uncertainty around expected profit is asymmetric across price points: low prices yield more predictable (if modest) returns, while high prices introduce much wider uncertainty. Rather than simply maximizing expected profit, you can pass profit through an exponential utility function that models diminishing returns, a well-established idea from economics. This adds an adjustable risk aversion parameter to the optimization: as risk aversion increases, the model shifts toward more conservative price recommendations, trading off potentially large but uncertain gains for outcomes with tighter, more reliable distributions. Get the full discussion here • Join this channel to get access to perks: https://www.patreon.com/c/learnbayesstats • Intro to Bayes Course (first 2 lessons free): https://topmate.io/alex_andorra/503302 • Advanced Regression Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122 Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/!
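The mechanism Daniel describes can be sketched in a few lines. The posterior profit distributions below are hypothetical stand-ins, not his model, but they show how an exponential-utility recommendation shifts toward conservative prices as the risk-aversion parameter grows:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical posterior profit draws at three candidate prices:
# higher prices mean higher expected profit but much wider uncertainty.
profits = {
    "low":  rng.normal(10.0, 1.0,  size=100_000),
    "mid":  rng.normal(12.0, 4.0,  size=100_000),
    "high": rng.normal(14.0, 10.0, size=100_000),
}

def expected_utility(draws, a):
    """Exponential utility with risk aversion a; a -> 0 recovers the plain mean."""
    if a == 0:
        return draws.mean()
    return np.mean((1.0 - np.exp(-a * draws)) / a)

# As risk aversion grows, the recommended price drops:
# a = 0 picks "high", moderate a picks "mid", strong a picks "low".
choice = {a: max(profits, key=lambda p: expected_utility(profits[p], a))
          for a in (0.0, 0.1, 0.5)}
print(choice)
```

For Gaussian profit this reproduces the familiar mean-variance trade-off (the certainty equivalent is roughly mean minus a/2 times the variance), which is why the wide "high" option loses as soon as risk aversion is switched on.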

Duration:00:03:34


#152 A Bayesian decision theory workflow, with Daniel Saunders

2/26/2026
• Support & get perks! • Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com • Intro to Bayes and Advanced Regression courses (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Chapters: 00:00 The Importance of Decision-Making in Data Science 06:41 From Philosophy to Bayesian Statistics 14:57 The Role of Soft Skills in Data Science 18:19 Understanding Decision Theory Workflows 22:43 Shifting Focus from Accuracy to Business Value 26:23 Leveraging PyTensor for Optimization 34:27 Applying Optimal Decision-Making in Industry 40:06 Understanding Utility Functions in Regulation 41:35 Introduction to a Bayesian Decision Theory Workflow 42:33 Exploring Price Elasticity and Demand 45:54 Optimizing Profit through Bayesian Models 51:12 Risk Aversion and Utility Functions 57:18 Advanced Risk Management Techniques 01:01:08 Practical Applications of Bayesian Decision-Making 01:06:54 Future Directions in Bayesian Inference 01:10:16 The Quest for Better Inference Algorithms 01:15:01 Dinner with a Polymath: Herbert Simon Thank you to my Patrons for making this episode possible! Links from the show: https://www.fieldofplay.co.uk/; A Bayesian decision theory workflow; website, LinkedIn and GitHub; State Space Models & Structural Time Series, with Jesse Grabowski; BART & The Future of Bayesian Tools, with Osvaldo Martin; Optimizing NUTS and Developing the ZeroSumNormal Distribution, with Adrian Seyboldt; The Past, Present & Future of Stan, with Bob Carpenter

Duration:01:19:18


BITESIZE | How Do Diffusion Models Work?

2/19/2026
Today's clip is from Episode 151 of the podcast, with Jonas Arruda. In this conversation, Jonas Arruda explains how diffusion models generate data by learning to reverse a noise process. The idea is to start from a simple distribution like Gaussian noise and gradually remove noise until the target distribution emerges. This is done through a forward process that adds noise to clean parameters and a backward process that learns how to undo that corruption. A noise schedule controls how much noise is added or removed at each step, guiding the transformation from pure randomness back to meaningful structure. Get the full discussion here • Join this channel to get access to perks: https://www.patreon.com/c/learnbayesstats • Intro to Bayes Course (first 2 lessons free): https://topmate.io/alex_andorra/503302 • Advanced Regression Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122 Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/!
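The forward (noising) process Jonas describes can be sketched directly. The schedule below follows the common linear noise schedule, but the specific values and the toy "clean" distribution are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Variance-preserving forward process of a diffusion model:
#   x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,
# where abar_t is the cumulative product of (1 - beta_t) over the noise
# schedule beta_t. The backward process would learn to undo this corruption.
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # a common linear schedule
abar = np.cumprod(1.0 - betas)

x0 = rng.normal(2.0, 0.5, size=50_000)  # a toy "clean" distribution
eps = rng.normal(size=x0.shape)
xT = np.sqrt(abar[-1]) * x0 + np.sqrt(1.0 - abar[-1]) * eps

# By the last step essentially all signal is destroyed: the marginal is
# close to a standard normal, the starting point for backward sampling.
print(abar[-1], xT.mean(), xT.std())
```

Sampling then runs this in reverse: start from standard normal noise and repeatedly apply a learned denoising step, which is the part a neural network is trained to do.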

Duration:00:03:40


#151 Diffusion Models in Python, a Live Demo with Jonas Arruda

2/12/2026
• Support & get perks! • Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com • Intro to Bayes and Advanced Regression courses (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Chapters: 00:00 Exploring Generative AI and Scientific Modeling 10:27 Understanding Simulation-Based Inference (SBI) and Its Applications 15:59 Diffusion Models in Simulation-Based Inference 19:22 Live Coding Session: Implementing BayesFlow for SBI 34:39 Analyzing Results and Diagnostics in Simulation-Based Inference 46:18 Hierarchical Models and Amortized Bayesian Inference 48:14 Understanding Simulation-Based Inference (SBI) and Its Importance 49:14 Diving into Diffusion Models: Basics and Mechanisms 50:38 Forward and Backward Processes in Diffusion Models 53:03 Learning the Score: Training Diffusion Models 54:57 Inference with Diffusion Models: The Reverse Process 57:36 Exploring Variants: Flow Matching and Consistency Models 01:01:43 Benchmarking Different Models for Simulation-Based Inference 01:06:41 Hierarchical Models and Their Applications in Inference 01:14:25 Intervening in the Inference Process: Adding Constraints 01:25:35 Summary of Key Concepts and Future Directions Thank you to my Patrons for making this episode possible! Links from the show: - Come meet Alex at the Field of Play Conference in Manchester, UK, March 27, 2026! - Jonas's Diffusion for SBI Tutorial & Review (Paper & Code) - The BayesFlow Library - Jonas on LinkedIn - Jonas on GitHub - Further reading for more mathematical details: Holderrieth & Erives - 150 Fast Bayesian Deep Learning, with David Rügamer, Emanuel Sommer & Jakob Robnik - 107 Amortized Bayesian Inference with Deep Neural Networks, with Marvin Schmitt

Duration:01:35:43


#150 Fast Bayesian Deep Learning, with David Rügamer, Emanuel Sommer & Jakob Robnik

1/28/2026
• Support & get perks! • Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com • Intro to Bayes and Advanced Regression courses (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work ! Chapters: 00:00 Scaling Bayesian Neural Networks 04:26 Origin Stories of the Researchers 09:46 Research Themes in Bayesian Neural Networks 12:05 Making Bayesian Neural Networks Fast 16:19 Microcanonical Langevin Sampler Explained 22:57 Bottlenecks in Scaling Bayesian Neural Networks 29:09 Practical Tools for Bayesian Neural Networks 36:48 Trade-offs in Computational Efficiency and Posterior Fidelity 40:13 Exploring High Dimensional Gaussians 43:03 Practical Applications of Bayesian Deep Ensembles 45:20 Comparing Bayesian Neural Networks with Standard Approaches 50:03 Identifying Real-World Applications for Bayesian Methods 57:44 Future of Bayesian Deep Learning at Scale 01:05:56 The Evolution of Bayesian Inference Packages 01:10:39 Vision for the Future of Bayesian Statistics Thank you to my Patrons for making this episode possible! Come meet Alex at the Field of Play Conference in Manchester, UK, March 27, 2026! Links from the show: David Rügamer: * Website * Google Scholar * GitHub Emanuel Sommer: * Website * GitHub * Google Scholar Jakob Robnik: * Google Scholar * GitHub * Microcanonical Langevin paper * LinkedIn
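As a much simpler relative of the microcanonical Langevin sampler discussed in this episode, a basic unadjusted Langevin sampler can be written in a few lines: follow the gradient of the log density while injecting noise. This sketch targets a standard normal and is purely illustrative, not the algorithm from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Unadjusted Langevin dynamics: x' = x + eps * grad_log_p(x) + sqrt(2 eps) * xi.
# For a standard normal target, grad_log_p(x) = -x. Smaller step sizes reduce
# discretization bias at the cost of slower exploration.
eps = 0.1
x = 0.0
samples = []
for _ in range(100_000):
    x = x + eps * (-x) + np.sqrt(2.0 * eps) * rng.normal()
    samples.append(x)
samples = np.array(samples[10_000:])   # drop warm-up

print(samples.mean(), samples.std())   # close to the target's 0 and 1
```

Gradient-based samplers in this family scale to the very high-dimensional posteriors of Bayesian neural networks far better than random-walk methods, which is the motivation for the microcanonical variant covered in the episode.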

Duration:01:20:27


BITESIZE | Building Resilience in Modern Tech Careers

1/21/2026
Today’s clip is from episode 149 of the podcast, with Alana Karen. This conversation explores the evolving landscape of technology, particularly in Silicon Valley, focusing on the cultural shifts due to mass layoffs, the debate over remote work, and the impact of AI on job roles and priorities. The discussion highlights the importance of adapting to these changes and preparing for the future by developing complex skills that AI cannot easily replicate. Get the full discussion here! • Join this channel to get access to perks: https://www.patreon.com/c/learnbayesstats • Intro to Bayes Course (first 2 lessons free): https://topmate.io/alex_andorra/503302 • Advanced Regression Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122 Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/!

Duration:00:25:22


#149 The Future of Work in Tech, with Alana Karen

1/14/2026
• Support & get perks! • Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com • Intro to Bayes and Advanced Regression courses (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Chapters: 11:37 The Hard Tech Era 21:08 The Shift in Tech Work Culture 28:49 AI's Impact on Job Security and Work Dynamics 34:33 Adapting to AI: Skills for the Future 45:56 Understanding AI Models and Their Limitations 47:25 The Importance of Diversity in AI Development 54:34 Positioning Technical Talent for Job Security 57:58 Building Resilience in Uncertain Times 01:06:33 Recognizing Diverse Ambitions in Career Progression 01:12:51 The Role of Managers in Employee Retention 01:26:55 Solving Complex Problems with AI and Innovation Thank you to my Patrons for making this episode possible! Links from the show: Alana's latest book (use code BAYESIAN for 10% off), Alana's Substack, Alana on LinkedIn, Alana on Instagram, The Obstacle Is the Way, Courage Is Calling

Duration:01:32:32


BITESIZE | The Trial Design That Learns in Real Time

1/7/2026
Today’s clip is from episode 148 of the podcast, with Scott Berry. In this conversation, Alex and Scott discuss the shift from frequentist to Bayesian approaches in clinical trials. They highlight the limitations of traditional trial designs and the advantages of adaptive and platform trials, particularly in the context of COVID-19 treatment. The discussion provides insights into the complexities of trial design and the innovative methodologies that are shaping the future of medical research. Get the full discussion here! • Join this channel to get access to perks: https://www.patreon.com/c/learnbayesstats • Intro to Bayes Course (first 2 lessons free): https://topmate.io/alex_andorra/503302 • Advanced Regression Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122 Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/!

Duration:00:22:09


#148 Adaptive Trials, Bayesian Thinking, and Learning from Data, with Scott Berry

12/30/2025
• Support & get perks! • Proudly sponsored by PyMC Labs. Get in touch! • Intro to Bayes and Advanced Regression courses (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Chapters: 13:16 Understanding Adaptive and Platform Trials 25:25 Real-World Applications and Innovations in Trials 34:11 Challenges in Implementing Bayesian Adaptive Trials 42:09 The Birth of a Simulation Tool 44:10 The Importance of Simulated Data 48:36 Lessons from High-Stakes Trials 52:53 Navigating Adaptive Trial Designs 56:55 Communicating Complexity to Stakeholders 01:02:29 The Future of Clinical Trials 01:10:24 Skills for the Next Generation of Statisticians Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Giuliano Cruz, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli, Guillaume Berthon, Avenicio Baca, Spencer Boucher, Krzysztof Lechowski, Danimal, Jácint Juhász, Sander and Philippe. Links from the show: Berry Consultants, Scott's podcast, LBS #45

Duration:01:24:49


BITESIZE | Making Variational Inference Reliable: From ADVI to DADVI

12/17/2025
Today’s clip is from episode 147 of the podcast, with Martin Ingram. Alex and Martin discuss the intricacies of variational inference, particularly focusing on the ADVI method and its challenges. They explore the evolution of approximate inference methods, the significance of mean field variational inference, and the innovative linear response technique for covariance estimation. The discussion also delves into the trade-offs between stochastic and deterministic optimization techniques, providing insights into their implications for Bayesian statistics. Get the full discussion here. Intro to Bayes Course • Advanced Regression Course Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
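The deterministic trick behind DADVI can be sketched on a toy conjugate model. This is an illustrative NumPy reconstruction under simplifying assumptions, not Martin's implementation: the key move is drawing the base noise once and keeping it fixed, so the usual stochastic ADVI objective becomes an ordinary deterministic optimization problem.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model: theta ~ N(0, 1), y_i ~ N(theta, 1). The exact posterior is
# N(sum(y) / (n + 1), 1 / (n + 1)), so the answer is checkable.
n = 9
y = rng.normal(1.0, 1.0, size=n)
S = y.sum()

# Mean-field Gaussian q = N(m, exp(2s)). DADVI's trick: draw the base
# noise z ONCE and keep it fixed, turning the noisy ADVI objective into a
# deterministic function of (m, s) that can be optimized to convergence.
K = 500
z = rng.normal(size=K)

m, s, lr = 0.0, 0.0, 0.05
for _ in range(3000):
    theta = m + np.exp(s) * z
    fprime = S - (n + 1) * theta                     # d/dtheta log p(y, theta)
    grad_m = fprime.mean()
    grad_s = (fprime * np.exp(s) * z).mean() + 1.0   # +1 from q's entropy
    m += lr * grad_m
    s += lr * grad_s

print(m, np.exp(s))                    # approximate posterior mean and sd
print(S / (n + 1), (n + 1) ** -0.5)    # exact posterior mean and sd
```

Because the objective no longer changes between evaluations, standard optimizers with real convergence criteria apply, which is exactly the reliability argument made in the episode.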

Duration:00:21:59


#147 Fast Approximate Inference without Convergence Worries, with Martin Ingram

12/12/2025
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! Intro to Bayes Course • Advanced Regression Course Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Chapters: 13:17 Understanding DADVI: A New Approach 21:54 Mean Field Variational Inference Explained 26:38 Linear Response and Covariance Estimation 31:21 Deterministic vs Stochastic Optimization in DADVI 35:00 Understanding DADVI and Its Optimization Landscape 37:59 Theoretical Insights and Practical Applications of DADVI 42:12 Comparative Performance of DADVI in Real Applications 45:03 Challenges and Effectiveness of DADVI in Various Models 48:51 Exploring Future Directions for Variational Inference 53:04 Final Thoughts and Advice for Practitioners Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek,

Duration:01:09:55


BITESIZE | Why Bayesian Stats Matter When the Physics Gets Extreme

12/5/2025
Today’s clip is from episode 146 of the podcast, with Ethan Smith. Alex and Ethan discuss the application of Bayesian inference in high energy density physics, particularly in analyzing complex data sets. They highlight the advantages of Bayesian techniques, such as incorporating prior knowledge and managing uncertainties. Ethan also shares insights from an ongoing experimental project focused on measuring the equation of state of plasma at extreme pressures. Finally, Alex and Ethan advocate for best practices in managing large codebases and ensuring model reliability. Get the full discussion here. Intro to Bayes Course • Advanced Regression Course Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Transcript: This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Duration:00:19:12


#146 Lasers, Planets, and Bayesian Inference, with Ethan Smith

11/27/2025
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! Intro to Bayes Course • Advanced Regression Course Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Chapters: 14:31 Understanding High Energy Density Physics and Plasma Spectroscopy 21:24 Challenges in Data Analysis and Experimentation 36:11 The Role of Bayesian Inference in High Energy Density Physics 47:17 Transitioning to Advanced Sampling Techniques 51:35 Best Practices in Model Development 55:30 Evaluating Model Performance 01:02:10 The Role of High Energy Density Physics 01:11:15 Innovations in Diagnostic Technologies 01:22:51 Future Directions in Experimental Physics 01:26:08 Advice for Aspiring Scientists Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady,

Duration:01:35:19

BITESIZE | How to Thrive in an AI-Driven Workplace?

11/20/2025
Today’s clip is from episode 145 of the podcast, with Jordan Thibodeau.

Alexandre Andorra and Jordan Thibodeau discuss the transformative impact of AI on productivity, career opportunities in the tech industry, and the intricacies of the job interview process. They emphasize the importance of expertise, networking, and the evolving landscape of tech companies, while also providing actionable advice for individuals looking to enhance their careers in AI and related fields.

Get the full discussion here.

Intro to Bayes Course / Advanced Regression Course

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Duration:00:19:34

#145 Career Advice in the Age of AI, with Jordan Thibodeau

11/12/2025
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Intro to Bayes Course / Advanced Regression Course

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Guillaume Berthon.

Takeaways:

Duration:01:52:18

BITESIZE | Why is Bayesian Deep Learning so Powerful?

11/5/2025
Today’s clip is from episode 144 of the podcast, with Maurizio Filippone.

In this conversation, Alex and Maurizio delve into the intricacies of Gaussian processes and their deep learning counterparts. They explain the foundational concepts of Gaussian processes, the transition to deep Gaussian processes, and the advantages they offer in modeling complex data. The discussion also touches on practical applications, model selection, and the evolving landscape of machine learning, particularly in relation to transfer learning and the integration of deep learning techniques with Gaussian processes.

Get the full discussion here.

Intro to Bayes Course / Advanced Regression Course

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
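If you want a concrete picture of the Gaussian-process ideas discussed in this episode, here is a minimal GP regression sketch in NumPy. This is not code from the episode: the squared-exponential kernel, the lengthscale, and the noise level are illustrative assumptions, and any serious use would rely on a library such as GPy or scikit-learn.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and pointwise variance of a zero-mean GP at test inputs."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)          # K^{-1} y without explicit inverse
    mean = K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    cov = K_ss - K_s.T @ v
    return mean, np.diag(cov)

# Fit to noiseless samples of sin(x) and predict on a grid
x_train = np.linspace(0, 2 * np.pi, 8)
y_train = np.sin(x_train)
x_test = np.linspace(0, 2 * np.pi, 50)
mean, var = gp_posterior(x_train, y_train, x_test)
```

The posterior mean interpolates the training points, and the variance shrinks near them and grows in between — the uncertainty quantification that makes GPs attractive as Bayesian models.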

Duration:00:19:00