
TalkRL: The Reinforcement Learning Podcast

Technology Podcasts

TalkRL podcast is All Reinforcement Learning, All the Time. In-depth interviews with brilliant people at the forefront of RL research and practice. Guests from places like MILA, OpenAI, MIT, DeepMind, Berkeley, Amii, Oxford, Google Research, Brown, Waymo, Caltech, and Vector Institute. Hosted by Robin Ranjit Singh Chauhan.

Location:

Canada


Language:

English

Contact:

6048856418


Episodes

Vincent Moens on TorchRL

4/8/2024
Dr. Vincent Moens is an Applied Machine Learning Research Scientist at Meta, and an author of TorchRL and TensorDict in PyTorch.

Featured References
TorchRL: A data-driven decision-making library for PyTorch, by Albert Bou, Matteo Bettini, Sebastian Dittert, Vikash Kumar, Shagun Sodhani, Xiaomeng Yang, Gianni De Fabritiis, Vincent Moens

Additional References
TorchRL on GitHub
TensorDict Documentation

Duration:00:40:14

Arash Ahmadian on Rethinking RLHF

3/25/2024
Arash Ahmadian is a Researcher at Cohere and Cohere For AI, focused on preference training of large language models. He is also a researcher at the Vector Institute.

Featured Reference
Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs, by Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, Ahmet Üstün, Sara Hooker

Additional References
Self-Rewarding Language Models
Reinforcement Learning: An Introduction
Learning from Delayed Rewards
Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning

Duration:00:33:30

Glen Berseth on RL Conference

3/11/2024
Glen Berseth is an assistant professor at the Université de Montréal, a core academic member of Mila - Quebec AI Institute, a Canada CIFAR AI Chair, a member of the Institut Courtois, and co-director of the Robotics and Embodied AI Lab (REAL).

Featured Links
Reinforcement Learning Conference
Closing the Gap between TD Learning and Supervised Learning -- A Generalisation Point of View, by Raj Ghugare, Matthieu Geist, Glen Berseth, Benjamin Eysenbach

Duration:00:21:38

Ian Osband

3/7/2024
Ian Osband is a Research Scientist at OpenAI (ex DeepMind, Stanford) working on decision making under uncertainty. We spoke about:
- Information theory and RL
- Exploration, epistemic uncertainty, and joint predictions
- Epistemic Neural Networks and scaling to LLMs

Featured References
Reinforcement Learning, Bit by Bit, by Xiuyuan Lu, Benjamin Van Roy, Vikranth Dwaracherla, Morteza Ibrahimi, Ian Osband, Zheng Wen
From Predictions to Decisions: The Importance of Joint Predictive Distributions, by Zheng Wen, Ian Osband, Chao Qin, Xiuyuan Lu, Morteza Ibrahimi, Vikranth Dwaracherla, Mohammad Asghari, Benjamin Van Roy
Epistemic Neural Networks, by Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Vikranth Dwaracherla, Morteza Ibrahimi, Xiuyuan Lu, Benjamin Van Roy
Approximate Thompson Sampling via Epistemic Neural Networks, by Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Vikranth Dwaracherla, Morteza Ibrahimi, Xiuyuan Lu, Benjamin Van Roy

Additional References
Thesis defence
Homepage
Epistemic Neural Networks
Behaviour Suite for Reinforcement Learning
Efficient Exploration for LLMs

Duration:01:08:26

Sharath Chandra Raparthy

2/11/2024
Sharath Chandra Raparthy on In-Context Learning for Sequential Decision Tasks, GFlowNets, and more! Sharath Chandra Raparthy is an AI Resident at FAIR at Meta, and did his Master's at Mila.

Featured Reference
Generalization to New Sequential Decision Making Tasks with In-Context Learning, by Sharath Chandra Raparthy, Eric Hambro, Robert Kirk, Mikael Henaff, Roberta Raileanu

Additional References
Sharath Chandra Raparthy
Human-Timescale Adaptation in an Open-Ended Task Space
Data Distributional Properties Drive Emergent In-Context Learning in Transformers
Decision Transformer: Reinforcement Learning via Sequence Modeling

Duration:00:40:41

Pierluca D'Oro and Martin Klissarov

11/13/2023
Pierluca D'Oro and Martin Klissarov on Motif and RLAIF, Noisy Neighborhoods and Return Landscapes, and more! Pierluca D'Oro is a PhD student at Mila and a visiting researcher at Meta. Martin Klissarov is a PhD student at Mila and McGill and a research scientist intern at Meta.

Featured References
Motif: Intrinsic Motivation from Artificial Intelligence Feedback, by Martin Klissarov*, Pierluca D'Oro*, Shagun Sodhani, Roberta Raileanu, Pierre-Luc Bacon, Pascal Vincent, Amy Zhang, Mikael Henaff
Policy Optimization in a Noisy Neighborhood: On Return Landscapes in Continuous Control, by Nate Rahn*, Pierluca D'Oro*, Harley Wiltzer, Pierre-Luc Bacon, Marc G. Bellemare
To keep doing RL research, stop calling yourself an RL researcher, by Pierluca D'Oro

Duration:00:57:24

Martin Riedmiller

8/22/2023
Martin Riedmiller of Google DeepMind on controlling nuclear fusion plasma in a tokamak with RL, the original Deep Q-Network, Neural Fitted Q-Iteration, Collect and Infer, AGI for control systems, and tons more! Martin Riedmiller is a research scientist and team lead at DeepMind.

Featured References
Magnetic control of tokamak plasmas through deep reinforcement learning, by Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de las Casas, Craig Donner, Leslie Fritz, Cristian Galperti, Andrea Huber, James Keeling, Maria Tsimpoukelli, Jackie Kay, Antoine Merle, Jean-Marc Moret, Seb Noury, Federico Pesamosca, David Pfau, Olivier Sauter, Cristian Sommariva, Stefano Coda, Basil Duval, Ambrogio Fasoli, Pushmeet Kohli, Koray Kavukcuoglu, Demis Hassabis, Martin Riedmiller
Human-level control through deep reinforcement learning, by Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis
Neural fitted Q iteration -- first experiences with a data efficient neural reinforcement learning method, by Martin Riedmiller

Duration:01:13:56

Max Schwarzer

8/8/2023
Max Schwarzer is a PhD student at Mila, with Aaron Courville and Marc Bellemare, interested in RL scaling, representation learning for RL, and RL for science. Max spent the last 1.5 years at Google Brain/DeepMind, and is now at Apple Machine Learning Research.

Featured References
Bigger, Better, Faster: Human-level Atari with human-level efficiency, by Max Schwarzer, Johan Obando-Ceron, Aaron Courville, Marc Bellemare, Rishabh Agarwal, Pablo Samuel Castro
Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier, by Pierluca D'Oro, Max Schwarzer, Evgenii Nikishin, Pierre-Luc Bacon, Marc G Bellemare, Aaron Courville
The Primacy Bias in Deep Reinforcement Learning, by Evgenii Nikishin, Max Schwarzer, Pierluca D'Oro, Pierre-Luc Bacon, Aaron Courville

Additional References
Rainbow: Combining Improvements in Deep Reinforcement Learning
When to use parametric models in reinforcement learning?
Data-Efficient Reinforcement Learning with Self-Predictive Representations
Pretraining Representations for Data-Efficient Reinforcement Learning

Duration:01:10:18

Julian Togelius

7/25/2023
Julian Togelius is an Associate Professor of Computer Science and Engineering at NYU, and co-founder and research director at modl.ai.

Featured References
Choose Your Weapon: Survival Strategies for Depressed AI Academics, by Julian Togelius, Georgios N. Yannakakis
Learning Controllable 3D Level Generators, by Zehua Jiang, Sam Earle, Michael Cerny Green, Julian Togelius
PCGRL: Procedural Content Generation via Reinforcement Learning, by Ahmed Khalifa, Philip Bontrager, Sam Earle, Julian Togelius
Illuminating Generalization in Deep Reinforcement Learning through Procedural Level Generation, by Niels Justesen, Ruben Rodriguez Torrado, Philip Bontrager, Ahmed Khalifa, Julian Togelius, Sebastian Risi

Duration:00:40:04

Jakob Foerster

5/8/2023
Jakob Foerster on Multi-Agent Learning, Cooperation vs Competition, Emergent Communication, Zero-shot Coordination, Opponent Shaping, agents for Hanabi and the Prisoner's Dilemma, and more. Jakob Foerster is an Associate Professor at the University of Oxford.

Featured References
Learning with Opponent-Learning Awareness, by Jakob N. Foerster, Richard Y. Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, Igor Mordatch
Model-Free Opponent Shaping, by Chris Lu, Timon Willi, Christian Schroeder de Witt, Jakob Foerster
Off-Belief Learning, by Hengyuan Hu, Adam Lerer, Brandon Cui, David Wu, Luis Pineda, Noam Brown, Jakob Foerster
Learning to Communicate with Deep Multi-Agent Reinforcement Learning, by Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, Shimon Whiteson
Adversarial Cheap Talk, by Chris Lu, Timon Willi, Alistair Letcher, Jakob Foerster
Cheap Talk Discovery and Utilization in Multi-Agent Reinforcement Learning, by Yat Long Lo, Christian Schroeder de Witt, Samuel Sokota, Jakob Nicolaus Foerster, Shimon Whiteson

Additional References
Lectures by Jakob on YouTube

Duration:01:03:45

Danijar Hafner 2

4/12/2023
Danijar Hafner on the DreamerV3 agent and world models, the Director agent and hierarchical RL, real-time RL on robots with DayDreamer, and his framework for unsupervised agent design! Danijar Hafner is a PhD candidate at the University of Toronto with Jimmy Ba, a visiting student at UC Berkeley with Pieter Abbeel, and an intern at DeepMind. He was previously our guest on episode 11.

Featured References
Mastering Diverse Domains through World Models (DreamerV3) [ blog ], by Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, Timothy Lillicrap
DayDreamer: World Models for Physical Robot Learning [ blog ], by Philipp Wu, Alejandro Escontrela, Danijar Hafner, Ken Goldberg, Pieter Abbeel
Deep Hierarchical Planning from Pixels [ blog ], by Danijar Hafner, Kuang-Huei Lee, Ian Fischer, Pieter Abbeel
Action and Perception as Divergence Minimization [ blog ], by Danijar Hafner, Pedro A. Ortega, Jimmy Ba, Thomas Parr, Karl Friston, Nicolas Heess

Additional References
Mastering Atari with Discrete World Models [ blog ]
Dream to Control: Learning Behaviors by Latent Imagination [ blog ]
Planning to Explore via Self-Supervised World Models

Duration:00:45:21

Jeff Clune

3/27/2023
AI-Generating Algorithms (AI-GAs), learning to play Minecraft with Video PreTraining (VPT), Go-Explore for hard exploration, POET and open-endedness, AI-GAs and ChatGPT, AGI predictions, and lots more! Jeff Clune is an Associate Professor of Computer Science at the University of British Columbia, a Canada CIFAR AI Chair and faculty member at the Vector Institute, and Senior Research Advisor at DeepMind.

Featured References
Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos [ Blog Post ], by Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, Jeff Clune
Robots that can adapt like animals, by Antoine Cully, Jeff Clune, Danesh Tarapore, Jean-Baptiste Mouret
Illuminating search spaces by mapping elites, by Jean-Baptiste Mouret, Jeff Clune
Enhanced POET: Open-Ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions, by Rui Wang, Joel Lehman, Aditya Rawal, Jiale Zhi, Yulun Li, Jeff Clune, Kenneth O. Stanley
Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions, by Rui Wang, Joel Lehman, Jeff Clune, Kenneth O. Stanley
First return, then explore, by Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune

Duration:01:11:11

Natasha Jaques 2

3/14/2023
Hear about why OpenAI cites her work in RLHF and dialog models, approaches to rewards in RLHF, ChatGPT, industry vs academia, PsiPhi-Learning, AGI, and more! Dr. Natasha Jaques is a Senior Research Scientist at Google Brain.

Featured References
Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog, by Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, Rosalind Picard
Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control, by Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, Douglas Eck
PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning, by Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar
Basis for Intentions: Efficient Inverse Reinforcement Learning using Past Experience, by Marwa Abdulhai, Natasha Jaques, Sergey Levine

Additional References
Fine-Tuning Language Models from Human Preferences
Learning to summarize from human feedback
Training language models to follow instructions with human feedback

Duration:00:46:02

Jacob Beck and Risto Vuorio

3/7/2023
Jacob Beck and Risto Vuorio on their recent Survey of Meta-Reinforcement Learning. Jacob and Risto are PhD students in the Whiteson Research Lab at the University of Oxford.

Featured Reference
A Survey of Meta-Reinforcement Learning, by Jacob Beck, Risto Vuorio, Evan Zheran Liu, Zheng Xiong, Luisa Zintgraf, Chelsea Finn, Shimon Whiteson

Additional References
VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning
Mastering Diverse Domains through World Models
Unsupervised Meta-Learning for Reinforcement Learning
Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices
RL2: Fast Reinforcement Learning via Slow Reinforcement Learning
Learning to reinforcement learn

Duration:01:07:05

John Schulman

10/18/2022
John Schulman is a cofounder of OpenAI, where he works as a researcher and engineer.

Featured References
WebGPT: Browser-assisted question-answering with human feedback, by Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, John Schulman
Training language models to follow instructions with human feedback, by Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe

Additional References
Our approach to alignment research
Training Verifiers to Solve Math Word Problems
UC Berkeley Deep RL Bootcamp Lecture 6: Nuts and Bolts of Deep RL Experimentation
Proximal Policy Optimization Algorithms
Optimizing Expectations: From Deep Reinforcement Learning to Stochastic Computation Graphs

Duration:00:44:21

Sven Mika

8/19/2022
Sven Mika is the Reinforcement Learning Team Lead at Anyscale, and lead committer of RLlib. He holds a PhD in biomathematics, bioinformatics, and computational biology from Witten/Herdecke University.

Featured References
RLlib Documentation: RLlib: Industry-Grade Reinforcement Learning
Ray: Documentation
RLlib: Abstractions for Distributed Reinforcement Learning, by Eric Liang, Richard Liaw, Philipp Moritz, Robert Nishihara, Roy Fox, Ken Goldberg, Joseph E. Gonzalez, Michael I. Jordan, Ion Stoica

Episode sponsor: Anyscale
Ray Summit 2022 is coming to San Francisco on August 23-24. Hear how teams at Dow, Verizon, Riot Games, and more are solving their RL challenges with Ray's RLlib. Register at raysummit.org and use code RAYSUMMIT22RL for a further 25% off the already reduced prices.

Duration:00:34:56

Karol Hausman and Fei Xia

8/16/2022
Karol Hausman is a Senior Research Scientist at Google Brain and an Adjunct Professor at Stanford working on robotics and machine learning. Karol is interested in enabling robots to acquire general-purpose skills with minimal supervision in real-world environments. Fei Xia is a Research Scientist with Google Research. He is mostly interested in robot learning in complex and unstructured environments. Previously he has approached this problem by learning in realistic and scalable simulation environments (GibsonEnv, iGibson). Most recently, he has been exploring using foundation models for those challenges.

Featured References
Do As I Can, Not As I Say: Grounding Language in Robotic Affordances [ website ], by Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan
Inner Monologue: Embodied Reasoning through Planning with Language Models, by Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Noah Brown, Tomas Jackson, Linda Luu, Sergey Levine, Karol Hausman, Brian Ichter

Additional References
Large-scale simulation for embodied perception and robot learning
QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation
MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale
ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation
Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills
Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language

Episode sponsor: Anyscale
Ray Summit 2022 is coming to San Francisco on August 23-24. Hear how teams at Dow, Verizon, Riot Games, and more are solving their RL challenges with Ray's RLlib. Register at raysummit.org and use code RAYSUMMIT22RL for a further 25% off the already reduced prices.

Duration:01:03:09

Sai Krishna Gottipati

7/31/2022
Sai Krishna Gottipati is an RL Researcher at AI Redefined, working on RL, MARL, and human-in-the-loop learning.

Featured References
Cogment: Open Source Framework For Distributed Multi-actor Training, Deployment & Operations, by AI Redefined, Sai Krishna Gottipati, Sagar Kurandwad, Clodéric Mars, Gregory Szriftgiser, François Chabot
Do As You Teach: A Multi-Teacher Approach to Self-Play in Deep Reinforcement Learning (currently under review)
Learning to navigate the synthetically accessible chemical space using reinforcement learning, by Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Karam J. Thomas, Simon Blackburn, Connor W. Coley, Jian Tang, Sarath Chandar, Yoshua Bengio

Additional References
Asymmetric self-play for automatic goal discovery in robotic manipulation
Continuous Coordination As a Realistic Scenario for Lifelong Learning

Episode sponsor: Anyscale
Ray Summit 2022 is coming to San Francisco on August 23-24. Hear how teams at Dow, Verizon, Riot Games, and more are solving their RL challenges with Ray's RLlib. Register at raysummit.org and use code RAYSUMMIT22RL for a further 25% off the already reduced prices.

Duration:01:07:48

Aravind Srinivas 2

5/9/2022
Aravind Srinivas is back! He is now a Research Scientist at OpenAI.

Featured References
Decision Transformer: Reinforcement Learning via Sequence Modeling, by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch
VideoGPT: Video Generation using VQ-VAE and Transformers, by Wilson Yan, Yunzhi Zhang, Pieter Abbeel, Aravind Srinivas

Duration:00:45:10

Rohin Shah

4/11/2022
Dr. Rohin Shah is a Research Scientist at DeepMind, and the editor and main contributor of the Alignment Newsletter.

Featured References
The MineRL BASALT Competition on Learning from Human Feedback, by Rohin Shah, Cody Wild, Steven H. Wang, Neel Alex, Brandon Houghton, William Guss, Sharada Mohanty, Anssi Kanervisto, Stephanie Milani, Nicholay Topin, Pieter Abbeel, Stuart Russell, Anca Dragan
Preferences Implicit in the State of the World, by Rohin Shah, Dmitrii Krasheninnikov, Jordan Alexander, Pieter Abbeel, Anca Dragan
Benefits of Assistance over Reward Learning, by Rohin Shah, Pedro Freire, Neel Alex, Rachel Freedman, Dmitrii Krasheninnikov, Lawrence Chan, Michael D Dennis, Pieter Abbeel, Anca Dragan, Stuart Russell
On the Utility of Learning about Humans for Human-AI Coordination, by Micah Carroll, Rohin Shah, Mark K. Ho, Thomas L. Griffiths, Sanjit A. Seshia, Pieter Abbeel, Anca Dragan
Evaluating the Robustness of Collaborative Agents, by Paul Knott, Micah Carroll, Sam Devlin, Kamil Ciosek, Katja Hofmann, A. D. Dragan, Rohin Shah

Additional References
AGI Safety Fundamentals

Duration:01:37:04