Argmax
Location:
United States
Genres:
Science Podcasts
Description:
A show where three machine learning enthusiasts talk about recent papers and developments in machine learning. Watch our video on YouTube https://www.youtube.com/@argmaxfm
Language:
English
Website:
https://www.argmax.fm
Episodes
LoRA
9/2/2023
We talk about Low-Rank Adaptation (LoRA) for fine-tuning Transformers. We are also on YouTube now! Check out the video here: https://youtu.be/lLzHr0VFi3Y
Duration:01:02:56
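As a minimal sketch of the idea discussed in this episode (dimensions, rank, and initialization here are illustrative assumptions, not the paper's settings): instead of updating a frozen pretrained weight matrix W, LoRA trains a low-rank update BA added alongside it.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 64, 64, 4  # layer dimensions and a low rank r << min(d, k)

W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable, low-rank factor
B = np.zeros((d, r))                 # trainable, zero-initialized so BA = 0

def lora_forward(x, scale=1.0):
    """y = x W^T + scale * x (BA)^T; only A and B receive gradients."""
    return x @ W.T + scale * (x @ A.T) @ B.T

x = rng.normal(size=(2, k))
y = lora_forward(x)
```

Because B starts at zero, the adapted layer initially matches the frozen layer exactly, and the trainable parameter count is r*(d + k) instead of d*k.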
15: InstructGPT
3/28/2023
In this episode we discuss the paper "Training language models to follow instructions with human feedback" by Ouyang et al. (2022), covering the RLHF paradigm and the role reinforcement learning plays in tuning GPT models.
Duration:00:57:27
14: Whisper
3/17/2023
This week we talk about Whisper, a weakly supervised speech recognition model.
Duration:00:49:14
13: AlphaTensor
3/10/2023
We talk about AlphaTensor and how researchers were able to discover a new algorithm for matrix multiplication.
Duration:00:49:05
12: SIRENs
10/24/2022
In this episode we discuss "Implicit Neural Representations with Periodic Activation Functions" and the strength of periodic non-linearities.
Duration:00:54:17
11: CVPR Workshop on Autonomous Driving Keynote by Ashok Elluswamy, a Tesla engineer
9/30/2022
In this episode we discuss this video (https://youtu.be/jPCV4GKX9Dw) about how Tesla approaches collision detection with novel methods.
Duration:00:48:51
10: Outracing champion Gran Turismo drivers with deep reinforcement learning
8/22/2022
We discuss Sony AI's accomplishment of creating a novel AI agent that can beat professional racers in Gran Turismo. Some topics include:
- The crafting of rewards to make the agent behave nicely
- What is QR-SAC?
- How to deal with "rare" experiences in the replay buffer
Link to paper: https://www.nature.com/articles/s41586-021-04357-7
Duration:00:54:45
8: GATO (A Generalist Agent)
7/29/2022
Today we talk about GATO, a multi-modal, multi-task, multi-embodiment generalist agent.
Duration:00:44:51
9: Heads-Up Limit Hold'em Poker Is Solved
7/29/2022
Today we talk about recent AI advances in poker, specifically the use of counterfactual regret minimization to solve the game of 2-player Limit Texas Hold'em.
Duration:00:47:54
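The full poker solver traverses the entire game tree; as a toy, hedged sketch of the core update inside counterfactual regret minimization, here is regret matching applied to rock-paper-scissors against a fixed opponent mixed strategy (the opponent distribution and iteration count are illustrative assumptions):

```python
import numpy as np

ACTIONS = 3  # rock, paper, scissors
# payoff[a][b]: payoff to us for playing a against opponent action b
payoff = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]], dtype=float)

def strategy_from_regrets(regrets):
    """Regret matching: play actions in proportion to positive regret."""
    pos = np.maximum(regrets, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(ACTIONS, 1.0 / ACTIONS)

opp = np.array([0.4, 0.3, 0.3])  # rock-heavy opponent (assumed for the demo)
regrets = np.zeros(ACTIONS)
strategy_sum = np.zeros(ACTIONS)
iters = 1000

for _ in range(iters):
    strat = strategy_from_regrets(regrets)
    strategy_sum += strat
    # Expected payoff of each action against the opponent's mixed strategy.
    action_values = payoff @ opp
    # Accumulate regret: how much better each action would have done
    # than the current strategy's expected value.
    regrets += action_values - strat @ action_values

avg_strategy = strategy_sum / iters
```

The time-averaged strategy converges to a best response here (paper, since the opponent over-plays rock); in self-play, the same averaging is what drives CFR toward an equilibrium.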
7: Deep Unsupervised Learning Using Nonequilibrium Thermodynamics (Diffusion Models)
6/13/2022
We start talking about diffusion models as a technique for generative deep learning.
Duration:00:30:55
6: Deep Reinforcement Learning at the Edge of the Statistical Precipice
6/6/2022
We discuss the NeurIPS Outstanding Paper Award winner, covering important topics around metrics and reproducibility.
Duration:01:01:08
5: QMIX
4/25/2022
We talk about QMIX https://arxiv.org/abs/1803.11485 as an example of Deep Multi-agent RL.
Duration:00:42:01
4: Can Neural Nets Learn the Same Model Twice?
4/5/2022
Today's paper: Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective (https://arxiv.org/pdf/2203.08124.pdf)
Summary:
A discussion of reproducibility and double descent through visualizations of decision boundaries.
Duration:00:55:23
3: VICReg
3/21/2022
Today's paper: VICReg (https://arxiv.org/abs/2105.04906)
Summary of the paper
VICReg prevents representation collapse with three loss terms: a variance term that keeps each embedding dimension's spread above a threshold, an invariance term that pulls the two views of a sample together, and a covariance term that decorrelates embedding dimensions. It requires no negative samples and achieves strong performance on downstream tasks.
Duration:00:44:45
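The variance-invariance-covariance recipe described above can be sketched numerically. This is a toy formulation with illustrative loss weights (the paper uses embeddings from a learned projector and its own coefficients):

```python
import numpy as np

def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """Toy VICReg loss on two batches of embeddings, each of shape (N, D)."""
    n, d = z1.shape
    # Invariance: mean squared distance between the two views.
    sim = np.mean((z1 - z2) ** 2)
    # Variance: hinge keeping each dimension's std above 1 (anti-collapse).
    var = 0.0
    # Covariance: penalize off-diagonal covariance (decorrelate dimensions).
    cov = 0.0
    for z in (z1, z2):
        std = np.sqrt(z.var(axis=0) + eps)
        var += np.mean(np.maximum(0.0, 1.0 - std))
        zc = z - z.mean(axis=0)
        c = (zc.T @ zc) / (n - 1)
        off_diag = c - np.diag(np.diag(c))
        cov += (off_diag ** 2).sum() / d
    return sim_w * sim + var_w * var + cov_w * cov
```

Collapsed embeddings (every sample mapped to the same vector) are heavily penalized by the variance term even when the two views agree perfectly, which is exactly the failure mode VICReg is designed to rule out.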
2: data2vec
3/7/2022
Today's paper: data2vec (https://arxiv.org/abs/2202.03555)
Summary of the paper
A multimodal self-supervised learning algorithm that predicts latent representations of different input modalities.
Duration:00:53:23
1: Reward is Enough
2/21/2022
This is the first episode of Argmax! We talk about our motivations for doing a podcast, and what we hope listeners will get out of it.
Today's paper: Reward is Enough
Summary of the paper
The authors present the Reward is Enough hypothesis: Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment.
Duration:00:54:36