Jeff and I discuss his new framework to understand how our cortex functions by building models of complete objects in all the cortical columns throughout the cortex. We also talk about his book On Intelligence, and I get his take on a number of other topics.
Ryota and I discuss his two goals: to implement the functions of consciousness, and to figure out how to measure whether a given system (human, AI, etc.) has consciousness. We also talk about his paper implementing curiosity and empowerment as intrinsic motivation in a reinforcement learning AI agent. Plus much more.
Dileep and I talk about how his company, Vicarious, aims to create general artificial intelligence for robots, using tons of inspiration from brain structure and function. We also discuss his recent graphical model that, among other things, breaks CAPTCHAs with very few training examples.
Niko and I discuss cognitive computational neuroscience as an emerging fusion of cognitive science, computational neuroscience, and artificial intelligence, and how it all fits together. Plus we talk about the conference by that name.
Grace shares her recent work adding an attention signal to convolutional neural networks that emulate the ventral visual stream in the brain, to test the "feature gain similarity" model of attention. Lots more, of course. Click the episode to access the show notes.
Josh and I talk about all the ways supervised machine learning can be used in neuroscience research, and we walk through how a variety of machine learning algorithms perform decoding on a few neural data sets.
In this episode, Dan Yamins and I talk about how he uses hierarchical convolutional neural networks to model the ventral visual stream, finding that the units in his model correspond to neurons in progressive layers of the brain. We also delve into the AI agents he develops that learn how to play through intrinsic motivation. Click the episode to get the show notes.
Ryan and I go deep (pun intended) on convolutional neural networks and how he uses them to discover medical risk factors, improve genome sequencing, and more. Click the episode for the show notes.
David and I cover recurrent neural networks (RNNs), his work using RNNs to study motor brain processes, how dynamical systems theory is a useful approach to brains and AI, and more. Click the episode for the show notes.
What does it mean to be a neural data scientist? Mark and I talk about that, his work discovering how the rat prefrontal cortex learns to remember, and a bunch of AI topics he's written about on his Medium blog. Click the episode for the show notes.
We talk about how he got the embodied cultured networks to actually work, the hurdles they had to clear, the rise of citizen science, the Maker movement, and more. Click the episode for the show notes.