
Data Skeptic Podcast


The Data Skeptic Podcast features conversations with researchers and other professionals active in applying data science to real world problems. The topics relate to data science, statistics, machine learning, artificial intelligence and the like, all from the perspective of applying critical thinking and the scientific method to evaluate the veracity of claims and efficacy of approaches. The podcast has an alternating format, with even episodes featuring long-form conversations and odd episodes featuring short discussions about topics related to data science, aimed at listeners who might not be familiar with some of the topics discussed on the show.

More Information

Location:

United States

Description:

The Data Skeptic Podcast features conversations with researchers and other professionals active in applying data science to real world problems. The topics relate to data science, statistics, machine learning, artificial intelligence and the like, all from the perspective of applying critical thinking and the scientific method to evaluate the veracity of claims and efficacy of approaches. The podcast has an alternating format, with even episodes featuring long-form conversations and odd episodes featuring short discussions about topics related to data science, aimed at listeners who might not be familiar with some of the topics discussed on the show.

Language:

English


Episodes

Optimal Decision Making with POMDPs

2/23/2018
In a previous episode, we discussed Markov Decision Processes, or MDPs, a framework for decision making and planning. This episode explores their generalization, Partially Observable MDPs (POMDPs), a remarkably general framework that describes nearly every agent-based system.

Duration:00:24:10
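
To make the "partially observable" part concrete, here is a minimal sketch of the belief update an agent performs in a POMDP: since it never sees the true state, it keeps a probability distribution over states and revises it after each action and observation. The two-state door world, transition model T, and observation model O below are invented for illustration, not taken from the episode.

```python
# Minimal POMDP belief update: the agent never sees the true state,
# so it maintains a probability distribution ("belief") over states
# and updates it after each action/observation pair.

# Hypothetical two-state example (a door is either open or closed).
states = ["open", "closed"]

# T[action][s][s'] = P(s' | s, action) -- made-up transition model
T = {"push": {"open": {"open": 1.0, "closed": 0.0},
              "closed": {"open": 0.8, "closed": 0.2}}}

# O[action][s'][observation] = P(o | s', action) -- made-up sensor model
O = {"push": {"open": {"see_open": 0.9, "see_closed": 0.1},
              "closed": {"see_open": 0.3, "see_closed": 0.7}}}

def update_belief(belief, action, observation):
    """Bayesian update: b'(s') is proportional to O(o|s',a) * sum_s T(s'|s,a) * b(s)."""
    new_belief = {}
    for s2 in states:
        predicted = sum(T[action][s][s2] * belief[s] for s in states)
        new_belief[s2] = O[action][s2][observation] * predicted
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}

# Starting from total uncertainty, one push and one observation sharpen the belief.
print(update_belief({"open": 0.5, "closed": 0.5}, "push", "see_open"))
```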

AI Decision-Making

2/16/2018
Making a decision is a complex task. Today's guest Dongho Kim discusses how he and his team at Prowler have been building a platform, accessible by way of APIs and a set of pre-made scripts, for autonomous decision making based on probabilistic modeling, reinforcement learning, and game theory. The aim is for an AI system to make decisions as well as a human can. At the moment Prowler is focusing on multi-agent systems for the video game industry, smart city applications...

Duration:00:53:09

[MINI] Reinforcement Learning

2/9/2018
In many real world situations, a person or agent doesn't necessarily know their own objectives or the mechanics of the world they're interacting with. However, if the agent receives rewards correlated with both their actions and the state of the world, then reinforcement learning can be used to discover behaviors that maximize the reward earned.

Duration:00:29:06
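
As a rough sketch of the idea (not code from the episode), here is tabular Q-learning on a tiny invented corridor world; the environment, reward, and hyperparameters are all assumptions chosen for simplicity.

```python
import random

# Tabular Q-learning on a made-up 4-state corridor: the agent is only told
# the reward (1 at the rightmost state), yet it learns to move right.
n_states = 4
actions = [-1, +1]                       # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration

def step(state, action):
    nxt = min(max(state + action, 0), n_states - 1)
    reward = 1.0 if nxt == n_states - 1 else 0.0      # reward only at the goal
    return nxt, reward, nxt == n_states - 1

for _ in range(500):                     # training episodes
    s = 0
    for _ in range(200):                 # cap on steps per episode
        if random.random() < epsilon:
            a = random.choice(actions)                       # explore
        else:
            a = max(actions, key=lambda act: Q[(s, act)])    # exploit
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if done:
            break

# Learned greedy action for each non-terminal state (ideally +1, i.e. move right).
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)})
```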

Evolutionary Computation

2/2/2018
In this week's episode, Kyle is joined by Risto Miikkulainen, a professor of computer science and neuroscience at the University of Texas at Austin. They talk about evolutionary computation, its applications in deep learning, and how it's inspired by biology. They also discuss some of the things Sentient Technologies is working on in stocks and finance, retail, e-commerce, and web design, as well as the technology behind it all: evolutionary algorithms.

Duration:00:24:48
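
For listeners who want to see the basic mechanics, here is a toy evolutionary algorithm evolving a bit string toward all ones; it is a generic illustration, not the neuroevolution work discussed in the episode.

```python
import random

# Toy evolutionary computation: evolve a bit string toward all ones ("OneMax").
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.05

def fitness(genome):
    return sum(genome)                      # count of 1-bits

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep the fitter half as parents, then refill with mutated offspring.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```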

[MINI] Markov Decision Processes

1/26/2018
Formally, an MDP is defined as the tuple containing states, actions, the transition function, and the reward function. This podcast examines each of these and presents them in the context of simple examples. Despite MDPs suffering from the curse of dimensionality, they're a useful formalism and a basic concept we will expand on in future episodes.

Duration:00:20:24
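
A minimal worked version of that tuple, with value iteration run on top of it; the two-state weather world, its probabilities, and its rewards are invented purely for illustration.

```python
# The MDP tuple from the episode, written out for a made-up two-state world:
# S (states), A (actions), T (transition probabilities), R (rewards).
S = ["sunny", "rainy"]
A = ["walk", "drive"]

# T[s][a][s'] = P(s' | s, a)  -- invented numbers for illustration
T = {"sunny": {"walk":  {"sunny": 0.8, "rainy": 0.2},
               "drive": {"sunny": 0.9, "rainy": 0.1}},
     "rainy": {"walk":  {"sunny": 0.3, "rainy": 0.7},
               "drive": {"sunny": 0.5, "rainy": 0.5}}}

# R[s][a] = immediate reward for taking action a in state s
R = {"sunny": {"walk": 2.0, "drive": 1.0},
     "rainy": {"walk": -1.0, "drive": 0.5}}

gamma = 0.9                               # discount factor

# Value iteration: repeatedly apply the Bellman optimality update.
V = {s: 0.0 for s in S}
for _ in range(100):
    V = {s: max(R[s][a] + gamma * sum(T[s][a][s2] * V[s2] for s2 in S)
                for a in A)
         for s in S}

# Greedy policy with respect to the converged values.
policy = {s: max(A, key=lambda a: R[s][a] +
                 gamma * sum(T[s][a][s2] * V[s2] for s2 in S))
          for s in S}
print(V, policy)
```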

Neuroscience Frontiers

1/19/2018
Last week on Data Skeptic, we visited the Laboratory of Neuroimaging, or LONI, at USC and learned about their data-driven platform that enables scientists from all over the world to share, transform, store, manage and analyze their data to understand neurological diseases better. We talked about how neuroscientists measure the brain using data from MRI scans, and how that data is processed and analyzed to understand the brain. This week, we'll continue the second half of our two-part...

Duration:00:33:31

Neuroimaging and Big Data

1/12/2018
Last year, Kyle had a chance to visit the Laboratory of Neuroimaging, or LONI, at USC, and learn about how some researchers are using data science to study the function of the brain. We’re going to be covering some of their work in two episodes on Data Skeptic. In this first part of our two-part episode, we'll talk about the data collection and brain imaging and the LONI pipeline. We'll then continue our coverage in the second episode, where we'll talk more about how researchers can gain...

Duration:00:26:40

The Agent Model of Artificial Intelligence

1/5/2018
In artificial intelligence, the term 'agent' is used to mean an autonomous, thinking entity with the ability to interact with its environment. An agent could be a person or a piece of software. In either case, we can describe aspects of the agent in a standard framework.

Duration:00:22:23
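
A bare-bones rendering of that framework is the perceive/decide/act loop below; the thermostat environment and class names are hypothetical, not anything prescribed in the episode.

```python
import random

# A minimal version of the standard agent framework: an agent repeatedly
# perceives its environment, chooses an action, and acts. The thermostat
# example is invented purely for illustration.
class Environment:
    def __init__(self):
        self.temperature = 15.0

    def percept(self):
        return self.temperature                      # what the agent can observe

    def apply(self, action):
        if action == "heat_on":
            self.temperature += 1.0
        self.temperature += random.uniform(-0.5, 0.5)   # outside disturbance

class ThermostatAgent:
    def act(self, percept):
        # Simple reflex rule mapping percepts to actions.
        return "heat_on" if percept < 20.0 else "heat_off"

env, agent = Environment(), ThermostatAgent()
for step in range(10):
    p = env.percept()
    action = agent.act(p)
    env.apply(action)
    print(f"step {step}: temp={p:.1f}, action={action}")
```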

Artificial Intelligence a Podcast Approach

12/29/2017
This episode kicks off the next theme on Data Skeptic: artificial intelligence. Kyle discusses what's to come for the show in 2018, why this topic is relevant, and how we intend to cover it.

Duration:00:40:10

Holiday reading 2017

12/22/2017
We break format from our regular programming today and bring you an excerpt from Max Tegmark's book "Life 3.0". The first chapter is a short story titled "The Tale of the Omega Team". Audio excerpted courtesy of Penguin Random House Audio from LIFE 3.0 by Max Tegmark, narrated by Rob Shapiro. You can find "Life 3.0" at your favorite bookstore and the audio edition via penguinrandomhouseaudio.com. Kyle will be giving a talk at the Monterey County SkeptiCamp 2018.

Duration:00:12:37

Complexity and Cryptography

12/15/2017
This week, our host Kyle Polich is joined by guest Tim Henderson from Google to talk about the computational complexity foundations of modern cryptography and the complexity issues that underlie the field. A key question that arises during the discussion is whether we should trust the security of modern cryptography.

Duration:00:35:51

Mercedes Benz Machine Learning Research

12/14/2017
This episode features an interview with Rigel Smiroldo recorded at NIPS 2017 in Long Beach, California. We discuss data privacy, machine learning use cases, model deployment, and end-to-end machine learning.

Duration:00:27:04

[MINI] Parallel Algorithms

12/8/2017
When computers became commodity hardware and storage became incredibly cheap, we entered the era of so-called "big" data. Most definitions of big data include something about not being able to process all the data on a single machine. Distributed computing is required for such large datasets. Getting an algorithm to run on data spread out over a variety of different machines introduces new challenges for designing large-scale systems. First, there are concerns about the best strategy...

Duration:00:20:35
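
As a small taste of the split/process/combine pattern behind such systems, here is a word count spread across local processes using Python's multiprocessing module; real big-data platforms apply the same idea across many machines, and the sample documents are made up.

```python
# Split the data, process the pieces in parallel, combine the results:
# a word count across local processes with Python's multiprocessing module.
from multiprocessing import Pool
from collections import Counter

def count_words(chunk):
    return Counter(chunk.split())               # "map" step: count one chunk

if __name__ == "__main__":
    documents = ["the quick brown fox", "the lazy dog",
                 "the quick dog jumps", "brown dog sleeps"]
    with Pool(processes=4) as pool:
        partial_counts = pool.map(count_words, documents)   # run chunks in parallel
    total = sum(partial_counts, Counter())                  # "reduce" step: merge
    print(total.most_common(3))
```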

Quantum Computing

12/1/2017
In this week's episode, Scott Aaronson, a professor at the University of Texas at Austin, explains what a quantum computer is, various possible applications, the types of problems they are good at solving and much more. Kyle and Scott have a lively discussion about the capabilities and limits of quantum computers and computational complexity.

Duration:00:47:48

Azure Databricks

11/28/2017
I sat down with Ali Ghodsi, CEO and founder of Databricks, and John Chirapurath, GM for Data Platform Marketing at Microsoft, to discuss the recent announcement of Azure Databricks. When I heard about the announcement, my first thoughts were two-fold. First, the possibility of optimized integrations with existing Azure services. This would be a big benefit to heavy Azure users who also want to use Spark. Second, the benefits of Active Directory to control Databricks access for large...

Duration:00:28:25

[MINI] Exponential Time Algorithms

11/24/2017
In this episode we discuss the complexity class EXP-Time, which contains problems whose algorithms require $O(2^{p(n)})$ time to run. In other words, the worst case runtime is exponential in some polynomial of the input size. Problems in this class are even more difficult than problems in NP, since you can't even verify a solution in polynomial time. We mostly discuss Generalized Chess as an intuitive example of a problem in EXP-Time. Another well-known problem is determining if a given...

Duration:00:16:43

P vs NP

11/17/2017
In this week's episode, host Kyle Polich interviews author Lance Fortnow about whether P will ever be equal to NP and solve all of life’s problems. Fortnow begins the discussion with the example question: Are there 100 people on Facebook who are all friends with each other? Even if you were an employee of Facebook and had access to all its data, answering this question naively would require checking more possibilities than any computer, now or in the future, could possibly do. The P/NP...

Duration:00:40:21
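
To see why the naive approach is hopeless, the sketch below first counts the candidate groups of 100 (using a round two-billion-user figure purely for scale) and then shows the brute-force check itself on a tiny invented friendship graph.

```python
from math import comb
from itertools import combinations

# Number of distinct groups of 100 people among (roughly) two billion users --
# the figure is just a round illustration of the scale involved.
print(len(str(comb(2_000_000_000, 100))), "digits in the number of groups to check")

# The brute-force idea itself, on a tiny made-up friendship graph:
friends = {("alice", "bob"), ("bob", "carol"), ("alice", "carol"),
           ("carol", "dave")}

def are_friends(a, b):
    return (a, b) in friends or (b, a) in friends

def has_clique(people, k):
    # Try every group of k people and check that all pairs are friends.
    return any(all(are_friends(a, b) for a, b in combinations(group, 2))
               for group in combinations(people, k))

print(has_clique(["alice", "bob", "carol", "dave"], 3))   # True: alice, bob, carol
```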

Sudoku \in NP

11/10/2017
Algorithms with similar runtimes are said to be in the same complexity class. That runtime is measured by how many steps an algorithm takes relative to the input size. The class P contains all algorithms which run in polynomial time (basically, a nested for loop iterating over the input). NP contains problems which seem to require brute force. Brute force search cannot be done in polynomial time, so it seems that problems in NP are more difficult than problems in P. I say it "seems" this...

Duration:00:18:28
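
A sketch of the "easy to verify" half of that story: checking a completed Sudoku grid takes one pass over its rows, columns, and boxes, which is polynomial in the grid size, even though finding a solution appears to require search. The verifier below is an illustration, not code from the episode.

```python
# Sudoku is in NP because a proposed solution is easy to verify: check every
# row, column, and 3x3 box once -- polynomial in the grid size -- even though
# *finding* a solution seems to require searching a huge space.
def is_valid_sudoku(grid):
    """grid: 9x9 list of lists containing the digits 1-9."""
    def ok(group):
        return sorted(group) == list(range(1, 10))

    rows = all(ok(row) for row in grid)
    cols = all(ok([grid[r][c] for r in range(9)]) for c in range(9))
    boxes = all(ok([grid[r][c]
                    for r in range(br, br + 3)
                    for c in range(bc, bc + 3)])
                for br in (0, 3, 6) for bc in (0, 3, 6))
    return rows and cols and boxes

# A known valid completed grid, used only to exercise the verifier.
solved = [[5,3,4,6,7,8,9,1,2],
          [6,7,2,1,9,5,3,4,8],
          [1,9,8,3,4,2,5,6,7],
          [8,5,9,7,6,1,4,2,3],
          [4,2,6,8,5,3,7,9,1],
          [7,1,3,9,2,4,8,5,6],
          [9,6,1,5,3,7,2,8,4],
          [2,8,7,4,1,9,6,3,5],
          [3,4,5,2,8,6,1,7,9]]
print(is_valid_sudoku(solved))   # True
```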

The Computational Complexity of Machine Learning

11/3/2017
In this episode, Professor Michael Kearns from the University of Pennsylvania joins host Kyle Polich to talk about the computational complexity of machine learning, complexity in game theory, and algorithmic fairness. Michael's doctoral thesis gave an early broad overview of computational learning theory, in which he emphasizes the mathematical study of efficient learning algorithms by machines or computational systems. When we look at machine learning algorithms they are almost like...

Duration:00:47:30

Turing Machines

10/27/2017
TMs are a model of computation at the heart of algorithmic analysis. A Turing Machine has two components: an infinitely long piece of tape (memory) with re-writable squares, and a read/write head which is programmed to change its state as it processes the input. This exceptionally simple mechanical computer can compute anything that is intuitively computable, or so says the Church-Turing Thesis. Attempts to make a "better" Turing Machine by adding things like additional tapes can make...

Duration:00:13:53
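
A tiny simulator makes the tape, head, and state components concrete; the transition-table format and the bit-inverting example machine are invented for illustration.

```python
# A tiny Turing Machine simulator: a tape of symbols, a head position, and a
# state, driven by a transition table of the form
#   (state, symbol) -> (new_symbol, move, new_state).
# The example machine below simply inverts a string of 0s and 1s, then halts.
def run_tm(tape, transitions, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        new_symbol, move, state = transitions[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Invert every bit while moving right; halt at the first blank.
invert = {("start", "0"): ("1", "R", "start"),
          ("start", "1"): ("0", "R", "start"),
          ("start", "_"): ("_", "R", "halt")}

print(run_tm("10110", invert))   # -> 01001_
```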
