Learning Machines 101

LM101-069: What Happened at the 2017 Neural Information Processing Systems Conference?

This 69th episode of Learning Machines 101 provides a short overview of the 2017 Neural Information Processing Systems conference, with a focus on the development of methods for teaching learning machines rather than simply training them on examples. In addition, a review of the book “Deep Learning” is provided. #nips2017


LM101-068: How to Design Automatic Learning Rate Selection for Gradient Descent Type Machine Learning Algorithms

This 68th episode of Learning Machines 101 discusses a broad class of unsupervised, supervised, and reinforcement machine learning algorithms which iteratively update their parameter vector by adding a perturbation computed from all of the training data. This process is repeated until a parameter vector is generated which exhibits improved predictive performance. The magnitude of the perturbation at each...
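The iterative perturbation scheme described above can be sketched in a few lines. This is a minimal illustration rather than the episode's actual algorithm: a batch gradient descent step whose step size is selected automatically by backtracking until a sufficient-decrease condition holds. The function names and the toy least-squares data are invented for this sketch.

```python
import numpy as np

def batch_step(theta, grad_fn, loss_fn, eta0=1.0, beta=0.5, c=1e-4):
    """One iteration: perturb theta along the negative gradient computed over
    ALL training data, halving the step size until a sufficient-decrease
    (Armijo) condition holds -- an automatic learning-rate selection rule."""
    g = grad_fn(theta)
    eta = eta0
    while loss_fn(theta - eta * g) > loss_fn(theta) - c * eta * g @ g:
        eta *= beta
        if eta < 1e-12:
            break
    return theta - eta * g

# Toy batch: least-squares regression over three training examples.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
loss = lambda th: np.mean((X @ th - y) ** 2)
grad = lambda th: 2.0 * X.T @ (X @ th - y) / len(y)

theta = np.zeros(2)
for _ in range(200):
    theta = batch_step(theta, grad, loss)
```

Halving the step size until the loss sufficiently improves is one of the simplest automatic learning-rate rules; the episode surveys more sophisticated variants of the same idea.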


LM101-067: How to use Expectation Maximization to Learn Constraint Satisfaction Solutions (Rerun)

In this episode we discuss how to learn to solve constraint satisfaction inference problems. The goal of the inference process is to infer the most probable values for unobservable variables given a collection of probabilistic constraints, and these constraints can themselves be learned from experience. Specifically, an important machine learning method for handling unobservable components of the data, Expectation Maximization, is introduced. Check it out on the podcast website!
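To give a feel for how Expectation Maximization handles unobservable variables, here is a toy sketch (not the episode's constraint-satisfaction model): data points come from one of two clusters, the cluster label is unobservable, and EM alternates between inferring the labels and re-estimating the cluster means. The data and initialization are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Observable data drawn from two clusters; the cluster label of each point
# is the unobservable variable.
data = np.concatenate([rng.normal(-3.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

mu = np.array([-1.0, 1.0])  # initial guesses for the two component means
for _ in range(50):
    # E-step: infer the probability of each unobservable cluster label.
    likelihood = np.stack([np.exp(-0.5 * (data - m) ** 2) for m in mu])
    resp = likelihood / likelihood.sum(axis=0)
    # M-step: re-estimate the means using those soft assignments.
    mu = (resp * data).sum(axis=1) / resp.sum(axis=1)
```

After a few iterations the estimated means land near the true cluster centers even though no point was ever labeled.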


LM101-066: How to Solve Constraint Satisfaction Problems using MCMC Methods (Rerun)

In this episode of Learning Machines 101 we discuss how to solve constraint satisfaction inference problems where knowledge is represented as a large unordered collection of complicated probabilistic constraints among a collection of variables. The goal of the inference process is to infer the most probable values of the unobservable variables given the observable variables. Specifically, Markov chain Monte Carlo (MCMC) methods are discussed.
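A tiny sketch of the MCMC idea, with a constraint network invented for illustration: three binary variables, soft constraints encoded as an energy function, and a Metropolis sampler that visits low-energy (high-probability) configurations most often.

```python
import math
import random

random.seed(0)

# Toy constraint network over three binary variables: every pair "prefers"
# to agree, and variable 0 is softly constrained toward the observed value 1.
# (Variables, constraints, and weights are invented for this sketch.)
def energy(x):
    e = -2.0 * x[0]  # observation constraint: x[0] should be 1
    e += sum(1.0 for i in range(3) for j in range(i + 1, 3) if x[i] != x[j])
    return e

x = [0, 0, 0]
counts = {}
for _ in range(5000):
    i = random.randrange(3)  # propose flipping one variable
    proposal = x[:]
    proposal[i] = 1 - proposal[i]
    # Metropolis rule: always accept downhill moves, sometimes uphill ones.
    if random.random() < min(1.0, math.exp(energy(x) - energy(proposal))):
        x = proposal
    counts[tuple(x)] = counts.get(tuple(x), 0) + 1

most_probable = max(counts, key=counts.get)  # most frequently visited state
```

The sampler spends most of its time in the configuration that best satisfies all the constraints, which is exactly how MCMC turns sampling into inference.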


LM101-065: How to Design Gradient Descent Learning Machines (Rerun)

In this rerun episode we introduce the concept of gradient descent, which is the fundamental principle underlying learning in the majority of deep learning and neural network learning algorithms. Check out the website to obtain a transcript of this episode!
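The entire principle fits in a few lines. Here is a one-parameter toy problem, invented for illustration, showing the gradient descent update rule the episode introduces:

```python
# Minimize f(w) = (w - 4)^2 by repeatedly stepping opposite the gradient.
def gradient(w):
    return 2.0 * (w - 4.0)  # derivative of (w - 4)^2

w, learning_rate = 0.0, 0.1
for _ in range(100):
    w -= learning_rate * gradient(w)  # the gradient descent update rule
```

Each step moves w a little downhill, and the same update rule, applied to millions of parameters at once, is what trains deep networks.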


LM101-064: Stochastic Model Search and Selection with Genetic Algorithms (Rerun)

In this rerun of Episode 24 we explore the concept of evolutionary learning machines: learning machines that reproduce themselves in the hope of evolving into smarter, more capable learning machines. This leads us to the topic of stochastic model search and evaluation. Check out the blog for additional technical references.
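The reproduce-mutate-select cycle can be sketched as a tiny genetic algorithm. Everything here (the bit-string "models", the fitness function, the mutation rate) is invented for illustration of stochastic model search:

```python
import random

random.seed(1)

# Toy stochastic model search: evolve 20-bit "models" toward an all-ones target.
def fitness(genome):
    return sum(genome)  # number of correct (one) bits

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(60):
    # Selection: the fitter half survives; reproduction adds mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
```

No individual knows the target, yet selection pressure plus random mutation steadily drives the population toward it.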


LM101-063: How to Transform a Supervised Learning Machine into a Policy Gradient Reinforcement Learning Machine

This 63rd episode of Learning Machines 101 discusses how to build reinforcement learning machines whose behavior evolves as the learning machines become increasingly smarter with experience, in contrast to machines which acquire knowledge but never use it to modify their actions and behaviors. The essential idea for the construction of such reinforcement learning machines is based upon first developing a supervised learning...
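A minimal sketch of the policy gradient idea, assuming a toy two-action bandit problem invented for this example: the policy is a one-parameter logistic model (the kind of unit you would use in a supervised learning machine), and the REINFORCE update nudges the parameter toward actions that paid off.

```python
import math
import random

random.seed(0)

# Two-action bandit: action 1 always pays reward 1, action 0 pays nothing.
theta, alpha = 0.0, 0.1
for _ in range(2000):
    p1 = 1.0 / (1.0 + math.exp(-theta))      # probability of choosing action 1
    action = 1 if random.random() < p1 else 0
    reward = 1.0 if action == 1 else 0.0
    # Policy gradient (REINFORCE) update: reinforce actions that paid off.
    theta += alpha * reward * (action - p1)  # (action - p1) = d/dtheta log pi

p1_final = 1.0 / (1.0 + math.exp(-theta))
```

Because the machine's own choices generate its training signal, its behavior changes as it learns, which is the behavior-evolves property the episode emphasizes.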


LM101-062: How to Transform a Supervised Learning Machine into a Value Function Reinforcement Learning Machine

This 62nd episode of Learning Machines 101 discusses how to design reinforcement learning machines using your knowledge of how to build supervised learning machines! Specifically, we focus on Value Function Reinforcement Learning Machines, which estimate the unobservable total penalty associated with an episode when only the beginning of the episode is observable. This estimated Value Function can then be used by the learning machine to select a particular...


LM101-061: What happened at the Reinforcement Learning Tutorial? (RERUN)

This is the third in a short series of podcasts providing a summary of events associated with Dr. Golden’s recent visit to the 2015 Neural Information Processing Systems Conference, one of the top conferences in the field of Machine Learning. This episode reviews and discusses topics from the Introduction to Reinforcement Learning with Function Approximation Tutorial presented by Professor Richard Sutton on the first day of the conference. This episode is a RERUN...


LM101-060: How to Monitor Machine Learning Algorithms using Anomaly Detection Machine Learning Algorithms

This 60th episode of Learning Machines 101 discusses how one can use novelty detection or anomaly detection machine learning algorithms to monitor the performance of other machine learning algorithms deployed in real-world environments. The episode is based upon a review of a talk by Chief Data Scientist Ira Cohen of Anodot presented at the 2016 Berlin Buzzwords Data Science Conference. Visit the podcast website to hear the episode or read a transcription!
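One very simple form of the monitoring idea, sketched here with invented names and data: watch the error stream of a deployed model, and flag readings far from the established baseline as anomalies (a z-score novelty detector). This is an illustration of the concept, not Anodot's method.

```python
# A z-score novelty detector watching a deployed model's error stream.
def make_monitor(threshold=3.0, baseline_size=10):
    seen = []

    def check(error):
        if len(seen) >= baseline_size:
            mean = sum(seen) / len(seen)
            var = sum((e - mean) ** 2 for e in seen) / len(seen)
            if abs(error - mean) / max(var ** 0.5, 1e-9) > threshold:
                return True  # anomalous; don't fold it into the baseline
        seen.append(error)
        return False

    return check

monitor = make_monitor()
normal = [monitor(e) for e in [0.10, 0.12, 0.09, 0.11, 0.10,
                               0.13, 0.08, 0.11, 0.12, 0.10]]
alarm = monitor(0.9)  # a sudden large error after a stable baseline
```

The monitored model never has to announce that it is degrading; the anomaly detector infers it from the shape of the error stream alone.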


LM101-059: How to Properly Introduce a Neural Network

I discuss the concept of a “neural network” by providing some examples of recent successes in neural network machine learning algorithms and providing a historical perspective on the evolution of the neural network concept from its biological origins. For more details, visit our website.


LM101-058: How to Identify Hallucinating Learning Machines using Specification Analysis

In this 58th episode of Learning Machines 101, I’ll be discussing an important new scientific breakthrough published just last week in the journal Econometrics, in the special issue on model misspecification, titled “Generalized Information Matrix Tests for Detecting Model Misspecification”. The article provides a unified theoretical framework for the development of a wide range of methods for determining if a learning machine is capable of learning its statistical...


LM101-057: How to Catch Spammers using Spectral Clustering

In this 57th episode, we explain how to use unsupervised machine learning algorithms to catch internet criminals who try to steal your money electronically! Check it out on the podcast website!


LM101-056: How to Build Generative Latent Probabilistic Topic Models for Search Engine and Recommender System Applications

In this NEW episode we discuss Latent Semantic Indexing-type machine learning algorithms which have a PROBABILISTIC interpretation. We explain why such a probabilistic interpretation is important and discuss how such algorithms can be used in the design of document retrieval systems, search engines, and recommender systems. Check out the podcast website and follow us on Twitter at @lm101talk.


LM101-055: How to Learn Statistical Regularities using MAP and Maximum Likelihood Estimation (Rerun)

In this rerun of Episode 10, we discuss fundamental principles of learning in statistical environments, including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities. In particular, the episode introduces fundamental machine learning concepts such as probability models, model misspecification, maximum likelihood estimation, and MAP estimation. Visit the podcast website for more details.
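The difference between maximum likelihood and MAP estimation fits in a few lines of arithmetic. Here is a toy coin-flipping example, invented for illustration, showing how a prior pulls the MAP estimate toward prior knowledge when data is scarce:

```python
# Estimating a coin's heads probability from a small sample.
heads, flips = 3, 4

# Maximum likelihood estimate: just the observed frequency.
ml_estimate = heads / flips  # 0.75

# MAP estimate with a Beta(5, 5) prior encoding the prior knowledge that
# coins are usually close to fair (posterior mode of a Beta-Binomial model).
a, b = 5.0, 5.0
map_estimate = (heads + a - 1.0) / (flips + a + b - 2.0)  # 7/12, about 0.583
```

With only four flips, the MAP estimate sits between the raw frequency and the prior belief of fairness; as the sample grows, the two estimates converge.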


LM101-053: How to Enhance Learning Machines with Swarm Intelligence (Particle Swarm Optimization)

In this 53rd episode of Learning Machines 101, we introduce the concept of Swarm Intelligence in the context of Particle Swarm Optimization algorithms. The essential idea of “Swarm Intelligence” is that you have a group of individual entities which behave in a coordinated manner, yet there is no master control center providing directions to all of the individuals in the group. The global group behavior is an “emergent property” of local interactions among individuals in the group! We will...
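A minimal Particle Swarm Optimization sketch, with the objective function and all parameter values invented for illustration: each particle follows only its own memory and the best position any group member has found, yet the swarm as a whole homes in on the minimum with no central controller.

```python
import random

random.seed(0)

# Minimize f(x) = x^2 with a swarm of ten particles.
def f(x):
    return x * x

particles = [{"x": random.uniform(-10.0, 10.0), "v": 0.0} for _ in range(10)]
for p in particles:
    p["best"] = p["x"]
global_best = min((p["x"] for p in particles), key=f)

for _ in range(100):
    for p in particles:
        r1, r2 = random.random(), random.random()
        # Velocity blends inertia, pull toward the particle's own best
        # position, and pull toward the group's best position.
        p["v"] = (0.5 * p["v"]
                  + 1.5 * r1 * (p["best"] - p["x"])
                  + 1.5 * r2 * (global_best - p["x"]))
        p["x"] += p["v"]
        if f(p["x"]) < f(p["best"]):
            p["best"] = p["x"]
        if f(p["x"]) < f(global_best):
            global_best = p["x"]
```

The convergence toward x = 0 is emergent: no line of code steers the whole swarm, only each particle's local update rule.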


LM101-052: How to Use the Kernel Trick to Make Hidden Units Disappear

Today, we discuss a simple yet powerful idea which became popular in the machine learning literature in the 1990s, called “The Kernel Trick”. The basic idea of the “Kernel Trick” is that you specify similarity relationships among input patterns rather than a recoding transformation in order to solve a nonlinear problem with a linear learning machine. It's a great magic trick... check out the podcast website, where you can obtain transcripts of this episode and download free...
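Here is the trick in miniature, using a classifier rule invented for this sketch: instead of recoding inputs with hidden units, compare them through an RBF similarity (kernel) function. XOR, which no linear machine can solve on the raw inputs, becomes solvable through similarities alone.

```python
import math

# RBF kernel: similarity between two input patterns.
def rbf(u, v, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]  # XOR labels

def predict(x):
    # Score each class by its total kernel similarity to x; no hidden
    # units and no explicit recoding of the inputs anywhere.
    s0 = sum(rbf(x, xi) for xi, yi in zip(X, y) if yi == 0)
    s1 = sum(rbf(x, xi) for xi, yi in zip(X, y) if yi == 1)
    return 0 if s0 > s1 else 1

predictions = [predict(x) for x in X]
```

The hidden units have "disappeared": all the nonlinearity lives inside the kernel's similarity computation.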


LM101-051: How to Use Radial Basis Function Perceptron Software for Supervised Learning [Rerun]

This particular podcast is a RERUN of Episode 20 and describes step by step how to download free software which can be used to make predictions using a feedforward artificial neural network whose hidden units are radial basis functions. This is essentially a nonlinear regression modeling problem. We show that the performance of this nonlinear learning machine is substantially better on the test data set than that of the linear learning machine software presented in Episode 13. Basically, performance for the...


LM101-050: How to Use Linear Machine Learning Software to Make Predictions (Linear Regression Software)[RERUN]

In this episode we will explain how to download and use free machine learning software from the website. This podcast is concerned with the very practical issues associated with downloading and installing machine learning software on your computer. If you follow these instructions, by the end of this episode you will have installed one of the simplest (yet most widely used) machine learning algorithms on your computer. You can then use the software to make...


LM101-049: How to Experiment with Lunar Lander Software

In this episode we continue the discussion of learning when the actions of the learning machine can alter the characteristics of the learning machine’s statistical environment. We describe how to download free lunar lander software so you can experiment with an autopilot for a lunar lander module that learns from its experiences, and we describe the results of some simulation studies. To learn more, visit the podcast website to download the free lunar lander software, which illustrates...

