Grounded Truth

Technology Podcasts

Location:

United States

Description:

As more organizations adopt AI, we emerge from the wild west of training black-box machine learning models and placing AI into production to "see what happens"! With the emergence of purpose-built AI solutions such as Watchful (insert shameless plug), we are beginning to demystify the deployment of AI. The Grounded Truth Podcast, hosted by John Singleton (co-founder of Watchful), gathers some of the world's most influential data scientists, machine learning practitioners, and innovation leaders for discussions on how we can accelerate our understanding, development, and implementation of cutting-edge machine learning applications in both academia and enterprise.

Language:

English


Episodes

The Future of AI: Doom or Boom? Featuring the Host of The AI FYI Podcast

2/15/2024
Welcome to the "Grounded Truth Podcast," where we bring together some of the brightest minds in AI to explore the most pressing topics shaping our future. Our latest episode, "The Future of AI: Doom or Boom?"—promises to be a riveting discussion. Joining host John Singleton, Co-founder and Head of Success at Watchful, are: Shayan Mohanty, CEO and Co-founder of Watchful, and the podcast "AI FYI" host Andy Butkovic, Joe Cloughley, and Kiran Vajapey. Together, we'll delve into the fascinating world of AI, covering a wide range of topics: • LLMs adoption • AI ethics and cultural impact • AI's transformative effect on traditional industries. • The rapid pace of AI's technological advancement Whether you're a seasoned AI expert or simply curious about its impact, this episode promises something for everyone. Learn more about The AI FYI Podcast by visiting: http://www.aifyipod.com.

Duration:00:42:27

Challenges and Shifts Required for Placing Generative AI into Production

12/14/2023
In this episode of "Grounded Truth," we dive into the world of Generative AI and the complexities of placing it into production. Our special guests for this episode are Manasi Vartak, Founder and CEO of Verta, and Shayan Mohanty, Co-founder and CEO of Watchful. 🌐 Verta: Empowering Gen AI Application Development www.verta.ai🚀 Evolution in the AI Landscape 🤔 Challenges in Gen AI Application Production 🌟 What's Changed Since Chat GPT's Release? 🔮 Predictions for the AI Industry in 2024

Duration:00:37:31

Retrieval Augmented Generation (RAG) versus Fine Tuning in LLM Workflows

12/11/2023
🎙️ RAG vs. Fine Tuning - Dive into the latest episode of "Grounded Truth" hosted by John Singleton as he discusses "Retrieval Augmented Generation (RAG) versus Fine Tuning in LLM Workflows" with Emmanuel Turlay, Founder & CEO of Sematic and Airtrain.ai, and Shayan Mohanty, Co-founder & CEO of Watchful.
🤖 RAG: Retrieval Augmented Generation - RAG involves putting content inside the prompt/context window to make models aware of recent events, private information, or company documents. The process includes retrieving the most relevant information from sources like Bing, Google, or internal databases, feeding it into the model's context window, and generating user-specific responses. Ideal for ensuring factual answers by grounding the model in a specified context.
⚙️ Fine Tuning - Fine tuning entails training models for additional epochs on more data, allowing customization of the model's behavior, tone, or output format. Used to make models act in specific ways, such as speaking like a lawyer or adopting the tone of Harry Potter. Unlike RAG, it focuses more on the form and tone of the output than on knowledge augmentation.
🤔 Decision Dilemma: RAG or Fine Tuning? Emmanuel highlights the misconception that fine tuning injects new knowledge, emphasizing its role in shaping the output according to user needs. RAG is preferred for factual answers, as it extracts information directly from a specified context, ensuring higher accuracy. Fine tuning, on the other hand, is more about customizing the form and tone of the output.
🔄 The Verdict: A Balanced Approach? It's not a one-size-fits-all decision. The choice between RAG and fine tuning depends on the specific use case. Evaluating the decision involves understanding the goal: knowledge augmentation (RAG) or customization of form and tone (fine tuning). Perhaps the answer is a balanced approach, leveraging both techniques based on the desired outcomes.
AirTrain YouTube Channel: https://www.youtube.com/@AirtrainAI
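
For readers who want to see the RAG loop described above in code, here is a minimal, library-free sketch of the retrieve-then-stuff-the-context step. The toy documents, the word-overlap scorer, and the `build_rag_prompt` helper are illustrative assumptions, not anything from Watchful, Sematic, or Airtrain.ai; a production system would use dense embeddings and a vector store.

```python
# A minimal retrieve-then-generate sketch; a real system would use dense
# embeddings and a vector store instead of the word-overlap scoring below.

DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The Q3 all-hands meeting is scheduled for October 12th.",
    "Support is available 24/7 via chat and email.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of query words that appear in the doc."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def build_rag_prompt(query: str) -> str:
    """Stuff the retrieved passages into the model's context window."""
    context = "\n".join(retrieve(query))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    # The resulting prompt would then be sent to an LLM; generation itself is out of scope here.
    print(build_rag_prompt("When is the Q3 all-hands meeting?"))
```

Fine tuning, by contrast, changes the model's weights rather than its context, which is why it shifts tone and output format more than it adds knowledge.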

Duration:00:34:46

Decoding LLM Uncertainties for Better Predictability

11/9/2023
Welcome to another riveting episode of "Grounded Truth"! In this episode, your host John Singleton, co-founder and Head of Success at Watchful, is joined by Shayan Mohanty, CEO of Watchful. Together, they embark on a deep dive into the intricacies of Large Language Models (LLMs). In Watchful's journey through language model exploration, we've uncovered fascinating insights into putting the "engineering" back into prompt engineering. Our latest research focuses on introducing meaningful observability metrics to enhance our understanding of language models. If you'd like to explore on your own, feel free to play with a demo here: https://uncertainty.demos.watchful.io/
Repo can be found here: https://github.com/Watchfulio/uncertainty-demo
💡 What to expect in this episode:
- Recap of our last exploration, where we unveiled the role of perceived ambiguity in LLM prompts and its alignment with the "ground truth."
- Introduction of two critical measures: Structural Uncertainty (using normalized entropy) and Conceptual Uncertainty (revealing internal cohesion through cosine distances).
- Why these measures matter: Assessing predictability in prompts, guiding decisions on fine-tuning versus prompt engineering, and setting the stage for objective model comparisons.
🚀 Join John and Shayan on this quest to make language model interactions more transparent and predictable. The episode aims to unravel complexities, provide actionable insights, and pave the way for a clearer understanding of LLM uncertainties.
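
As a rough illustration of the two measures named above, here is a small sketch of how normalized entropy and average pairwise cosine distance can be computed over repeated completions of the same prompt. The exact formulations used in Watchful's research may differ; see the demo and repo links above for the real implementation.

```python
import math
from collections import Counter

def normalized_entropy(samples: list[str]) -> float:
    """Structural uncertainty sketch: Shannon entropy of the distribution of
    distinct completions, normalized to [0, 1] by the maximum possible entropy."""
    counts = Counter(samples)
    n = len(samples)
    if len(counts) <= 1:
        return 0.0
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(len(counts))

def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def conceptual_uncertainty(embeddings: list[list[float]]) -> float:
    """Conceptual uncertainty sketch: average pairwise cosine distance between
    embeddings of repeated completions (low cohesion -> high uncertainty)."""
    pairs = [(i, j) for i in range(len(embeddings)) for j in range(i + 1, len(embeddings))]
    if not pairs:
        return 0.0
    return sum(cosine_distance(embeddings[i], embeddings[j]) for i, j in pairs) / len(pairs)

# Example: five completions sampled from the same prompt.
completions = ["Paris", "Paris", "Paris", "Lyon", "Paris"]
print(normalized_entropy(completions))  # ~0.72 -> noticeable structural spread
print(conceptual_uncertainty([[1, 0], [0.9, 0.1], [0.8, 0.2]]))  # small -> cohesive
```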

Duration:00:34:32

A Surprisingly Effective Way to Estimate Token Importance in LLM Prompts

10/5/2023
Welcome to another captivating episode of "Grounded Truth." Today, our host, John Singleton, engages in a deep dive into the world of prompt engineering, interpretability in closed-source LLMs, and innovative techniques to enhance transparency in AI models. Joining us as a special guest is Shayan Mohanty, the visionary CEO and co-founder of Watchful. Shayan brings to the table his latest groundbreaking research, which centers around a remarkable free tool designed to elevate the transparency of prompts used with large language models. In this episode, we'll explore Shayan's research, including:
🔍 Estimating token importances in prompts for powerhouse language models like ChatGPT.
🧠 Transitioning from the art to the science of prompt crafting.
📊 Uncovering the crucial link between model embeddings and interpretations.
💡 Discovering intriguing insights through comparisons of various model embeddings.
🚀 Harnessing the potential of embedding quality to influence model output.
🌟 Taking the initial strides towards the automation of prompt engineering.
To witness the real impact of Shayan's research, don't miss the opportunity to experience a live demo at https://heatmap.demos.watchful.io/.
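
One common way to approximate token importance without access to a closed model's internals is leave-one-out ablation: drop a token, re-embed the modified prompt, and see how far it moves. The sketch below illustrates that general idea with a toy character-frequency "embedding"; it is not necessarily the method behind the heatmap demo linked above.

```python
import math

def embed(text: str) -> list[float]:
    """Placeholder embedding: a character-frequency vector. A real experiment
    would use a proper embedding model rather than this toy."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def token_importances(prompt: str) -> dict[str, float]:
    """Leave-one-out sketch: a token is 'important' if removing it moves the
    prompt's embedding far from the original (1 - cosine similarity)."""
    tokens = prompt.split()
    base = embed(prompt)
    scores = {}
    for i, tok in enumerate(tokens):
        ablated = " ".join(tokens[:i] + tokens[i + 1:])
        scores[tok] = 1.0 - cosine_similarity(base, embed(ablated))
    return scores

for token, importance in token_importances("Summarize the quarterly revenue report in three bullet points").items():
    print(f"{token:>12s}  {importance:.4f}")
```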

Duration:00:27:13

Is Data Labeling Dead?

8/11/2023
Dive into the thought-provoking world of data labeling in this episode of the Grounded Truth podcast - "Is Data Labeling Dead?". Hosted by John Singleton and featuring Shayan Mohanty, co-founder and CEO of Watchful, this episode offers a captivating discussion on the changing landscape of data labeling and its intricate relationship with the rise of large language models (LLMs). Uncover the historical journey of data labeling, from its early manual stages to the advent of in-house solutions and automation. Delve into the pivotal question: is traditional data labeling becoming obsolete due to the capabilities of LLMs like GPT-3? While the title suggests a binary perspective, the podcast presents a nuanced exploration, showcasing the evolving nature of data labeling. Discover how LLMs have revolutionized the handling of low-context tasks like sentiment analysis and categorization, reshaping the demand for conventional data labeling services. However, the conversation goes beyond absolutes, shedding light on the transformation of data labeling rather than its demise. This episode of the Grounded Truth podcast underscores that data labeling is far from dead; it is evolving to accommodate the dynamic interplay between LLMs and labeling practices. While LLMs handle routine tasks efficiently, data labeling is pivoting towards high-context labeling, specialized needs, and optimizing workflows for sophisticated model development. Explore the captivating journey of data labeling in this episode, where tradition meets innovation and adaptation guides the way forward.
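
As a concrete illustration of the "low-context task" point above, here is a minimal sketch of using an LLM as a zero-shot sentiment labeler. The `call_llm` function is a hypothetical placeholder for whatever completion client you use; the prompt wording and fallback logic are illustrative only.

```python
# Minimal sketch of LLM-based zero-shot labeling for a low-context task.
LABELS = ["positive", "negative", "neutral"]

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: replace with a real call to your LLM provider."""
    raise NotImplementedError

def label_sentiment(text: str) -> str:
    prompt = (
        "Classify the sentiment of the following text as exactly one of "
        f"{', '.join(LABELS)}. Reply with the label only.\n\nText: {text}"
    )
    answer = call_llm(prompt).strip().lower()
    # Fall back to 'neutral' if the model replies with something unexpected.
    return answer if answer in LABELS else "neutral"

# Higher-context labeling tasks (domain-specific taxonomies, nuanced policies)
# are where human review and programmatic labeling workflows still matter.
```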

Duration:00:41:11

Leveraging Machine Teaching to Build Autonomous Agents

7/17/2023
Welcome to "Grounded Truth," the podcast where we bring together influential data scientists, machine learning practitioners, and innovation leaders to discuss the most relevant topics in AI. I'm John Singleton, your host, and I'm thrilled to be joined by Kence Anderson, CEO and co-founder of Composabl (https://composabl.ai/), a platform for building autonomous intelligent agents. Kence is a seasoned entrepreneur and has a wealth of experience in the AI field, including his work at Microsoft as a principal program manager for AI and research machine teaching innovation. In this episode, we delve into the fascinating world of autonomous agents and their impact on various industries. From autonomous driving to industrial automation, we explore the challenges and advancements in human-like decision-making. Composabl's mission is to empower individuals without deep AI expertise to create intelligent agents using modular building blocks and their own subject matter expertise. Through the concept of machine teaching, users can train agents to make real-time decisions, whether it's controlling a drone, a bulldozer or even optimizing virtual processes in factories or logistics. Joining us as well is Shayan Mohanty, co-founder and CEO of Watchful, the machine teaching platform for data-centric AI. Together, we discuss the distinct roles of perception and action in AI and how they differ. While perception involves perceiving and understanding the environment, action focuses on making decisions and taking appropriate steps. We explore the significance of supervised learning in perception tasks like computer vision and prediction, as well as its limitations in driving actionable outcomes. To learn more from Kence Anderson, be sure to check out his latest publication, "Designing Autonomous AI: A Guide for Machine Teaching," available on O'Reilly (https://www.oreilly.com/library/view/designing-autonomous-ai/9781098110741/) Don't forget to like, subscribe, and follow us on Apple Podcasts, Spotify, YouTube, and other podcast platforms. ‍ Reference: “Dreyfus model of skill acquisition” - https://en.wikipedia.org/wiki/Dreyfus_model_of_skill_acquisition

Duration:00:54:35

The Application of LLMs for Database DevOps

6/13/2023
In this episode of Grounded Truth, we delve into the world of database DevOps. This concept aims to bring the same principles and practices used in application development to the realm of databases. Our guest, Kendra Little, with a diverse background encompassing sysadmin, analyst, product manager, developer advocate, and founder, shares her insights on this topic. Joining us as well is Shayan Mohanty, co-founder and CEO of Watchful. Database DevOps involves treating databases as we treat applications by focusing on version control, automating deployments, tracking changes, and integrating database workflows with software development practices. The goal is to streamline and optimize the management of databases, bringing efficiency and consistency to the process. Furthermore, we explore the potential impact of large language models, such as foundation models, on database DevOps workflows. This intriguing intersection prompts discussions on potential applications and opportunities for leveraging these models in the database domain.
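
As a small illustration of the "treat the database like an application" idea, here is a sketch of a numbered-migration runner that tracks what has already been applied, so deployments are repeatable and changes are versioned. Real teams would reach for a dedicated tool such as Flyway, Liquibase, or Alembic; the table name and migration contents below are made up for the example.

```python
import sqlite3

# Migrations live in version control and are applied strictly in order.
MIGRATIONS = [
    ("001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)"),
    ("002_add_created_at", "ALTER TABLE users ADD COLUMN created_at TEXT"),
]

def migrate(db_path: str = "app.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    for name, sql in MIGRATIONS:
        if name in applied:
            continue  # already deployed; migrations are tracked like commits
        conn.execute(sql)
        conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))
        print(f"applied {name}")
    conn.commit()
    conn.close()

if __name__ == "__main__":
    migrate()
```

An LLM could plausibly assist such a workflow by drafting the SQL for a new migration from a plain-language description of the schema change, with the human reviewing and committing it like any other code.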

Duration:00:39:27

What is Prompt Ensembling?

5/23/2023
In this thought-provoking episode of "Grounded Truth" titled "What is Prompt Ensembling?", host John Singleton, Co-Founder and Head of Success at Watchful, engages in a captivating discussion with Shayan Mohanty, Co-Founder and CEO of Watchful. Together, they delve into the intricacies of embedding LLM (Large Language Model) capabilities in a performant and reasoned manner. They address pressing concerns such as dealing with hallucination, harnessing the full potential of LLMs, and exploring concrete approaches to implementing these models effectively. The episode also explores the necessity of controlling LLMs' output and the significance of techniques like prompt chaining and prompt ensembling. Listeners will gain valuable insights into why this topic remains a crucial issue today, where the boundaries of AI and human interaction converge.
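
For a concrete sense of what prompt ensembling can look like, here is a minimal sketch that phrases the same question several ways and majority-votes the answers. The `call_llm` placeholder and the true/false framing are illustrative assumptions, not a description of Watchful's implementation.

```python
from collections import Counter

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: replace with a real chat/completions API call."""
    raise NotImplementedError

PROMPT_TEMPLATES = [
    "Is the following statement true or false? Answer 'true' or 'false'.\n{claim}",
    "Fact-check this claim and reply with only 'true' or 'false':\n{claim}",
    "You are a careful analyst. Respond 'true' or 'false' only.\nClaim: {claim}",
]

def ensemble_answer(claim: str) -> str:
    votes = []
    for template in PROMPT_TEMPLATES:
        reply = call_llm(template.format(claim=claim)).strip().lower()
        if reply in {"true", "false"}:
            votes.append(reply)
    if not votes:
        return "unknown"
    # Majority vote smooths out individual prompts that hallucinate or drift.
    return Counter(votes).most_common(1)[0][0]
```

Prompt chaining, mentioned alongside ensembling in the episode, differs in that prompts run sequentially, each consuming the previous prompt's output rather than voting in parallel.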

Duration:00:25:39

Engineering with Large Language Models

5/12/2023
In this episode of Grounded Truth, we're diving deep into the world of engineering with large language models, or LLMs. LLMs are rapidly transforming the way we interact with technology. These models have the ability to understand natural language and generate human-like responses, making them incredibly versatile tools for a wide range of applications. We'll be discussing the quirks and potential benefits of productizing LLM-enabled applications, exploring the challenges of building and deploying applications that use LLMs and the considerations that arise when creating products around these powerful tools. Our guests are Watchful's co-founder and CEO, Shayan Mohanty, and Watchful's Head of Engineering, David Stanley.

Duration:00:39:27

Dr. Jennifer Prendki, PhD, Alectio

4/24/2023
In the latest episode of Grounded Truth, John Singleton hosts Dr. Jennifer Prendki, the founder and CEO of Alectio, a company that specializes in data prep ops to help machine learning teams build models in a more cost-effective way. Jennifer, who has previously worked as VP of Machine Learning at Figure Eight and led data science projects at Walmart Labs and Atlassian, discusses the difference between data prep and data prep ops, among other topics. Tune in for this enlightening conversation!

Duration:00:45:45

Dr. Sneha Subramanian, PhD, Salesloft

4/19/2023
Grounded Truth Podcast host John Singleton, co-founder and Head of Success at Watchful, is joined by Dr. Sneha Subramanian, PhD, Director of Data Science at Salesloft, to discuss her recent experience implementing a Generative AI feature within the Salesloft platform and some key lessons learned. Don't miss out on this exciting conversation!

Duration:00:43:24

Dr. Geoffrey Fairchild, PhD - Los Alamos National Laboratory

4/4/2023
In this episode of Grounded Truth, host John Singleton interviews Dr. Geoffrey Fairchild, PhD, Group Leader of the Information Systems and Modeling Group at Los Alamos National Laboratory. They delve into the exciting world of AI and discuss how it is transforming the industry. Dr. Fairchild provides insights into his team's work at Los Alamos, including developing algorithms for anomaly detection and forecasting. They also discuss the importance of transparency and interpretability in today's Large Language Models (LLMs), especially when making decisions that impact human lives. Overall, this episode is a fascinating exploration of the cutting-edge research being done in AI and the ethical considerations that must be addressed when implementing it.

Duration:00:44:49

Grounded Truth Podcast "Hot Take" - Pausing Giant AI Experiments

3/30/2023
In this episode, host John Singleton is joined by Watchful co-founder and CEO Shayan Mohanty to deliver our first HOT TAKE on a trending topic. Today we'll discuss the recent open letter titled "Pause Giant AI Experiments: An Open Letter," posted by the Future of Life Institute and signed by notable figures such as Elon Musk. You can read the full letter here: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ The letter calls for a temporary pause on giant AI experiments. We'll be sharing our initial thoughts on this topic and exploring the importance of building controllable AI systems that can provide the right outcomes given the right inputs. Join us as we dive into this fascinating conversation on the future of AI.

Duration:00:10:52

South Park and ChatGPT: Did They Get It Right?

3/28/2023
In this episode of the Grounded Truth podcast, host John Singleton is joined by the co-founder and CEO of Watchful, Shayan Mohanty, to discuss and review the recent South Park episode titled "Deep Learning," which was co-written by Trey Parker and ChatGPT. They share their initial reactions to the episode's depiction of the impact of large language models (LLMs) and the potential dangers of their use, along with its humorous portrayal of various industry tropes and the collision of art and reality.

Duration:00:39:17

David Wang, Chief Innovation Officer for Wilson Sonsini

3/4/2023
In this episode of Grounded Truth, host John Singleton is joined by David Wang, Chief Innovation Officer for Wilson Sonsini. They discuss the concept of trust, specifically how it is expressed in natural language, how it relates to artificial intelligence and machine learning solutions, how trust is defined, and its subjective nature. They also discuss the dichotomy between automated solutions and individual opinions when it comes to trust, and how trust is conveyed in high-end professional services through brand and social proof of expertise.

Duration:00:35:35

ChatGPT: What's Next?

3/1/2023
ChatGPT is what everyone is talking about, but what impact will it really have on our world? What are the implications for future AI development? How will this change how we leverage AI, from product development to looking up brownie recipes? John Singleton, host of the Grounded Truth Podcast, joins Shayan Mohanty, his co-founder and the CEO of Watchful, in an engaging discussion (along with a series of hot takes) on what to consider when it comes to the chat about ChatGPT.

Duration:00:49:53