
The AI Podcast

Technology Podcasts

One person, one interview, one story. Join us as we explore the impact of AI on our world, one amazing person at a time -- from the wildlife biologist tracking endangered rhinos across the savannah here on Earth to astrophysicists analyzing 10 billion-year-old starlight in distant galaxies to the Walmart data scientist grappling with the hundreds of millions of parameters lurking in the retailer’s supply chain. Every two weeks, we’ll bring you another tale, another 25-minute interview, as we build a real-time oral history of AI that’s already garnered nearly 3.4 million listens and been acclaimed as one of the best AI and machine learning podcasts. Listen in and get inspired. https://blogs.nvidia.com/ai-podcast/

Location:

United States

Twitter:

@NvidiaAI

Language:

English

Contact:

650 590 4713


Episodes

ITIF's Daniel Castro on Energy-Efficient AI and Climate Change

3/11/2024
AI-driven change is in the air, as are concerns about the technology’s environmental impact. In this episode of NVIDIA’s AI Podcast, Daniel Castro, vice president of the Information Technology and Innovation Foundation and director of its Center for Data Innovation, speaks with host Noah Kravitz about the motivation behind his AI energy use report, which addresses misconceptions about the technology’s energy consumption. Castro also touches on the need for policies and frameworks that encourage the development of energy-efficient technology. Tune in to discover the crucial role of GPU acceleration in enhancing sustainability and how AI can help address climate change challenges.

Duration:00:33:13

Exploring Filmmaking with Cuebric's AI: Insights from Pinar Seyhan Demirdag - Ep. 214

2/26/2024
In today’s episode of NVIDIA’s AI Podcast, host Noah Kravitz talks with Pinar Seyhan Demirdag, co-founder and CEO of Cuebric. Cuebric is on a mission to offer new solutions in filmmaking and content creation through immersive, two-and-a-half-dimensional cinematic environments. Its AI-powered application aims to help creators quickly bring their ideas to life, making high-quality production more accessible. Demirdag discusses how Cuebric uses generative AI to enable the creation of engaging environments affordably. Listen in to find out about the current landscape of content creation, the role of AI in simplifying the creative process, and Cuebric’s participation in NVIDIA’s GTC technology conference. https://blogs.nvidia.com/blog/pinar-demirdag-cuebric/

Duration:00:33:27

How the Ohio Supercomputer Center Drives the Future of Computing - Ep. 213

2/13/2024
NASCAR races are all about speed, but even the fastest cars need to factor in safety, especially as rules and tracks change. The Ohio Supercomputer Center is ready to help. In this episode of NVIDIA’s AI Podcast, host Noah Kravitz speaks with Alan Chalker, the director of strategic programs at the OSC, about all things supercomputing. The center’s Open OnDemand program, which takes the form of a web-based interface, empowers Ohio higher education institutions and industries with accessible, reliable and secure computational services, along with training and educational programs. Chalker dives into the history and evolution of the OSC, and explains how it’s working with client companies like NASCAR, which is simulating race car designs virtually. Tune in to learn more about Chalker’s outlook on the future of supercomputing and OSC’s role in realizing it.

Duration:00:34:26

Cardiac Clarity: Dr. Keith Channon Talks Revolutionizing Heart Health With AI - Ep. 212

1/27/2024
Here’s some news to still beating hearts: AI is helping bring some clarity to cardiology. Caristo Diagnostics has developed an AI-powered solution for detecting coronary inflammation in cardiac CT scans. In this episode of NVIDIA’s AI Podcast, Dr. Keith Channon, cofounder and chief medical officer at the startup, speaks with host Noah Kravitz about the technology. Called Caristo, it analyzes radiometric features in CT scan data to identify inflammation in the fat tissue surrounding coronary arteries, a key indicator of heart disease. Tune in to learn more about how Caristo uses AI to improve treatment plans and risk predictions by providing physicians with a patient-specific readout of inflammation levels.

Duration:00:33:44

DigitalPath's Ethan Higgins On Using AI to Fight Wildfires - Ep. 211

1/16/2024
DigitalPath is igniting change in the Golden State — using computer vision, generative adversarial networks and a network of thousands of cameras to detect signs of fire in real time. In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with DigitalPath system architect Ethan Higgins about the company’s role in the ALERTCalifornia initiative, a collaboration between California’s wildfire fighting agency CAL FIRE and the University of California, San Diego. DigitalPath built computer vision models to process images collected from network cameras — anywhere from eight to 16 million a day — intelligently identifying signs of fire like smoke. “One of the things we realized early on, though, is that it’s not necessarily a problem about just detecting a fire in a picture,” Higgins said. “It’s a process of making a manageable amount of data to handle.” That’s because, he explained, it’s unlikely that humans will be entirely out of the loop in the detection process for the foreseeable future. The company uses various AI algorithms to classify images based on whether they should be reviewed or acted upon — if so, an alert is sent out to a CAL FIRE command center. There are some downsides to using computer vision to detect wildfires — namely, that extinguishing more fires means a greater buildup of natural fuel and the potential for larger wildfires in the long term. DigitalPath, along with UCSD, is exploring the use of high-resolution LIDAR data to identify where those fuels can be reduced through prescribed burns. Looking ahead, Higgins foresees the field tapping generative AI to accelerate new simulation tools — as well as using AI models to analyze the output of other models to further improve wildfire prediction and detection. “AI is not perfect, but when you couple multiple models together, it can get really close,” he said.
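
For illustration only, here is a minimal sketch of the kind of triage step Higgins describes, in which a detector's confidence decides whether an image is discarded, queued for human review or escalated as an alert. The classifier, thresholds and camera names below are invented placeholders, not DigitalPath's actual pipeline.

# Toy triage sketch: a smoke/fire classifier scores each camera frame, and a
# simple policy decides whether to discard it, send it for human review or
# raise an alert. Thresholds and names are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    smoke_score: float  # classifier confidence between 0.0 and 1.0

ALERT_THRESHOLD = 0.9   # confident enough to notify a command center
REVIEW_THRESHOLD = 0.5  # ambiguous, so queue for a human reviewer

def triage(d: Detection) -> str:
    if d.smoke_score >= ALERT_THRESHOLD:
        return "alert"    # e.g., forward to a CAL FIRE command center
    if d.smoke_score >= REVIEW_THRESHOLD:
        return "review"   # keeps the data volume manageable for humans
    return "discard"

for d in [Detection("ridge-cam-07", 0.95),
          Detection("valley-cam-12", 0.62),
          Detection("coast-cam-03", 0.10)]:
    print(d.camera_id, triage(d))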

Duration:00:22:21

The Case for Generative AI in the Legal Field - Ep. 210

12/19/2023
Thomson Reuters, the global content and technology company, is transforming the legal industry with generative AI. In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Thomson Reuters’ Chief Product Officer David Wong about its potential — and implications. Many of Thomson Reuters’ offerings for the legal industry either address an information retrieval problem or help generate written content. It has an AI-driven digital solution that enables law practitioners to search laws and cases intelligently within different jurisdictions. It also provides AI-powered tools that are set to be integrated with commonly used products like Microsoft 365 to automate the time-consuming processes of drafting and analyzing legal documents. These technologies increase the productivity of legal professionals, enabling them to focus their time on higher-value work. According to Wong, ultimately these tools also have the potential to help deliver better access to justice. To address ethical concerns, the company has created publicly available AI development guidelines, as well as privacy and data protection policies. And it’s participating in the drafting of ethical guidelines for the industries it serves. There’s still a wide range of reactions surrounding AI use in the legal field, from optimism about its potential to fears of job replacement. But Wong underscored that no matter what the outlook, “it is very likely that professionals that use AI are going to replace professionals that don’t use AI.” Looking ahead, Thomson Reuters aims to further integrate generative AI, as well as retrieval-augmented generation techniques, into its flagship research products to help lawyers synthesize, read and respond to complicated technical and legal questions. Recently, Thomson Reuters acquired Casetext, which developed the first AI legal assistant, CoCounsel. In 2024, Thomson Reuters is building on this with the launch of an AI assistant that will be the interface across Thomson Reuters products with GenAI capabilities, including those in other fields such as tax and accounting.

Duration:00:29:43

Wayve CEO Alex Kendall on Making a Splash in Autonomous Vehicles - Ep. 209

12/6/2023
A new era of autonomous vehicle technology, known as AV 2.0, has emerged, marked by large, unified AI models that can control multiple parts of the vehicle stack, from perception and planning to control. Wayve, a London-based autonomous driving technology company and a member of NVIDIA’s startup accelerator program, is riding the wave. In the latest episode of NVIDIA’s AI Podcast, host Katie Burke Washabaugh spoke with the company’s cofounder and CEO, Alex Kendall, about what AV 2.0 means for the future of self-driving cars. Unlike AV 1.0’s focus on perfecting a vehicle’s perception capabilities using multiple deep neural networks, AV 2.0 calls for comprehensive in-vehicle intelligence to drive decision-making in real-world, dynamic environments. Embodied AI — the concept of giving AI a physical interface to interact with the world — is the basis of this new AV wave. Kendall pointed out that it’s a “hardware/software problem — you need to consider these things separately,” even as they work together. For example, a vehicle can have the highest-quality sensors, but without the right software, the system can’t use them to execute the right decisions. Generative AI plays a key role, enabling synthetic data generation so AV makers can use a model’s previous experiences to create and simulate novel driving scenarios. It can “take crowds of pedestrians and snow and bring them together” to “create a snowy, crowded pedestrian scene” that the vehicle has never experienced before. According to Kendall, that will “play a huge role in both learning and validating the level of performance that we need to deploy these vehicles safely” — all while saving time and costs. In June, Wayve unveiled GAIA-1, a generative world model for developing autonomous vehicles. The company also recently announced LINGO-1, an AI model that allows passengers to use natural language to enhance the learning and explainability of AI driving models. Looking ahead, the company hopes to scale and further develop its solutions, improving the safety of AVs to deliver value, build public trust and meet customer expectations. Kendall views embodied AI as playing a definitive role in the future of the AI landscape, pushing pioneers to “build better” and “build further” to achieve the “next big breakthroughs.” For more on NVIDIA’s Inception startup accelerator program, visit https://www.nvidia.com/en-us/startups/

Duration:00:31:45

Afresh Co-Founder Nathan Fenner On How AI Can Help Grocers Manage Supply Chains - Ep. 208

11/20/2023
Talk about going after low-hanging fruit. Afresh is an AI startup that helps grocery stores and retailers reduce food waste by making supply chains more efficient. In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with the company’s cofounder and president, Nathan Fenner, about its mission, offerings and the greater challenge of eliminating food waste. Most supply chain and inventory management offerings targeting grocers and retailers are outdated. Fenner and his team noticed those solutions, built for the nonperishable side of the business, didn’t work as well on the fresh side — creating enormous amounts of food waste and causing billions in lost profits. The team first sought to solve the store-replenishment challenge by developing a platform to help grocers decide how much fresh produce to order to optimize costs while meeting demand. They created machine learning and AI models that could effectively use the data generated by fresh produce, which is messier than data generated by nonperishable goods because of factors like time to decay, greater demand fluctuation and unreliability caused by lack of barcodes, leading to incorrect scans at self-checkout registers. The result was a fully integrated, machine learning-based platform that helps grocers make informed decisions at each node of the operations process. The company also recently launched inventory management software that allows grocers to save time and increase data accuracy by intelligently tracking inventory. That information can be inputted back into the platform’s ordering solution, further refining the accuracy of inventory data. It’s all part of Afresh’s greater mission to tackle climate change. “The most impactful thing we can do is reduce food waste to mitigate climate change,” Fenner said. “It’s really one of the key things that brought me into the business: I think I’ve always had a keen eye to work in the climate space. It’s really motivating for a lot of our team, and it’s a key part of our mission.”

Duration:00:33:00

Co-Founder of Annalise.ai Aengus Tran on Using AI as a Spell Check for Health Checks - Ep. 207

11/5/2023
Clinician-led healthcare AI company Harrison.ai has built an AI system that serves as a “spell checker” for radiologists — flagging critical findings to improve the speed and accuracy of radiology image analysis, reducing misdiagnoses. In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Harrison.ai CEO and cofounder Aengus Tran about the company’s mission to scale global healthcare capacity with autonomous AI systems. Harrison.ai’s initial product, annalise.ai, is an AI tool that automates radiology image analysis to enable faster, more accurate diagnoses. It can produce 124-130 different possible diagnoses and flag key findings to aid radiologists in their final diagnosis. Currently, annalise.ai works for chest X-rays and brain CT scans. While an AI designed for categorizing traffic lights, for example, doesn’t need perfection, medical tools must be highly accurate — any oversight could be fatal. To overcome this challenge, annalise.ai was trained on millions of meticulously annotated images — some were annotated three to five times over before being used for training. Harrison.ai is also developing Franklin.ai, a sibling AI tool aimed at accelerating and improving the accuracy of histopathology diagnosis — in which a clinician performs a biopsy and inspects the tissue for the presence of cancerous cells. Like annalise.ai, Franklin.ai flags critical findings to help pathologists speed up diagnoses and increase their accuracy. Ethical concerns about AI use are ever-rising, but for Tran, the concern is less whether it’s ethical to use AI for medical diagnosis than “actually the converse: Is it ethical to not use AI for medical diagnosis,” especially if “humans using those AI systems simply pick up more misdiagnosis, pick up more cancer and conditions?” Tran also talked about the future of AI systems and suggested that the focus is dual: first, focus on improving preexisting systems and then think of new cutting-edge solutions. And for those looking to break into careers in AI and healthcare, Tran says that the “first step is to decide upfront what problems you’re willing to spend a huge part of your time solving first, before the AI part,” emphasizing that the “first thing is actually to fall in love with some problem.”

Duration:00:31:16

NVIDIA’s Annamalai Chockalingam on the Rise of LLMs - Ep. 206

10/31/2023
Generative AI and large language models (LLMs) are stirring change across industries — but according to NVIDIA Senior Product Manager of Developer Marketing Annamalai Chockalingam, “we’re still in the early innings.” In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Chockalingam about LLMs: what they are, their current state and their future potential. LLMs are a “subset of the larger generative AI movement” that deals with language. They’re deep learning algorithms that can recognize, summarize, translate, predict and generate language. AI has been around for a while, but according to Chockalingam, three key factors enabled LLMs. One is the availability of large-scale data sets to train models with. As more people used the internet, more data became available for use. The second is the development of computer infrastructure, which has become advanced enough to handle “mountains of data” in a “reasonable timeframe.” And the third is advancements in AI algorithms, allowing for non-sequential or parallel processing of large data pools. LLMs can do five things with language: generate, summarize, translate, instruct and chat. With a combination of “these modalities and actions, you can build applications” to solve any problem, Chockalingam said. Enterprises are tapping LLMs to “drive innovation,” “develop new customer experiences,” and gain a “competitive advantage.” They’re also exploring what safe deployment of those models looks like, aiming to achieve responsible development, trustworthiness and repeatability. New techniques like retrieval-augmented generation (RAG) could boost LLM development. RAG involves feeding models with up-to-date “data sources or third-party APIs” to achieve “more appropriate responses” — granting them current context so that they can “generate better” answers. Chockalingam encourages those interested in LLMs to “get your hands dirty and get started” — whether that means using popular applications like ChatGPT or playing with pretrained models in the NVIDIA NGC catalog. NVIDIA offers a full-stack computing platform for developers and enterprises experimenting with LLMs, with an ecosystem of over 4 million developers and 1,600 generative AI organizations. To learn more, register for LLM Developer Day on Nov. 17 to hear from NVIDIA experts about how best to develop applications.
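
As a rough illustration of the retrieval-augmented generation idea Chockalingam describes, the sketch below retrieves the snippets most relevant to a question and folds them into the prompt. The toy embedder and the generate() placeholder are assumptions for illustration, not NVIDIA's or any vendor's API.

# Minimal RAG sketch: embed documents, retrieve the closest ones to a query,
# and prepend them to the prompt. embed() is a toy bag-of-characters encoder
# and generate() is a stand-in for a real LLM call.
import numpy as np

def embed(text: str) -> np.ndarray:
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(documents, key=lambda d: float(q @ embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    return f"[model response grounded in a prompt of {len(prompt)} characters]"

documents = [
    "Q3 revenue grew 12% year over year.",
    "The support portal moved to a new URL in March.",
    "Employee onboarding takes five business days.",
]
question = "How long does onboarding take?"
context = "\n".join(retrieve(question, documents))
print(generate(f"Use only this context:\n{context}\n\nQuestion: {question}"))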

Duration:00:38:32

Making Machines Mindful: NYU Professor Talks Responsible AI - Ep. 205

10/17/2023
Artificial intelligence is now a household term. Responsible AI is hot on its heels. Julia Stoyanovich, associate professor of computer science and engineering at NYU and director of the university’s Center for Responsible AI, wants to make the terms “AI” and “responsible AI” synonymous. In the latest episode of the NVIDIA AI Podcast, host Noah Kravitz spoke with Stoyanovich about responsible AI, her advocacy efforts and how people can help.

Duration:00:35:50

NVIDIA’s Jim Fan Delves Into Large Language Models and Their Industry Impact - Ep. 204

10/2/2023
For NVIDIA Senior AI Scientist Jim Fan, the video game Minecraft served as the “perfect primordial soup” for his research on open-ended AI agents. In the latest AI Podcast episode, host Noah Kravitz spoke with Fan on using large language models to create AI agents — specifically to create Voyager, an AI bot built with GPT-4 that can autonomously play Minecraft. AI agents are models that “can proactively take actions and then perceive the world, see the consequences of its actions, and then improve itself,” Fan said. Many current AI agents are programmed to achieve specific objectives, such as beating a game as quickly as possible or answering a question. They can work autonomously toward a particular output but lack a broader decision-making agency. Fan wondered if it was possible to have a “truly open-ended agent that can be prompted by arbitrary natural language to do open-ended, even creative things.” But he needed a flexible playground in which to test that possibility. “And that’s why we found Minecraft to be almost a perfect primordial soup for open-ended agents to emerge, because it sets up the environment so well,” he said. Minecraft at its core, after all, doesn’t set a specific key objective for players other than to survive and freely explore the open world. That became the springboard for Fan’s project, MineDojo, which eventually led to the creation of the AI bot Voyager. “Voyager leverages the power of GPT-4 to write code in JavaScript to execute in the game,” Fan explained. “GPT-4 then looks at the output, and if there’s an error from JavaScript or some feedback from the environment, GPT-4 does a self-reflection and tries to debug the code.” The bot learns from its mistakes and stores the correctly implemented programs in a skill library for future use, allowing for “lifelong learning.” In-game, Voyager can autonomously explore for hours, adapting its decisions based on its environment and developing skills to combat monsters and find food when needed. “We see all these behaviors come from the Voyager setup, the skill library and also the coding mechanism,” Fan explained. “We did not preprogram any of these behaviors.” He then spoke more generally about the rise and trajectory of LLMs. He foresees strong applications in software, gaming and robotics and increasingly pressing conversations surrounding AI safety. Fan encourages those looking to get involved and work with LLMs to “just do something,” whether that means using online resources or experimenting with beginner-friendly, CPU-based AI models.
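
To make the loop Fan describes concrete, here is a toy sketch of a write-execute-reflect agent: a model proposes a program, the environment runs it, and any error is fed back for another attempt, with working programs stored in a skill library. call_llm() and run_in_game() are placeholders standing in for GPT-4 and the game environment; this is a generic sketch, not Voyager's actual implementation.

# Toy write-execute-reflect agent loop with a skill library.
# call_llm() and run_in_game() are illustrative stubs, not real APIs.
skill_library = {}  # task -> working program, reused on later tasks

def call_llm(prompt: str) -> str:
    # Placeholder for an LLM call that returns a candidate program.
    return "def act(env):\n    return env.get('wood', 0) + 1"

def run_in_game(program: str) -> tuple[bool, str]:
    # Placeholder for executing the program and returning (success, feedback).
    try:
        scope = {}
        exec(program, scope)
        scope["act"]({"wood": 0})
        return True, "ok"
    except Exception as err:
        return False, repr(err)  # error trace becomes feedback for reflection

def solve(task: str, max_attempts: int = 3) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        program = call_llm(f"Task: {task}\nPrevious feedback: {feedback}")
        success, feedback = run_in_game(program)
        if success:
            skill_library[task] = program  # store the working skill for reuse
            return program
    return None

print(solve("collect one wood block"))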

Duration:00:37:38

Anima Anandkumar on Using Generative AI to Tackle Global Challenges - Ep. 203

9/10/2023
Generative AI-based models can not only learn and understand natural languages — they can learn the very language of nature itself, presenting new possibilities for scientific research. Anima Anandkumar, Bren Professor at Caltech and senior director of AI research at NVIDIA, was recently invited to speak at the President’s Council of Advisors on Science and Technology. At the talk, Anandkumar said, generative AI was described as “an inflection point in our lives,” with discussions swirling around how to “harness it to benefit society and humanity through scientific applications.” On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Anandkumar on generative AI’s potential to make waves in the scientific community. It can, for example, be fed DNA, RNA, viral and bacterial data to craft a model that understands the language of genomes. That model can help predict dangerous coronavirus variants to accelerate drug and vaccine research. Generative AI can also predict extreme weather events like hurricanes or heat waves. Even with an AI boost, trying to predict natural events is challenging because of the sheer number of variables and unknowns. However, Anandkumar explains that it’s not just a matter of upsizing language models or adding compute power — it’s also about fine-tuning and setting the right parameters. “Those are the aspects we’re working on at NVIDIA and Caltech, in collaboration with many other organizations, to say, ‘How do we capture the multitude of scales present in the natural world?’” she said. “With the limited data we have, can we hope to extrapolate to finer scales? Can we hope to embed the right constraints and come up with physically valid predictions that make a big impact?” Anandkumar adds that to ensure AI models are responsibly and safely used, existing laws must be strengthened to prevent dangerous downstream applications. She also talks about the AI boom, which is transforming the role of humans across industries, and problems yet to be solved. “This is the research advice I give to everyone: the most important thing is the question, not the answer,” she said.

Duration:00:40:08

Deepdub’s Ofir Krakowski on Redefining Dubbing from Hollywood to Bollywood - Ep. 202

8/29/2023
In the global entertainment landscape, TV show and film production stretches far beyond Hollywood or Bollywood — it’s a worldwide phenomenon. However, while streaming platforms have broadened the reach of content, dubbing and translation technology still has plenty of room for growth. Deepdub acts as a digital bridge, providing access to content by using generative AI to break down language and cultural barriers. On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with the Israel-based startup’s co-founder and CEO, Ofir Krakowski. Deepdub uses AI-driven dubbing to help entertainment companies boost efficiency and cut costs while increasing accessibility. The company is a member of NVIDIA Inception, a free program that offers startups go-to-market support, expertise and technological assistance. Traditional dubbing is slow, costly and often misses the mark, Krakowski says. Current technology struggles with the subtleties of language, leaving jokes, idioms or jargon lost in translation. Deepdub offers a web-based platform that enables people to interact with sophisticated AI models to handle each part of the translation and dubbing process efficiently. It translates the text, generates a voice and mixes it into the original music and audio effects. But as Krakowski points out, even the best AI models make mistakes, so the platform involves a human touchpoint to verify translations and ensure that generated voices sound natural and capture the right emotion. Deepdub is also working on matching lip movements to dubbed voices. Ultimately, Krakowski hopes to free the world from the restrictions placed by language barriers. “I believe that the technology will enable people to enjoy the content that is created around the world,” he said. “It will globalize storytelling and knowledge, which are currently bound by language barriers.” https://blogs.nvidia.com/blog/2023/08/30/deepdub/
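
As a schematic of the translate, synthesize and mix flow described above (with a human checkpoint in the middle), the sketch below chains placeholder functions together. None of these functions are Deepdub's API; they only mark the stages of the pipeline.

# Schematic dubbing flow: translate a line, have a human verify it, synthesize
# a voice, then mix the dialogue over the original music-and-effects track.
# Every function here is an illustrative placeholder.
def translate(line: str, target_lang: str) -> str:
    return f"[{target_lang}] {line}"                  # stand-in for machine translation

def human_review(text: str) -> str:
    return text                                       # stand-in for the human touchpoint

def synthesize_voice(text: str, voice: str) -> bytes:
    return f"<{voice} speaking: {text}>".encode()     # stand-in for text-to-speech

def mix(dialogue: bytes, music_and_effects: bytes) -> bytes:
    return music_and_effects + b" | " + dialogue      # stand-in for audio mixing

me_track = b"<original music and effects>"
line = translate("Break a leg tonight!", "fr")
dubbed = mix(synthesize_voice(human_review(line), voice="narrator"), me_track)
print(dubbed)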

Duration:00:32:37

Replit CEO Amjad Masad on Empowering the Next Billion Software Creators - Ep. 201

8/13/2023
Replit aims to empower the next billion software creators. In this week’s episode of NVIDIA’s AI Podcast, host Noah Kravitz dives into a conversation with Replit CEO Amjad Masad. Masad says the San Francisco-based maker of a software development platform, which came up as a member of NVIDIA’s startup accelerator program, wants to bridge the gap between ideas and software, a task simplified by advances in generative AI. “Replit is fundamentally about reducing the friction between an idea and a software product,” Masad said. The company’s Ghostwriter coding AI has two main features: a code completion model and a chat model. These features not only make suggestions as users type their code, but also provide intelligent explanations of what a piece of code is doing, tracing dependencies and context. The model can even flag errors and offer solutions — like a full collaborator in a Google Docs for code. The company is also developing “make me an app” functionality. This tool allows users to provide high-level instructions to an Artificial Developer Intelligence, which then builds, tests and iterates the requested software. The aim is to make software creation accessible to all, even those with no coding experience. While this feature is still under development, Masad said the company plans to improve it over the next year, potentially having it ready for developers in the next six to eight months. Going forward, Masad envisions a future where AI functions as a collaborator, able to conduct high-level tasks and even manage resources. “We’re entering a period where software is going to feel more alive,” Masad said. “And so I think computing is becoming more humane, more accessible, more exciting, more natural.” For more on NVIDIA’s startup accelerator program, visit https://www.nvidia.com/en-us/startups/

Duration:00:42:29

Codeium’s Varun Mohan and Jeff Wang on Unleashing the Power of AI in Software Development - Ep. 200

7/25/2023
The world increasingly runs on code. Accelerating the work of those who create that code will boost their productivity — and that’s just what AI startup Codeium, a member of NVIDIA’s Inception program for startups, aims to do. On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz interviewed Codeium founder and CEO Varun Mohan and Jeff Wang, the company’s head of business, about the company’s business and how AI is transforming software development. Codeium’s AI-powered code acceleration toolkit boasts three core features: autocomplete, chat and search. Autocomplete intelligently suggests code segments, saving developers time by minimizing the need for writing boilerplate or unit tests. At the same time, the chat function empowers developers to rework or even create code with natural language queries, enhancing their coding efficiency while providing searchable context on the entire code base. Noah spoke with Mohan and Wang about the future of software development with AI, and the continued, essential role of humans in the process.

Duration:00:39:02

MosaicML's Naveen Rao on Making Custom LLMs More Accessible - Ep. 199

7/11/2023
Startup MosaicML is on a mission to help the AI community enhance prediction accuracy, decrease costs, and save time by providing tools for easy training and deployment of large AI models. In this episode of NVIDIA’s AI Podcast, host Noah Kravitz speaks with MosaicML CEO and co-founder Naveen Rao about how the company aims to democratize access to large language models. MosaicML, a member of NVIDIA’s Inception program, has identified two key barriers to widespread adoption: the difficulty of coordinating a large number of GPUs to train a model and the costs associated with this process. Making model training accessible is key for many companies that need to control model behavior, respect data privacy and iterate quickly to develop new products based on AI.

Duration:00:31:26

Matice Founder Jessica Whited on Harnessing Regenerative Species for Medical Breakthroughs - Ep. 198

6/27/2023
Scientists at Matice Biosciences are using AI to study the regeneration of tissues in animals known as super-regenerators, such as salamanders and planarians. The goal of the research is to develop new treatments that will help humans heal from injuries without scarring. On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Jessica Whited, a regenerative biologist at Harvard University and co-founder of Matice Biosciences. https://blogs.nvidia.com/blog/2023/06/21/matice/

Duration:00:39:08

MIT's Anant Agarwal on AI in Education - Ep. 197

6/6/2023
In the latest episode of NVIDIA’s AI Podcast, Anant Agarwal, founder of edX and Chief Platform Officer at 2U, shared his vision for the future of online education and the impact of artificial intelligence in revolutionizing the learning experience. Agarwal, a strong advocate for massive open online courses (MOOCs), discussed the importance of accessibility and quality in education. The MIT professor and renowned edtech pioneer also highlighted the implementation of AI-powered features in the edX platform, including the ChatGPT plugin and edX Xpert, an AI-powered learning assistant.

Duration:00:38:45

How Alex Fielding and Privateer Space Are Taking on Space Debris - Ep. 196

5/17/2023
In this episode of the NVIDIA AI Podcast, host Noah Kravitz dives into an illuminating conversation with Alex Fielding, co-founder and CEO of Privateer Space. Fielding is a tech industry veteran, having previously worked alongside Apple co-founder Steve Wozniak on several projects, and has deep expertise in engineering, robotics, machine learning and AI. Privateer Space, Fielding’s latest venture, aims to address one of the most daunting challenges facing our world today: space debris. The company is creating a data infrastructure to monitor and clean up space debris, ensuring sustainable growth for the budding space economy. In essence, they’re the sanitation engineers of the cosmos. Privateer is also focused on bolstering space accessibility. All of the company’s datasets and those of its partners are being made available through APIs, so users can more easily build space applications related to Earth observation, climate science and more. Privateer Space is a part of NVIDIA Inception, a free program that offers go-to-market support, expertise and technology for AI startups. During the podcast, Fielding shares the genesis of Privateer Space, his journey from Apple to the space industry, and his subsequent work on communication between satellites at different altitudes. He also addresses the severity of space debris, explaining how every launch adds more debris, including minute yet potentially dangerous fragments like frozen propellant and paint chips. https://blogs.nvidia.com/blog/2023/05/23/privateer-space

Duration:00:40:00