
The MLSecOps Podcast

Technology Podcasts

Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a., MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today. Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.

Location:

Seattle, WA

Twitter:

@mlsecops

Language:

English

Contact:

(360) 333-1319


Episodes

AI Governance Essentials: Empowering Procurement Teams to Navigate AI Risk

11/7/2024
Full transcript with links to resources available at https://mlsecops.com/podcast/ai-governance-essentials-empowering-procurement-teams-to-navigate-ai-risk. In this episode of the MLSecOps Podcast, Charlie McCarthy from Protect AI sits down with Dr. Cari Miller to discuss the evolving landscapes of AI procurement and governance. Dr. Miller shares insights from her work with the AI Procurement Lab and ForHumanity, delving into the essential frameworks and strategies needed to mitigate risks in AI acquisitions. They cover the AI Procurement Risk Management Framework, practical ways to ensure transparency and accountability, and how the September 2024 OMB Memo M-24-18 is guiding AI acquisition in government. Dr. Miller also emphasizes the importance of cross-functional collaboration and AI literacy to support responsible AI procurement and deployment in organizations of all types.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
- Protect AI Guardian: Zero Trust for ML Models
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: Open Source Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:37:41


Crossroads: AI, Cybersecurity, and How to Prepare for What's Next

10/29/2024
In this episode of the MLSecOps Podcast, Distinguished Engineer Nicole Nichols from Palo Alto Networks joins host and Machine Learning Scientist Mehrin Kiani to explore critical challenges in AI and cybersecurity. Nicole shares her unique journey from mechanical engineering to AI security, her thoughts on the importance of clear AI vocabularies, and the significance of bridging disciplines in securing complex systems. They dive into the nuanced definitions of AI fairness and safety, examine emerging threats like LLM backdoors, and discuss the rapidly evolving impact of autonomous AI agents on cybersecurity defense. Nicole’s insights offer a fresh perspective on the future of AI-driven security, teamwork, and the growth mindset essential for professionals in this field.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
- Protect AI Guardian: Zero Trust for ML Models
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: Open Source Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:33:15


AI Beyond the Hype: Lessons from Cloud on Risk and Security

10/1/2024
On this episode of the MLSecOps Podcast, we’re bringing together two cybersecurity legends. Our guest is the inimitable Caleb Sima, who joins us to discuss security considerations for building and using AI, drawing on his 25+ years of cybersecurity experience. Caleb's impressive journey includes co-founding two security startups acquired by HP and Lookout, serving as Chief Security Officer at Robinhood, and currently leading cybersecurity venture studio WhiteRabbit & chairing the Cloud Security Alliance AI Safety Initiative. Hosting this episode is Diana Kelley (CISO, Protect AI), an industry powerhouse with a long career dedicated to cybersecurity and a longtime host on this show. Together, Caleb and Diana share a thoughtful discussion full of unique insights for the MLSecOps Community of learners.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:41:06


Generative AI Prompt Hacking and Its Impact on AI Security & Safety

9/18/2024
Welcome to Season 3 of the MLSecOps Podcast, brought to you by Protect AI! In this episode, MLSecOps Community Manager Charlie McCarthy speaks with Sander Schulhoff, co-founder and CEO of Learn Prompting. Sander discusses his background in AI research, focusing on the rise of prompt engineering and its critical role in generative AI. He also shares insights into prompt security, the creation of LearnPrompting.org, and its mission to democratize prompt engineering knowledge. This episode also explores the intricacies of prompting techniques, "prompt hacking," and the impact of competitions like HackAPrompt on improving AI safety and security.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:31:59


The MLSecOps Podcast Season 2 Finale

9/6/2024
This compilation contains highlights from episodes throughout Season 2 of the MLSecOps Podcast, and it's a great one for community members who are new to the show. If a clip from this highlights reel is especially interesting to you, note the name of the original episode it came from and check out that full-length episode for a deeper dive. Extending enormous thanks to everyone who has supported this show, including our audience, Protect AI hosts, and stellar expert guests. Stay tuned for Season 3!

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:40:54


Exploring Generative AI Risk Assessment and Regulatory Compliance

7/25/2024
In this episode of the MLSecOps Podcast we have the honor of talking with David Rosenthal, Partner at VISCHER (Swiss Law, Tax & Compliance). David is also an author and former software developer, and lectures at ETH Zürich and the University of Basel. He has more than 25 years of experience in data and technology law and kindly joined the show to discuss a variety of AI regulation topics, including the EU Artificial Intelligence Act, generative AI risk assessment, and challenges related to organizational compliance with upcoming AI regulations.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:37:37


MLSecOps Culture: Considerations for AI Development and Security Teams

7/3/2024
In this episode, we had the pleasure of welcoming Co-Founder and CISO of Weights & Biases, Chris Van Pelt, to the MLSecOps Podcast. Chris discusses a range of topics with hosts Badar Ahmed and Diana Kelley, including the history of how W&B was formed, building a culture of security and knowledge sharing across teams in an organization, real-world ML and GenAI security concerns, data lineage and tracking, and upcoming features in the Weights & Biases platform for enhancing security.

More about our guest speaker: Chris Van Pelt is a co-founder of Weights & Biases, a developer-focused MLOps platform. In 2009, Chris founded Figure Eight/CrowdFlower. Over the past 12 years, Chris has dedicated his career to optimizing ML workflows and teaching ML practitioners, making machine learning more accessible to all. Chris has worked as a studio artist, computer scientist, and web engineer. He studied both art and computer science at Hope College.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:38:44


Practical Offensive and Adversarial ML for Red Teams

6/17/2024
Next on the MLSecOps Podcast, we have the honor of highlighting one of our MLSecOps Community members and Dropbox™ Red Teamers, Adrian Wood. Adrian joined Protect AI threat researchers Dan McInerney and Marcello Salvati in the studio to share an array of insights, including what inspired him to create the Offensive ML (aka OffSec ML) Playbook, diving into categories like adversarial machine learning (ML), offensive/defensive ML, and supply chain attacks. The group also discusses dual uses for "traditional" ML and LLMs in the realm of security, the rise of agentic LLMs, and the potential for crown jewel data leakage via model malware (i.e., highly valuable and sensitive data being leaked out of an organization due to malicious software embedded within machine learning models or AI systems).

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
- Protect AI Guardian: Zero Trust for ML Models
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: Open Source Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:35:24


Expert Talk from RSA Conference: Securing Generative AI

5/20/2024
In this episode, host Neal Swaelens (EMEA Director of Business Development, Protect AI) catches up with Ken Huang, CISSP, at RSAC 2024 to talk about security for generative AI.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:25:42


Practical Foundations for Securing AI

5/13/2024
In this episode of the MLSecOps Podcast, we delve into the critical world of security for AI and machine learning with our guest Ron F. Del Rosario, Chief Security Architect and AI/ML Security Lead at SAP ISBN. The discussion highlights the contextual knowledge gap between ML practitioners and cybersecurity professionals, emphasizing the importance of cross-collaboration and foundational security practices. We contrast security for AI with security for traditional software, along with the risk profiles of first-party vs. third-party ML models. Ron sheds light on the significance of understanding your AI system's provenance, having necessary controls, and maintaining audit trails for robust security. He also discusses the "Secure AI/ML Development Framework" initiative that he launched internally within his organization, featuring a lean security checklist to streamline processes. We hope you enjoy this thoughtful conversation!

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:38:10


Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex

4/23/2024
In this episode of the MLSecOps Podcast, host Neal Swaelens, along with co-host Oleksandr Yaremchuk, sits down with special guest Simon Suo, co-founder and CTO of LlamaIndex. Simon shares insights into the development of LlamaIndex, a leading data framework for orchestrating data in large language models (LLMs). Drawing from his background in the self-driving industry, Simon discusses the challenges and considerations of integrating LLMs into various applications, emphasizing the importance of contextualizing LLMs within specific environments.

The conversation delves into the evolution of retrieval-augmented generation (RAG) techniques and the future trajectory of LLM-based applications. Simon comments on the significance of balancing performance with cost and latency in leveraging LLM capabilities, envisioning a continued focus on data orchestration and enrichment.

Addressing LLM security concerns, Simon emphasizes the critical need for robust input and output evaluation to mitigate potential risks. He discusses the potential vulnerabilities associated with LLMs, including prompt injection attacks and data leakage, underscoring the importance of implementing strong access controls and data privacy measures. Simon also highlights the ongoing efforts within the LLM community to address security challenges and foster a culture of education and awareness.

As the discussion progresses, Simon introduces LlamaCloud, an enterprise data platform designed to streamline data processing and storage for LLM applications. He emphasizes the platform's tight integration with the open-source LlamaIndex framework, offering users a seamless transition from experimentation to production-grade deployments. Listeners will also learn about LlamaIndex's parsing solution, LlamaParse.

Join us to learn more about the ongoing journey of innovation in large language model-based applications, while remaining vigilant about LLM security considerations.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:31:04


AI Threat Research: Spotlight on the Huntr Community

3/13/2024
Learn about the world’s first bug bounty platform for AI & machine learning, huntr, including how to get involved! This week’s featured guests are leaders from the huntr community (brought to you by Protect AI):
- Dan McInerney, Lead AI Threat Researcher
- Marcello Salvati, Sr. Engineer & Researcher
- Madison Vorbrich, Community Manager

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:31:48


Securing AI: The Role of People, Processes & Tools in MLSecOps

2/29/2024
In this episode of The MLSecOps Podcast, hosted by Daryan Dehghanpisheh (Protect AI) and special guest-host Martin Stanley, CISSP (Cybersecurity and Infrastructure Security Agency), we delve into critical aspects of AI security and operations. This episode features esteemed guests Gary Givental (IBM) and Kaleb Walton (FICO).

The group's discussion unfolds with insights into the evolving field of Machine Learning Security Operations, aka MLSecOps. A recap of CISA's most recent Secure by Design and Secure AI initiatives sets the stage for a dialogue that explores the parallels between MLSecOps and DevSecOps. The episode goes on to illuminate the challenges of securing AI systems, including data integrity and third-party dependencies. The conversation also turns to the socio-technical facets of AI security, MLSecOps and AI security posture roles within an organization, and the interplay between people, processes, and tools essential to successful MLSecOps implementation.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
- Protect AI Guardian: Zero Trust for ML Models
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: Open Source Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:37:16


ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance

2/27/2024
In this episode, we delve into a hot topic in the bug bounty world: ReDoS (Regular Expression Denial of Service) reports. Inspired by reports submitted by the huntr AI/ML bug bounty community and an insightful blog piece by open source expert William Woodruff (Engineering Director, Trail of Bits), we dissect the dynamics of ReDoS, offering insights into its implications for the bug hunting community and the evolving landscape of security for AI/ML.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:35:30


Finding a Balance: LLMs, Innovation, and Security

2/15/2024
In this episode of The MLSecOps Podcast, special guest Sandy Dunn joins us to discuss the dynamic world of large language models (LLMs) and the equilibrium of innovation and security. Co-hosts Daryan “D” Dehghanpisheh and Dan McInerney talk with Sandy about the nuanced challenges organizations face in managing LLMs while mitigating AI risks. Exploring the swift pace of innovation juxtaposed with the imperative of maintaining robust security measures, the trio examines the critical need for organizations to adapt their security posture management to include considerations for AI usage.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:41:56


Secure AI Implementation and Governance

2/13/2024
In this episode of The MLSecOps Podcast, Nick James, CEO of WhitegloveAI, dives in with show host Chris King, Head of Product at Protect AI, to offer enlightening insights surrounding:
- AI governance
- ISO/IEC 42001:2023 (Information Technology - Artificial Intelligence Management System), from the International Organization for Standardization (ISO)
- Continuous improvement for AI security

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional MLSecOps and AI Security tools and resources to check out:
- Protect AI Radar (https://bit.ly/ProtectAIRadar)
- ModelScan (https://bit.ly/ModelScan)
- Protect AI’s ML Security-Focused Open Source Tools (https://bit.ly/ProtectAIGitHub)
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform (https://bit.ly/aimlhuntr)

Duration:00:38:37


Risk Management and Enhanced Security Practices for AI Systems

2/6/2024
In this episode of The MLSecOps Podcast, VP Security and Field CISO of Databricks, Omar Khawaja, joins the CISO of Protect AI, Diana Kelley. Together, Diana and Omar discuss a new framework for understanding AI risks, fostering a security-minded culture around AI, building the MLSecOps dream team, and some of the challenges that Chief Information Security Officers (CISOs) and other business leaders face when assessing the risk to their AI/ML systems.

Get the scoop on Databricks’ new AI Security Framework on The MLSecOps Podcast. To learn more about the framework, contact cybersecurity@databricks.com.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:38:08


Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations

11/28/2023
In this episode, co-hosts Badar Ahmed and Daryan Dehghanpisheh are joined by Drew Farris (Principal, Booz Allen Hamilton) and Edward Raff (Chief Scientist, Booz Allen Hamilton) to discuss themes from their paper, "You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks," co-authored with Michael Benaroch.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:41:19


From Risk to Responsibility: Violet Teaming in AI; With Guest: Alexander Titus

10/24/2023
In this episode, the founder and CEO of The In Vivo Group, Alexander Titus, joins show hosts Diana Kelley and Daryan Dehghanpisheh to discuss themes from his forward-thinking paper, "The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward," authored with Adam H. Russell.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:43:20


Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems; With Guest: Martin Stanley, CISSP

10/18/2023
This episode is also available in video format on YouTube.

Welcome to Season 2 of The MLSecOps Podcast! In this episode, we joined Strategic Technology Branch Chief Martin Stanley, CISSP, from the Cybersecurity and Infrastructure Security Agency (CISA) to celebrate 20 years of Cybersecurity Awareness Month, and to hear his expert and thoughtful insights about CISA initiatives, partnering with the National Institute of Standards and Technology (NIST) to promote the adoption of their AI Risk Management Framework, AI security and governance, and much more. We are so grateful to Martin for joining us for this enlightening talk!

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard: The Security Toolkit for LLM Interactions
- Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Duration:00:39:45