The MLSecOps Podcast

Technology Podcasts

Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a., MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today. Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.

Location:

Seattle, WA

Twitter:

@mlsecops

Language:

English

Contact:

(360) 333-1319


Episodes

Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex

4/23/2024
In this episode of the MLSecOps Podcast, host Neal Swaelens and co-host Oleksandr Yaremchuk sit down with special guest Simon Suo, co-founder and CTO of LlamaIndex. Simon shares insights into the development of LlamaIndex, a leading data framework for orchestrating data in large language model (LLM) applications. Drawing from his background in the self-driving industry, Simon discusses the challenges and considerations of integrating LLMs into various applications, emphasizing the importance of contextualizing LLMs within specific environments.

The conversation delves into the evolution of retrieval-augmented generation (RAG) techniques and the future trajectory of LLM-based applications. Simon comments on the significance of balancing performance with cost and latency when leveraging LLM capabilities, envisioning a continued focus on data orchestration and enrichment.

Addressing LLM security concerns, Simon emphasizes the critical need for robust input and output evaluation to mitigate potential risks. He discusses the vulnerabilities associated with LLMs, including prompt injection attacks and data leakage, underscoring the importance of implementing strong access controls and data privacy measures. Simon also highlights the ongoing efforts within the LLM community to address security challenges and foster a culture of education and awareness.

As the discussion progresses, Simon introduces LlamaCloud, an enterprise data platform designed to streamline data processing and storage for LLM applications. He emphasizes the platform's tight integration with the open source LlamaIndex framework, offering users a seamless transition from experimentation to production-grade deployments. Listeners will also learn about LlamaIndex's parsing solution, LlamaParse.

Join us to learn more about the ongoing journey of innovation in large language model-based applications, while remaining vigilant about LLM security considerations.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform
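
For listeners newer to retrieval-augmented generation, the pattern Simon describes can be sketched in a few lines. The sketch below is illustrative only and does not use LlamaIndex's actual API; `embed`, `vector_store`, and `llm_complete` are hypothetical stand-ins for an application's embedding model, vector database, and LLM client.

```python
# Minimal RAG sketch (hypothetical helpers, not LlamaIndex's API):
# ground the LLM's answer in context retrieved for the query.

def answer_with_rag(query, embed, vector_store, llm_complete, top_k=3):
    query_vec = embed(query)                         # 1. Embed the user query.
    docs = vector_store.search(query_vec, k=top_k)   # 2. Retrieve the top-k relevant chunks.
    context = "\n\n".join(doc.text for doc in docs)  # 3. Assemble the retrieved context.
    prompt = (
        "Answer the question using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return llm_complete(prompt)                      # 4. Generate a context-grounded answer.
```

This boundary is also where the input and output evaluation Simon calls for naturally sits: scan the query before retrieval and the completion before returning it.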

Duration: 00:31:04

AI Threat Research: Spotlight on the Huntr Community

3/13/2024
Learn about huntr, the world’s first bug bounty platform for AI & machine learning, including how to get involved! This week’s featured guests are leaders from the huntr community (brought to you by Protect AI):
- Dan McInerney, Lead AI Threat Researcher
- Marcello Salvati, Sr. Engineer & Researcher
- Madison Vorbrich, Community Manager

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:31:48

Securing AI: The Role of People, Processes & Tools in MLSecOps

2/29/2024
In this episode of The MLSecOps Podcast, hosted by Daryan Dehghanpisheh (Protect AI) and special guest host Martin Stanley, CISSP (Cybersecurity and Infrastructure Security Agency), we delve into critical aspects of AI security and operations. This episode features esteemed guests Gary Givental (IBM) and Kaleb Walton (FICO).

The group's discussion unfolds with insights into the evolving field of machine learning security operations, a.k.a. MLSecOps. A recap of CISA's most recent Secure by Design and Secure AI initiatives sets the stage for a dialogue that explores the parallels between MLSecOps and DevSecOps. The episode goes on to illuminate the challenges of securing AI systems, including data integrity and third-party dependencies. The conversation also covers the socio-technical facets of AI security, the MLSecOps and AI security posture roles within an organization, and the interplay between the people, processes, and tools essential to successful MLSecOps implementation.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:37:16

ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance

2/27/2024
In this episode, we delve into a hot topic in the bug bounty world: ReDoS (regular expression denial of service) reports. Inspired by reports submitted by the huntr AI/ML bug bounty community and an insightful blog piece by open source expert William Woodruff (Engineering Director, Trail of Bits), this conversation explores whether these reports represent genuine security findings or a noisy nuisance. Tune in as we dissect the dynamics of ReDoS, offering insights into its implications for the bug hunting community and the evolving landscape of security for AI/ML.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform
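
For context on why these reports exist at all: a ReDoS-vulnerable pattern is one whose worst-case matching time explodes through catastrophic backtracking. The self-contained demo below (not taken from any huntr report) uses the classic pattern `(a+)+$`.

```python
import re
import time

# '(a+)+$' backtracks catastrophically: a run of 'a's can be split between the
# inner and outer '+' in exponentially many ways, and the trailing '!' makes
# every split fail, so the engine tries them all before giving up.
pattern = re.compile(r"(a+)+$")

for n in (18, 20, 22, 24):
    payload = "a" * n + "!"
    start = time.perf_counter()
    pattern.match(payload)  # returns None, but only after exponential work
    print(f"n={n}: {time.perf_counter() - start:.3f}s")  # time grows exponentially with n
```

The fix is usually an equivalent linear pattern (here, `a+$`) or a regex engine without backtracking; the debate in the episode is how much security weight such findings deserve.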

Duration: 00:35:30

Finding a Balance: LLMs, Innovation, and Security

2/15/2024
In this episode of The MLSecOps Podcast, special guest Sandy Dunn joins us to discuss the dynamic world of large language models (LLMs) and the equilibrium of innovation and security. Co-hosts Daryan “D” Dehghanpisheh and Dan McInerney talk with Sandy about the nuanced challenges organizations face in managing LLMs while mitigating AI risks. Exploring the swift pace of innovation juxtaposed with the imperative of maintaining robust security measures, the trio examines the critical need for organizations to adapt their security posture management to include considerations for AI usage.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:41:56

Secure AI Implementation and Governance

2/13/2024
In this episode of The MLSecOps Podcast, Nick James, CEO of WhitegloveAI, dives in with show host Chris King, Head of Product at Protect AI, to offer enlightening insights surrounding:
- AI governance
- ISO/IEC 42001:2023 - Information Technology - Artificial Intelligence Management System, from the International Organization for Standardization (ISO)
- Continuous improvement for AI security

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional MLSecOps and AI security tools and resources to check out:
- Protect AI Radar (https://bit.ly/ProtectAIRadar)
- ModelScan (https://bit.ly/ModelScan)
- Protect AI’s ML Security-Focused Open Source Tools (https://bit.ly/ProtectAIGitHub)
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform (https://bit.ly/aimlhuntr)

Duration: 00:38:37

Risk Management and Enhanced Security Practices for AI Systems

2/6/2024
In this episode of The MLSecOps Podcast, VP Security and Field CISO of Databricks, Omar Khawaja, joins the CISO of Protect AI, Diana Kelley. Together, Diana and Omar discuss a new framework for understanding AI risks, fostering a security-minded culture around AI, building the MLSecOps dream team, and some of the challenges that Chief Information Security Officers (CISOs) and other business leaders face when assessing the risk to their AI/ML systems.

Get the scoop on Databricks’ new AI Security Framework on The MLSecOps Podcast. To learn more about the framework, contact cybersecurity@databricks.com.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:38:08

Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations

11/28/2023
In this episode, co-hosts Badar Ahmed and Daryan Dehghanpisheh are joined by Drew Farris (Principal, Booz Allen Hamilton) and Edward Raff (Chief Scientist, Booz Allen Hamilton) to discuss themes from their paper, "You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks," co-authored with Michael Benaroch.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform
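
As a concrete reference point for the kind of attack the paper weighs against non-ML mitigations, the fast gradient sign method (FGSM) perturbs an input in the direction that most increases the model's loss. The NumPy sketch below applies it to a toy logistic-regression model; it is a generic textbook illustration, not code from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps=0.25):
    """One FGSM step against a logistic model p(y=1|x) = sigmoid(w.x + b).

    For cross-entropy loss, dL/dx = (p - y) * w, so moving x by
    eps * sign(dL/dx) maximally increases the loss under an L-inf budget.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy demo: a point confidently classified as 1 gets nudged toward class 0.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0
x_adv = fgsm_attack(x, y, w, b)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # ~0.82 drops to ~0.68
```

The paper's point, as discussed in the episode, is that defenses against such perturbations need not always live in the model; rate limiting, input validation, and monitoring can manage much of the risk.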

Duration: 00:41:19

From Risk to Responsibility: Violet Teaming in AI; With Guest: Alexander Titus

10/24/2023
In this episode, the founder and CEO of The In Vivo Group, Alexander Titus, joins show hosts Diana Kelley and Daryan Dehghanpisheh to discuss themes from his forward-thinking paper, "The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward," authored with Adam H. Russell.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:43:20

Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems; With Guest: Martin Stanley, CISSP

10/18/2023
*This episode is also available in video format! Click to watch the full YouTube video.*

Welcome to Season 2 of The MLSecOps Podcast! In this episode, we are joined by Strategic Technology Branch Chief Martin Stanley, CISSP, of the Cybersecurity and Infrastructure Security Agency (CISA), to celebrate 20 years of Cybersecurity Awareness Month and to hear his expert and thoughtful insights about CISA initiatives, partnering with the National Institute of Standards and Technology (NIST) to promote the adoption of their AI Risk Management Framework, AI security and governance, and much more. We are so grateful to Martin for joining us for this enlightening talk!

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:39:45

AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 2)

9/21/2023
*This episode is also available in video format! Click to watch the full YouTube video.*

Welcome back, everyone, to The MLSecOps Podcast. We’re thrilled to have you with us for Part 2 of our two-part season finale, as we wrap up Season 1 and look forward to an exciting and revamped Season 2.

In this two-part season recap, we’ve been revisiting some of the most captivating discussions from our first season, offering an overview of essential topics related to AI and machine learning security. Part 1 of this series focused on topics like adversarial machine learning, ML supply chain vulnerabilities, and red teaming for AI/ML. Here in Part 2, we've handpicked standout moments from Season 1 episodes that will take you on a tour through categories such as model provenance; governance, risk, & compliance; and Trusted AI. Our wonderful guests delve into topics like defining responsible AI, bias detection and prevention, model fairness, AI audits, incident response plans, privacy engineering, and the significance of data in MLSecOps.

These episodes have been a testament to the expertise and insights of our fantastic guests, and we're excited to share their wisdom with you once again. Whether you're a long-time listener or joining us for the first time, there's something here for everyone. If you missed the full-length versions of any of these thought-provoking discussions or simply want to revisit them, you can find links to the full episodes and transcripts on our website at www.mlsecops.com/podcast.

Chapters:
0:00 Opening
0:25 Intro by Charlie McCarthy
2:29 S1E9 with Guest Diya Wynn
6:32 S1E4 with Guest Dr. Cari Miller, CMP, FHCA
11:03 S1E17 with Guest Nick Schmidt
16:46 S1E7 with Guest Shea Brown, PhD
22:06 S1E8 with Guest Patrick Hall
26:12 S1E14 with Guest Katharine Jarmul
32:01 S1E13 with Guest Jennifer Prendki, PhD
36:44 S1E18 with Guest Rob van der Veer

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:42:28

AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 1)

9/19/2023
*This episode is also available in video format! Click to watch the full YouTube video.*

Welcome to the final episode of the first season of The MLSecOps Podcast, brought to you by the team at Protect AI. In this two-part episode, we’ll be taking a look back at some favorite highlights from the season, where we dove deep into machine learning security operations.

In this first part, we’ll be revisiting clips related to things like adversarial machine learning; how malicious actors can use AI to fool machine learning systems into making incorrect decisions; supply chain vulnerabilities; and red teaming for AI/ML, including how security professionals might simulate attacks on their own systems to detect and mitigate vulnerabilities.

If you’re new to the show, or if you could use a refresher on any of these topics, this episode is for you, as it’s a great place for listeners to start their learning journey with us and work backwards based on individual interests. And when something in this recap piques your interest, be sure to check out the transcript for links to the full-length episodes where each of these clips came from. You can visit the website and read the transcripts at www.mlsecops.com/podcast.

So now, we invite you to sit back, relax, and enjoy this Season 1 recap of some of the most important MLSecOps topics of the year. And stay tuned for Part 2 of this episode, where we’ll be revisiting MLSecOps conversations surrounding governance, risk, and compliance; model provenance; and Trusted AI. Thanks for listening.

Chapters:
0:00 Opening
0:25 Intro by Charlie McCarthy, MLSecOps Community Leader
2:15 S1E1 with Guest Disesdi Susanna Cox
5:08 S1E2 with Guest Dr. Florian Tramèr
10:16 S1E3 with Guest Pin-Yu Chen, PhD
13:18 S1E5 with Guest Christina Liaghati, PhD
17:59 S1E6 with Guest Johann Rehberger
22:10 S1E10 with Guest Kai Greshake
27:14 S1E11 with Guest Shreya Rajpal
31:45 S1E12 with Guest Apostol Vassilev
36:36 End/Credits

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:37:10

A Holistic Approach to Understanding the AI Lifecycle and Securing ML Systems: Protecting AI Through People, Processes & Technology; With Guest: Rob van der Veer

9/5/2023
Joining us for the first time as a guest host is Protect AI’s CEO and founder, Ian Swanson. Ian is joined this week by Rob van der Veer, a pioneer in AI and security. Rob gave a presentation at Global AppSec Dublin earlier this year called “Attacking and Protecting Artificial Intelligence,” which was a large inspiration for this episode. In it, Rob talks about the lack of security considerations and processes in AI production systems compared to traditional software development, and the unique challenges and particularities of building security into AI and machine learning systems.

Together in this episode, Ian and Rob dive into things like practical threats to ML systems, the transition from MLOps to MLSecOps, the [upcoming] ISO 5338 standard on AI engineering, and what organizations can do if they are looking to mature their AI/ML security practices.

This is a great dialogue and exchange of ideas overall between two super knowledgeable people in this industry. So thank you so much to Ian and to Rob for joining us on The MLSecOps Podcast this week.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:29:25

ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt

8/18/2023
This week we’re talking about the role of fairness in AI/ML. It is becoming increasingly apparent that incorporating fairness into our AI systems and machine learning models, while mitigating bias and potential harms, is a critical challenge. Not only that, it’s a challenge that demands a collective effort to ensure the responsible, secure, and equitable development of AI and machine learning systems. But what does this actually mean in practice? To find out, we spoke with Nick Schmidt, the Chief Technology and Innovation Officer at SolasAI.

In this week’s episode, Nick reviews key principles related to model governance and fairness, from accountability and ownership all the way to model deployment and monitoring. He also discusses real-life examples of machine learning algorithms that have demonstrated bias and disparity, along with how those outcomes could be harmful to individuals or groups. Later in the episode, Nick offers insightful advice for organizations that are assessing their AI security risk related to algorithmic disparities and unfair models.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management (AI Radar)
- ModelScan
- NB Defense
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform
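
To make “measuring algorithmic disparities” concrete, one common starting point is comparing selection rates across groups (demographic parity). The sketch below is a generic illustration, not SolasAI code; the four-fifths (0.8) ratio mentioned in the comment is a widely cited rule of thumb for flagging disparities, not a legal threshold.

```python
import numpy as np

def demographic_parity(preds: np.ndarray, groups: np.ndarray) -> dict:
    """Compare positive-prediction (selection) rates across groups.

    preds:  binary model decisions (1 = favorable outcome)
    groups: group label for each prediction
    """
    rates = {g: float(preds[groups == g].mean()) for g in np.unique(groups)}
    ratio = min(rates.values()) / max(rates.values())  # adverse-impact ratio
    return {"selection_rates": rates, "impact_ratio": ratio}

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])
print(demographic_parity(preds, groups))
# {'a': 0.6, 'b': 0.4} -> impact_ratio ~0.67, below the 0.8 rule of thumb
```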

Duration: 00:35:33

Exploring AI/ML Security Risks: At Black Hat USA 2023 with Protect AI

8/17/2023
Watch the video for this episode at: https://mlsecops.com/podcast/exploring-ai/ml-security-risks-at-black-hat-usa-2023

This episode of The MLSecOps Podcast features expert security leaders who sat down at Black Hat USA 2023 last week with team members from Protect AI to talk about various facets of AI and machine learning security:
- What is the overall state of the AI/ML security realm at this time?
- What is currently the largest threat to AI and machine learning security?
- What is the most important thing we need to focus on at this time in the machine learning security space?
- Plus much more!

Guest Speaker Information:

Adam Nygate
Adam is the founder of huntr.dev (recently acquired by Protect AI), the place to protect open source software, where they not only provide bug and fix bounties, but have built a platform that automates triaging, creating a noise-free environment for maintainers. Adam’s work with the huntr.dev community of over 10k security contributors enabled them to become the 5th largest CNA in the world in 2022. Adam was included in OpenUK’s 2022 Honours List and has been featured in articles by AWS and Nominet, and on podcasts such as Zero Hour and Absolute AppSec.

Daniel Miessler
Daniel is the founder of Unsupervised Learning, a company focused on building products that help companies, organizations, and people identify, articulate, and execute on their purpose in the world. Daniel has over 20 years of experience in cybersecurity and has spent the last several years focused on applying AI to business and human problems.

Adam Shostack
Adam is a leading expert on threat modeling, a consultant, expert witness, and game designer. Check out his fantastic book, "Threats: What Every Engineer Should Learn From Star Wars!"

Christina Liaghati, PhD
Dr. Liaghati is the AI Strategy Execution & Operations Manager, AI & Autonomy Innovation Center at MITRE, and a wealth of knowledge about all things MITRE ATLAS and the ML system attack chain. She is a highly sought-after speaker and a repeat guest on The MLSecOps Podcast!

Phillip Wylie
Phillip is the Director of Services and Training at Scythe.io and a passionate offensive security professional with over 25 years of information technology and cybersecurity experience. His specialties include penetration testing, security vulnerability assessments, application security, and threat and vulnerability management.

Plus special impromptu appearances!

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:35:20

Everything You Need to Know About Hacker Summer Camp 2023

8/3/2023
Welcome back to The MLSecOps Podcast for this week's episode, “Everything You Need to Know About Hacker Summer Camp 2023.”

This week, our show is hosted by Protect AI's Chief Information Security Officer, Diana Kelley, and Diana talks with two more longtime security experts, Chloé Messdaghi and Dan McInerney, about all things related to what the security research community fondly refers to as Hacker Summer Camp. The group discusses various events held throughout the course of this exciting week in Las Vegas, including what to expect at Black Hat [USA 2023] and DEF CON [31].

(1:21) What is Hacker Summer Camp Week and what are the various events and Cons that take place during that time?
(3:58) It’s my first time attending Black Hat, DEF CON, Hacker Summer Camp Week, etc.: where can I find groups to attend with or to help me navigate where to go and what events to attend?
(9:53) Advice: if it’s my first time attending Black Hat, what other advice is there for me? What should I be thinking about?
(13:25) If I attend Black Hat, does that mean I’m automatically able to attend DEF CON, or how does that work? (TL;DR: separate passes are needed for each event.)
(14:14) Are certain personas more welcomed at specific conferences?
(15:49) What are some interesting panel talks we should know about?
(20:53) There are a couple of other conferences going on during “Summer Camp Week” - BSides Las Vegas, Squadcon, Diana Initiative. What are those? When are they taking place? Can I go to all of them? How does that work?
(23:26) What AI/ML security trends are happening? What should I be looking for at Black Hat and DEF CON this year in terms of talks and research?
(28:55) How can I determine if a particular talk is going to be worth my time?
(32:54) Any other tips on how to stay healthy and safe (both physical and electronic safety) throughout the week?

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:38:59

Privacy Engineering: Safeguarding AI & ML Systems in a Data-Driven Era; With Guest Katharine Jarmul

7/12/2023
Welcome to The MLSecOps Podcast, where we dive deep into the world of machine learning security operations. In this episode, we talk with the renowned Katharine Jarmul. Katharine is a Principal Data Scientist at Thoughtworks and the author of the popular new book, Practical Data Privacy. Katharine also writes a blog titled Probably Private, where she writes about data privacy, data security, and the intersection of data science and machine learning.

We cover a lot of ground in this conversation, from the more general data privacy and security risks associated with ML models to more specific cases such as OpenAI’s ChatGPT. We also touch on how GDPR and other regulatory frameworks put a spotlight on the privacy concerns we all have when it comes to the massive amount of data collected by models. Where does the data come from? How is it collected? Who gives consent? What if somebody wants to have their data removed?

We also get into how organizations and professionals such as business leaders, data scientists, and ML practitioners can address these challenges when it comes to risks surrounding data, privacy, security, and reputation, and we explore the practices and processes that need to be implemented in order to integrate “Privacy by Design” into the machine learning lifecycle.

Katharine is a wealth of knowledge and insight into these data privacy issues. As always, thanks for listening to the podcast, for reading the transcript, and for supporting the show in any way you can. With that, we hope you enjoy our conversation with Katharine Jarmul.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform
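
As one small, concrete instance of the “Privacy by Design” practices discussed here, a common early pipeline step is pseudonymizing direct identifiers before data ever reaches training. The sketch below is generic and not from Practical Data Privacy; it uses a keyed hash (HMAC) so records can be linked consistently without storing identifiers in the clear.

```python
import hashlib
import hmac

# The secret key must live outside the dataset (e.g., in a secrets manager);
# without it, mapping a pseudonym back to the identifier is not practical.
SECRET_KEY = b"replace-with-a-key-from-your-secrets-manager"  # placeholder

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed-hash pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age_bucket": "30-39", "label": 1}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)  # email replaced by a pseudonym; other features kept
```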

Duration: 00:46:44

The Intersection of MLSecOps and DataPrepOps; With Guest: Jennifer Prendki, PhD

6/21/2023
On this week’s episode of The MLSecOps Podcast, we have the pleasure of hearing from Dr. Jennifer Prendki, founder and CEO of Alectio - The DataPrepOps Company. Alectio’s name comes from a blend of the acronym “AL,” standing for Active Learning, and the Latin term for the word “selection,” which is “lectio.”

In this episode, Dr. Prendki defines DataPrepOps for us and describes how it contrasts with MLOps, along with how DataPrepOps intersects with MLSecOps best practices. She also discusses data quality, security risks in data science, and the role that data curation plays in helping to mitigate security risks in ML models.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:34:40

The Evolved Adversarial ML Landscape; With Guest: Apostol Vassilev, NIST

6/14/2023
In this episode, we explore the National Institute of Standards and Technology (NIST) white paper, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. The report is co-authored by our guest for this conversation, Apostol Vassilev, NIST Research Team Supervisor. Apostol provides insights into the motivations behind this initiative and the collaborative research methodology employed by the NIST team.

Apostol shares with us that this taxonomy and terminology report is part of the Trustworthy & Responsible AI Resource Center that NIST is developing. Additional tools in the resource center include NIST’s AI Risk Management Framework (RMF), the OECD-NIST Catalogue of AI Tools and Metrics, and another crucial publication that Apostol co-authored called Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.

The conversation then focuses on the evolution of adversarial ML (AdvML) attacks, including prominent techniques like prompt injection attacks, as well as other emerging threats amidst the rise of large language model applications. Apostol discusses the changing AI and computing infrastructure and the scale of defenses required as a result of these changes.

Concluding the episode, Apostol shares thoughts on enhancing ML security practices and invites stakeholders to contribute to the ongoing development of the AdvML taxonomy and terminology white paper. Join us now for a thought-provoking discussion that sheds light on NIST's efforts to further define the terminology of adversarial ML and develop a comprehensive taxonomy of concepts that will aid industry leaders in creating additional standards and guides.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Duration: 00:30:30

Navigating the Challenges of LLMs: Guardrails AI to the Rescue; With Guest: Shreya Rajpal

6/7/2023
In “Navigating the Challenges of LLMs: Guardrails to the Rescue,” Protect AI co-founders Daryan Dehghanpisheh and Badar Ahmed interview the creator of Guardrails AI, Shreya Rajpal. Guardrails AI is an open source package that allows users to add structure, type, and quality guarantees to the outputs of large language models (LLMs).

In this highly technical discussion, the group digs into Shreya’s inspiration for starting the Guardrails project, the challenges of building a deterministic “guardrail” system on top of probabilistic large language models, and the challenges in general (both technical and otherwise) that developers face when building applications for LLMs.

If you’re an engineer or developer in this space looking to integrate large language models into the applications you’re building, this episode is a must-listen and highlights important security considerations.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
- Protect AI Radar: End-to-End AI Risk Management
- Protect AI’s ML Security-Focused Open Source Tools
- LLM Guard - The Security Toolkit for LLM Interactions
- Huntr - The World's First AI/Machine Learning Bug Bounty Platform
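
For a feel for the core idea discussed in the episode, here is a minimal sketch of a deterministic validation layer over a probabilistic LLM: request structured output, validate it, and re-ask with the validation error on failure. It is illustrative only and does not use Guardrails AI's actual API; `call_llm` is a hypothetical stand-in for any completion client.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; swap in any real completion API."""
    raise NotImplementedError

def structured_query(prompt: str, required_keys: set, max_retries: int = 2) -> dict:
    """Ask for JSON, validate it deterministically, and re-ask on failure."""
    ask = prompt + "\nRespond with a JSON object containing keys: " + ", ".join(sorted(required_keys))
    for _ in range(max_retries + 1):
        raw = call_llm(ask)
        try:
            data = json.loads(raw)
            if required_keys <= data.keys():
                return data  # output meets the structural guarantee
            error = f"missing keys: {required_keys - data.keys()}"
        except json.JSONDecodeError as exc:
            error = f"invalid JSON: {exc}"
        # Feed the validation error back so the model can correct itself.
        ask = f"{prompt}\nYour previous answer failed validation ({error}). Try again."
    raise ValueError("LLM output failed validation after retries")
```

The validation step is ordinary deterministic code, which is what makes the guarantee possible even though the model underneath is probabilistic.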

Duration: 00:39:16