
ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence |

Government

ATGO AI is a podcast channel from ForHumanity. This podcast brings multiple series of insights on topics of pressing importance, specifically in the space of ethics and accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence).

Location:

United States

Language:

English


Episodes

#OpenBox The Data Brokers & Emerging Governance with Heidi Part 2

12/20/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. I spoke with Heidi Saas. Heidi is a Data Privacy and Technology Attorney. She regularly advises SMEs and start-ups working in a wide variety of industries on data privacy and ethical AI strategies. She is also a ForHumanity Contributor and algorithmic auditor. This is part 2. She speaks about how enterprises can manage these challenges through good governance practices. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:20:16

#OpenBox The Data Brokers & Emerging Governance with Heidi

12/14/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. I spoke with Heidi Saas. Heidi is a Data Privacy and Technology Attorney. She regularly advises SMEs and start-ups working in a wide variety of industries on data privacy and ethical AI strategies. She is also a ForHumanity Contributor and algorithmic auditor. This is part 1. She speaks about how regulations are emerging in the context of data brokers and how enterprises need to adapt to the changing compliance environment in managing data. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:21:39

#openbox - Open issues and problems in dealing with dark patterns

12/6/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Today, we have with us Marie. Marie Potel-Saville is the founder and CEO of amurabi, a legal innovation by design agency. She was a lawyer for over 10 years at Magic Circle law firms such as Freshfields and Allen & Overy in London, Brussels and Paris. She is also the founder of Fair-Patterns, a SaaS platform to fight dark patterns. She is spearheading efforts to address the challenging problem of deceptive designs in applications using innovative technology. We discuss some nuances of this with her. In this episode, Marie speaks about enterprise approaches to working on fair patterns and the emerging regulatory interest in addressing the gap. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:13:36

#OpenBox - Open issues in dealing with dark patterns and/ or deceptive designs

11/23/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Today, we have with us Marie. Marie Potel-Saville is the founder and CEO of amurabi, a legal innovation by design agency. She was a lawyer for over 10 years at Magic Circle law firms such as Freshfields and Allen & Overy in London, Brussels and Paris. She is also the founder of Fair-Patterns, a SaaS platform to fight dark patterns. She is spearheading efforts to address the challenging problem of deceptive designs in applications using innovative technology. We discuss some nuances of this with her. In this episode, Marie speaks about the key considerations in dealing with deceptive designs and how fair patterns enable a better business proposition. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:20:28

#openbox Bias identification and mitigation with Patrick Hall - Part 2

11/2/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Today, we have with us Patrick Hall. Patrick is an Assistant Professor at George Washington University. He is conducting research in support of the NIST AI Risk Management Framework and is a contributor to the NIST work on building a Standard for Identifying and Managing Bias in Artificial Intelligence. He is also a collaborator on the open-source initiative “Awesome Machine Learning Interpretability”, which maintains and curates a list of practical and awesome responsible machine learning resources. He is one of the authors of Machine Learning for High-Risk Applications, released by O'Reilly, and he also helps manage the AI Incident Database. This is part 2 of the episode. He speaks about key approaches for bias mitigation and their limitations, and also discusses the open problems in this area. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:22:45

#Openbox - bias discussion with Patrick Hall part 1

10/18/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Today, we have with us Patrick Hall. Patrick is an Assistant Professor at George Washington University. He is conducting research in support of the NIST AI Risk Management Framework and is a contributor to the NIST work on building a Standard for Identifying and Managing Bias in Artificial Intelligence. He is also a collaborator on the open-source initiative “Awesome Machine Learning Interpretability”, which maintains and curates a list of practical and awesome responsible machine learning resources. He is one of the authors of Machine Learning for High-Risk Applications, released by O'Reilly, and he also helps manage the AI Incident Database. He speaks about key considerations for bias metrics for varied types of data, and also discusses the open problems in this area. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:22:38

#OPENBOX Navigating Causality with Aleksander Molak - Part 2

9/28/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have with us Aleksander. Aleksander Molak is a Machine Learning Researcher, Educator, Consultant, and Author who gained experience working with Fortune 100, Fortune 500, and Inc. 5000 companies across Europe, the USA, and Israel, designing and building large-scale machine learning systems. On a mission to democratize causality for businesses and machine learning practitioners, Aleksander is a prolific writer, creator, and international speaker. He is the author of the book Causal Inference and Discovery in Python. This is Part 2. He discusses some critical considerations regarding causality, including honest reflections on how to leverage causality for humanity. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:23:18

#OPENBOX - Navigating Causal Discovery with Aleksander Molak

9/28/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have with us Aleksander. Aleksander Molak is a Machine Learning Researcher, Educator, Consultant, and Author who gained experience working with Fortune 100, Fortune 500, and Inc. 5000 companies across Europe, the USA, and Israel, designing and building large-scale machine learning systems. On a mission to democratize causality for businesses and machine learning practitioners, Aleksander is a prolific writer, creator, and international speaker. He is the author of the book Causal Inference and Discovery in Python. This is Part 1. He discusses open issues and considerations in causal discovery, directed acyclic graphs, and causal effect estimators. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:25:26

#OpenBox - Charting the Sociotechnical Gap in Explainable AI - Part 2

9/5/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. Today, we have with us Upol Ehsan. Upol is a Researcher and Doctoral Candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining AI, HCI, and philosophy, his work in Explainable AI (XAI) and Responsible AI aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. His work has pioneered the area of Human-centered Explainable AI (a sub-field of XAI), receiving multiple awards at ACM CHI, FAccT, and HCII, and has been covered in major media outlets. By promoting equity and ethics in AI, he wants to ensure stakeholders who aren’t at the table do not end up on the menu. Outside research, he is a founder and advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor. He is also a social entrepreneur and has co-founded DeshLabs, a social innovation lab focused on fostering grassroots innovations in emerging markets. We discuss the paper titled “Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI”, which will be presented at CSCW 2023 and was co-authored with Koustuv Saha, Munmun de Choudhury, and Mark Riedl. Upol explains specific nuances of why explainability cannot be considered independent of the model development and deployment environment. This is part 2 of the discussion. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:22:44

#OpenBox - Charting the Sociotechnical Gap in Explainable AI - Part 1

9/5/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. Today, we have with us Upol Ehsan. Upol is a Researcher and Doctoral Candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining AI, HCI, and philosophy, his work in Explainable AI (XAI) and Responsible AI aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. His work has pioneered the area of Human-centered Explainable AI (a sub-field of XAI), receiving multiple awards at ACM CHI, FAccT, and HCII, and has been covered in major media outlets. By promoting equity and ethics in AI, he wants to ensure stakeholders who aren’t at the table do not end up on the menu. Outside research, he is a founder and advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor. He is also a social entrepreneur and has co-founded DeshLabs, a social innovation lab focused on fostering grassroots innovations in emerging markets. We discuss the paper titled “Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI”, which will be presented at CSCW 2023 and was co-authored with Koustuv Saha, Munmun de Choudhury, and Mark Riedl. Upol explains specific nuances of why explainability cannot be considered independent of the model development and deployment environment. This is part 1 of the discussion. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:23:01

#OPENBOX - Open issues in Data Poisoning defence with Antonio Part 2

3/7/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. My name is Sundar. I am an Ethics and Risk professional and an AI Ethics researcher, and I am the host of this podcast. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have with us Antonio. He is a PhD student at the University Ca' Foscari of Venice working in the fields of adversarial machine learning and computer vision, and he is expected to join CISPA labs in Saarbrücken, Germany. He is passionate about machine learning security and closely follows cutting-edge research in this space. He also authored a paper with Kathrin (one of our earlier podcast guests). We are discussing the paper “Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning” that he co-authored. In this part 2 of the podcast, he speaks about emerging types of attacks wherein the attack approaches are less sophisticated, but impactful. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:22:27

#OPENBOX - Data Poisoning and associated Open issues with Antonio Part 1

3/7/2023
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have with us Antonio. He is a PhD student at the University Ca' Foscari of Venice working in the fields of adversarial machine learning and computer vision, and he is expected to join CISPA labs in Saarbrücken, Germany. He is passionate about machine learning security and closely follows cutting-edge research in this space. He also authored a paper with Kathrin (one of our earlier podcast guests). We are discussing the paper “Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning” that he co-authored. In this part 1 of the podcast, he speaks about the varied attack vectors and specific open issues in this space. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:16:05

#OPENBOX - Eric Smith - Human Evaluation of Open-domain Conversations - Part 2

10/21/2022
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Eric with us. Eric is a Research Engineer at Facebook AI Research (FAIR). He is interested in questions around (a) conversational AI: how to make it better and how to evaluate it, and (b) bias in language models. He is interested in understanding languages and their underlying constructs. We will cover a paper titled “Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents,” published in May 2022, which he co-authored. This is part 2 of the podcast. Eric discusses known limitations of per-turn, per-dialogue, and pairwise methods [scenario-based testing, fault injection, adversarial attacks]. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:16:47

#OPENBOX - Eric Smith - Human Evaluation of Open-domain Conversations - Part 1

10/21/2022
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Eric with us. Eric is a Research Engineer at Facebook AI Research (FAIR). He is interested in questions around (a) conversational AI: how to make it better and how to evaluate it, and (b) bias in language models. He is interested in understanding languages and their underlying constructs. We will cover a paper titled “Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents,” published in May 2022, which he co-authored. This is part 1 of the podcast. In this podcast, Eric discusses human evaluation in open-domain conversational contexts, Likert scales, and subjective outcomes. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:18:55

#OPENBOX - Carol Anderson - Zero-Shot Learning Part 2

10/21/2022
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Carol with us. Carol Anderson is a machine learning practitioner who has worked on NLP for the past 5 years, most recently with Nvidia. Before this, she worked in the field of molecular biology. She is currently focusing on AI ethics, given the potential harm that AI contributes to people, and she wants to use her skills to prevent those harms. I am glad to be having this conversation with her. We will cover a paper titled “Issues with Entailment-based Zero-shot Text Classification,” published in 2021, and also discuss specific practical issues associated with zero-shot learning. This is part 2 of the conversation. She covers the difficulty of classifying among the available options, labor-intensive label design, and the underlying bias encoded in models. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:09:23

#OPENBOX - Carol Anderson - Zero Shot Learning Part 1

10/21/2022
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Carol with us. Carol Anderson is a machine learning practitioner who has worked on NLP for the past 5 years, most recently with Nvidia. Before this, she worked in the field of molecular biology. She is currently focusing on AI ethics, given the potential harm that AI contributes to people, and she wants to use her skills to prevent those harms. I am glad to be having this conversation with her. We will cover a paper titled “Issues with Entailment-based Zero-shot Text Classification,” published in 2021, and also discuss specific practical issues associated with zero-shot learning. This is part 1 of the conversation. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:19:50

#OPENBOX - Maura Pintor - Improving Optimization of Adversarial Examples - Part 1

10/21/2022
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have with us Maura. Maura Pintor is a Postdoctoral Researcher at the PRA Lab at the University of Cagliari, Italy. She received the MSc degree in Telecommunications Engineering with honors in 2018 and the Ph.D. degree in Electronic and Computer Engineering from the University of Cagliari in 2022. Her Ph.D. thesis, "Towards Debugging and Improving Adversarial Robustness Evaluations", provides a framework for optimizing and debugging adversarial attacks. Maura co-authored a paper titled “Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples,” which is the focus of this podcast. This is part 1 of the podcast. In this podcast, Maura covers why evaluating defenses is complex, the nature of mitigation failures, and why robustness is overestimated in evasion attacks, in the context of open issues. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:23:33

#OPENBOX - Maura Pintor - Open issues in improving optimization of adversarial examples - Part 2

10/21/2022
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have with us Maura. Maura Pintor is a Postdoctoral Researcher at the PRA Lab at the University of Cagliari, Italy. She received the MSc degree in Telecommunications Engineering with honors in 2018 and the Ph.D. degree in Electronic and Computer Engineering from the University of Cagliari in 2022. Her Ph.D. thesis, "Towards Debugging and Improving Adversarial Robustness Evaluations", provides a framework for optimizing and debugging adversarial attacks. Maura co-authored a paper titled “Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples,” which is the focus of this podcast. This is part 2 of the podcast. In this podcast, Maura covers transferability analysis, testing ML models for robustness, and challenges associated with repurposing models, in the context of open issues. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:12:36

#OPENBOX - Machine Learning Security Against Data Poisoning - Kathrin Grosse Part 2

9/27/2022
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. In this podcast, we have Kathrin Grosse. Kathrin Grosse is a postdoctoral researcher with Battista Biggio at the University of Cagliari, working on adversarial learning. This podcast covers a paper titled “Machine Learning Security against Data Poisoning: Are We There Yet?”, published in April 2022, which she co-authored. This is part 2 of the podcast. In this podcast, she covers thoughts on gaining a better understanding of how defenses work and of adaptive attacks, and on why our knowledge about the limits of existing defenses is rather narrow. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:09:48

#OPENBOX - Machine Learning Security Against Data Poisoning - Kathrin Grosse - Part 1

9/27/2022
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. In this podcast, we have Kathrin Grosse. Kathrin Grosse is a postdoctoral researcher with Battista Biggio at the University of Cagliari, working on adversarial learning. In this podcast we cover a paper titled “Machine Learning Security against Data Poisoning: Are We There Yet?”, published in April 2022, which she co-authored. This is part 1 of the podcast. In this podcast, she covers thoughts on the impracticality of some threat models considered for poisoning attacks in real-world applications, and on the scalability of poisoning attacks against large-scale models. --- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

Duration:00:13:28