She Said Privacy/He Said Security

Business & Economics Podcasts

Location:

United States

Description:

This is the She Said Privacy / He Said Security podcast with Jodi and Justin Daniels. Like any good marriage, Jodi and Justin will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

Language:

English


Episodes

New CCPA Rules: What Businesses Need to Know

9/4/2025
Daniel M. Goldberg is the Partner and Chair of the Data Strategy, Privacy & Security Group at Frankfurt Kurnit Klein & Selz PC. He advises on a wide range of privacy, security, and AI matters, from handling high-stakes regulatory enforcement actions to shaping the application of privacy and AI laws. Earlier this year, the California Privacy Lawyers Association named him the "California Privacy Lawyer of the Year."

In this episode…

California is reshaping privacy compliance with its latest updates to the California Consumer Privacy Act (CCPA). These sweeping changes introduce new obligations for businesses operating in California, notably in the areas of Automated Decision-Making Technology (ADMT), cybersecurity audits, and risk assessments. So, what can companies do now to get ahead?

Companies can prepare by understanding the scope of the new rules and whether they apply to their business. The regulations are set to take effect on October 1, 2025, if they are filed with the Secretary of State by August 31; if that filing happens later, the effective date shifts to January 1, 2026. The rules around ADMT are especially complex, with broad definitions that could apply to any tool or system that processes personal data to make significant decisions about consumers. Beyond ADMT, certain companies will also need to conduct comprehensive cybersecurity audits through an independent auditor, a process that may be challenging for smaller organizations. Risk assessments impose an additional obligation, requiring reviews of activities such as processing, selling, or sharing sensitive data and using ADMT for significant decision-making, with attestations submitted to regulators. The new rules also make clear that California regulators expect companies to maintain detailed documentation and demonstrate accountability through governance.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Daniel Goldberg, Partner and Chair of the Data Strategy, Privacy & Security Group at Frankfurt Kurnit Klein & Selz PC, about how companies can navigate the CCPA's new requirements. From ADMT to mandatory cybersecurity audits and risk assessments, Daniel provides a detailed overview of the complex requirements, explaining their scope and impact on companies. He also outlines how these new rules set the tone for future privacy and AI regulations, explains why documentation and governance are central to compliance, and shares practical tips on reviewing AI tool settings to ensure sensitive data and confidential information are not used for AI model training.

Duration: 00:32:01

How AI Is Rewriting the Rules of Cybersecurity

8/28/2025
John Graves is an innovative legal leader and Senior Counsel at Nisos Holdings, Inc. He has a diverse legal background at the intersection of law, highly regulated industry, and technology, with over two decades of experience advising business leaders, global privacy teams, CISOs and security teams, product groups, and compliance functions. He is a graduate of the University of Oklahoma.

In this episode…

AI is fundamentally changing the cybersecurity landscape. Threat actors are using AI to move faster, scale attacks, and create synthetic identities that are difficult for companies to detect. At the same time, defenders rely on AI to sift through large amounts of data and separate signal from noise, determining whether usernames and email addresses are tied to legitimate users or malicious actors. As businesses rush to adopt AI, how can they do so without creating gaps that leave them vulnerable to risks and cyber threats?

To stay ahead of evolving cyber risks, organizations should conduct tabletop exercises with security and technical teams. These exercises help business leaders understand risks like prompt injection, poisoned data, and social engineering by walking through how AI systems operate and asking what would happen if certain situations occurred. They are most effective when conducted early in the AI lifecycle, giving companies the chance to simulate attack scenarios and identify risks before systems are deployed. Companies also need to establish AI governance because, without oversight of inputs, processes, and outputs, AI adoption carries significant risk.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with John Graves, Senior Counsel at Nisos Holdings, Inc., about how AI is reshaping cyber threats and defenses. John shares how threat actors leverage AI to scale ransomware, impersonate real people, and improve social engineering tactics, while defenders use the technology to analyze data and uncover hidden risks. He explains why the public digital footprints of executives and their families are becoming prime targets for attackers and why companies must take human risk management seriously. John also highlights why establishing governance and conducting tabletop exercises are essential for identifying vulnerabilities and preparing leaders to respond to real-world challenges.

Duration: 00:27:34

The Blueprint for a Global Privacy and Security Program

8/21/2025
Robert S. Jett III ("Bob") serves as the first Global Chief Data Privacy Officer at Bunge, where he leads global privacy initiatives and supports key projects in digital transformation, AI, and data management. With over 30 years of legal and in-house counsel experience across manufacturing, insurance, and financial services, he has built and managed global programs for compliance, data privacy, and incident response, working extensively across IT, cybersecurity, information security, and corporate compliance teams. He holds a BA in international relations and political science from Hobart College and a JD from the University of Baltimore School of Law. Bob is active in the ACC, IAPP, Georgia Bar Privacy & Law Section, and the Maryland State Bar Association.

In this episode…

Managing privacy and security across multiple jurisdictions has never been more challenging for global companies, as regulations evolve and privacy, security, and AI risks accelerate at the same time. The challenge becomes particularly acute for businesses managing supply chains that span dozens of countries, where they must navigate geopolitical shifts and comply with strict employee data regulations that differ by region. These organizations also face the added complexity of governing AI tools to protect sensitive data. Navigating these challenges requires close coordination between privacy, security, and operational teams so risks can be identified quickly and addressed in real time.

A simple way global companies can address these challenges is by embedding privacy leaders into operational teams. For global companies like Bunge, regular communication between privacy, IT, and cybersecurity teams keeps threats visible in real time, while cross-collaboration helps identify vulnerabilities and mitigate weak points. The company also incorporates environmental, social, and governance (ESG) principles into its privacy framework, using traceability to validate supply chain data and meet regulatory requirements. When it comes to managing emerging technologies like AI, foundational privacy principles apply: companies need to establish governance for data quality, prompt management, third-party vendors, and automated tools, such as AI notetakers. These steps build transparency, reduce risk, and strengthen trust across the organization.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Robert "Bob" Jett, Global Chief Data Privacy Officer at Bunge, about building and leading a global privacy program. Bob emphasizes the importance of embedding privacy leadership into operational teams, like IT departments, to enable collaboration and build trust. He discusses strategies for adhering to ESG principles, managing global employee data privacy, and applying privacy fundamentals to AI governance. Bob also provides tips for responsible AI use, including the importance of prompt engineering oversight, and explains why relationship-building and transparency are essential for effective global privacy and security programs.

Duration: 00:30:31

Navigating Privacy Compliance When AI Changes Everything

8/14/2025
Mason Clutter is a Partner and Privacy Lead at Frost Brown Todd Attorneys, previously serving as Chief Privacy Officer for the US Department of Homeland Security. Mason's practice sits at the intersection of privacy, security, and technology. She works with clients to operationalize privacy and security, helping them achieve their goals and build and maintain trust with their clients.

In this episode…

Companies are facing new challenges in building privacy programs that keep up with evolving privacy laws and new AI tools. Laws like Maryland's new privacy law are adding pressure with strict data minimization requirements and expanded protections for sensitive and children's data. These shifts are driving companies to reconsider how and when privacy is built into operations. So, how can companies effectively design privacy programs that address regulatory, operational, and AI-driven risks?

Companies can start by embedding privacy and security measures into their products and services from the start. AI adds another layer of complexity: while organizations are trying to use AI for efficiency, confidential or personal information is often entered into AI tools without knowing how it will be used or where it will go. Vague third-party vendor contract terms and downstream data sharing compound the risk. Staying compliant means understanding each AI use case, reviewing vendor contracts closely, and choosing AI tools that reflect a company's risk tolerance and privacy and security practices.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Mason Clutter, Partner and Privacy Lead at Frost Brown Todd Attorneys, about how companies can navigate complex privacy, security, and AI challenges. Mason shares practical insights on navigating Maryland's new privacy law, managing vendor contracts, and mitigating downstream AI risks. She explores common privacy misconceptions, including why privacy should not be a one-size-fits-all or checkbox compliance exercise. Mason also addresses growing concerns around AI deepfakes and why regulation alone is not enough without broader public education.

Duration: 00:36:14

How Privacy is Reshaping the Ad Tech Industry

8/7/2025
Allison Schiff is the Managing Editor at AdExchanger, where she covers mobile, Meta, measurement, privacy, and the app economy. Allison received her MA in journalism from the Dublin Institute of Technology in Ireland (her favorite place) and a BA in history and English from Brandeis University in Waltham, Mass.

In this episode…

Ad tech companies are under increasing pressure to evolve their privacy practices. What was once a loosely regulated "wild west" is now being reshaped by regulatory enforcement actions and shifting consumer expectations. After years of unchecked data collection, many companies are becoming more selective about their vendors, implementing privacy by design, and embracing data minimization. At the same time, many ad tech companies are rushing to position themselves as AI companies, often without a clear understanding of the risks or how these claims align with consumer trust.

To meet rising regulatory and consumer expectations, some ad tech companies are taking concrete steps to improve their privacy posture. This includes auditing third-party tools, removing unnecessary tracking pixels from websites, and gaining more visibility into how data flows through partner systems. On the AI front, research shows that consumer trust drops when AI-generated content is not clearly labeled and that marketing products as AI-powered makes them less appealing. These findings point to the need for greater transparency in how companies collect data, market their products, and use AI.

In this episode of the She Said Privacy/He Said Security podcast, Jodi and Justin Daniels speak with Allison Schiff, Managing Editor at AdExchanger, about how ad tech companies are adapting to regulatory scrutiny and evolving consumer privacy expectations. Allison shares how the ad tech industry's approach to privacy is maturing, and explains how companies are implementing privacy by design, reassessing vendor relationships, and using consent tools more intentionally. She offers insight into how journalists use AI while maintaining editorial judgment and raises concerns about AI's impact on critical thinking. Allison also describes the disconnect between AI marketing hype and consumer preferences, and the need for companies to disclose the use of AI-generated content to maintain trust.
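As a concrete illustration of the pixel audits mentioned above, the snippet below is a minimal, hypothetical TypeScript sketch, not any vendor's tooling, that can be run in a browser console on a site you own to list the third-party hosts currently loading scripts, images, or iframes. The function name `listThirdPartyHosts` is our own invention; the DOM APIs are standard.

```typescript
// A minimal sketch of a tracking-pixel audit: list third-party hosts
// that load scripts, images, or iframes on the current page.
function listThirdPartyHosts(): string[] {
  const firstParty = window.location.hostname;
  const hosts = new Set<string>();
  document.querySelectorAll("script[src], img[src], iframe[src]").forEach((el) => {
    const src = el.getAttribute("src");
    if (!src) return;
    try {
      // Resolve relative URLs against the current page before comparing hosts.
      const host = new URL(src, window.location.href).hostname;
      if (host && host !== firstParty) hosts.add(host);
    } catch {
      // Skip malformed URLs rather than failing the whole audit.
    }
  });
  return [...hosts].sort();
}

console.table(listThirdPartyHosts());
```

A real audit would go further (network-level inspection catches pixels injected at runtime), but enumerating static tags is a quick first pass at spotting unexpected vendors.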

Duration: 00:37:44

How to Build a Global Privacy Program That Enables Growth

7/31/2025
Heather Kuhn is Privacy, Security, and Technology Counsel at Genuine Parts Company. She is a privacy and technology attorney with nearly two decades of professional cross-industry experience. She teaches at Georgia State College of Law, serves on the Georgia Bar's AI Committee, and formerly chaired its Privacy & Technology Section, leading conversations at the intersection of law, AI, and innovation.

In this episode…

Embedding privacy and security practices into a large, global business requires more than policies. It takes early collaboration, constant relationship building across teams, and a deep understanding of business goals. Privacy programs are most effective when they build consumer trust, increase operational efficiency, meet privacy requirements, and support strategic business goals, like revenue growth and product development. And as companies continue to adopt AI, the same principles apply to managing AI risk: teams need to evaluate how data is used, assess risks, and adapt existing privacy and security measures to new technologies.

Managing privacy across a massive global company requires building the right partnerships and embedding privacy-by-design principles from the start of projects. Most companies have small but mighty privacy teams, so the key is finding privacy champions across the business to handle operational functions while the privacy team sets global policies and procedures. Data mapping and privacy impact assessments are critical tools that help identify risks and right-size privacy programs. This also extends to the customer experience, where meaningful consent, clear privacy notices, and giving users control strengthen trust. Privacy training is also essential for internal teams and works best when it's interactive and relevant to an employee's daily work rather than an abstract compliance requirement.

In this episode of She Said Privacy/He Said Security, Jodi Daniels and Justin Daniels speak with Heather Kuhn, Privacy, Security, and Technology Counsel at Genuine Parts Company, about operationalizing privacy and security across a global enterprise. Heather explains how early engagement, strong internal relationships, and cross-functional collaboration make it possible to scale privacy programs without slowing the business. She shares how her team uses data mapping and privacy impact assessments to right-size privacy programs and requirements, and emphasizes the need to embed privacy into customer experiences through clear privacy notices and meaningful consent. Heather also highlights the importance of privacy training tied to employee roles, delivered through in-person sessions and gamified content, and explains how her department uses generative AI to enhance legal team efficiency and how she approaches privacy risks associated with AI tools and automation.

Duration: 00:22:31

Helping Seniors Avoid Digital Scams, One Click at a Time

7/24/2025
Alexandria "Lexi" Lutz is a privacy attorney and the Founder of Opt-Inspire, Inc., a nonprofit dedicated to helping seniors and youth build digital confidence and avoid online scams. By day, she serves as Senior Corporate Counsel at Nordstrom, advising on privacy, cybersecurity, and AI across the retail and technology landscape.

In this episode…

Online scams are becoming more sophisticated, targeting older adults with devastating financial consequences that often reach tens of thousands of dollars with little recourse. From tech support fraud to AI-driven deepfakes that mimic loved ones' voices, these scams prey on isolation, fear, and digital inexperience. Many families struggle to protect their aging parents and grandparents, especially when conversations about digital risks are met with resistance from loved ones. How can we bridge the digital literacy gap across generations and empower seniors to navigate these evolving threats?

The urgency is real. In 2024, seniors lost nearly $5 billion to scams, a 43 percent increase from the previous year. Scammers are using voice cloning, fake emergencies, and fear-based messaging to pressure people into giving up money or sensitive personal information. Protecting aging loved ones against technology- and AI-driven scams requires proactive, hands-on education, and that's why Opt-Inspire delivers engaging, volunteer-led, in-person workshops tailored to senior living communities, teaching practical skills like recognizing fake emails and enabling two-factor authentication while addressing both technical literacy and emotional manipulation tactics. Through scripts, visuals, and a "Make It Personal" toolkit with conversation starters, Opt-Inspire also equips families with resources to discuss digital safety with loved ones in a constructive and relatable way.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Alexandria (Lexi) Lutz, Senior Corporate Counsel at Nordstrom and Founder of Opt-Inspire, about building digital confidence among seniors. Lexi shares how a personal family experience inspired her to launch a nonprofit focused on preventing elder fraud. She delves into the most common scams targeting older adults today, including government impersonation, romance cons, and AI-generated deepfakes. Lexi emphasizes the importance of proactive education, enabling two-factor authentication, and weekly family check-ins. She also offers practical advice and resources for privacy professionals and family members alike who want to make a positive impact.

Duration: 00:40:14

Real AI Risks No One Wants To Talk About And What Companies Can Do About Them

7/17/2025
Anne Bradley is the Chief Customer Officer at Luminos, where she helps in-house legal, tech, and data science teams use the Luminos platform to manage and automate AI risk, compliance, and approval processes, statistical testing, and legal documentation. Anne also serves on the Board of Directors of the Future of Privacy Forum, a nonprofit that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies.

In this episode…

AI is being integrated into everyday business functions, from diagnosing cancer to translating conversations and powering customer service chatbots and autonomous vehicles. While these tools deliver value, they also bring privacy, security, and ethical risks. As organizations dive into adopting AI tools, they often do so before performing risk assessments, establishing governance, and implementing privacy and security guardrails. Without safeguards and internal processes in place, companies may not fully understand how the tools function, what data they collect, or the risks they carry. So, how can companies efficiently assess and manage AI risk as they rush to deploy new tools?

Managing AI risk requires governance and the ability to test AI tools before deploying them. That's why companies like Luminos provide a platform that helps organizations manage and automate AI risk, compliance, and approval processes, model testing, and legal documentation. The platform allows teams to check for toxicity, hallucinations, and AI bias even when an organization uses high-risk tools like customer-facing chatbots. Embedding practical controls, like pre-deployment testing and assessing vendor risk early, can also help organizations implement AI tools safely and ethically.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Anne Bradley, Chief Customer Officer at Luminos, about how companies can assess and mitigate AI risk. Anne explains the impact of deepfakes on public trust and the need for a regulatory framework to reduce harm. She shares why AI governance, AI use-case risk assessments, and statistical tools are essential for helping companies monitor outputs, reduce unintended consequences, and make informed decisions about high-risk AI deployments. Anne also highlights why it's important for legal and compliance teams to understand the business objectives driving an AI tool request before evaluating its risk.

Duration: 00:36:50

Privacy in the Loop: Why Human Training Is AI’s Greatest Weakness and Strength

7/10/2025
Nick Oldham is the Chief Operations Officer, USIS, and Global Chief Risk, Privacy and Compliance Officer at Equifax Inc. A forward-thinking legal and operations executive, Nick has a proven track record of driving large-scale transformations by integrating legal expertise with strategic operational leadership. He oversees all enterprise-wide second-line functions, leading initiatives to embed AI, enable data-driven decision-making, and deliver innovative, compliant solutions across a $1.9B business unit. His focus is on building efficient, scalable systems that align with both compliance standards and long-term strategic goals.

In this episode…

Many companies are rushing to adopt AI tools without adequately training their workforce on how to use them responsibly. As AI becomes embedded in daily business operations, the biggest risk isn't the technology itself, but the lack of human understanding around how AI works and what it can do. When teams struggle to understand the differences between machine learning and generative AI, it creates risks and makes it harder to establish appropriate privacy and security guardrails. Human training is AI's greatest weakness and strength, and closing that gap involves rethinking how companies educate and train employees at every level.

The responsible use of AI depends on human judgment. Companies need to embed privacy education, critical thinking, and AI risk awareness into training programs from the start. Employees should be taught how to ask questions, evaluate model behavior, and recognize when personal information is being misused. AI literacy should also extend beyond the workplace. Introducing it in high school or even earlier helps prepare future professionals to navigate complex AI tools and make thoughtful, responsible decisions.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Nick Oldham, Chief Operations Officer, USIS, and Global Chief Risk, Privacy and Compliance Officer at Equifax, about the role of human training in AI literacy. Nick breaks down the components of AI literacy, explains why everyone needs a foundational understanding, and emphasizes the importance of prioritizing privacy awareness when using AI tools. He also highlights ways to embed privacy and security into AI governance programs and provides actionable steps organizations can take to strengthen AI literacy across teams.

Duration: 00:28:22

Where Strategy Meets Reality in AI Governance

7/3/2025
Andrew Clearwater is a Partner on Dentons' Privacy and Cybersecurity Team and a recognized authority in privacy and AI governance. Formerly a founding leader at OneTrust, he oversaw privacy and AI initiatives, contributed to key data protection standards, and holds over 20 patents. Andrew advises businesses on responsible tech implementation, helping them navigate global regulations in AI, data privacy, and cybersecurity. A frequent speaker, he offers insight into emerging compliance challenges and ethical technology use.

In this episode…

Many companies are diving into AI without first putting governance in place. They often move forward without defined goals, leadership, or alignment across privacy, security, and legal teams. This leads to confusion about how AI is being used, what risks it creates, and how to manage those risks. Without coordination and structure, programs lose momentum, transactions are delayed, and expectations become harder to meet. So how can companies build a responsible AI governance program?

Building an effective AI governance program starts with knowing what's in use, why it's in use, what data AI tools and systems collect, what risk that creates, and how to manage it. Standards like ISO 42001 and the NIST AI Risk Management Framework help companies guide this process. ISO 42001 offers the benefit of certification and supports cross-functional consistency, while NIST may be better suited for organizations already using it in related areas. Both frameworks help companies define the scope of AI use cases, understand the risks, and inform policies before jumping into controls. Conducting data inventories and utilizing existing risk management processes are also essential for identifying shadow AI introduced by employees or third-party vendors.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Andrew Clearwater, Partner at Dentons, about how companies can build responsible AI governance programs. Andrew explains how standards and legal frameworks support consistent AI governance implementation and how to encourage alignment between privacy, security, legal, and ethics teams. He also outlines the importance of monitoring shadow AI across third-party vendors and practical steps companies can take to effectively structure their AI governance programs.

Duration: 00:29:22

Endpoints-on-Wheels: Protecting Company and Employee Data in Cars

6/26/2025
Merry Marwig is the VP of Global Communications & Advocacy at Privacy4Cars. Merry is a pro-consumer, pro-business privacy advocate who is optimistic about what data privacy rights mean for everyday people, and for the companies they do business with. At Privacy4Cars, she helps protect drivers' and passengers' personal data while creating business opportunities for automotive companies.

In this episode…

Modern cars are like computers on wheels, collecting and storing data just like smartphones or laptops. Unlike those devices, however, vehicle data is often left unencrypted and persists long after a car is sold, rented, or reassigned. This is especially problematic for businesses that use corporate cars, rental vehicles, fleet vehicles, or personal vehicles for work purposes. Sensitive information such as contact lists, text messages, navigation history, and even security credentials can remain stored in vehicles long after they change hands, posing significant privacy, security, and even physical safety risks.

To take control of sensitive data, companies need to establish data deletion policies for all vehicles used in a business context. This includes requiring rental agencies and fleet management providers to delete stored data and offer certificates of deletion when cars are returned or decommissioned. Companies should also require automotive providers to supply VIN-specific data disclosures so drivers understand what data the vehicle collects and how it's used and shared. Additionally, companies need to consider how privacy regulations like the GDPR and CCPA apply to vehicle data collection and use that analysis to inform their internal policies and third-party contracts.

In today's episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Merry Marwig, VP of Global Communications & Advocacy at Privacy4Cars, about the privacy and security risks of data collected and stored in vehicles. Merry explains how cars used for work, whether rental, fleet, or personal, retain unencrypted personal and company data that can be exploited when vehicles change ownership or are decommissioned. She shares real-world case studies involving sensitive information left behind in cars, including banking credentials, contact lists, and patient health records. Merry also outlines how data deletion policies and VIN-specific disclosures, required through contracts with automotive providers, help companies reduce privacy and security risks.

Duration: 00:39:04

Agentic AI for Software Security: Eliminate More Vulnerabilities, Triage Less

6/18/2025
Ian Riopel is the CEO and Co-founder of Root, which applies agentic AI to fix vulnerabilities instantly. A US Army veteran and former Counterintelligence Agent, he's held roles at Cisco, CloudLock, and Rapid7, and brings military-grade security expertise to software supply chains. John Amaral is the CTO and Co-founder of Root. Previously, he scaled Cisco Cloud Security to $500M in revenue and led CloudLock to a $300M acquisition. With five exits behind him, John specializes in building cybersecurity startups with strong technical vision.

In this episode…

Patching software vulnerabilities remains one of the biggest security challenges for many organizations. Security teams are often stretched thin as they try to keep up with vulnerabilities that can quickly be exploited. Open-source components and containerized deployments add even more complexity, especially when updates risk breaking production systems. As compliance requirements tighten and the volume of vulnerabilities grows, how can businesses eliminate software security risks without sacrificing productivity?

Companies like Root are transforming how organizations approach software vulnerability remediation by applying agentic AI to streamline the process. Rather than relying on engineers to triage and prioritize thousands of issues, Root's AI-driven platform scans container images, applies safe patches where available, and generates custom patches for outdated components that lack official fixes. Root's AI automation resolves roughly 95% or more of vulnerabilities without breaking production systems, allowing organizations to meet compliance requirements while developers stay focused on building and delivering software.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Ian Riopel and John Amaral, Co-founders of Root, about how agentic AI streamlines software vulnerability detection and remediation. Together, they explain how Root's platform uses specialized agents to automate patching while maintaining software stability. John and Ian also discuss how regulations and compliance pressures are driving the need for faster remediation, how Root differs from threat detection solutions, and how AI can reduce security workloads without replacing human expertise.

Duration: 00:29:17

Operationalizing Privacy Across Teams, Tools, and Tech

6/12/2025
Sarah Stalnecker is the Global Privacy Director at New Balance Athletics, Inc., where she leads the integration of privacy principles across the organization, driving awareness and compliance through education, streamlined processes, and technology solutions.

In this episode…

Operationalizing a privacy program starts with translating legal requirements into actions that work across teams. This means aligning privacy with existing tools and workflows while meeting evolving privacy regulations and adapting to new technologies. Today's consumers also demand both personalization and privacy, and building trust means fulfilling these expectations without crossing the line. So, how can companies build a privacy program that meets regulatory requirements, integrates into daily operations, and earns consumer trust?

Embedding privacy into business operations involves more than just meeting regulatory requirements. It requires cultural change, leadership buy-in, and teamwork. Rather than forcing company teams to adapt to new privacy processes, organizations need to embed privacy requirements into the existing workflows and systems that departments already use. Leading with consumer expectations instead of legal mandates helps shift mindsets and encourages collaborative dialogue about responsible data use. Documenting AI use cases and establishing an AI governance program also helps assess risks without reactive scrambling. Teams should also leverage privacy technology to scale processes and streamline compliance so that privacy becomes an embedded, organization-wide function rather than a siloed concern.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Sarah Stalnecker, Global Privacy Director at New Balance Athletics, about operationalizing privacy programs. Sarah shares how her team approaches data collection, embeds privacy into existing workflows, and uses consumer expectations to drive internal engagement. She also highlights the importance of documenting AI use cases and establishing AI governance to assess risk. Sarah provides tips on selecting and evaluating privacy technology and how to measure privacy program success beyond traditional metrics.

Duration: 00:27:47

Outsmarting Threats: How AI is Changing the Cyber Game

6/5/2025
Brett Ewing is the Founder and CEO of AXE.AI, a cutting-edge cybersecurity SaaS start-up, and the Chief Information Security Officer at 3DCloud. He has built a career in offensive cybersecurity, focusing on driving exponential improvement, progressing from a Junior Penetration Tester to Chief Operating Officer at Strong Crypto, a provider of cybersecurity solutions. He brings over 15 years of experience in information technology, with the past six years focused on penetration testing, incident response, advanced persistent threat simulation, and business development. He holds degrees in secure systems administration and cybersecurity and is currently completing a master's degree in cybersecurity, with a focus on AI/ML security, at the SANS Technology Institute. Brett also holds more than a dozen certifications in IT, coding, and security from the SANS Institute, CompTIA, AWS, and other industry vendors.

In this episode…

Penetration testing plays a vital role in cybersecurity, but the traditional manual process is often slow and resource-heavy. Testing cycles can take weeks, creating gaps that leave organizations vulnerable to fast-moving threats. With growing interest in more efficient approaches, organizations are exploring new AI tools to automate tasks like tool configuration, project management, and data analysis. How can cybersecurity teams use AI to test environments faster without increasing risk?

AXE.AI offers an AI-powered platform that supports ethical hackers and red teamers by automating key components of the penetration testing process. The platform reduces overhead by configuring tools, analyzing output, and building task lists during live engagements, allowing teams to complete high-quality tests in days instead of weeks. AXE.AI's approach supports complex environments, improves data visibility for testers, and scales efficiently across enterprise networks. The company emphasizes a human-centered approach and advocates for workforce education and training as a foundation for secure AI adoption.

In today's episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Brett Ewing, Founder and CEO of AXE.AI, about leveraging AI for offensive cybersecurity. Brett explains how AXE.AI's platform enhances penetration testing and improves speed and coverage for large-scale networks. He also shares how AI is changing both attack and defense strategies, highlighting the risks posed by large language models (LLMs) and deepfakes, and explains why investing in continuous workforce training remains the most important cyber defense for companies today.

Duration: 00:21:32

Privacy Reform Is Coming to Australia: What Businesses Need To Know

5/29/2025
James Patto is a Partner at Helios Salinger. He is a leading voice in Australia's tech law landscape, trusted by business and government on privacy, cybersecurity, and AI issues. With over a decade of experience as a digital lawyer, he helps organizations turn regulation into opportunity, bridging law, innovation, and strategy to build trust and thrive in a digital world.

In this episode…

Australian privacy law stands at a critical juncture as organizations potentially face the country's most significant regulatory transformation yet. While the current principles-based Australian Privacy Act has been the foundation for over a decade, it contains notable gaps, like limited individual rights and broad exemptions for small businesses, employee data, and political parties. Eighty-four percent of Australians want more control over how their personal information is collected and used, and with recent enforcement changes introducing civil penalties and on-the-spot fines, regulators now have stronger tools to hold organizations accountable. As lawmakers consider the next phase of reforms, how can businesses prepare for new compliance requirements while navigating an uncertain implementation timeline?

Businesses can adapt to evolving privacy regulations and position themselves for success by strengthening their current privacy practices, including focusing on privacy notice quality, direct marketing opt-out procedures, and data breach notification accuracy. Conducting a privacy maturity assessment and implementing streamlined, risk-based privacy impact assessments can help identify gaps and prepare for new compliance obligations. It's also critical for organizations to understand the data they collect, where it resides, and how it's used, shared, or sold by building a comprehensive data inventory.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with James Patto, Partner at Helios Salinger, about the current state and future of Australia's privacy law. James discusses the major shifts in Australia's privacy landscape and the broader implications for businesses. He shares how Australia's strong small business sector influences privacy policymaking and how the Privacy Review Report's 89 proposals might reshape Australia's regulatory framework. James also explores the differences between Australia's privacy law and the GDPR, the timeline for proposed reforms, and what companies should do now to prepare.

Duration: 00:32:01

Terms, Tech & Trust: A Privacy Deep Dive With Harvey AI

5/22/2025
Anita Gorney is the Head of Privacy and AI Legal at Harvey, an AI tool for legal professionals and professional service providers. Before Harvey, she was Privacy Counsel at Stripe. Anita studied law in Sydney and began her career there before moving to London and then New York.

In this episode…

Legal professionals often spend time on manual tasks that are repetitive and difficult to scale. Emerging AI platforms, like Harvey AI, are addressing this challenge by offering tools that help lawyers handle tasks such as legal research and contract review more efficiently. As legal professionals adopt AI to streamline their work, they are placing greater focus on data confidentiality and the secure handling of client information. Harvey AI addresses these concerns through strict privacy and security controls, customer-controlled retention and deletion settings, and a commitment not to train on customer data.

Harvey AI provides a purpose-built platform tailored for legal professionals. The company's suite of tools (Assistant, Vault, and Workflow) automates repetitive legal work like summarizing documents, performing contract reviews, and managing due diligence processes. Harvey AI emphasizes privacy and security through features like zero data retention, encrypted processing, and workspace isolation, ensuring customer data remains confidential and is never used for model training. With a transparent, customer-first approach, Harvey AI empowers legal teams to work more efficiently without compromising trust or user data.

In this episode of the She Said Privacy/He Said Security podcast, Jodi and Justin Daniels speak with Anita Gorney, Head of Privacy and AI Legal at Harvey AI, about how legal professionals use specialized artificial intelligence to streamline their work. Anita explains how Harvey AI's platform helps with tasks like contract analysis and due diligence while addressing privacy and security concerns through measures like customizable data retention periods and workspace isolation. She also discusses the importance of privacy by design in AI tools, conducting privacy impact assessments, and implementing user-controlled privacy settings.

Duration: 00:30:13

Silent Threats Lurking in Your Child’s Devices and How To Avoid Them

5/15/2025
Ben Halpert is a cybersecurity leader, educator, and advocate dedicated to empowering digital citizens. As a Fractional CISO, author, and the founder of Savvy Cyber Kids, he advances cyber safety and ethics. A sought-after speaker, Ben shares insights globally, shaping secure digital futures at work, school, and home.

In this episode…

Many parents mistakenly believe that technology companies have built-in safety controls that keep children safe online. In reality, these protections are often inadequate and misleading. From AI chatbots posing as friends to online predators targeting children through gaming platforms and social media, young users, whose brains are still developing, struggle to distinguish between real human interactions and programmed responses. How can parents and caregivers proactively safeguard their children's digital experiences while fostering healthy tech habits?

Addressing these risks starts with parental oversight and consistent, age-appropriate education and guidance. Devices should be removed from kids' bedrooms at night to prevent unsupervised use and reduce exposure to online threats. Parents should actively monitor every app, game, and online interaction, ensuring children only engage with people they know in real life. Families should also establish device-free times, like during meals, to encourage face-to-face communication and teach healthy social habits.

Savvy Cyber Kids supports these efforts by providing age-appropriate educational resources, including children's picture books, classroom activities, and digital parenting guides that help families navigate online safety. By focusing on direct education for young children and providing tools for parents and schools, the organization instills foundational privacy and cybersecurity awareness from an early age.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels welcome Ben Halpert, Founder of Savvy Cyber Kids, back to the podcast to discuss the growing digital threats facing kids today. Ben explains how AI chatbots are being treated as real friends, how social media messaging misleads parents, and why depending on tech companies for protection is risky. He also shares how predators use games and platforms to target kids, steps parents can take to monitor and guide their children's tech use at home, and how parental involvement and early education, supported by Savvy Cyber Kids' school programs and family resources, can help build safer digital habits.

Duration: 00:31:09

Improving Cyber Readiness: Lessons from Real-World Investigations

5/8/2025
Todd Renner is a seasoned cybersecurity professional with over 25 years of experience leading global cyber investigations, incident response efforts, and digital asset recovery operations. He advises clients on a wide range of cybersecurity and data privacy matters, combining deep technical knowledge with a strategic understanding of risk, compliance, and regulatory frameworks. With a distinguished background at the Federal Bureau of Investigation (FBI) and National Security Agency (NSA), Mr. Renner has contributed to national security and international cyber collaboration and has played a key role in mentoring the next generation of cybersecurity professionals.

In this episode…

The rising complexity of cyber threats continues to test how businesses prepare, respond, and recover. Sophisticated threat actors are exploiting the vulnerabilities of private companies and leveraging AI tools to accelerate their attacks. Despite these dangers, many organizations hesitate to involve law enforcement when a cyber event occurs. This hesitation often stems from misconceptions about what law enforcement involvement entails, including fears of losing control over their systems or exposing sensitive company information. As a result, companies may prioritize quickly restoring operations over pursuing the attackers, leaving critical security gaps unaddressed.

Collaborating with law enforcement doesn't mean forfeiting control or exposing confidential data unnecessarily. Investigations often reveal repeated issues, including mobile device compromises, missing multifactor authentication, and failure to improve cybersecurity measures after a breach. To be better prepared, companies need to develop and practice incident response plans, ensure leadership remains involved, and build security programs that evolve beyond incident response. And, as threat actors actively use AI to accelerate data aggregation and create convincing deepfakes, companies need to start thinking about how to better detect these threats.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Todd Renner, Senior Managing Director at FTI Consulting, about how organizations are responding to modern cyber threats and where many still fall short. Todd shares why companies hesitate to engage law enforcement, how threat actors are using AI for faster targeting and impersonation, and why many businesses fail to strengthen their cybersecurity programs after a breach. He also discusses why deepfakes are eroding trust and raising new challenges for companies, and he provides practical tips for keeping both organizations and families safe from evolving threats.

Duration: 00:22:13

Top Takeaways From IAPP GPS 2025 and Atlanta AI Week

5/1/2025
Jodi Daniels is the Founder and CEO of Red Clover Advisors, a privacy consultancy that integrates data privacy strategy and compliance into a flexible, scalable approach that simplifies complex privacy challenges. A Certified Information Privacy Professional, Jodi brings over 27 years of experience in privacy, marketing, strategy, and finance across diverse sectors, supporting companies from startups to the Fortune 500. A national keynote speaker, she has been featured in CNBC, The Economist, WSJ, Forbes, Inc., and many other publications. Jodi holds an MBA and a BBA from Emory University's Goizueta Business School.

Justin Daniels is a corporate attorney who advises domestic and international companies on business growth, M&A, and technology transactions, with over $2 billion in closed deals. He helps clients navigate complex issues involving data privacy, cybersecurity, and emerging technologies like AI, autonomous vehicles, blockchain, and fintech. Justin partners with C-suites and boards to manage cybersecurity as a strategic enterprise risk and leads breach response efforts across industries such as healthcare, logistics, and manufacturing. A frequent keynote speaker and media contributor, Justin has presented at top events including the RSA Conference, covering topics like cybersecurity in M&A, AI risk, and the intersection of privacy and innovation.

Together, Jodi and Justin host the top-ranked She Said Privacy/He Said Security podcast and are authors of the WSJ best-selling book, Data Reimagined: Building Trust One Byte at a Time.

In this episode…

From a major privacy summit to a regional AI event, experts across sectors are emphasizing that regulatory scrutiny is intensifying while AI capabilities and risks are accelerating. State privacy regulators are coordinating enforcement efforts, actively monitoring how companies handle privacy rights requests and whether cookie consent platforms work as they should. At the same time, AI tools are advancing rapidly with limited regulatory oversight, raising serious ethical and societal concerns. What practical lessons can businesses take from IAPP's 2025 Global Privacy Summit and Atlanta's AI Week to strengthen compliance, reduce risk, and prepare for what's ahead?

At the 2025 IAPP Global Privacy Summit, a major theme emerged: state privacy regulators are collaborating on enforcement more closely than ever before. When it comes to honoring privacy rights, this collaboration spans early inquiry stages through active enforcement, making it critical for businesses to establish, regularly test, and monitor their privacy rights processes. It also means that companies need to audit cookie consent platforms regularly, ensure compliance with universal opt-out signals like the Global Privacy Control, and align privacy notices with actual practices. Regulatory enforcement advisories and FAQs should be treated as essential reading to stay current on regulators' priorities.

Likewise, at the inaugural Atlanta AI Week, national security and ethical concerns came into sharper focus. Despite promises of localized data storage, some social media platforms and apps continue to raise alarms over foreign governments' potential access to personal data. While experts encourage experimentation and practical application of AI tools, they are also urging businesses to remain vigilant to threats such as deepfakes, AI-driven misinformation, and the broader societal implications of unchecked AI development.
In this episode of She Said Privacy/He Said Security, Jodi Daniels, Founder and CEO of Red Clover Advisors, and Justin Daniels, Shareholder and Corporate Attorney at Baker Donelson, share their top takeaways from the IAPP Global Privacy Summit 2025 and the inaugural Atlanta AI Week. Jodi highlights practical steps for improving privacy rights request handling, the importance of regularly testing cookie consent management platforms, and ensuring...
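As a concrete illustration of the universal opt-out signals mentioned above, here is a minimal, hypothetical TypeScript sketch of how a site might detect and honor the Global Privacy Control. The `applyOptOut` helper is a made-up placeholder for app-specific logic; the `navigator.globalPrivacyControl` property and the `Sec-GPC: 1` request header come from the GPC specification.

```typescript
// A minimal sketch of detecting and honoring the Global Privacy Control (GPC).

// Hypothetical helper: suppress sale/sharing of personal data for this visitor.
function applyOptOut(): void {
  console.log("GPC detected: treating visitor as opted out of sale/sharing");
}

// Client-side: supporting browsers expose a boolean on `navigator`.
function gpcEnabled(): boolean {
  const nav = navigator as Navigator & { globalPrivacyControl?: boolean };
  return nav.globalPrivacyControl === true;
}

if (gpcEnabled()) {
  applyOptOut();
}

// Server-side: the same signal arrives as a `Sec-GPC: 1` request header
// (header names are lowercase in Node.js-style header maps).
function requestHasGpc(headers: Record<string, string | string[] | undefined>): boolean {
  return headers["sec-gpc"] === "1";
}
```

Testing both paths, the browser property and the request header, is one way to verify that a consent platform actually responds to the signal rather than merely displaying a banner.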

Duration: 00:19:07

From Principle to Practice: What Privacy Pros Need to Succeed

4/17/2025
Peter Kosmala is a course developer and instructor at York University in Canada and leads its Information Privacy Program. Peter is a former marketer, technologist, lobbyist, and association leader and a current consultant, educator, and international speaker. He served the IAPP as Vice President and led the launch of the CIPP certification in the early 2000s.

In this episode…

As data privacy continues to evolve, privacy professionals need to stay sharp by reinforcing their foundational knowledge and refining their practical skills. It's no longer enough to just understand and comply with regulatory requirements. Today's privacy work also demands cultural awareness, ethical judgment, and the ability to apply privacy principles to real-world settings. How can privacy professionals expand their expertise and remain effective in an ever-changing environment?

Privacy professionals can't rely on legal knowledge alone to stay ahead. Privacy frameworks like the Fair Information Practice Principles (FIPPs), OECD Guidelines, and others offer principles that help privacy pros navigate shifting global privacy laws and emerging technologies. Privacy pros should also deepen their cultural literacy, recognizing the societal and political drivers behind laws like GDPR to align privacy practices with public expectations. Hands-on operational experience is just as important. Conducting privacy impact assessments (PIAs), responding to data subject access requests (DSARs), and developing clear communications are just a few ways privacy pros can turn knowledge into practical applications.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Peter Kosmala, Course Developer and Instructor at York University, about how privacy professionals can future-proof their skills. Peter discusses the value of foundational privacy frameworks, the tension between personalization and privacy, the limits of law-based compliance, and the growing need for ethical data use. He also explains the importance of privacy certifications, hands-on learning, and principled thinking to build programs that work in the real world.

Duration: 00:34:03