
The Cyberlaw Podcast

Technology Podcasts

The Cyberlaw Podcast is a weekly interview series and discussion offering an opinionated roundup of the latest events in technology, security, privacy, and government. It features in-depth interviews of a wide variety of guests, including academics, politicians, authors, reporters, and other technology and policy newsmakers. Hosted by cybersecurity attorney Stewart Baker, whose views expressed are his own.

Location:

United States


Language:

English


Episodes

World on the Brink with Dmitri Alperovitch

4/22/2024
Okay, yes, I promised to take a hiatus after episode 500. Yet here it is a week later, and I'm releasing episode 501. Here's my excuse. I read and liked Dmitri Alperovitch's book, "World on the Brink: How America Can Beat China in the Race for the 21st Century." I told him I wanted to do an interview about it. Then the interview got pushed into late April because that's when the book is actually coming out. So sue me. I'm back on hiatus. The conversation begins with Dmitri's background in cybersecurity and geopolitics, from his emigration from the Soviet Union as a child through the founding of CrowdStrike to his current roles as a founder of Silverado Policy Accelerator and an advisor to the Defense Department. He recounts his early start in cryptography and his role in investigating the 2010 Chinese hack of Google and other companies, an attack he named Operation Aurora. Dmitri opens his book with a chillingly realistic scenario of a Chinese invasion of Taiwan. He explains that this is not merely a hypothetical exercise but a well-researched depiction based on his extensive discussions with Taiwanese leadership, military experts, and his own analysis of the terrain. Then we dive into the main theme of his book – how to prevent that scenario from coming true. Dmitri stresses the similarities and differences between the US-Soviet Cold War and what he sees as Cold War II between the U.S. and China. He argues that, like Cold War I, Cold War II will require a comprehensive strategy, leveraging military, economic, diplomatic, and technological deterrence. Dmitri also highlights the structural economic problems facing China, such as the middle-income trap and a looming population collapse. Despite these challenges, he stresses that the U.S. will face tough decisions as it seeks to deter conflict with China while maintaining its other global obligations.
We talk about diversifying critical supply chains away from China and slowing China's technological progress in areas like semiconductors. This will require continuing collaboration with allies like Japan and the Netherlands to restrict China's access to advanced chip-making equipment. Finally, I note the remarkable role played in Cold War I by Henry Kissinger and Zbigniew Brzezinski, two influential national security advisers who were also first-generation immigrants. I ask whether it's too late to nominate Dmitri to play the same role in Cold War II. You heard it here first!

Duration:00:49:36


Who’s the Bigger Cybersecurity Risk – Microsoft or Open Source?

4/11/2024
There’s a whiff of Auld Lang Syne about episode 500 of the Cyberlaw Podcast, since after this it will be going on hiatus for some time and maybe forever. (Okay, there will be an interview with Dmitri Alperovitch about his forthcoming book, but the news commentary is done for now.) Perhaps it’s appropriate, then, for our two lead stories to revive a theme from the 90s – who’s better, Microsoft or Linux? Sadly for both, the current debate is over who’s worse, at least for cybersecurity. Microsoft’s sins against cybersecurity are laid bare in a report of the Cyber Safety Review Board, Paul Rosenzweig reports. The Board digs into the disastrous compromise of a Microsoft signing key that gave China access to US government email. The language of the report is sober, and all the more devastating because of its restraint. Microsoft seems to have entirely lost the security focus it so famously pivoted to twenty years ago. Getting it back will require prioritizing security at a time when the company feels compelled to focus relentlessly on building AI into its offerings. The signs for improvement are not good. The only people who come out of the report looking good are the State Department security team, whose mad cyber skillz deserve to be celebrated – not least because they’ve been questioned by the rest of government for decades. With Microsoft down, you might think open source would be up. Think again, Nick Weaver tells us. The strategic vulnerability of open source, as well as its appeal, is that anyone can contribute code to a project they like. And in the case of the XZ backdoor, anybody did just that. A well-organized, well-financed, and knowledgeable group of hackers cajoled and bullied their way into a contributing role on an open source project that provides a widely used compression library. Once in, they contributed a backdoored feature that used public key encryption to ensure access only to the authors of the feature.
It was weeks from being in every Linux distro when a Microsoft employee discovered the implant. But the people who almost pulled this off seemed well-practiced and well-resourced. They’ve likely done this before, and will likely do it again – leaving all open source projects facing their own strategic vulnerability. It wouldn’t be the Cyberlaw Podcast without at least one Baker rant about political correctness. The much-touted bipartisan privacy bill threatening to sweep to enactment in this Congress turns out to be a disaster for anyone who opposes identity politics. To get liberals on board with a modest amount of privacy preemption, I charge, the bill would effectively overturn the Supreme Court’s Harvard admissions decision and impose race, gender, and other quotas on a host of other activities that have avoided them so far. Adam Hickey and I debate the language of the bill. Why would the Republicans who control the House go along with this? I offer two reasons: first, business lobbyists want both preemption and a way to avoid charges of racial discrimination, even if it means relying on quotas; second, maybe Sen. Alan Simpson was right that the Republican Party really is the Stupid Party. Nick and I turn to a difficult AI story, about how Israel is using algorithms to identify and kill even low-level Hamas operatives in their homes. Far more than killer robots, this use of AI in war is likely to sweep the world. Nick is critical of Israel’s approach; I am less so. But there’s no doubt that the story forces a sober assessment of just how personal and how ugly war will soon be. Paul takes the next story, in which Microsoft serves up leftover “AI gonna steal yer election” tales that are not much different from all the others we’ve heard since 2016 (when straight social media was the villain). The bottom line: China is using AI in social media to advance its interests and probe US weaknesses, but it doesn’t seem to be having much effect.
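The XZ backdoor's gating trick described above – an implant usable only by whoever holds the matching private key – can be sketched in a few lines. This is purely a toy illustration with textbook RSA numbers, not the actual implant's code (the real backdoor reportedly checked Ed25519 signatures); all names and parameters here are invented for readability:

```python
# Toy sketch of a signature-gated backdoor (illustration only, NOT the
# real XZ code). The implant ships only the PUBLIC key; a command is
# accepted only if it carries a signature made with the authors'
# PRIVATE key, so even someone who finds the backdoor can't operate it.
# Textbook RSA numbers (p=61, q=53) keep the math readable; they are
# wildly insecure.
import hashlib

N, E = 3233, 17        # public key baked into the implant
D = 2753               # private key, held only by the attackers

def _digest(cmd: bytes) -> int:
    # Reduce the command hash into the RSA modulus range.
    return int.from_bytes(hashlib.sha256(cmd).digest(), "big") % N

def sign(cmd: bytes) -> int:
    # Only the attackers, who hold D, can perform this step.
    return pow(_digest(cmd), D, N)

def backdoor_accepts(cmd: bytes, sig: int) -> bool:
    # RSA verification: sig^E mod N must reproduce the command digest.
    return pow(sig, E, N) == _digest(cmd)

cmd = b"run: id"
good = sign(cmd)
assert backdoor_accepts(cmd, good)                 # the authors get in
assert not backdoor_accepts(cmd, (good + 1) % N)   # anyone else is refused
```

The asymmetry is the point: possession of the backdoored code reveals only the verification half of the keypair, which is why such an implant stays exclusive to its authors even after it is discovered.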
Nick answers the question, “Will AI companies...

Duration:01:11:13


Taking AI Existential Risk Seriously

4/2/2024
This episode is notable not just for cyberlaw commentary, but for its imminent disappearance from these pages and from podcast playlists everywhere. Having promised to take stock of the podcast when it reached episode 500, I’ve decided that I, the podcast, and the listeners all deserve a break. So I’ll be taking one after the next episode. No final decisions have been made, so don’t delete your subscription, but don’t expect a new episode any time soon. It’s been a great run, from the dawn of the podcast age, through the ad-fueled podcast boom, which I manfully resisted, to the market correction that’s still under way. It was a pleasure to engage with listeners from all over the world. Yes, even the EU! As they say, in the podcast age, everyone is famous for fifteen people. That’s certainly been true for me, and I’ll always be grateful for your support – not to mention for all the great contributors who’ve joined the podcast over the years. Back to cyberlaw: there are a surprising number of people arguing that there’s no reason to worry about existential and catastrophic risks from proliferating or runaway AI. Some of that is people seeking clever takes; a lot of it is ideological, driven by fear that worrying about the end of the world will distract attention from the dire but unidentified dangers of face recognition. One useful antidote is the Gladstone Report, written for the State Department’s export control agency. David Kris gives an overview of the report for this episode of the Cyberlaw Podcast. The report explains the dynamic, and some of the evidence, behind all the doom-saying, a discussion that is more persuasive than its prescriptions for regulation. Speaking of the dire but unidentified dangers of face recognition, Paul Stephan and I unpack a New York Times piece saying that Israel is using face recognition in its Gaza conflict. Actually, we don’t so much unpack it as turn it over and shake it, only to discover it’s largely empty.
Apparently the editors of the NYT thought that tying face recognition to Israel and Gaza was all we needed to understand that the technology is evil. More interesting is the story arguing that the National Security Agency, traditionally at the forefront of computers and national security, may have to sit out the AI revolution. The reason, David tells us, is that NSA’s access to mass quantities of data for training is complicated by rules and traditions against intelligence agencies accessing data about Americans. And there are few training databases not contaminated with data about and by Americans. While we’re feeling sorry for the intelligence community as it struggles with new technology, Paul notes that Yahoo News has assembled a long analysis of all the ways that personalized technology is making undercover operations impossible for CIA and FBI alike. Michael Ellis weighs in with a review of a report by the Foundation for Defense of Democracies on the need for a US Cyber Force to man, train, and equip fighting nerds for Cyber Command. It’s a bit of an inside baseball solution, heavy on organizational boxology, but we’re both persuaded that the current system for attracting and retaining cyberwarriors is not working. In the spirit of “Yes, Minister,” we must do something, and this is something. In that same spirit, it’s fair to say that the latest Senate Judiciary proposal for a “compromise” 702 renewal bill is nothing much – a largely phony compromise chock full of ideological baggage. David Kris and I are unimpressed, and surprised at how muted the Biden administration has been in trying to wrangle the Democratic Senate into producing a workable bill. Paul and Michael review the latest trouble for TikTok – a likely FTC lawsuit over privacy. Michael and I puzzle over the stories claiming that Meta may have “wiretapped” Snapchat analytic data. It comes from a trial lawyer suing Meta, and there are a lot of unanswered questions, such as whether users...

Duration:01:01:45


The Fourth Antitrust Shoe Drops, on Apple This Time

3/26/2024
The Biden administration has been aggressively pursuing antitrust cases against Silicon Valley giants like Amazon, Google, and Facebook. This week it was Apple’s turn. The Justice Department (joined by several state AGs) filed a gracefully written complaint accusing Apple of improperly monopolizing the market for “performance smartphones.” The market definition will be a weakness for the government throughout the case, but the complaint does a good job of identifying ways in which Apple has built a moat around its business without an obvious benefit for its customers. The complaint focuses on Apple’s discouraging of multipurpose apps and cloud streaming games, its lack of message interoperability, the tying of Apple Watches to the iPhone to make switching to Android expensive, and its insistence on restricting digital wallets on its platform. This lawsuit will continue well into the next presidential administration, so much depends on the outcome of the election this fall. Volt Typhoon is still in the news, Andrew Adams tells us, as the government continues to sound the alarm about Chinese intent to ravage American critical infrastructure in the event of a conflict. Water systems are getting most of the attention this week. I can’t help wondering how we expect the understaffed and underresourced water and sewage companies in this country to defeat sophisticated state-sponsored attackers. This leads Cristin and me to a discussion of how the SEC’s pursuit of CISO Tim Brown and demands for more security disclosures will improve the country’s cybersecurity. Short answer: It won’t. Cristin covers the legislative effort to force a divestiture of TikTok. The bill has gone to the Senate, where it is moving slowly, if at all. Speaking as a parent of teenagers and voters, Cristin is not surprised. Meanwhile, the House has sent a second bill to the Senate by a unanimous vote. This one would block data brokers from selling Americans’ data to foreign adversaries.
Andrew notes that the House bill covers only data brokers; other data holders, like Google and Apple, would face a similar restriction under the executive order, so the Senate will have plenty of opportunity to deal with Chinese access to American personal data. In the wake of the Murthy argument over administration jawboning in favor of censorship of mostly right-wing posts, Andrew reports that the FBI has resumed outreach to social media companies, at least where it identifies foreign influence campaigns. And the FDA, which piled on to criticize ivermectin advocates, has withdrawn its dubious and condescending tweets. Cristin reports on the spyware agreement sponsored by the United States, which has collected several new supporters. Whether this will reduce spyware installations or simply change the countries that supply the spyware remains to be seen.

Duration:00:46:25


Social Speech and the Supreme Court

3/19/2024
The Supreme Court is getting a heavy serving of First Amendment social media cases. Gus Hurwitz covers two that made the news last week. In the first, Justice Barrett spoke for a unanimous court in spelling out the very factbound rules that determine when a public official may use a platform’s tools to suppress critics posting on his or her social media page. Gus and I agree that this might mean a lot of litigation, unless public officials wise up and simply follow the Court’s broad hint: If you don’t want your page to be treated as official, simply say up top that it isn’t official. The second social media case making news was being argued as we recorded. Murthy v. Missouri appealed a broad injunction against the US government pressuring social media companies to take down posts the government disagrees with. The Court was plainly struggling with a host of justiciability issues and a factual record that the government challenged vigorously. If the Court reaches the merits, it will likely address the question of when encouraging the suppression of particular speech slides into coerced censorship. Gus and Jeffrey Atik review the week’s biggest news – the House has passed a bill to force the divestment of TikTok, despite the outcry of millions of influencers. Whether the Senate will be quick to follow suit is deeply uncertain. Melanie Teplinsky covers the news that data about Americans’ driving habits is increasingly being sent to insurance companies to help them adjust their rates. Melanie also describes the FCC’s new Cyber Trust Mark for IoT devices. Like the Commission, our commentators think this is a good idea. Gus takes us back to more contested territory: What should be done about the use of technology to generate fake pictures, especially nude fake pictures? We also touch on a UK debate about a snippet of audio that many believe is a fake meant to embarrass a British Labour politician.
Gus tells us the latest news from the SVR’s compromise of a Microsoft network. This leads us to a meditation on the unintended consequences of the SEC’s new cyber incident reporting requirements. Jeffrey explains the bitter conflict over app store sales between Apple and Epic Games. Melanie outlines a possible solution to the lack of cybersecurity standards (not to mention a lack of cybersecurity) in water systems. It’s interesting, but it’s too early to judge its chances of being adopted. Melanie also tells us why JetBrains and Rapid7 have been fighting over “silent patching.” Finally, Gus and I dig into Meta’s high-stakes fight with the FTC, and the rough reception it got from a DC district court.

Duration:01:00:16


Preventing Sales of Personal Data to Adversary Nations

3/14/2024
This bonus episode of the Cyberlaw Podcast focuses on the national security implications of sensitive personal information. Sales of personal data have been largely unregulated as the growth of adtech has turned personal data into a widely traded commodity. This, in turn, has produced a variety of policy proposals – comprehensive privacy regulation, a weird proposal from Sen. Wyden (D-OR) to ensure that the US government cannot buy such data while China and Russia can, and most recently an Executive Order to prohibit or restrict commercial transactions affording China, Russia, and other adversary nations access to Americans’ bulk sensitive personal data and government-related data. To get a deeper understanding of the executive order, and the Justice Department’s plans for implementing it, Stewart interviews Lee Licata, Deputy Section Chief for National Security Data Risk.

Duration:00:31:52


The National Cybersecurity Strategy – How Does it Look After a Year?

3/13/2024
Kemba Walden and Stewart revisit the National Cybersecurity Strategy a year later. Sultan Meghji examines the ransomware attack on Change Healthcare and its consequences. Brandon Pugh reminds us that even large companies like Google are not immune to having their intellectual property stolen. The group conducts a thorough analysis of a "public option" model for AI development. Brandon discusses the latest developments in personal data and child online protection. Lastly, Stewart inquires about Kemba's new position at Paladin Global Institute, following her departure from the role of Acting National Cyber Director.

Duration:00:56:30


Episode 495: The National Cybersecurity Strategy – How Does it Look After a Year?

3/10/2024
Kemba Walden and Stewart revisit the National Cybersecurity Strategy a year later. Sultan Meghji examines the ransomware attack on Change Healthcare and its consequences. Brandon Pugh reminds us that even large companies like Google are not immune to having their intellectual property stolen. The group conducts a thorough analysis of a "public option" model for AI development. Brandon discusses the latest developments in personal data and child online protection. Lastly, Stewart inquires about Kemba's new position at Paladin Global Institute, following her departure from the role of Acting National Cyber Director.

Duration:00:56:29


Regulating personal data for national security

3/7/2024
The United States is in the process of rolling out a sweeping regulation for personal data transfers. But the rulemaking is getting limited attention because it targets transfers to our rivals in the new Cold War – China, Russia, and their allies. Adam Hickey, whose old office is drafting the rules, explains the history of the initiative, which stems from endless Committee on Foreign Investment in the United States efforts to impose such controls on a company-by-company basis. Now, with an executive order as the foundation, the Department of Justice has published an advance notice of proposed rulemaking that promises what could be years of slow-motion regulation. Faced with a similar issue—the national security risk posed by connected vehicles, particularly those sourced in China—the Commerce Department has issued a laconic notice whose telegraphic style contrasts sharply with the highly detailed Justice draft. I take a stab at the riskiest of ventures—predicting the results in two Supreme Court cases about social media regulations adopted by Florida and Texas. Four hours of strong appellate advocacy and a highly engaged Court make predictions risky, but here goes. I divide the Court into two camps—the Justices (Thomas, Alito, probably Gorsuch) who think that the censorship we should worry about comes from powerful speech-monopolizing platforms and the Justices (Kavanaugh, the Chief) who see the cases through a lens that values corporate free speech. Many of the remainder (Kagan, Sotomayor, Jackson) see social media content moderation as understandable and justified, but they’re uneasy about the power of large platforms and reluctant to grant a sweeping immunity to those companies. To my mind, this foretells a decision striking down the laws insofar as they restrict content moderation. But that decision won’t resolve all the issues raised by the two laws, and industry’s effort to overturn them entirely on the current record is also likely to fail.
There are too many provisions in those laws that some of the justices considered reasonable for NetChoice to win a sweeping victory. So I look for an opinion that rejects the “private censorship” framing but expressly leaves open or even approves other, narrower measures disciplining platform power, leaving the lower courts to deal with them on remand. Kurt Sanger and I dig into the Securities and Exchange Commission's amended complaint against Tim Brown and SolarWinds, alleging material misrepresentation with respect to company cybersecurity. The amended complaint tries to bolster the case against the company and its CISO, but at the end of the day it’s less than fully persuasive. SolarWinds didn’t have the best security, and it was slow to recognize how much harm its compromised software was causing its customers. But the SEC’s case for disclosure feels like 20/20 hindsight. Unfortunately, CISOs are likely to spend the next five years trying to guess which intrusions will look bad in hindsight. I cover the National Institute of Standards and Technology’s (NIST) release of version 2.0 of the Cybersecurity Framework, particularly its new governance and supply chain features. Adam reviews the latest update on section 702 of FISA, which likely means the program will stumble into 2025, thanks to a certification expected in April. We agree that Silicon Valley is likely to seize on the opportunity to engage in virtue-signaling litigation over the final certification. Kurt explains the remarkable power of adtech data for intelligence purposes, and Senator Ron Wyden’s (D-OR) effort to make sure such data is denied to U.S. agencies but not to the rest of the world. He also pulls Adam and me into the debate over whether we need a federal backup for cyber insurance. Bruce Schneier thinks we do, but none of us is persuaded. Finally, Adam and I consider the divide between CISA and GOP election officials.
We agree that it has its roots in CISA’s imprudently allowing election security mission...

Duration:00:53:10


Episode 494: Regulating Personal Data for National Security

3/4/2024
Adam Hickey opens this week's episode by covering the Executive Order and ANPR on data transfer restrictions. Stewart takes a stab at the riskiest of ventures – predicting Supreme Court outcomes in cases on social media regulations adopted by Florida and Texas. Kurt Sanger and Stewart dig into the SEC's amended complaint against SolarWinds. The panel covers NIST 2.0, updates to section 702, the potential of federal cyber insurance, and CISA's role in election security.

Duration:00:53:09


Google’s Gemini tells us exactly what’s wrong with Silicon Valley

2/27/2024
This episode of the Cyberlaw Podcast kicks off with the Babylon Bee’s take on Google Gemini’s woke determination to inject a phony diversity into images of historical characters. The Bee purports to quote a black woman commenting on the AI engine’s performance: "After decades of nothing but white Nazis, I can finally see a strong, confident black female wearing a swastika. Thanks, Google!" Jim Dempsey and Mark MacCarthy join the discussion because Gemini’s preposterous diversity quotas deserve more than snark. In fact, I argue, they were not errors; they were entirely deliberate efforts by Google to give its users not what they want but what Google in its wisdom thinks they should want. That such bizarre results were achieved by Google’s sneakily editing prompts to ask for, say, “indigenous” founding fathers simply shows that Google has found a unique combination of hubris and incompetence. More broadly, Mark and Jim suggest, the collapse of Google’s effort to control its users raises this question: Can we trust AI developers when they say they have installed guardrails to make their systems safe? The same might be asked of the latest in what seems an endless stream of experts demanding that AI models defeat their users by preventing them from creating “harmful” deepfake images. Later, Mark points out that most of Silicon Valley recently signed on to promises to combat election-related deepfakes. Speaking of hubris, Michael Ellis covers the State Department’s stonewalling of a House committee trying to find out how generously the Department funded a group of ideologues trying to cut off advertising revenues for right-of-center news and comment sites. We take this story a little personally, having contributed op-eds to several of the blacklisted sites. Michael explains just how much fun Western governments had taking down the infamous Lockbit ransomware service.
I credit the Brits for the humor displayed as governments imitated Lockbit’s graphics, gimmicks, and attitude. There were arrests, cryptocurrency seizures, indictments, and more. But a week later, Lockbit was claiming that its infrastructure was slowly coming back online. Jim unpacks the FTC’s case against Avast for collecting the browsing habits of its antivirus customers. He sees this as another battle in the FTC’s war against “de-identified” data as a response to privacy concerns. Mark notes the EU’s latest investigation into TikTok. And Michael explains how the Computer Fraud and Abuse Act ties to Tucker Carlson’s ouster from the Fox network. Mark and I take a moment to tease next week’s review of the Supreme Court oral argument over Texas and Florida social media laws. The argument was happening while we were recording, but it’s clear that the outcome will be a mixed bag. Tune in next week for more. Jim explains why the administration has produced an executive order about cybersecurity in America’s ports, and the legal steps needed to bolster port security. Finally, in quick hits: We dip into the trove of leaked files exposing how China’s cyberespionage contractors do business. I wish Rob Joyce well as he departs NSA and prepares for a career in cyberlaw podcasting. I recommend the most cringey and irresistible long read of the week: “How I Fell for an Amazon Scam Call and Handed Over $50,000.” And in a scary taste of the near future, a new paper discloses that advanced LLMs make pretty good autonomous hacking agents. Download 493rd Episode (mp3). You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect...

Duration:00:55:04


Episode 493: Google’s Gemini Tells Us Exactly What’s Wrong With Silicon Valley

2/27/2024
Stewart kicks off this week’s episode with the Babylon Bee’s take on Google Gemini’s release. Michael Ellis covers an investigation of the State Department’s funding of NGOs to combat misinformation and comments on the FBI’s takedown of LockBit. Jim Dempsey unpacks the FTC’s case against Avast and an Executive Order on port cybersecurity. Mark MacCarthy speaks on the EU’s latest investigation, and Cindy Cohn from the EFF swings by to promote a new podcast.

Duration:00:55:03


Are AI models learning to generalize?

2/20/2024
We begin this episode with Paul Rosenzweig describing major progress in teaching AI models to do text-to-speech conversions. Amazon flagged its new model as having “emergent” capabilities in handling what had been serious problems – things like speaking with emotion, or conveying foreign phrases. The key is the size of the training set, but Amazon was able to spot the point at which more data led to unexpected skills. This leads Paul and me to speculate that training AI models to perform certain tasks eventually leads the model to learn “generalization” of its skills. If so, the more we train AI on a variety of tasks – chat, text to speech, text to video, and the like – the better AI will get at learning new tasks, as generalization becomes part of its core skill set. It’s lawyers holding forth on the frontiers of technology, so take it with a grain of salt. Cristin Flynn Goodwin and Paul Stephan join Paul Rosenzweig to provide an update on Volt Typhoon, the Chinese APT that is littering Western networks with the equivalent of logical land mines. Actually, it’s not so much an update on Volt Typhoon, which seems to be aggressively pursuing its strategy, as on the hyperventilating Western reaction to Volt Typhoon. There’s no doubt that China is playing with fire, and that the United States and other cyber powers should be liberally sowing similar weapons in Chinese networks. But the public measures adopted by the West do not seem likely to effectively defeat or deter China’s strategy. The group is less impressed by the New York Times’ claim that China is pursuing a dangerous electoral influence campaign on U.S. social media platforms. The Russians do it better, Paul Stephan says, and even they don’t do it well, I argue. Paul Rosenzweig reviews the House China Committee report alleging a link between U.S. venture capital firms and Chinese human rights abuses. 
We agree that Silicon Valley VCs have paid too little attention to how their investments could undermine the system on which their billions rest, a state of affairs not likely to last much longer. Paul Stephan and Cristin bring us up to date on U.S. efforts to disrupt Chinese and Russian hacking operations. We will be eagerly waiting for resolution of the European fight over Facebook’s subscription fee and the move by websites to “Pay or Consent” privacy terms. I predict that Eurocrats’ hypocrisy will be tested by an effort to rule for elite European media sites, which already embrace “Pay or Consent,” while ruling against Facebook. Paul Rosenzweig is confident that European hypocrisy is up to the task. Cristin and I explore the latest White House enthusiasm for software security liability. Paul Stephan explains the flap over a UN cybercrime treaty, which is and should be stalled in Turtle Bay for the next decade or more. Cristin also covers a detailed new Google TAG report on commercial spyware. And in quick hits: House Republicans tried and failed to find common ground on renewal of FISA Section 702. I recommend Goody-2, the “World’s Most Responsible” AI Chatbot. Dechert has settled a wealthy businessman’s lawsuit claiming that the law firm hacked his computer. Imran Khan is using AI to make impressively realistic speeches about his performance in Pakistani elections. The Kids Online Safety Act secured sixty votes in the U.S. Senate, but whether the House will act on the bill remains to be seen. Download 492nd Episode (mp3).
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Duration:00:49:37


Episode 492: Are AI Models Learning To Generalize?

2/20/2024
Stewart returns in one piece from his Canadian Ski Marathon. Paul Rosenzweig discusses AI text-to-speech advancements and emergent capabilities. Cristin Flynn Goodwin and Paul Stephan evaluate the Western reaction to Volt Typhoon and assess China's influence operations in U.S. elections relative to Russia's. The group discusses digital privacy in Europe and the debate over software liability, and Stewart finds an unlikely ally in the EFF in opposing a UN Cybercrime Treaty.

Duration:00:49:36


Death, Taxes, and Data Regulation

2/16/2024
On the latest episode of The Cyberlaw Podcast, guest host Brian Fleming, along with panelists Jane Bambauer, Gus Hurwitz, and Nate Jones, discuss the latest U.S. government efforts to protect sensitive personal data, including the FTC’s lawsuit against data broker Kochava and the forthcoming executive order restricting certain bulk sensitive data flows to China and other countries of concern.

Nate and Brian then discuss whether Congress has a realistic path to end the Section 702 reauthorization standoff before the April expiration and debate what to make of a recent multilateral meeting in London to discuss curbing spyware abuses.

Gus and Jane then talk about the big news for cord-cutting sports fans, as well as Amazon’s ad data deal with Reach, in an effort to understand some broader difficulties facing internet-based ad and subscription revenue models. Nate considers the implications of Ukraine’s “defend forward” cyber strategy in its war against Russia. Jane next tackles a trio of stories detailing challenges, of the policy and economic varieties, facing Meta on the content moderation front, as well as an emerging problem policing sexual assaults in the Metaverse. Bringing it back to data, Gus wraps the news roundup by highlighting a novel FTC case brought against Blackbaud stemming from its data retention practices.

In this week’s quick hits, Gus and Jane reflect on the FCC’s ban on AI-generated voice cloning in robocalls, Nate touches on an alert from CISA and FBI on the threat presented by Chinese hackers to critical infrastructure, Gus comments on South Korea’s pause on implementation of its anti-monopoly platform act and the apparent futility of nudges (with respect to climate change attitudes or otherwise), and finally Brian closes with a few words on possible broad U.S. import restrictions on Chinese EVs and how even the abundance of mediocre AI-related ads couldn’t ruin Taylor Swift’s Super Bowl.
Download 491st Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Duration:01:04:16


Episode 491: Death, Taxes, and Data Regulation

2/13/2024

Duration:01:04:15


Serious threats, unserious responses

2/6/2024
It was a week of serious cybersecurity incidents paired with unimpressive responses. As Melanie Teplinsky reminds us, the U.S. government has been agitated for months about China’s apparent strategic decision to hold U.S. infrastructure hostage to cyberattack in a crisis. Now the government has struck back at Volt Typhoon, the Chinese threat actor pursuing that strategy. It claimed recently to have disrupted a Volt Typhoon botnet by taking over a batch of compromised routers. Andrew Adams explains how the takeover was managed through the court system. It was a lot of work, and there is reason to doubt the effectiveness of the effort. The compromised routers can be re-compromised if they are turned off and on again. And the only ones that were fixed by the U.S. seizure are within U.S. jurisdiction, leaving open the possibility of DDoS attacks from abroad. And, really, how vulnerable is our critical infrastructure to DDoS attack? I argue that there’s a serious disconnect between the government’s hair-on-fire talk about Volt Typhoon and its business-as-usual response.

Speaking of cyberstuff we could be overestimating, Taiwan just had an election that China cared a lot about. According to one detailed report, China threw a lot of cyber at Taiwanese voters without making much of an impression. Richard Stiennon and I mix it up over whether China would do better in trying to influence the 2024 outcome here.

While we’re covering humdrum responses to cyberattacks, Melanie explains U.S. sanctions on Iranian military hackers for their hack of U.S. water systems.

For comic relief, Richard lays out the latest drama around the EU AI Act, now being amended in a series of backroom deals and informal promises. I predict that the effort to pile incoherent provisions on top of anti-American protectionism will not end in a GDPR-style triumph for Europe, whose market is now small enough for AI companies to ignore if the regulatory heat is turned up arbitrarily.

The U.S. is not the only player whose response to cyberintrusions is looking inadequate this week. Richard explains Microsoft’s recent disclosure of a Midnight Blizzard attack on the company and a number of its customers. The company’s obscure explanation of how its technology contributed to the attack and, worse, its effort to turn the disaster into an upsell opportunity earned Microsoft a patented Alex Stamos spanking.

Andrew explains the recent Justice Department charges against three people who facilitated the big $400m FTX hack that coincided with the exchange’s collapse. Does that mean it wasn’t an inside job? Not so fast, Andrew cautions. The government didn’t recover the $400m, and it isn’t claiming the three SIM-swappers it has charged are the only conspirators.

Melanie explains why we’ve seen a sudden surge in state privacy legislation. It turns out that industry has stopped fighting the idea of state privacy laws and is now selling a light-touch model law that skips things like private rights of action.

I give a lick and a promise to a “privacy” regulation now being pursued by the CFPB for consumer financial information. I put privacy in quotes, because it’s really an opportunity to create a whole new market for data that will assure better data management while breaking up the advantage of incumbents’ big data holdings. Bruce Schneier likes the idea. So do I, in principle, except that it sounds like a massive re-engineering of a big industry by technocrats who may not be quite as smart as they think they are. Bruce, if you want to come on the podcast to explain the whole thing, send me an email!

Spies are notoriously nasty, and often petty, but surely the nastiest and pettiest of American spies, Joshua Schulte, was sentenced to 40 years in prison last week. Andrew has the details.

There may be some good news on the ransomware front. More victims are refusing to pay. Melanie, Richard, and I explore ways to keep that trend going. I continue to agitate for...

Duration:00:54:19


Episode 490: Serious Threats, Unserious Responses

2/6/2024

Duration:00:54:18


Going Deep on Deep Fakes—Plus a Bonus Interview with Rob Silvers on the Cyber Safety Review Board.

1/30/2024
It was a big week for deep fakes generated by artificial intelligence. Sultan Meghji, who’s got a new AI startup, walked us through four stories that illustrate the ways AI will lead to more confusion about who’s really talking to us. First, a fake Biden robocall urged people not to vote in the New Hampshire primary. Second, a bot purporting to offer Dean Phillips’s views on the issues was sanctioned by OpenAI because it didn’t have Phillips’s consent. Third, fake nudes of Taylor Swift led to a ban on Twitter searches for her image. And, finally, podcasters used AI to resurrect George Carlin and got sued by his family. The moral panic over AI fakery meant that all of these stories were long on “end of the world” and short on “we’ll live through this.”

Regulators of AI are not doing a better job of maintaining perspective. Mark MacCarthy reports that New York City’s AI hiring law, which has punitive disparate-impact disclosure requirements for automated hiring decision engines, seems to have persuaded NYC employers that they aren’t making any automated hiring decisions, so they don’t have to do any disclosures. Not to be outdone, the European Court of Justice has decided that pretty much any tool to aid in decisions is likely to be an automated decision-making technology subject to special (and mostly nonsensical) data protection rules.

Is AI regulation creating its own backlash? Could be. Sultan and I report on a very plausible Republican plan to attack the Biden AI executive order on the ground that the statute its main enforcement mechanism relies on, the Defense Production Act, simply doesn’t authorize what the order calls for.

Speaking of regulation, Maury Shenk covers the EU’s application of the Digital Markets Act to big tech companies like Apple and Google. Apple isn’t used to being treated like just another company, and its contemptuous response to the EU’s rules for its app market could easily lead to regulatory sanctions.
Looking at Apple’s proposed compliance with the California court ruling in the Epic case and the European Digital Markets Act, Mark says it’s time to think about price regulating mobile app stores.

Even handing out big checks to technology companies turns out to be harder than it first sounds. Sultan and I talk about the slow pace of payments to chip makers, and the political imperative to get the deals done before November (and probably before March).

Senator Ron Wyden, D-Ore., is still flogging NSA and the danger of government access to personal data. This time, he’s on about NSA’s purchases of commercial data. So far, so predictable. But this time, he’s misrepresented the facts by saying without qualification that NSA buys domestic metadata, omitting NSA’s clear statement that its netflow “domestic” data consists of communications with one end outside the country.

Maury and I review an absent colleague’s effort to construct a liability regime for insecure software. Jim Dempsey’s proposal looks quite reasonable, but Maury reminds me that he and I produced something similar twenty years ago, and it’s not even close to adoption anywhere in the U.S.

I can’t help but rant about Amazon’s arrogant, virtue-signaling, and customer-hating decision to drop a feature that makes it easy for Ring doorbell users to share their videos with the police. Whose data is it, anyway, Amazon? Sadly, we know the answer.

It looks as though there’s only one place where hasty, ill-conceived tech regulation is being rolled back. Maury reports on the People’s Republic of China, which canned its video game regulations, and its video game regulator for good measure, and started approving new games at a rapid clip, after a proposed regulatory crackdown knocked more than $60 billion off the value of its industry.
We close the news roundup with a few quick hits:

Outside of AI, VCs are closing their wallets and letting startups run out of money

Apple launched an expensive dud – the Vision Pro

Quantum winter may be back as...

Duration:01:12:14


Episode 489: Going Deep on Deep Fakes – Plus a Bonus Interview with Rob Silvers on the Cyber Safety Review Board

1/30/2024

Duration:01:12:13