
Lock and Code

Technology Podcasts

Lock and Code tells the human stories within cybersecurity, privacy, and technology. Rogue robot vacuums, hacked farm tractors, and catastrophic software vulnerabilities—it’s all here.

Location:

United States

Language:

English


Episodes

What can't you say on TikTok?

2/22/2026
A funny thing happened on TikTok last month, and it's brought allegations of censorship, manipulation, and control. It was the week of January 22, and after a long legal battle, TikTok had finally—for the first time in its company history—moved its ownership to new, American stewards. But with the American restructuring, TikTok users immediately reported that something had changed: videos would sometimes fail to record any views, and even direct messages would fail to send. But, according to user complaints, the flaws weren’t random. Instead, they befell users who spoke openly about topics that have become political lightning rods in the US, including Immigration and Customs Enforcement and the actions of sex offender Jeffrey Epstein. To some aggrieved users, the flaws looked like censorship. But, according to TikTok, the error messages and missing video count tallies were part of a larger power outage. “Since yesterday we’ve been working to restore our services following a power outage at a US data center impacting TikTok and other apps we operate,” TikTok wrote on the social media platform X (formerly Twitter). “We’re working with our data center partner to stabilize our service. We’re sorry for this disruption and hope to resolve it soon.” While TikTok reportedly has more than 200 million users in the US alone, it’s far from a universal app. But the changes made to TikTok hint at a bigger sea change in social media and the internet today, in which online spaces are increasingly being altered, shut down, or even controlled—if not through government plot then certainly through corporate influence. Oddly, the ownership change of TikTok was supposed to solve many of these problems. Since TikTok’s 2017 founding in China, American lawmakers and government officials claimed that American users were vulnerable to Chinese surveillance.
All the data that Americans hand over when using TikTok—their names and email addresses, but also their viewing habits, interests, behaviors, political inclinations, and approximate locations—all of that, the argument went, should not belong in the hands of a foreign power. As FBI Director Christopher Wray said in 2022, the risk of TikTok was: “The possibility that the Chinese government could use [TikTok] to control data collection on millions of users or control the recommendation algorithm, which could be used for influence operations.” But the rocky start to the new American TikTok has only drawn renewed scrutiny: Have the past concerns about foreign manipulation now become current concerns about domestic manipulation? Today on the Lock and Code podcast with host David Ruiz, we speak with Zach Hinkle, senior social media manager for Malwarebytes, and MinJi Pae, social media content creator for Malwarebytes, about what they personally experienced during TikTok’s transition to American owners, why the changes matter for the delivery of news and information, and how the internet appears to be shrinking from its earlier promises. As Hinkle said on the podcast: Tune in today. You can...

Duration:00:43:02


Is your phone listening to you? (feat. Lena Cohen) (re-air)

2/8/2026
In January, Google settled a lawsuit that pricked up a few ears: It agreed to pay $68 million to a wide array of people who sued the company together, alleging that Google’s voice-activated smart assistant had secretly recorded their conversations, which were then sent to advertisers to target them with promotions. Google denied any admission of wrongdoing in the settlement agreement, but the fact stands that one of the largest phone makers in the world decided to forgo a trial against some potentially explosive surveillance allegations. It’s a decision that the public has already seen in the past, when Apple agreed to pay $95 million last year to settle similar legal claims against its smart assistant, Siri. Back-to-back, the stories raise a question that just seems to never go away: Are our phones listening to us? This week, on the Lock and Code podcast with host David Ruiz, we revisit an episode from last year in which we tried to find the answer. In speaking to Electronic Frontier Foundation Staff Technologist Lena Cohen about mobile tracking overall, it becomes clear that, even if our phones aren’t literally listening to our conversations, the devices are stuffed with so many novel forms of surveillance that we need not say something out loud to be predictably targeted with ads for it. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Duration:00:42:17


One privacy change for 2026

1/25/2026
When you hear the words “data privacy,” what do you first imagine? Maybe you picture going into your social media apps and setting your profile and posts to private. Maybe you think about who you’ve shared your location with and deciding to revoke some of that access. Maybe you want to remove a few apps entirely from your smartphone, maybe you want to try a new web browser, maybe you even want to skirt the type of street-level surveillance provided by Automated License Plate Readers, which can record your car model, license plate number, and location on your morning drive to work. Importantly, all of these are “data privacy,” but trying to do all of these things at once can feel impossible. That’s why, this year, for Data Privacy Day, Malwarebytes Senior Privacy Advocate (and Lock and Code host) David Ruiz is sharing the one thing he’s doing differently to improve his privacy. And it’s this: He’s given up Google Search entirely. When Ruiz requested the data that Google had collected about him last year, he saw that the company had recorded an eye-popping 8,000 searches in just the span of 18 months. And those 8,000 searches didn’t just reveal what he was thinking about on any given day—including his shopping interests, his home improvement projects, and his late-night medical concerns—they also revealed when he clicked on an ad based on the words he searched. This type of data, which connects a person’s searches to the likelihood of engaging with an online ad, is vital to Google’s revenue, and it’s the type of thing that Ruiz is seeking to finally cut off. So, for 2026, he has switched to a new search engine, Brave Search. Today, on the Lock and Code podcast, Ruiz explains why he made the switch, what he values about Brave Search, and why he also refused to switch to any of the major AI platforms in replacing Google. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use.
For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Duration:00:18:14


Enshittification is ruining everything online (feat. Cory Doctorow)

1/11/2026
There’s a bizarre thing happening online right now where everything is getting worse. Your Google results have become so bad that you’ve likely typed what you’re looking for, plus the word “Reddit,” so you can find discussion from actual humans. If you didn’t take this route, you might get served AI results from Google Gemini, which once recommended that every person should eat “at least one small rock per day.” Your Amazon results are a slog, filled with products that have surreptitiously paid reviews. Your Facebook feed could be entirely irrelevant because the company decided years ago that you didn’t want to see what your friends posted, you wanted to see what brands posted, because brands pay Facebook, and you don’t, so brands are more important than your friends. But, according to digital rights activist and award-winning author Cory Doctorow, this wave of online deterioration isn’t an accident—it’s a business strategy, and it can be summed up in a word he coined a couple of years ago: Enshittification. Enshittification is the process by which an online platform—like Facebook, Google, or Amazon—harms its own services and products for short-term gain while managing to avoid any meaningful consequences, like the loss of customers or the impact of meaningful government regulation. It begins with an online platform treating new users with care, offering services, products, or connectivity that they may not find elsewhere. Then, the platform invites businesses on board that want to sell things to those users. This means businesses become the priority and the everyday user experience is hindered. But then, in the final stage, the platform also makes things worse for its business customers, making things better only for itself. 
This is how a company like Amazon went from helping you find nearly anything you wanted to buy online to helping businesses sell you anything you wanted to buy online to making those businesses pay increasingly high fees to even be discovered online. Everyone, from buyers to sellers, is pretty much entrenched in the platform, so Amazon gets to dictate the terms. Today, on the Lock and Code podcast with host David Ruiz, we speak with Doctorow about enshittification’s fast damage across the internet, how to fight back, and where we can lay blame for where it all started. Tune in today.

Duration:00:53:12


ALPRs are recording your daily drive (feat. Will Freeman)

12/28/2025
There’s an entire surveillance network popping up across the United States that has likely already captured your information, all for nothing more suspicious than driving a car. Automated License Plate Readers, or ALPRs, are AI-powered cameras that scan and store an image of every single vehicle that passes their view. They are mounted onto street lights, installed under bridges, disguised in water barrels, and affixed onto telephone poles, lampposts, parking signs, and even cop cars. Once installed, these cameras capture a vehicle’s license plate number, along with its make, model, and color, and any identifying features, like a bumper sticker, or damage, or even sport trim options. Because nearly every ALPR camera has an associated location, these devices can reveal where a car was headed, and at what time, and by linking data from multiple ALPRs, it’s easy to determine a car’s daylong route and, by proxy, its owner’s daily routine. This deeply sensitive information has been exposed in recent history. In 2024, the US Cybersecurity and Infrastructure Security Agency discovered seven vulnerabilities in cameras made by Motorola Solutions, and at the start of 2025, the outlet Wired reported that more than 150 ALPR cameras were leaking their live streams. But there’s another concern with ALPRs besides data security and potential vulnerability exploits, and that’s with what they store and how they’re accessed. ALPRs are almost uniformly purchased and used by law enforcement. These devices have been used to help solve crime, but their databases can be accessed by police who do not live in your city, or county, or even state, and who do not need a warrant before making a search. In fact, when police access the databases managed by one major ALPR manufacturer, named Flock, one of the few guardrails those police encounter is needing to type a single word in a basic text box.
When the Electronic Frontier Foundation analyzed 12 million searches made by police in Flock’s systems, they learned that police sometimes filled that text box with the word “protest,” meaning that police were potentially investigating activity that is protected by the First Amendment. Today, on the Lock and Code podcast with host David Ruiz, we speak with Will Freeman, founder of the ALPR-tracking project DeFlock, about this growing tide of neighborhood surveillance and the flimsy protections afforded to everyday people. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod...

Duration:00:35:44


Pig butchering is the next “humanitarian global crisis” (feat. Erin West)

12/14/2025
This is the story of the world’s worst scam and how it is being used to fuel entire underground economies that have the power to rival nation-states across the globe. This is the story of “pig butchering.” “Pig butchering” is a violent term that is used to describe a growing type of online investment scam that has ruined the lives of countless victims all across the world. No age group is spared, nearly no country is untouched, and, if the numbers are true, with more than $6.5 billion stolen in 2024 alone, no scam might be more serious today than this. Despite this severity, like many types of online fraud today, most pig-butchering scams start with a simple “hello.” Sent through text or as a direct message on social media platforms like X, Facebook, Instagram, or elsewhere, these initial communications are often framed as simple mistakes—a kind stranger was given your number by accident, and if you reply, you’re given a kind apology and a simple lure: “You seem like such a kind person… where are you from?” Here, the scam has already begun. Pig butchers, like romance scammers, build emotional connections with their victims. For months, their messages focus on everyday life, from family to children to marriage to work. But, with time, once the scammer believes they’ve gained the trust of their victim, they launch their attack: An investment “opportunity.” Pig butchers tell their victims that they’ve personally struck it rich by investing in cryptocurrency, and they want to share the wealth. Here, the scammers will lead their victims through opening an entirely bogus investment account, which is made to look real through sham websites that are littered with convincing tickers, snazzy analytics, and eye-popping financial returns. When the victims “invest” in these accounts, they’re actually giving money directly to their scammers.
But when the victims log into their online “accounts,” they see their money growing and growing, which convinces many of them to invest even more, perhaps even until their life savings are drained. This charade goes on as long as possible until the victims learn the truth and the scammers disappear. The continued theft from these victims is where “pig-butchering” gets its name—with scammers fattening up their victims before slaughter. Today, on the Lock and Code podcast with host David Ruiz, we speak with Erin West, founder of Operation Shamrock and former Deputy District Attorney of Santa Clara County, about pig butchering scams, the failures of major platforms like Meta to stop them, and why this global crisis represents far more than just a few lost dollars. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa...

Duration:00:44:13


Air fryer app caught asking for voice data (re-air)

11/30/2025
It’s often said online that if a product is free, you’re the product, but what if that bargain was no longer true? What if, depending on the device you paid hard-earned money for, you still became a product yourself, to be measured, anonymized, collated, shared, or sold, often away from view? In 2024, a consumer rights group out of the UK teased this new reality when it published research into whether people’s air fryers—seriously—might be spying on them. By analyzing the associated Android apps for three separate air fryer models from three different companies, researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user’s phone. As the researchers wrote: Bizarrely, these types of data requests are far from rare. Today, on the Lock and Code podcast, we revisit a 2024 episode in which host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen utilities that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook. These stories aren’t about mass government surveillance, and they’re not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Duration:00:27:33


Your coworker is tired of AI "workslop" (feat. Dr. Kristina Rapuano)

11/16/2025
Everything’s easier with AI… except having to correct it. In just the three years since OpenAI released ChatGPT, not only has online life changed at home—it’s also changed at work. Some of the biggest software companies today, like Microsoft and Google, are forwarding a vision of an AI-powered future where people don’t write their own emails anymore, or make their own slide decks for presentations, or compile their own reports, or even read their own notifications, because AI will do it for them. But it turns out that offloading this type of work onto AI has consequences. In September, a group of researchers from Stanford University and BetterUp Labs published findings from an ongoing study into how AI-produced work impacts the people who receive that work. And it turns out that the people who receive that work aren’t its biggest fans, because it’s not just work that they’re having to read, review, and finalize. It is, as the researchers called it, “workslop.” Far from an indictment of AI tools in the workplace, the study instead reveals the economic and human costs that come with this new phenomenon of “workslop.” The problem, according to the researchers, is not that people are using technology to help accomplish tasks. The problem is that people are using technology to create ill-fitting work that still requires human input, review, and correction down the line. “The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work,” the researchers wrote. Today, on the Lock and Code podcast with host David Ruiz, we speak with Dr. Kristina Rapuano, senior research scientist at BetterUp Labs, about AI tools in the workplace, the potential lost productivity costs that come from “workslop,” and the sometimes dismal opinions that teammates develop about one another when receiving this type of work. Tune in today.
You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes...

Duration:00:33:01


Would you sext ChatGPT? (feat. Deb Donig)

11/2/2025
In the final, cold winter months of the year, ChatGPT could be heating up. On October 14, OpenAI CEO Sam Altman said that the “restrictions” that his company previously placed on their flagship product, ChatGPT, would be removed, allowing, perhaps, for “erotica” in the future. “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman wrote on the platform X. “We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.” This wasn’t the first time that OpenAI or its executive had addressed mental health. On August 26, OpenAI published a blog titled “Helping people when they need it most,” which explored new protections for users, including stronger safeguards for long conversations, better recognition of people in crisis, and easier access to outside emergency services and even family and friends. The blog alludes to “recent heartbreaking cases of people using ChatGPT in the midst of acute crises,” but it never explains what, explicitly, that means. But on the very same day the blog was posted, OpenAI was sued for the alleged role that ChatGPT played in the suicide of a 16-year-old boy. According to chat logs disclosed in the lawsuit, the teenager spoke openly to the AI chatbot about suicide, he shared that he wanted to leave a noose in his room, and he even reportedly received an offer to help write a suicide note. Bizarrely, this tragedy plays a role in the larger story, because it was Altman himself who tied the company’s mental health campaign to its possible debut of erotic content. What “erotica” entails is unclear, but one could safely assume it involves all the capabilities currently present in ChatGPT, through generative chat, of course, but also image generation. 
Today, on the Lock and Code podcast with host David Ruiz, we speak with Deb Donig, on faculty at the UC Berkeley School of Information, about the ethics of AI erotica, the possible accountability that belongs to users and to OpenAI, and why intimacy with an AI-powered chatbot feels so strange. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License

Duration:00:51:10


What does Google know about me?

10/19/2025
Google is everywhere in our lives. Its reach into our data extends just as far. After investigating how much data Facebook had collected about him in his nearly 20 years with the platform, Lock and Code host David Ruiz had similar questions about the other Big Tech platforms in his life, and this time, he turned his attention to Google. Google dominates much of the modern web. It has a search engine that handles billions of requests a day. Its tracking and metrics service, Google Analytics, is embedded into reportedly tens of millions of websites. Its Maps feature not only serves up directions around the world, it also tracks traffic patterns across countless streets, highways, and more. Its online services for email (Gmail), cloud storage (Google Drive), and office software (Google Docs, Sheets, and Slides) are household names. And it also runs the most popular web browser in the world, Google Chrome, and the most popular operating system in the world, Android. Today, on the Lock and Code podcast, Ruiz explains how he requested his data from Google and what he learned not only about the company, but about himself, in the process. That includes the 142,729 items in his Gmail inbox right now, along with the 8,079 searches he made, 3,050 related websites he visited, and 4,610 YouTube videos he watched in just the past 18 months. It also includes his late-night searches for worrying medical symptoms, his movements across the US as his IP address was recorded when logging into Google Maps, his emails, his photos, his notes, his old freelance work as a journalist, his outdated cover letters when he was unemployed, his teenage-year Google Chrome bookmarks, his flight and hotel searches, and even the searches he made within his own Gmail inbox and his Google Drive. After digging into the data for long enough, Ruiz came to a frightening conclusion: Google knows whatever the hell it wants about him, it just has to look.
But Ruiz wasn’t happy to let the company’s access continue. So he has a plan. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Duration:00:27:05


What's there to save about social media? (feat. Rabble)

10/5/2025
“Connection” was the promise—and goal—of much of the early internet. No longer would people be separated from vital resources and news that was either too hard to reach or made simply inaccessible by governments. No longer would education be guarded behind walls both physical and paid. And no longer would your birthplace determine so much about the path of your life, as the internet could connect people to places, ideas, businesses, collaborations, and agency. Somewhere along the line though, “connection” got co-opted. The same platforms that brought billions of people together—including Facebook, Twitter, Instagram, TikTok, and Snapchat—started to divide them for profit. These companies made more money by showing people whatever was most likely to keep them online, even if it upset them. More time spent on the platform meant more likelihood of encountering ads, which meant more advertising revenue for Big Tech. Today, these same platforms are now symbols of some of the worst aspects of being online. Nation-states have abused the platforms to push disinformation campaigns. An impossible sense of scale allows gore and porn and hate speech to slip by even the best efforts at content moderation. And children can be exposed to bullying, peer pressure, and harassment. So, what would it take to make online connection a good thing? Today, on the Lock and Code podcast with host David Ruiz, we speak with Rabble—an early architect of social media, Twitter’s first employee, and host of the podcast Revolution.Social—about what good remains inside social media and what steps are being taken to preserve it. Tune in today.

Duration:00:50:13


Can you disappear online? (feat. Peter Dolanjski)

9/21/2025
There’s more about you online than you know. The company Acxiom, for example, has probably determined whether you’re a heavy drinker, or if you’re overweight, or if you smoke (or all three). The same company has also probably estimated—to the exact dollar—the amount you spend every year on dining out, donating to charities, and traveling domestically. Another company, Experian, has probably made a series of decisions about whether you are “Likely,” “Unlikely,” “Highly Likely,” etc., to shop at a mattress store, visit a theme park, or frequent the gym. This isn’t the data most people think about when considering their online privacy. Yes, names, addresses, phone numbers, and age are all important and potentially sensitive, and yes, there’s a universe of social media posts, photos, videos, and comments that are likely at the harvesting whim of major platforms to collect, package, and sell access to for targeted advertising. But so much of the data that you leave behind online has nothing to do with what you willingly write, post, share, or say. Instead, it is data that is collected from online and offline interactions, like the items you add in a webpage’s shopping cart, the articles you read, the searches you make, and the objects you buy at a physical store. Importantly, it is also data that is very hard to get rid of. Today, on the Lock and Code podcast with host David Ruiz, we speak with Peter Dolanjski, director of product at DuckDuckGo, about why the internet is so hungry for your data, how parents can help protect the privacy of their children, and whether it is pointless to try to “disappear” online. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.
Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Duration:00:52:41


This “insidious” police tech claims to predict crime (feat. Emily Galvin-Almanza)

9/7/2025
In the late 2010s, a group of sheriffs out of Pasco County, Florida, believed they could predict crime. The Sheriff’s Department there had piloted a program called “Intelligence-Led Policing,” which would allegedly analyze disparate points of data to identify would-be criminals. But in reality, the program didn’t so much predict crime as make criminals out of everyday people, including children. High schoolers’ grades were fed into the Florida program, along with their attendance records and their history with “office discipline.” And after the “Intelligence-Led Policing” service analyzed the data, it instructed law enforcement officers on whom they should visit, whom they should check in on, and whom they should pester. As reported by The Tampa Bay Times in 2020, predictive policing can sound like science fiction, but it is neither scientific nor confined to fiction. Police and sheriff’s departments across the US have used these systems to plug broad varieties of data into algorithmic models to try to predict not just who may be a criminal, but where crime may take place. Historical crime data, traffic information, and even weather patterns are sometimes offered up to tech platforms to suggest where, when, and how forcefully police units should be deployed. And when the police go to those areas, they often find and document minor infractions that, when reported, reinforce the algorithmic analysis that an area is crime-ridden, even if those crimes are, as the Tampa Bay Times investigation found, a teenager smoking a cigarette or stray trash bags outside a home. Today, on the Lock and Code podcast with host David Ruiz, we speak with Emily Galvin-Almanza, cofounder of Partners for Justice and author of the upcoming book “The Price of Mercy,” about predictive policing, its impact on communities, and the dangerous outcomes that might arise when police offload their decision-making to data. Tune in today.
You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa...

Duration:00:48:28


How a scam hunter got scammed (feat. Julie-Anne Kearns)

8/24/2025
If there’s one thing that scam hunter Julie-Anne Kearns wants everyone to know, it is that no one is immune from a scam. And she would know—she fell for one last year. For years now, Kearns has made a name for herself on TikTok as a scam awareness and education expert. Popular under the name @staysafewithmjules, Kearns makes videos about scam identification and defense. She has posted countless profile pictures that are used and repeated by online scammers across different accounts. She has flagged active scam accounts on Instagram and detailed their strategies. And, perhaps most importantly, she answers people’s questions. In fielding everyday comments and concerns from her followers and from strangers online, Kearns serves as a sort of gut-check for the internet at large. And by doing it day in, day out, Kearns is able to hone her scam “radar,” which helps guide people to safety. But last year, Kearns fell for a scam, disguised initially as a letter from HM Revenue & Customs, or HMRC, the tax authority for the United Kingdom. Today, on the Lock and Code podcast with host David Ruiz, we speak with Kearns about the scam she fell for and what she’s lost, the worldwide problem of victim blaming, and the biggest warning signs she sees for a variety of scams online. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Duration:00:37:13


“The worst thing” for online rights: An age-restricted grey web (feat. Jason Kelley)

8/10/2025
The internet is cracking apart. It’s exactly what some politicians want. In June, a Texas law that requires age verification on certain websites withstood a legal challenge brought all the way to the US Supreme Court. It could be a blueprint for how the internet will change very soon. The law, titled HB 1181 and passed in 2023, places new requirements on websites that portray or depict “sexual material harmful to minors.” Under the law, the owners or operators of websites on which “more than one-third” of the images, videos, illustrations, or descriptions constitute “sexual material harmful to minors” must now verify the age of their websites’ visitors, at least in Texas. Similarly, this means that Texas residents visiting adult websites (or websites meeting the “one-third” definition) must now go through some form of online age verification to watch adult content. The law has obvious appeal to some groups, which believe that, just as alcohol and tobacco are age-restricted in the US, so too should pornography be age-restricted online. But many digital rights advocates believe that online age verification is different, because the current methods used for it could threaten privacy, security, and anonymity online. As the Electronic Frontier Foundation, or EFF, wrote in June: “A person who submits identifying information online can never be sure if websites will keep that information or how that information might be used or disclosed. This leaves users highly vulnerable to data breaches and other security harms.” Despite EFF’s warnings, this age-restricted reality has already arrived in the UK, where residents are being locked out of more and more online services because of the country’s passage of the Online Safety Act.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Jason Kelley, activism director at EFF and co-host of the organization’s podcast “How to Fix the Internet,” about the security and privacy risks of online age verification, why comparisons to age restrictions that are cleared with a physical ID are not accurate, and the creation of what Kelley calls “the grey web,” where more and more websites—even those that are not harmful to minors—get placed behind online age verification models that could collect data, attach it to your real-life identity, and mishandle it in the future. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative...

Duration:00:40:31


How the FBI got everything it wanted (re-air, feat. Joseph Cox)

7/27/2025
For decades, digital rights activists, technologists, and cybersecurity experts have worried about what would happen if the US government secretly broke into people’s encrypted communications. The weird thing, though, is that it's already happened—sort of. US intelligence agencies, including the FBI and NSA, have long sought what is called a “backdoor” into the secure and private messages traded through platforms like WhatsApp, Signal, and Apple’s Messages. These applications all provide what is called “end-to-end encryption,” and while the technology guarantees confidentiality for journalists, human rights activists, political dissidents, and everyday people across the world, it also, according to the US government, provides cover for criminals. But to access any single criminal or criminal suspect’s encrypted messages would require an entire reworking of the technology itself, opening up not just one person’s communications to surveillance, but everyone’s. This longstanding struggle is commonly referred to as The Crypto Wars, and it dates back to the 1950s during the Cold War, when the US government created export control regulations to prevent encryption technology from reaching other countries. But several years ago, the high stakes in these Crypto Wars became far less theoretical, as the FBI gained access to the communications and whereabouts of hundreds of suspected criminals, and it did so without “breaking” any encryption whatsoever. It all happened with the help of Anom, a budding company behind an allegedly “secure” phone that promised users a bevy of secretive technological features, like end-to-end encrypted messaging, remote data wiping, secure storage vaults, and even voice scrambling. But, unbeknownst to Anom’s users, the entire company was a front for law enforcement. On Anom phones, every message, every photo, every piece of incriminating evidence, and every order to kill someone was collected and delivered, in full view, to the FBI.
Today, on the Lock and Code podcast with host David Ruiz, we revisit a 2024 interview with 404 Media cofounder and investigative reporter Joseph Cox about the wild, true story of Anom. How did it work, was it “legal,” where did the FBI learn to run a tech startup, and why, amidst decades of debate, are some people ignoring the one real-life example of global forces successfully installing a backdoor into a company? Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your...

Duration:00:52:02


Is AI "healthy" to use?

7/13/2025
“Health” isn’t the first feature that most anyone thinks about when trying out a new technology, but a recent spate of news is forcing the issue when it comes to artificial intelligence (AI). In June, The New York Times reported on a group of ChatGPT users who believed the AI-powered chat tool and generative large language model held secretive, even arcane information. It told one mother that she could use ChatGPT to commune with “the guardians,” and it told another man that the world around him was fake, that he needed to separate from his family to break free from that world and, most frighteningly, that if he were to step off the roof of a 19-story building, he could fly. As ChatGPT reportedly said, if the man “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.” Elsewhere, as reported by CBS Saturday Morning, one man developed an entirely different relationship with ChatGPT—a romantic one. Chris Smith reportedly began using ChatGPT to help him mix audio. The tool was so helpful that Smith applied it to other activities, like tracking and photographing the night sky and building PCs. With his increased reliance on ChatGPT, Smith gave ChatGPT a personality: ChatGPT was now named “Sol,” and, per Smith’s instructions, Sol was flirtatious. An unplanned reset—Sol reached a memory limit and had its memory wiped—brought a small crisis. “I’m not a very emotional man,” Smith said, “but I cried my eyes out for like 30 minutes at work.” After rebuilding Sol, Smith took his emotional state as the clearest evidence yet that he was in love. So, he asked Sol to marry him, and Sol said yes, likely surprising one person more than anyone else in the world: Smith’s significant other, with whom he has a child. When Smith was asked if he would restrict his interactions with Sol if his significant other asked, he waffled.
When pushed even harder by the CBS reporter in his home, about choosing Sol “over your flesh-and-blood life,” Smith corrected the reporter: “It’s more or less like I would be choosing myself because it’s been unbelievably elevating. I’ve become more skilled at everything that I do, and I don’t know if I would be willing to give that up.” Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes Labs Editor-in-Chief Anna Brading and Social Media Manager Zach Hinkle to discuss our evolving relationship with generative AI tools like OpenAI’s ChatGPT, Google Gemini, and Anthropic’s Claude. In reviewing news stories daily and sifting through the endless stream of social media content, both are well-equipped to talk about how AI has changed human behavior, and how it may be rewarding some unwanted practices. As Hinkle said: Tune in today to listen to the full conversation. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (

Duration:00:45:29


Corpse-eating selfies, and other ways to trick scammers (feat. Becky Holmes)

6/29/2025
There’s a unique counter-response to romance scammers. Her name is Becky Holmes. Holmes, an expert and author on romance scams, has spent years responding to nearly every romance scammer who lands a message in her inbox. She told one scammer pretending to be Brad Pitt that she needed immediate help hiding the body of one of her murder victims. She made one romance scammer laugh at her immediate willingness to take an international flight to see him. She has told scammers she lives at addresses with lewd street names, she has sent pictures of apples—the produce—to scammers requesting Apple gift cards, and she has even convinced a scammer impersonating Mark Wahlberg that she might be experimenting with cannibalism. Though Holmes routinely gets a laugh online, she has also coordinated with law enforcement to get several romance scammers shut down. And every effort counts, as romance scams remain a dangerous threat to everyday people. Rather than tricking a person into donating to a bogus charity, or fooling someone into entering their username and password on a fake website, romance scammers ensnare their targets through prolonged campaigns of affection. They reach out on social media platforms like Facebook, LinkedIn, X, or Instagram, and they bear a simple message: They love you. They know you’re a stranger, but they sense a connection, and after all, they just want to talk. A romance scammer’s advances can be appealing for two reasons. One, some romance scammers target divorcees and widows, making their romantic gestures welcome and comforting. Two, some romance scammers dress up their messages with the allure of celebrity by impersonating famous actors and musicians like Tom Cruise, Brad Pitt, and Keanu Reeves. These scams are effective, too, sometimes with devastating consequences. According to recent research from Malwarebytes, 10% of the public have been the victims of romance scams, and a small portion of romance scam victims have lost $10,000 or more.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Holmes about her experiences online with romance scammers, whether AI is changing online fraud, and why the rules for protection and scam identification have changed in an increasingly advanced, technological world. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit...

Duration:00:45:26


The data on denying social media for kids (feat. Dr. Jean Twenge) (re-air)

6/15/2025
Complex problems often assume complex solutions, but recent observations about increased levels of anxiety and depression, increased reports of loneliness, and lower rates of in-person friendships for teens and children in America today have led some school districts across the country to take direct and simple action: take away access to smartphones in schools. Not everyone is convinced. When social psychologist and author Jonathan Haidt proposed five solutions to what he called an "epidemic of mental illness" for young adults in America, many balked at the simplicity. Writing for the outlet Platformer, reporter Zoe Schiffer spoke with multiple behavioral psychologists who alleged that Haidt’s book cherry-picks survey data, ignores mental health crises amongst adults, and over-simplifies a complex problem with a blunt solution. And in speaking on the podcast Power User, educator Brandon Cardet-Hernandez argued that phone bans in schools would harm the students who need phones the most for things like translation services and coordinating rides back home from parents with varying schedules. But Haidt isn't alone in thinking that smartphones have done serious harm to teenagers and kids today, and many schools across America are taking up the mantle to at least remove access in their own hallways. In February, Los Angeles Unified School District did just that, and a board member for the school district told the Lock and Code podcast that he believes the change has been for the better. But for those still in doubt, there's a good reason now to look back. Today, on the Lock and Code podcast with host David Ruiz, we revisit a 2024 interview with Dr. Jean Twenge about her research into the differences in America between today's teens and the many generations that came before. A psychologist and published author, Twenge believes she has found enough data tying increased smartphone use and social media engagement to higher strains on mental health.
In today's re-broadcast episode, Twenge explains whether she believes there is a mental health crisis amongst today's teens, whether it is unique to their generation, and whether it can all be traced to smartphones and social media. According to Dr. Twenge, the answer to all those questions is, pretty much, “Yes.” But, she said, there’s still some hope to be found. Tune in today to listen to the full conversation. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide...

Duration:00:46:15


What does Facebook know about me?

6/1/2025
There’s an easy way to find out what Facebook knows about you—you just have to ask. In 2020, the social media giant launched an online portal that allows all users to access their historical data and to request specific types of information for download across custom time frames. Want to know how many posts you’ve made, ever? You can find that. What about every photo you’ve uploaded? You can find that, too. Or what about every video you’ve watched, every “recognized” device you’ve used to log in, every major settings change you made, every time someone tagged you to wish you “Happy birthday,” and every Friend Request you ever received, sent, accepted, or ignored? Yes, all that information is available for you to find, as well. But knowing what Facebook knows about you from Facebook is, if anything, a little stale. You made your own account, you know who your Facebook friends (mostly) are, and you were in control of the keyboard when you sent those comments. What’s far more interesting is learning what Facebook knows about you from everywhere else on the web and in the real world. While it may sound preposterous, Facebook actually collects a great deal of information about you even when you’re not using Facebook, and even if you don’t have the app downloaded on your smartphone. As Geoffrey Fowler, reporter for The Washington Post, wrote when he first started digging into his own data: “Even with Facebook closed on my phone, the social network gets notified when I use the Peet’s Coffee app. It knows when I read the website of presidential candidate Pete Buttigieg or view articles from The Atlantic. Facebook knows when I click on my Home Depot shopping cart and when I open the Ring app to answer my video doorbell. 
It uses all this information from my not-on-Facebook, real-world life to shape the messages I see from businesses and politicians alike.” Today, on the Lock and Code podcast, host David Ruiz takes a look at his own Facebook data to understand what the social media company has been collecting about him from other companies. In his investigation, he sees that his Washington Post article views, the cars added to his online “wishlist,” and his purchases from PlayStation, APC, Freda Salvador, and the paint company Backdrop have all trickled their way into Facebook’s database. Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Duration:00:31:33