
The Cyber Business Podcast

Business & Economics Podcasts


Location:

United States

Description:

Welcome to The Cyber Business Podcast, where we feature top founders and entrepreneurs and share their inspiring stories.

Language:

English


Episodes

Why Your SaaS Vendor's New AI Button May Be Your Biggest Security Risk Right Now with Fletus Poston III - Ep 208

4/21/2026
Guest Introduction

Fletus Poston III is the Director of Security and Systems Architecture at 3D Systems Corporation, one of the world's leading additive manufacturers. In his role, he oversees the overall architecture for both the cybersecurity and IT functions, including infrastructure, cloud, and the governance models that tie them together. 3D Systems builds commercial 3D printers from the board level up, produces its own materials including plastics and metal powders, serves the personal healthcare market with surgical guides and aids, and has recently entered the DoD and aerospace sector, currently pursuing CMMC Level 2 certification. Beyond his industry role, Fletus is an adjunct professor at Appalachian State University teaching an AI business cybersecurity course, giving him a dual perspective as both a practitioner shaping enterprise architecture and an educator preparing the next generation of security professionals.

Here's a Glimpse of What You'll Learn In This Episode

Fletus opens from a vantage point that most security leaders do not occupy simultaneously: practitioner, architect, educator, and manufacturer all at once. His perspective on AI governance is grounded not in abstract frameworks but in the operational reality of a company that builds printers, produces materials, serves surgical teams, and is now entering the DoD supply chain. In that context, the question of where your data goes when a SaaS product adds a new AI button is not theoretical. It is a compliance and contractual issue that most organizations have not yet accounted for, and Fletus is among the clearest voices in the industry on why that needs to change. He draws a direct line from API gateway concepts that security teams have understood for years to the new vocabulary of AI governance: tokenization, rate limiting, caching, MCP servers, and hybrid model decisions that now sit at the center of every serious enterprise AI conversation.

The machine learning versus LLM distinction lands differently coming from someone who has been teaching it and living it. Fletus explains that machine learning handles normalization and data analytics, the backbone of every security tool since the early days of cloud adoption. LLMs handle language temperature checks, learning the ambiguity and context of how humans actually communicate. The two are not interchangeable, and organizations that treat them as the same thing will deploy the wrong tool in the wrong place and wonder why they got the wrong result. He pairs this with a candid look at the hardware shift happening in parallel, where the processing power that once required data centers now sits in M-series laptops and NPU-equipped devices, bringing AI back to the edge in ways that change both capability and attack surface simultaneously.

The deepfake and identity fraud section of this episode is among the most practically urgent content on the podcast this year. Fletus walks through how North Korean actors embedded themselves in remote workforces using AI-assisted personas, how a deepfake video board meeting in Hong Kong led to a wire transfer of approximately $32 million, and why current HR systems and ATS platforms have no reliable way to verify human identity in a world where AI can generate resumes, conduct screening calls, and clone voices at scale. His countermeasures are low-tech and effective: challenge phrases that no AI can answer correctly, geolocation-specific questions that test genuine local knowledge, and the simple physical act of standing up and taking a breath before responding to any urgent financial request. In a landscape of increasingly sophisticated threats, the three-second pause remains one of the most powerful defenses available.

Duration:00:34:45

Why Fighting AI in the Classroom Is the Wrong Battle with Chris Campbell - Ep 207

4/20/2026
Guest Introduction

Chris Campbell is the CIO at DeVry University, a career-aligned institution focused on delivering education in high-demand technology fields that translates directly into workforce-ready skills. In his role, Chris leads technology, cybersecurity, and digital innovation across the university, with a current focus on integrating AI into both operations and curriculum in ways that prepare working adult students for the jobs they are already heading into. DeVry has made a public commitment to embed AI literacy into every program and every course regardless of discipline, and Chris is the technology leader driving that commitment from the inside out.

Here's a Glimpse of What You'll Learn In This Episode

Chris opens with a position that should be the default for every higher education institution but still is not: if students are walking into an AI-influenced workforce, then keeping AI out of the classroom is not protecting academic integrity; it is leaving students unprepared. DeVry has gone further than most by making a formal public commitment to embed AI literacy across every program, from accounting to software engineering, not as a standalone course but as a thread running through everything. The framing matters too. Chris does not talk about AI replacing work. He talks about AI redefining it, and that distinction shapes every decision his team makes about how students learn, how faculty teach, and how the university itself operates.

The internal structure Chris has built is worth noting. Rather than letting AI initiatives proliferate without direction, DeVry stood up an organization called AI Labs that routes every AI initiative through a single prioritization and ROI framework. That discipline has allowed them to evaluate tools seriously, adopt what works, and avoid the trap of chasing every new product that claims to be AI-powered.

On the security side, Chris draws a sharp distinction between machine learning applied purposefully in products like Darktrace and Abnormal and traditional security products with an LLM bolted on. He describes the emerging dynamic as an LLM war, where the organization is putting AI in on one side and the threat actors are putting AI in on the other, and the organizations that do not deploy deliberately will find themselves on the losing end of that exchange.

The conversation about durable skills is where Chris brings his most forward-looking thinking. He rejects the term soft skills precisely because it undersells what these capabilities are worth in an AI-powered world. Critical thinking, problem solving, the ability to articulate how you arrived at a conclusion, and the ability to collaborate with both human and agentic workers are the skills that will carry people across their entire career arc regardless of what the technology landscape looks like in five or ten years. DeVry builds these into every level of the curriculum, from certificates through bachelor's degrees, because the institution's defining purpose is relevance. What they teach must align with where the workforce is going, not where it was.

Duration:00:37:02

Maritime Cybersecurity, AI Governance, and the Threats No One Sees Coming with Amit Basu - Ep 206

4/15/2026
Guest Introduction

Amit Basu serves as both CIO and CISO at International Seaways, one of the world's largest publicly traded tanker companies, listed on the New York Stock Exchange under ticker INSW and headquartered in New York City. The company owns and operates 75 large ocean-going tankers that transport crude oil and petroleum products across major global routes. In his dual role, Amit is responsible for the digital infrastructure and cybersecurity protecting both onshore operations and a highly distributed fleet at sea, an environment that includes satellite connectivity, automated navigation systems, cloud-connected operational technology, and crews of 15 to 20 people with no dedicated IT expertise on board. His work sits at the intersection of maritime operations, OT security, AI governance, and geopolitical risk in ways that few technology leaders anywhere in the world can match.

Here's a Glimpse of What You'll Learn In This Episode

Amit opens with a framing that most cybersecurity conversations miss entirely. Ninety percent of global trade moves by sea. Every time something is ordered online, every time a gas station charges more per gallon, every time a supermarket shelf goes empty, a ship is almost certainly part of the reason. That context matters because it explains why maritime infrastructure has become an increasingly attractive target for threat actors operating during geopolitical conflicts. Ships that were once isolated mechanical vessels are now connected digital platforms running satellite communications, automated navigation, cloud-connected operational systems, and remote diagnostics, all managed by crews of 15 to 20 people who are expert sailors, not IT professionals. That combination of high value and low IT coverage makes them, in Amit's words, a sitting duck.

The AI threat discussion takes on a dimension here that most enterprise security conversations do not reach. Amit describes the challenge of deepfake voice and video calls in a maritime environment where the hierarchy functions almost like a military chain of command. If a crew member at sea receives what appears to be a video call from the company's CEO giving an instruction or redirecting the ship, the cultural expectation is compliance. That social engineering vector, powered by AI that can convincingly clone voice and appearance, is one of the most difficult threats to defend against because it exploits human deference rather than technical vulnerability. Alongside this, sophisticated polymorphic malware is now evading even advanced endpoint solutions, reinforcing why machine learning-based behavioral anomaly detection is the right tool for shipboard environments where traditional signature-based tools cannot keep pace.

Amit frames AI governance with a clarity that reflects both his dual role and his experience operating in a highly regulated, globally distributed environment. He distinguishes between the experimentation phase, which he believes is over, and the accountability phase, which has arrived. Boards and executives are no longer asking when the pilot will be ready. They are asking who is accountable for the outcome. That shift demands that CIOs and CISOs move from innovation framing to production framing, with real governance, real policies, and real human oversight baked in. He also raises a third-party AI risk dimension that deserves far more attention than it currently gets: even organizations that have not deployed AI internally are already having their data processed by AI through the vendors, insurers, legal firms, and supply chain partners they rely on every day.

Duration:00:36:35

The Arms Race, the Energy Gap, and the Ethics of Teaching AI to Be Good with Alex Dalay - Ep 205

4/14/2026
Guest Introduction

Alex Dalay is the CISO at IDB Bank, a New York-headquartered commercial, private banking, and broker-dealer institution with more than 70 years of history. As the security leader of a financial institution that sits squarely in the crosshairs of modern threat actors, Alex brings a perspective grounded in operational reality rather than theoretical frameworks. His approach to security leadership strips away the noise and returns consistently to the fundamentals: know what you have, know who has access to it, and build everything else from there.

Here's a Glimpse of What You'll Learn In This Episode

Alex opens with a perspective that cuts through the noise immediately: security does not need to be complicated, and the organizations that struggle most are usually the ones that skipped the basics in pursuit of advanced capabilities. Asset inventory and identity management are unglamorous, but they are the foundation everything else is built on. If you do not know what is in your environment and who has access to it, no tool, AI-powered or otherwise, will save you. That fundamentals-first philosophy shapes how he approaches the role of CISO at a financial institution that faces a significantly higher volume of attacks than most industries simply because money is involved.

The AI conversation takes a sharp turn toward the offensive side of the ledger. Alex identifies the most consequential change AI has made to the threat landscape as the ability to evaluate responses in real time during an attack. Historically, automated tools ran scripts and moved on when something failed. Human attackers could pivot off unexpected responses. Now AI can do both, at machine speed. That shift has compressed the window between vulnerability disclosure and active exploitation to near real time in many cases, fundamentally changing how urgently defenders must act. He also draws an important distinction that often gets lost in the noise: a critical vulnerability rating from a vendor like Microsoft assumes the worst-case configuration. Whether it is actually critical in your specific environment requires human, and increasingly AI-assisted, contextual analysis before you drop everything to patch it.

Alex closes with a wide-angle view of where AI is taking both the profession and society. He draws a comparison to the nuclear arms race, arguing that whichever nation cracks AGI first will hold a form of leverage that reshapes global power. He connects that to an underappreciated dependency: energy. Without the infrastructure to power the data centers that run AI at scale, the United States risks falling behind adversaries who face fewer environmental or political constraints on energy expansion. On the ethical side, he raises a point that goes beyond guardrails. We are racing to make AI intelligent without taking the time to teach it to be good, and the consequences of that gap may be the most important and least discussed challenge in the entire AI conversation.

Duration:00:33:00

Role-Based AI, Culture-First Hiring, and the Future of Human-Centered Tech with Laurel Cipriani - Ep 204

4/8/2026
Guest Introduction

Laurel Cipriani returns to the Cyber Business Podcast for a second conversation that goes deeper and broader than the first. As CIO at AffirmedRX, a transparent pharmacy benefits management company and public benefit corporation legally obligated to put patients ahead of profits, Laurel brings a background unlike almost any other CIO in the industry. She trained in psychology, became a registered nurse, spent years in health administration and clinical quality, and arrived in IT through a path that has given her a perspective on people, culture, and human-centered technology that is genuinely rare at the executive level. She is also an active member of the Digital Economist think tank in Washington DC and, as this episode is recorded, is joining the World Technology Congress, a Switzerland-based international think tank.

Here's a Glimpse of What You'll Learn In This Episode

Laurel opens this return visit with an origin story that sets the tone for everything that follows. From aspiring grief therapist to floor nurse to health informaticist to CIO of a public benefit corporation, her path into technology was never linear and never conventional. What runs through all of it is a single thread: a desire to help people and a belief that technology is most powerful when it is built around human needs rather than the other way around. That philosophy is now embedded in how she is building the AI strategy at AffirmedRX, where every steward in the company will have a clearly defined set of tools, permissions, and accountability structures tied directly to their role. No one gets unfettered access. No output goes unreviewed. And no AI system will ever make a decision without a human signing off.

The conversation on women in IT leadership is honest and specific in ways that broader industry discussions rarely are. Laurel notes that virtually every person on her own team is male, not by design but by the reality of a candidate pipeline that still skews heavily toward men. Her response is not to lower the bar but to raise the profile of culture as the primary filter in hiring, something AffirmedRX does formally through a culture screening call before any other evaluation takes place. She makes the case that as AI raises the floor on individual capability, the differentiator between good teams and great ones will increasingly be how people work together, not what any individual can produce alone. That shift, she argues, naturally favors the holistic, relationship-oriented thinking that women have historically been undervalued for bringing to technical roles.

The deepest thread in this episode is the one that connects AI governance to human development in ways that go well beyond the enterprise. Laurel is conducting original research through the Digital Economist on how AI and internet anonymity are amplifying harmful behavior toward women, how gender bias baked into training data is being reinforced at scale in AI models, and what it would take to actually interrupt those cycles rather than just acknowledge them. Her conclusion is not pessimistic. She believes AI, if governed with the same intentionality she is applying at AffirmedRX, could become the most powerful tool ever built for identifying and dismantling the cultural patterns that have kept inequality in place for generations. Getting there requires the same thing everything else in this conversation requires: humans staying in charge, staying accountable, and refusing to let speed become an excuse for carelessness.

Duration:00:59:05

Why Every CISO Must Use AI Now and How to Do It Without Losing Control with Greg McCord - Ep 203

4/6/2026
Guest Introduction

Greg McCord is a career security leader operating across two roles simultaneously. As CISO at Lightcast.io, a leading labor market analytics firm, he protects one of the most data-intensive organizations in the workforce intelligence space. As founder and CISO of McCord Keystone Advisory, launched in late 2025, he extends fractional CISO services to small and mid-sized businesses that need executive-level security leadership but cannot sustain a full-time hire. His background spans government, public sector, and private enterprise, and includes time as an Army interrogator at the SERE school for special forces, an experience that informs how he thinks about intelligence, data relevance, and the psychology of adversarial pressure.

Here's a Glimpse of What You'll Learn In This Episode

Greg opens with a position that is both practical and urgent. Security leaders who choose not to adopt AI are not playing it safe. They are falling behind adversaries who are already deploying it against them. His counsel is specific: adopt AI, but do it in a non-attributable way. The moment confidential data is connected to an uncontrolled AI system, positive control of that data is gone and there is no reliable way to get it back. The traditional tools still matter. The telemetry and signal they provide remain valuable. But they need to be augmented with AI that can act faster, identify patterns earlier, and close the gap between detection and response before attackers achieve their objective inside your environment.

The quantum computing thread is where Greg brings one of the most forward-looking and underappreciated risks in the conversation. Governments and sophisticated threat actors are collecting encrypted breach data today with no current ability to decrypt it. Once quantum computing matures, that changes. Everything collected now becomes readable later. Greg draws on his Army interrogator background to frame it clearly: the goal is for your data to be irrelevant by the time anyone can crack it, but not all of it will be, and the organizations that are not thinking about this now will have no recourse when it arrives. That reality, combined with the convergence of quantum processing and AI training models, is what makes the current moment unlike anything the industry has faced before.

Greg closes with a perspective on frameworks and governance that is both honest about the pace problem and constructive about the path forward. By the time a framework is written and discussed, the technology it describes has already evolved. That is not an argument against frameworks. It is an argument for building continuous feedback loops between practitioners in the field and the people writing the standards. AIUC-1 and CSA Maestro represent serious efforts to govern AI agent behavior, prompt handling, and LLM risk in a structured way. The organizations that engage with those frameworks now, rather than waiting for mandates, will be the ones with the governance foundation in place when the next wave of threats arrives.

Duration:00:37:33

Identity Is the New Perimeter: A Cybersecurity Director's Playbook with Jason Lawrence - Ep 202

4/1/2026
Guest Introduction

Jason Lawrence is the Cybersecurity Director at Yancey Brothers, the oldest Caterpillar dealer in the United States and a company that has been in business since 1914. As the first person to hold this role at the organization, Jason is building the cybersecurity program from the ground up, reporting directly to the CIO. Before joining Yancey Brothers, Jason built a career spanning security operations, identity management, and strategic risk, and he also co-founded Security Reimagined, a firm focused on securing small businesses and communities across Georgia. His approach to cybersecurity is rooted in business outcome thinking, treating cyber defense not as a technology problem but as a revenue protection function.

Here's a Glimpse of What You'll Learn In This Episode

Jason opens with a framework that reframes how most people think about AI in security. Rather than treating AI as a single category, he separates generative AI from machine learning and assigns each a distinct role. Generative AI helps analysts make sense of massive data volumes quickly, turning raw signals into actionable observations. Machine learning, the kind Darktrace has been applying for well over a decade, automates detection and response in ways that rule-based systems simply cannot match. The real objective, he argues, is not just prevention but disrupting the attacker's OODA loop before they achieve their goal inside your environment. Getting in is not the win for threat actors. What they do after getting in is what matters, and that is where speed of detection and response becomes everything.

The identity conversation is where Jason brings the most urgent and underappreciated insight of the episode. The perimeter is gone. Identities are the new perimeter. And for every human identity in an enterprise, there are now estimated to be up to 144 non-human identities, including devices, data systems, and increasingly, agentic AI and RAG systems that have been granted privileged access to an organization's most sensitive assets. The Stryker breach is the defining example: a compromised Intune instance handed the attacker complete control of the environment. Jason's prescription is direct. Harden the tools you use to manage your infrastructure, roll out MFA everywhere, adopt passkeys, and build a complete identity inventory that accounts for everything in your environment, not just the humans.

Jason closes with a perspective on cybersecurity's role in the business that every security leader should hear. If a user has to stop and think about whether an email is safe, that is a cybersecurity failure because it is pulling that person away from the work that generates revenue. His job, as he frames it, is to make sure the business can do business with as little friction as possible. The department of no has to become the department of know, finding the secure path forward rather than simply blocking the unsafe one. That philosophy, grounded in humble inquiry and genuine understanding of business processes, is what separates security functions that protect the organization from those that simply slow it down.

Duration:00:37:49

How AffirmedRX Is Using Technology to Fix a Broken Healthcare System with Laurel Cipriani

3/30/2026
Guest Introduction

Laurel Cipriani is the Chief Information Officer at AffirmedRX, a transparent pharmacy benefits management company built on a mission to make medications accessible and affordable for everyone. A clinician by training and originally a registered nurse, Laurel brings a rare combination of frontline healthcare experience, executive technology leadership, and global policy engagement to her role. She joined AffirmedRX in December 2025 and is currently building the company's IT department, data and analytics function, and AI strategy from the ground up at a company that has been operating for approximately four years. Beyond her work at AffirmedRX, Laurel is an active AI ethicist and member of the Digital Economist, a Washington DC-based think tank focused on the intersection of technology, ethics, and global policy. She has represented that organization at the World Economic Forum in Davos and participated on panels at New York Fashion Week through her involvement with the Fashion Fusion Technology Group, an organization working to apply technology to sustainable and circular fashion. Her perspective spans healthcare transparency, responsible AI adoption, data security, and the broader social and economic forces that technology either reinforces or disrupts.

Here's a Glimpse of What You'll Learn In This Episode

Laurel opens with a clear-eyed description of what AffirmedRX is attempting to do in one of the most entrenched and resistant markets in American healthcare. The big three pharmacy benefit managers have decades of history, established relationships, and enormous switching costs working in their favor. AffirmedRX is betting that transparency, outcomes, and a genuinely patient-first model through its Patient Care Advocates will eventually make the choice obvious for employers. Laurel is direct about the challenge: even people who love the mission in writing hesitate to put their employees through the disruption of changing plans. The company's answer is to let results do the talking, including a white paper in progress at the time of recording detailing the outcomes they have already achieved.

The conversation around AI is where Laurel's dual identity as practitioner and ethicist comes through most clearly. AffirmedRX is using AI, but strictly for internal business process optimization and not yet for anything that touches protected health information. Every recommendation made by AI requires a human to sign off. Pharmacists are designing the models and reviewing the outputs. That discipline is not timidity. It is the product of a CIO who understands that in healthcare, the cost of getting AI wrong is not just financial. It is human. Laurel also introduces a goal she has set for the entire organization: every steward at AffirmedRX should be able to speak confidently about the responsible use of AI in their own role by the end of the year.

The Davos segment brings an unexpected and unusually candid thread to the conversation. Laurel describes arriving at the World Economic Forum with what she calls a naive impression that this was where the world's problems get solved, and encountering something far more complicated: billboards targeting attendees, luxury fashion as social currency, and a pervasive sense of conflict between the forum's stated ideals and its visible reality. She dealt with it by asking every stranger she met whether they felt the same discomfort. The answer was universally yes. Her conclusion: you cannot make global change if you are not willing to be in the room, even when the room makes you uncomfortable. That philosophy connects directly to the work she is doing at AffirmedRX, at the think tank, and in the fashion sustainability space.

The episode closes with a wide-ranging discussion about the relationship between technology, economic inequality, and systemic change. Laurel draws a line from fast fashion's hidden costs to the misaligned incentives that keep people...

Duration:00:48:03

The Two AI Attack Paths Every Security Leader Needs to Understand Now with Sinan Al Taie

3/25/2026
Guest Introduction

Sinan Al Taie is the Cybersecurity Manager at Master Electronics, a leading global authorized distributor of electronic components with more than half a century of history as a family-owned business headquartered in Phoenix, Arizona. His path into cybersecurity is one built from firsthand experience, having transitioned into the field after being hacked himself while working as a database engineer with the United Nations and USAID missions. That personal encounter with a breach sparked a pursuit of professional development through Northeastern Illinois University and hands-on penetration testing work before he joined Master Electronics as a cybersecurity analyst. He grew with the company into his current leadership role, gaining end-to-end exposure to building and evolving a full security posture from the ground up. Today Sinan operates at the intersection of threat intelligence, agentic AI defense strategy, and organizational security architecture, bringing both the practitioner's instinct and the strategist's perspective to one of the most rapidly shifting threat landscapes in recent memory.

Here's a Glimpse of What You'll Learn In This Episode

Sinan brings a framework to the conversation that cuts through the noise surrounding AI in cybersecurity. He identifies two distinct attack paths organizations are now facing simultaneously: attacks on AI agents, where the autonomous nature of those agents amplifies the speed and scale of damage when something goes wrong, and attacks by agents, where threat actors use AI to generate polymorphic malware, automate entire ransomware kill chains, and launch phishing campaigns sophisticated enough that grammar errors are no longer a reliable tell. The compression of attack timelines from 197 minutes in earlier incidents down to 77 seconds in late 2025 makes clear that human defenders operating alone cannot keep pace. His response to that reality is not to simply add more tools.

Sinan introduces the concept of agentic cyber defense: deploying autonomous agents that can reason, investigate, and act alongside security teams in parallel with traditional infrastructure. These agents are not a replacement for the existing security posture but an additional intelligence layer capable of detecting the micro-processes and behavioral anomalies that traditional tools are not designed to catch. He pairs this with his own framework of detection in depth, a complement to the established defense-in-depth model, where each layer of the security stack carries its own detection and response capability rather than relying on perimeter defense to carry the full load.

Sinan is direct that there is no silver bullet and no environment where the human element can be fully removed. Social engineering remains the most reliable entry point for threat actors precisely because it bypasses technology entirely. His answer is wide-eyed inclusion: deploying AI with minimum permissions, rigorous review processes, and a clear understanding of what each tool can and cannot do. Even smaller organizations can harden their posture meaningfully by choosing endpoint and security tools that incorporate AI features, without needing enterprise-scale budgets to do it.

He closes with a forward-looking take on the profession itself. AI will not take jobs, but people who know how to use AI will replace those who do not. The skill set across security and IT is shifting from hands-on execution toward orchestration: directing AI agents the way a manager directs a team, reviewing outputs, catching errors, and making judgment calls that autonomous systems are not yet equipped to handle. The human firewall still matters. What changes is where human attention is most valuable and how professionals need to position themselves to lead alongside the tools rather than behind them.

Duration:00:53:28


IT Leadership in Regulated Industries: Service Management, AI Risk, and the CIO Mindset with Bryan Younger

3/23/2026
Guest Introduction

Bryan Younger is the Chief Information Officer at Liberty Dental Plan of Oklahoma, the largest privately held dental benefits administrator in the United States. With nearly 30 years of experience in IT, Younger has built a career that spans desktop support, network infrastructure, information security, ITSM operational excellence, and executive leadership. Before joining Liberty, he spent a decade working in Medicaid IT for the state of Oklahoma, giving him a deep understanding of regulated healthcare environments from both the public and private sector sides. At Liberty, which serves approximately 8 million members nationwide across Medicare, Medicaid, commercial, and exchange markets, Younger oversees a technology organization that must balance strict compliance requirements, including HITRUST, SOC 2 Type 2, SOC 1 Type 2, and HIPAA, with the need to adopt modern tools and AI-driven capabilities responsibly. His background spans enterprise service management, change management, information security, and IT governance, making him a practitioner who understands both the tactical and strategic dimensions of running IT in a high-stakes, member-focused organization.

Here's a Glimpse of What You'll Learn In This Episode

Bryan Younger brings a grounded perspective on IT service management, opening with a clear case for why change management is not bureaucratic friction but a proven mechanism for limiting downtime. He points to real-world data showing that 80 percent of outages trace back to a bad change, and he draws a direct line between disciplined change processes and financial protection, illustrating how stopping even a handful of avoidable outages each year can translate into millions of dollars saved for an organization. The CrowdStrike incident serves as a vivid reference point for what happens when QA and change control break down at scale.

The conversation moves into AI governance with notable specificity. Younger explains how Liberty approaches AI adoption through a formal AI governing board that evaluates every new tool for compliance risk, data handling, and architectural integrity. He draws a sharp distinction between products that bolt an LLM onto existing services for market appeal and those that apply machine learning in a contained, purposeful way, citing Darktrace as an example of AI done right in the security context. He is direct about the risk of employees using tools like ChatGPT with sensitive data, noting that once information enters those platforms, ownership and use become unclear, a serious concern in a HITRUST, HIPAA-governed environment.

Younger and host Matthew Connor explore the tension between convenience and security, arriving at a framing that will resonate with anyone managing enterprise IT. Security will always prioritize protection, while the rest of the business defaults to ease of use. The job of IT leadership is to find the balance that enables the business rather than obstructs it, offering governance as a feature rather than a gate. That philosophy runs through Younger's broader view of IT: a non-revenue-producing department that no one in the organization can operate without, and one that earns its seat by co-creating value rather than just keeping the hardware running.

For those considering a career in IT, Younger offers advice that is both practical and forward-looking. He encourages early-career professionals to look past the help desk and identify their target specialty before choosing certifications, comparing the IT landscape to medicine, where a general practitioner and a specialist require fundamentally different training paths. He acknowledges the anxiety around AI displacing IT jobs but reframes it as an argument for staying curious, specializing deliberately, and understanding that the people who will thrive are the ones who know how to direct and govern the tools, not just use them.

Duration:00:34:39


Leadership Awareness and Technology Strategy in Higher Education with Mark Bojeun

3/12/2026
Guest Introduction

Mark Bojeun serves as Chief Information Officer at Seward County Community College in southwest Kansas. In addition to leading the institution's technology strategy, he is the author of Awakening Leadership: The Journey to Conscious Influence, a book focused on leadership awareness, personal growth, and the development of stronger organizational cultures. His career blends higher education technology leadership with a deep interest in leadership psychology and human development. In this episode of The Cyber Business Podcast, Mark discusses how leadership awareness shapes technology teams, how community colleges are evolving through digital transformation, and why modern CIOs must balance technical strategy with personal influence. The conversation explores how leadership mindset, culture, and communication determine whether technology initiatives succeed or stall.

Here's a Glimpse of What You'll Learn

How community colleges are evolving their technology infrastructure to support modern learning environments
Why leadership awareness is a critical skill for CIOs and IT executives
How personal development impacts technology leadership and decision making
Why communication and influence are often more important than technical authority
How higher education institutions balance innovation with limited resources
Why strong leadership culture improves the success of IT initiatives
The connection between conscious leadership and long-term organizational impact

In This Episode

Mark Bojeun explains how community colleges are experiencing rapid technological change as digital learning environments expand and student expectations continue to evolve. As CIO of Seward County Community College, he describes how smaller institutions must often innovate creatively while operating with limited resources. Technology leaders in higher education must balance modernization with financial realities while still delivering reliable systems for students, faculty, and staff.

Mark also highlights how leadership perspective directly shapes the success of technology initiatives. Many IT projects fail not because of technical issues but because of communication gaps, lack of alignment, or leadership blind spots. His work and writing focus on helping leaders develop stronger awareness of how their actions influence teams and organizational outcomes.

The conversation also explores Mark's book Awakening Leadership: The Journey to Conscious Influence. He explains that leadership development begins with understanding personal behavior patterns, communication styles, and how leaders affect the people around them. Technology leaders who develop this awareness often build stronger teams, encourage collaboration, and achieve more consistent results.

Mark's perspective highlights a growing shift in the CIO role. Modern technology leaders are no longer defined solely by infrastructure knowledge or system architecture. Instead, the most effective CIOs combine technical expertise with emotional intelligence, communication skills, and a clear leadership philosophy.

Sponsor for this episode...

This episode is brought to you by CyberLynx. CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service. The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web. Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. To learn more, visit https://cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

Duration:00:49:03


Women in IT, Allyship, and the Future of Technology Leadership with Shannon Thomas

3/3/2026
Guest Introduction

Shannon Thomas serves as Chief Information Officer at Mitchell Hamline School of Law in Saint Paul, Minnesota, one of the largest independent law schools in the United States. In addition to leading IT strategy and execution, she is completing her dissertation focused on women in IT and operates a leadership-focused LLC. Her work centers on the intersection of technology, culture, leadership, and human behavior, with a particular emphasis on how bias, allyship, and organizational systems shape the future of the IT workforce.

Here's a Glimpse of What You'll Learn

Why microaggressions still impact women in technology careers
How mental load at home influences retention in demanding IT roles
What allyship looks like in real workplace scenarios
Why leadership should focus on managing people, not positions
How unconscious bias subtly shapes workplace dynamics
The connection between culture, media, and leadership expectations
Why flexibility increases both productivity and loyalty
How inclusive leadership strengthens retention and performance

In This Episode

Shannon Thomas explains how systemic and cultural factors continue to shape the experience of women in IT. She discusses how women are often dissuaded from entering technology early in their academic journeys and how microaggressions persist even at senior leadership levels. From vendors directing technical questions to male subordinates to assumptions about who makes final decisions, she provides concrete examples of how bias still manifests in everyday interactions.

The conversation explores the concept of mental load and how it disproportionately affects women in demanding technology roles. Shannon describes how cybersecurity and IT leadership positions rarely pause, while family responsibilities also remain constant. She argues that retention challenges are not simply about technical capability, but about how organizations structure flexibility, policy, and leadership expectations.

Allyship emerges as a central theme. Shannon emphasizes that real progress requires colleagues to redirect conversations, correct behavior, and actively support women in decision-making spaces. She explains that meaningful change does not always require confrontation, but it does require awareness and intentional redirection.

The discussion ultimately reframes the issue as a human leadership challenge rather than a gender-specific one. Shannon makes the case that organizations perform better when leaders treat employees as whole people. Flexibility, empathy, and accountability create stronger cultures, improve retention, and allow diverse talent to thrive in high-demand technical environments.

Sponsor for this episode...

This episode is brought to you by CyberLynx. CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service. The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web. Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. To learn more, visit https://cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

Duration:00:51:28


Securing AI and Modernizing Care Delivery in Long Term Facilities with Vikas Sachdeva

3/3/2026
Guest Introduction

Vikas Sachdeva serves as Chief Information Officer at HealthDrive Corporation, a healthcare organization delivering care to patients in long-term care facilities across more than 20 states and over 4,000 facilities. With prior leadership roles spanning financial services, retail, AI-driven digital engineering, and healthcare, Vikas has built a career focused on digital transformation that drives measurable business outcomes. At HealthDrive, his role centers on enabling clinicians with the right technologies, embedding responsible AI practices, strengthening security posture, and aligning innovation directly with improved patient care and operational performance.

Here's a Glimpse of What You'll Learn

How AI-powered ambient listening and clinical assistance tools are augmenting providers in long-term care settings
Why responsible AI principles such as transparency, fairness, accountability, and human oversight are essential in healthcare
How security and AI must evolve together to address protected health information risks
Why AI should augment human workflows rather than replace employees
How involving resistant stakeholders early turns them into champions of change
Why transformation must start with business outcomes, not technology hype
How data-driven proof points reduce fear around automation initiatives

In This Episode

Vikas Sachdeva explains how HealthDrive leverages innovation to improve care delivery for underserved populations in long-term care facilities. AI tools assist clinicians through ambient note capture, diagnosis support, and treatment guidance, allowing providers to focus more fully on patient interaction. He emphasizes that AI must remain augmentative rather than substitutive, particularly in healthcare, where trust, ethics, and human accountability are foundational.

Security plays a parallel role in the transformation. Vikas outlines the importance of responsible AI, especially when working with protected health information. He discusses transparency, bias mitigation, reliability, and human oversight as non-negotiable guardrails when deploying AI systems. He also addresses the reality that adversaries are leveraging AI as well, making automation and proactive security measures essential to keep pace.

A major theme of the discussion centers on change management. Vikas shares a practical example of introducing intelligent document processing to automate the conversion of unstructured data. Initial resistance focused on trust and error rates, but by involving stakeholders early and comparing AI performance to existing human error rates, confidence grew. Error rates dropped from 17 percent to 4 percent, demonstrating measurable improvement rather than theoretical promise.

Throughout the episode, Vikas reinforces a consistent philosophy. Innovation is not about chasing trends. It is about identifying business outcomes first, then selecting the right technology to support them. AI becomes powerful when aligned with mission, patient care, operational efficiency, and employee empowerment.

Sponsor for this episode...

This episode is brought to you by CyberLynx. CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service. The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web. Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. To learn more, visit https://cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

Duration:00:35:55


Securing AI, Data, and Infrastructure at Government Scale with Steve Orrin

2/4/2026
Guest Introduction

Steve Orrin serves as Chief Technology Officer and Senior Principal Engineer at Intel Federal, where he operates at the intersection of advanced computing, cybersecurity, and national security missions. In his role, Steve works closely with U.S. federal agencies and the Defense Industrial Base to translate mission requirements into hardware, firmware, and software capabilities that can operate at massive scale and under elevated security demands. He also feeds those real-world requirements back into Intel's product and research teams, helping shape future platforms that support government, critical infrastructure, and highly regulated industries. His background places him in a unique position to explain how technologies pioneered for government use often become the next standards adopted across the commercial sector.

Here's a Glimpse of What You'll Learn

Why federal government requirements often predict future commercial security standards
How AI and cybersecurity must be addressed across the full lifecycle
Where AI delivers real value in security operations versus where expectations fall short
What confidential computing solves and why data in use is the next security frontier
How post-quantum cryptography timelines are being driven by government mandates
Why hardware-based security controls matter for cloud, edge, and mission systems
How memory-safe technologies can eliminate entire classes of cyber attacks

In This Episode

Steve explains his role at Intel Federal as a three-part function. He helps government agencies adopt the right technologies for their missions, translates those requirements back to Intel's internal product and engineering teams, and supports innovation where standard commercial solutions do not fully meet government needs. This two-way translation ensures that future platforms align with real-world mission and security demands.

The discussion moves into AI and cybersecurity, which Steve frames across three dimensions. Organizations must secure AI systems themselves, use AI responsibly to improve cybersecurity operations, and defend against adversaries that are also leveraging AI. He emphasizes that AI cannot be treated like traditional software. It requires governance, validation, and continuous monitoring across data sourcing, training, tuning, and deployment.

Steve outlines where AI is delivering tangible value today. Rather than detecting entirely new threats in isolation, AI excels at automating repetitive, high-volume security tasks. By reducing the operational burden of routine alerts, patching, and triage, AI allows security teams to focus their expertise on higher-impact risks and emerging threats.

A key segment of the conversation focuses on confidential computing. Steve explains how protecting data in use closes a long-standing security gap that encryption at rest and in transit cannot address. Through trusted execution environments, memory encryption, isolation, and attestation, organizations can protect sensitive workloads even from compromised operating systems or untrusted cloud environments. This capability is especially relevant for AI models, intellectual property, and mission-critical workloads deployed across cloud, edge, and disconnected environments.

The episode concludes with a forward-looking discussion on post-quantum cryptography and secure mission platforms. Steve explains why the threat is not limited to future quantum computers: data is being harvested and stored today for later decryption. Government-driven timelines are accelerating adoption, and commercial industries will benefit from following the same path as compliant products become broadly available.

Sponsor for this Episode

This episode is brought to you by CyberLynx. CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service. The bottom line is we help protect you from cyber attacks, malware attacks, and...

Duration:00:42:20


Building Security After a Ransomware Wake Up Call with Brett Talmadge

1/28/2026
Guest Introduction

Brett Talmadge served as Chief Information Officer at Nisqually Red Wind Casino during one of the most critical periods in the organization's history. Brought in following a ransomware incident that disrupted operations and exposed long-standing technology gaps, Brett was tasked with stabilizing systems, rebuilding trust, and creating a sustainable security and IT foundation. His background spans highly regulated and mission-critical environments, including financial services in New York City and work tied to federal defense operations. That experience shaped his disciplined approach to cybersecurity, operational resilience, and leadership communication.

Here's a Glimpse of What You'll Learn

How ransomware incidents expose deeper organizational and governance issues
Why paying a ransom creates long-term risk rather than resolution
The importance of defining a clear IT end state before implementing tools
How leadership misunderstanding of IT roles creates security blind spots
Why cybersecurity is an ongoing process, not a finish line
How AI-driven security tools reduce noise but still require human oversight
Why communication with executives matters as much as technical controls

In This Episode

Brett walks through the reality of stepping into an organization that had recently paid a ransom and was still recovering from operational and cultural fallout. He explains how legacy systems, siloed ownership, and the absence of a long-term IT vision created an environment where a single phishing click could cripple the business. Rather than focusing on surface-level fixes, Brett prioritized rebuilding structure, visibility, and accountability across systems and teams.

The conversation highlights a recurring challenge faced by many IT leaders: executive teams often view cybersecurity as a state that can be achieved and checked off. Brett pushes back on that assumption, emphasizing that security is an ongoing process shaped by constant threat evolution, user behavior, and organizational entropy. Tools like Darktrace and Varonis provided meaningful visibility and alert quality, but only when paired with trained staff and leadership engagement.

A key theme throughout the episode is communication. Brett shares a pivotal moment when leadership questioned why IT staff needed desks, revealing a fundamental misunderstanding of modern IT roles. That moment underscored why many organizations struggle with security maturity. Without executive clarity on what IT actually does, even strong technical programs can be undervalued or dismantled prematurely.

Sponsor for This Episode

This episode is brought to you by CyberLynx. CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service. The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web. Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. To learn more, visit https://cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

Duration:00:35:48


Building Resilient Security Programs Across Industries with Jess Vachon

1/26/2026
Guest Introduction

Jess Vachon is a three-time CISO, the founder of Vigilant Violet LLC, and the host of the Voices of the Vigilant Podcast. With a career spanning manufacturing, defense, robotics, software, healthcare, and global financial services, Jess brings a uniquely broad perspective to cybersecurity leadership. Her journey reflects a deep commitment to building security programs that balance technical rigor with human-centered leadership. Across every role, Jess has focused on developing resilient teams, pragmatic security strategies, and leaders who understand both risk and responsibility.

Here's a Glimpse of What You'll Learn

Why diverse industry experience strengthens security leadership
How human-centered leadership improves security outcomes
Where AI helps security teams and where it creates new risk
Why doing the basics well still matters more than new tools
How AI can reduce user friction while improving protection
What reasonable security looks like in an era of nation-state threats
Why investing in teams delivers better long-term defense

In This Episode

Jess Vachon explains how her path to becoming a CISO was shaped by working across multiple industries and building security programs from the ground up. She shares how creating a full security program at a defense manufacturer helped confirm that security leadership was where she could make the greatest impact. That experience also reinforced her belief that hard problems with visible outcomes are the most rewarding.

The conversation explores the role of AI in modern security, with Jess emphasizing that productivity gains should not come at the expense of people. She challenges the idea that AI should simply replace staff and instead argues for using it to increase effectiveness, retain institutional knowledge, and reduce unnecessary friction for employees. Her perspective reframes AI as a tool that supports humans rather than one that sidelines them.

Jess and Matthew also discuss why security tools must be purpose-built rather than bolted on with buzzwords. Using real-world examples, she explains how machine learning can quietly protect users by understanding behavior and stopping threats before employees even see them. This approach reduces blame, improves trust, and shifts security closer to being invisible but effective.

The episode closes with a powerful leadership discussion shaped by Jess's Marine Corps experience. She shares how military service taught her to lead under pressure, maintain perspective during crises, and focus on outcomes without losing sight of people. That mindset continues to inform how she views risk, response, and the responsibility of modern security leaders.

Sponsor for this episode...

This episode is brought to you by CyberLynx. CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service. The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web. Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. To learn more, visit https://cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

Duration:00:46:15


The Human Side of Cybersecurity Leadership with Kara Schlageter

1/23/2026
Guest Introduction

Kara Schlageter is a cybersecurity executive with a career that bridges human resources, technology, and security leadership. Formerly Deputy CISO at First Citizens Bank, she brings a rare perspective shaped by early consulting experience, large-scale transformation work at Bank of America, and deep exposure to identity and access management. Her path into cybersecurity began not with firewalls or endpoints, but with people, culture, and organizational change. Today, Kara is known for advocating a human-centered approach to cybersecurity that treats leadership, empathy, and ethics as core security controls.

Here's a Glimpse of What You'll Learn

Why cybersecurity failures are driven more by people than by technology
How an HR background can strengthen security leadership
Why culture and empathy are critical security enablers
How AI should complement human judgment rather than replace it
The ethical risks of AI adoption without governance
Why risk tolerance and values must guide technology decisions
How leadership roles like the CISO are evolving beyond technical expertise

In This Episode

Kara Schlageter explains why cybersecurity must be demystified and understood as a human problem first. She challenges the common perception that security is primarily about tools, arguing instead that breaches happen because of human behavior, incentives, and culture. Her background in HR allows her to view cybersecurity through the lens of motivation, trust, and organizational design rather than purely technical controls.

She shares how her career evolved through consulting, identity and access management, and large-scale transformation at Bank of America. While helping organizations grow rapidly, Kara learned that hiring decisions, culture, and leadership alignment matter as much as technical skill. That experience shaped her belief that understanding people is a force multiplier in cybersecurity.

The conversation also explores AI and its growing role in both security and leadership. Kara emphasizes that AI is a powerful tool, but one that must be governed carefully. She stresses the importance of transparency, ethical use, and intentional guardrails, especially as organizations rush to adopt AI-driven capabilities without fully understanding long-term risk.

As the discussion turns toward leadership, Kara outlines how the CISO role is changing. Modern security leaders must communicate risk in business terms, define culture, and align technology decisions with organizational values. Technical expertise still matters, but it is no longer sufficient on its own. The future of cybersecurity leadership belongs to those who can balance innovation with humanity.

Sponsor for this episode...

This episode is brought to you by CyberLynx. CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service. The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web. Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. To learn more, visit https://cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

Duration:00:47:06


Zero Trust, AI, and Security Leadership in Healthcare with William O'Connell

1/21/2026
Guest Introduction

William O'Connell serves as the Information Security Officer at VHC Health, a hospital system based in Arlington, Virginia, just outside Washington, DC. With more than seven years at the organization, O'Connell was brought in to help jump-start and mature the healthcare system's cybersecurity program. His background spans network engineering, firewalls, VPNs, and early infrastructure security, giving him a practitioner's perspective on how security has evolved from perimeter defense to continuous risk management. Today, his work focuses on balancing patient care, operational access, and modern security controls in one of the most complex and regulated environments in IT.

Here's a Glimpse of What You'll Learn

Why zero trust should be treated as an ongoing strategy rather than a finished project
How hospital security mirrors physical access control in real-world healthcare settings
Where AI adds value in cybersecurity and where it introduces new risks
Why agentic AI still requires strong human oversight
How CISOs should evaluate AI tools in regulated environments like healthcare
The importance of governance and third-party risk assessment for AI adoption
Why storytelling matters when communicating security metrics to executive leadership

In This Episode

William O'Connell explains that zero trust is often misunderstood as a project with an end date, when in reality it is a guiding security concept that requires continuous improvement. He uses a healthcare analogy to clarify the idea, explaining that hospitals must allow access to many people while still protecting highly sensitive areas. The same principle applies to digital environments, where access must be intentional, segmented, and constantly reviewed.

The conversation also explores the role of AI in modern security operations. O'Connell shares how healthcare organizations must carefully assess AI tools to ensure patient data is not exposed or reused in unintended ways. While AI can dramatically improve visibility and response time, he cautions against blindly attaching large language models to every system without understanding the risks, including prompt injection and unintended data exposure.

As the discussion turns to agentic AI, O'Connell highlights both the promise and the concern. Automation can reduce repetitive tasks and improve efficiency, but it also removes traditional learning paths for junior staff and introduces trust challenges when AI is given autonomy. He emphasizes the importance of keeping a human in the loop and applying zero trust principles even to AI-driven systems.

The episode closes with practical leadership insight on reporting and communication. O'Connell stresses that security leaders must translate metrics into stories that resonate with executive teams. Data alone is not enough. Clear narratives tied to business outcomes are what drive understanding, alignment, and investment in cybersecurity initiatives.

Sponsor for this episode...

This episode is brought to you by CyberLynx. CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service. The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web. Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. To learn more, visit https://cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

Duration: 00:40:56


How Movie Studios Defend IP at Massive Scale with Dan Meacham

1/19/2026
Guest Introduction

Dan Meacham serves as Vice President of Cyber and Content Security at Legendary Entertainment, a global film and television production company behind some of the most recognizable franchises in modern media. In his role, Dan is responsible for securing not only traditional enterprise systems but also the creative content, intellectual property, and complex supply chains that power large-scale movie and television production. His work spans cyber defense, digital forensics, vendor risk, and emerging AI-driven security models in an industry where collaboration extends far beyond corporate boundaries.

Here's a Glimpse of What You'll Learn

Why securing a movie studio is fundamentally different from securing a traditional enterprise
How content production relies on thousands of external collaborators and temporary environments
The role of digital forensics and watermarking in protecting unreleased media
How sophisticated attackers target individuals through social engineering and custom applications
Why AI-driven analytics are essential for threat detection at massive scale
How long-term log retention enables rapid decision making during incidents
What shared learning intelligence could mean for the future of security operations

In This Episode

Dan Meacham explains how Legendary's business model reshapes cybersecurity strategy. Each film or television project operates like its own company, complete with a unique technology stack, vendor ecosystem, and lifecycle. Security must adapt quickly to environments that appear and disappear over months or years.

He walks through the realities of protecting creative content across the production pipeline. From dailies and post-production workflows to global distribution, large media files are constantly replicated, shared, and transformed. Watermarking, steganography, and forensic techniques play a critical role in tracing leaks back to their source.

The conversation highlights how attackers exploit human behavior rather than systems alone. Dan shares real-world examples where threat actors built targeted applications to extract photos from personal devices, demonstrating how deeply personal and contextual modern attacks have become.

Dan also outlines how AI and machine learning have long existed in both filmmaking and cybersecurity. Today's challenge is not adopting AI but governing it across devices, platforms, and supply chains. He introduces the concept of shared learning intelligence as a way to aggregate insights from multiple AI systems without centralizing sensitive data.

The episode closes with a discussion of scale and speed. By retaining more than a decade of security logs, Dan's team can quickly identify anomalous behavior and shut down access before damage spreads. AI accelerates analysis, but human accountability remains central to every decision.

Duration: 00:58:07


Securing Aviation, Education, and Innovation with David Mashburn

1/16/2026
Guest Introduction

David Mashburn serves as Chief Information Security Officer at Embry-Riddle Aeronautical University, one of the world's leading institutions focused on aviation, aerospace, and applied engineering. With residential campuses in Florida and Arizona alongside a large global online population, Embry-Riddle operates in a highly complex technology and security environment. David oversees cybersecurity across academic, research, and administrative systems, balancing innovation, safety, and operational resilience. His background spans enterprise security, incident response, and leadership roles in both higher education and large-scale commercial environments, giving him a pragmatic perspective on how security must enable the mission it protects.

Here's a Glimpse of What You'll Learn

Why higher education security resembles a large-scale Zero Trust environment by design
How AI in cybersecurity is an evolution of long-standing machine learning practices
The challenges of securing unmanaged student and faculty devices at scale
Why governance and guardrails matter more than outright restriction
How identity and behavior drive modern security decisions
Where AI can accelerate analysts without replacing human accountability
How leadership and coaching experience shape effective security teams

In This Episode

David Mashburn explains how Embry-Riddle's aviation-focused mission creates unique security requirements. With flight training, aerospace research, and global online education, systems must remain available and trusted at all times. Security exists to support learning and operations rather than slow them down.

He shares why AI in cybersecurity should be viewed as a natural progression of existing analytics. From SIEM platforms to cloud security tools, machine learning has been embedded in security workflows for years. The current wave of AI expands scale and speed while introducing new governance considerations.

The conversation dives deep into Zero Trust principles as a practical necessity. With thousands of unmanaged devices accessing university systems daily, security decisions rely on identity verification, behavior analysis, and continuous monitoring instead of network location.

David also discusses the balance between automation and accountability. While AI can reduce analyst workload and surface insights faster, final decisions must remain human. Automation supports judgment but does not replace responsibility.

The episode closes with David's career journey, from early exposure to technology through his family, to coaching athletics, to enterprise security leadership. He explains how coaching shaped his leadership philosophy and how those lessons translate directly into managing security teams under pressure.

Duration: 00:51:51