
ISF Podcast

Technology Podcasts

The ISF Podcast brings you cutting-edge conversation, tailored to CISOs, CTOs, CROs, and other global security pros. In every episode of the ISF Podcast, Chief Executive Steve Durbin speaks with rule-breakers, collaborators, culture builders, and business creatives who manage their enterprise with vision, transparency, authenticity, and integrity. From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Location:

United Kingdom


Language:

English


Episodes

S25 Ep5: Boosting Business Success: Unleashing the potential of human and AI collaboration

4/30/2024
Today, Steve and producer Tavia Gilbert discuss the impact artificial intelligence is having on the threat landscape and how businesses can leverage this new technology and collaborate with it successfully.

Key Takeaways:
1. AI risk is best presented in business-friendly terms when seeking to engage executives at the board level.
2. Steve Durbin takes the position that AI will not replace leadership roles, as human strengths like emotional intelligence and complex decision making are still essential.
3. AI risk management must be aligned with business objectives while ethical considerations are integrated into AI development.
4. Since AI regulation will be patchy, effective mitigation and security strategies must be built in from the start.

Tune in to hear more about:
1. AI’s impact on cybersecurity, including industrialized high-impact attacks and manipulation of data (0:00)
2. AI collaboration with humans, focusing on benefits and risks (4:12)
3. AI adoption in organizations, cybersecurity risks, and board involvement (11:09)
4. AI governance, risk management, and ethics (15:42)

Standout Quotes:
1. “Cyber leaders have to present security issues in terms that board level executives can understand and act on, and that's certainly the case when it comes to AI. So that means reporting AI risk in financial, economic, operational terms, not just in technical terms. If you report in technical terms, you will lose the room exceptionally quickly. It also involves aligning AI risk management with business needs by, you know, identifying how AI risk management and resilience are going to help to meet business objectives. And if you can do that, as opposed to losing the room, you will certainly win the room.” -Steve Durbin
2. “AI, of course, does provide some solution to that, in that if you can provide it with enough examples of what good looks like and what bad looks like in terms of data integrity, then the systems can, to an extent, differentiate between what is correct and what is incorrect. But the fact remains that data manipulation, changing data, whether that be in software code, whether it be in information that we're storing, all of those things remain a major concern.” -Steve Durbin
3. “We can’t turn the clock back. So at the ISF, you know, our goal is to try to help organizations figure out how to use this technology wisely. So we're going to be talking about ways humans and AI complement each other, such as collaboration, automation, problem solving, monitoring, oversight, all of those sorts of areas. And I think for these to work, and for us to work effectively with AI, we need to start by recognizing the strengths both we as people and also AI models can bring to the table.” -Steve Durbin
4. “I also think that boards really need to think through the impact of what they're doing with AI on the workforce, and indeed, on other stakeholders. And last, but certainly not least, what the governance implications of the use of AI might look like, and therefore what new policies and controls need to be implemented.” -Steve Durbin
5. “We need to be paying specific attention to things like ethical risk assessment, working to detect and mitigate bias, ensuring that there is, of course, informed consent when somebody interacts with AI. And we do need, I think, to be particularly mindful about bias, you know? Bias detection, bias mitigation. Those are fundamental, because we could end up making all sorts of decisions or having the machines make decisions that we didn't really want. So there's always going to be in that area, I think, in particular, a role for human oversight of AI activities.” -Steve Durbin

Mentioned in this episode: ISF Analyst Insight Podcast

Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter
From the Information Security Forum, the leading authority on cyber, information security, and risk...

Duration:00:22:48


S25 Ep4: Brian Lord - AI, Mis- and Disinformation in Election Fraud and Education

4/23/2024
This is the second of a two-part conversation between Steve and Brian Lord, Chief Executive Officer of Protection Group International. Prior to joining PGI, Brian served as the Deputy Director of a UK Government Agency governing the organization's Cyber and Intelligence Operations. Today, Steve and Brian discuss the proliferation of mis- and disinformation online, the potential security threats posed by AI, and the need for educating children in cyber awareness from a young age.

Key Takeaways:
1. The private sector serves as a skilled and necessary support to the public sector, working to counter mis- and disinformation campaigns, including those involving AI.
2. AI’s increasing ability to create fabricated images poses a particular threat to youth and other vulnerable users.

Tune in to hear more about:
1. Brian gives his assessment of cybersecurity threats during election years. (16:04)
2. Exploitation of vulnerable users remains a major concern in the digital space, requiring awareness, innovative countermeasures, and regulation. (31:00)

Standout Quotes:
1. “I think when we look at AI, we need to recognize it is a potentially long term larger threat to our institutions, our critical mass and infrastructure, and we need to put in countermeasures to be able to do that. But we also need to recognize that the most immediate impact of that is around what we call high harms, if you like. And I think that was one of the reasons the UK — over a torturously long period of time — introduced the Online Harms Bill to be able to counter some of those issues. So we need to get AI in perspective. It is a threat. Of course it is a threat. But when one looks at AI applied in the cybersecurity context, you know, automatic intelligence developing hacking techniques, bear in mind, AI is available to both sides. It's not just available to the attackers, it's available to the defenders. So what we are simply going to do is see that same kind of thing that we have in the more human-based countering of the cybersecurity threat in an AI space.” -Brian Lord
2. “The problem we have now — now, one can counter that by the education of children, keeping them aware, and so on and so forth — the problem you have now is the ability, because of the availability of imagery online and AI's ability to create imagery, one can create an entirely fabricated image of a vulnerable target and say, this is you. Even though it isn’t … when you're looking at the most vulnerable in our society, that's a very, very difficult thing to counter, because it doesn't matter whether it's real to whoever sees it; the most vulnerable people who see it will believe that it is real. And we've seen that.” -Brian Lord

Mentioned in this episode: ISF Analyst Insight Podcast

Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter
From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:23:07


S25 Ep3: Brian Lord - Lost in Regulation: Bridging the cyber security gap for SMEs

4/16/2024
This episode is the first of two conversations between Steve and Brian Lord, Chief Executive Officer of Protection Group International. Prior to joining PGI, Brian served as the Deputy Director of a UK Government Agency governing the organization's Cyber and Intelligence Operations. He brings his knowledge of both the public and private sectors to bear in this wide-ranging conversation. Steve and Brian touch on the challenges small-to-midsize enterprises face in implementing cyber defenses, what effective cooperation between government and the private sector looks like, and the role insurance may play in cybersecurity.

Key Takeaways:
1. A widespread, societal approach involving both the public and private sectors is essential in order to address the increasingly complex risk landscape of cyber attacks.
2. At the public or governmental level, there is an increasing need to bring affordable cybersecurity services to small and mid-sized businesses, because failing to do so puts those businesses and major supply chains at risk.
3. The private sector serves as a skilled and necessary support to the public sector, working to counter mis- and disinformation campaigns, including those involving AI.

Tune in to hear more about:
1. The National Cyber Security Centre is part of GCHQ, serving to set regulatory standards and safeguards, communicate novel threats, and uphold national security measures in the digital space. (5:42)
2. Steve and Brian discuss the challenges small organizations face in meeting cybersecurity regulations without in-house knowledge and expertise, leading to high costs for external advice and testing. (7:40)

Standout Quotes:
1. “...If you buy in external expertise — because you have to, because either you haven’t got the demand to employ your own, or if you did, the cost of employment would be very high — the cost of buying an external advisor becomes very high. And I think the only way that can be addressed without compromising the standards is, of course, to make more people develop more skills and more knowledge. And that, in a challenging way, is a long, long term problem. That is the biggest problem we have in the UK at the moment. And actually, in a lot of countries. The cost of implementing cybersecurity can quite often outweigh, as it may be seen within a smaller business context, the benefit.” -Brian Lord
2. “I think there probably needs to be a lot more tangible support, I think, for the small to medium enterprises. But that can only come out of collaboration with the cybersecurity industry and with government about, how do you make sure that some of the fees around that are capped?” -Brian Lord

Mentioned in this episode: ISF Analyst Insight Podcast

Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter
From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:37:31


S25 Ep2: Eric Siegel - The AI Playbook: Leveraging machine learning to grow your business

4/9/2024
Today, Steve is in conversation with AI expert Eric Siegel. A former professor at Columbia University, Eric is the founder of the long-running Machine Learning Week conference series and a bestselling author. His latest book, The AI Playbook, looks at how businesses outside Big Tech can leverage machine learning to grow. He and Steve discuss the differences between generative and predictive AI, the most effective ways to implement AI in an organization’s operations, and how we might expect this technology to be useful in the future.

Key Takeaways:
1. No matter how controlled or well thought out a project is, any project relying on AI is only as good as its data inputs.
2. The more we learn to differentiate types of AI and apply their functions skillfully, the more we will learn about what is possible.
3. As predictive AI systems emerge, applying quality data analysis to a well chosen project could make a measurable difference for a company’s bottom line.

Tune in to hear more about:
1. Designing a project involving predictive analytics requires quality data and specific domain areas. (3:00)
2. Generative analytics is still in its early stages, and popular notions around its use currently differ from what can reasonably be expected or achieved. (4:42)
3. Using AI to work with errors and improve a system requires quality data and carefully applied labels. (11:59)

Standout Quotes:
1. “It's absolutely critical to have a fine scope, a reasonable scope, well defined for the first project. But the most well defined, sort of, well scoped project is, in another way, the biggest, because really what we're talking about, if you're looking at what should your first opportunity be with predictive AI that you want to pursue, it should be your largest scale operation that stands to improve the most, and that even an incremental improvement provides a tremendous bottom line.” -Eric Siegel
2. “… It's such a funny time, because predictive and generative are really apples and oranges. They're both built on machine learning, which learns from data to predict. But generative isn't a reference to really something specific in terms of the technology; it's just how you're using it, which is to generate new content items. So, writing a first draft in human language, like English, or of code, or creating a first image or video — these endeavors typically need a human in the loop to review everything that it's generated. They're not autonomous. And the question is, how autonomous could they be?” -Eric Siegel
3. “You can only predict better than guessing, which turns out to be more than sufficient to drive an improvement to the bottom line. So who's going to click, buy, lie or die, or commit an act of fraud, or turn out to cancel or be a bad debtor? These are human behaviors for those examples, or it could be a corporate client, or it could be a mechanism like a satellite, or the wheel of a train that might fail. But whatever it is, we don't have clairvoyance or a magic crystal ball. We can't expect your computers to, either. So it's about tipping the odds in these numbers games and predicting better than guessing … no matter how good the data is and how devoid of wrong values and those types of errors, you're still going to have that limitation. There’s still a ceiling. No matter how advanced the method is, it's not going to become supernatural. There's a thing called chaos theory, which basically says that even if you knew all the neurons of every cell of the person's brain, you wouldn't necessarily be able to predict very far into the future. And of course, we don’t. So it's always limited data anyway.” -Eric Siegel
4. “I wrote this new book, The AI Playbook, because we need an organizational practice to make sure that we're sort of planning the project not just technically but organizationally and operationally, so that it actually gets deployed and makes a difference and actually improves operations. And in general, the...

Duration:00:21:40


S25 Ep1: Cyber Warfare and Democracy in the Age of Artificial Intelligence

4/2/2024
Today, Steve is speaking with Mariarosaria Taddeo, Professor of Digital Ethics and Defence Technologies and Dstl Ethics Fellow at the Alan Turing Institute. Mariarosaria brings her expertise as a philosopher to bear in this discussion of why and how we must develop agreed-upon ethical principles and governance for cyber warfare.

Key Takeaways:
1. As cyber attacks increase, international humanitarian law and rules of war require a conceptual shift.
2. To maintain competitive advantage while upholding their values, liberal democracies need to move swiftly to develop and integrate regulation of emerging digital technologies and AI.
3. Many new technologies have a direct and harmful impact on the environment, so it’s imperative that any ethical AI be developed sustainably.

Tune in to hear more about:
1. The digital revolution affects how we do things, how we think about our environment, and how we interact with the environment. (1:10)
2. Regardless of how individual countries may wield new digital capabilities, liberal democracies as such must endeavor tirelessly to develop digital systems and AI that are well considered, that are ethically sound, and that do not discriminate. (5:20)
3. New digital capabilities may produce CO2 and other environmental impacts that will need to be recognized and accounted for as new technologies are rolled out. (10:03)

Standout Quotes:
1. “The way in which international humanitarian law works or just war theory works is that we tell you what kind of force, when, and how you can use it to regulate the conduct of states in war. Now, fast forward to 2007, cyber attacks against Estonia, and you have a different kind of war, where you have an aggressive behavior, but we're not using force anymore. How do you regulate this new phenomenon, if so far, we have regulated war by regulating force, but now this new type of war is not a force in itself or does not imply the use of force? So this is a conceptual shift. A concept which is not radically changing, but has acquired or identifies a new phenomenon which is new compared to what we used to do before.” -Mariarosaria Taddeo
2. “I joke with my students when they come up with this same objection. I say, well, you know, we didn't stop putting in alarms and locking our doors because sooner or later somebody will break into the house. It's the same principle. The risk is there, it’s present. They’re gonna do things faster in a more dangerous way, but if we give up on the regulations, then we might as well surrender immediately, right?” -Mariarosaria Taddeo
3. “LLMs, for example, large language models, ChatGPT for example, they consume a lot of the resources of our environment. We did a study with some of the students here a few years ago where we showed that training just one round of GPT-3 would produce as much CO2 as 49 cars in the US for a year. It’s a huge toll on the environment. So ethical AI means also sustainably developed.” -Mariarosaria Taddeo

Mentioned in this episode: ISF Analyst Insight Podcast

Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter
From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:25:37


S24 Ep12: Cyber Exercises: Fail to prepare, prepare to fail

3/26/2024
A repeat of one of our top episodes from 2023: October is Cyber Awareness Month, and we’re marking the occasion with a series of three episodes featuring Steve in conversation with ISF’s Regional Director for Europe, the Middle East and Africa, Dan Norman. Today, Steve and Dan discuss the importance of cyber resilience and how organisations can prepare for cyber attacks.

Mentioned in this episode: ISF Analyst Insight Podcast

Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter
From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:16:59


S24 Ep11: Tali Sharot - Changing Behaviours: Why facts alone don't work

3/19/2024
Today’s episode was recorded at ISF’s 2023 Congress in Rotterdam. Steve sat down with Tali Sharot, professor of neuroscience at University College London, to talk about her fascinating research on optimism bias. Tali offers fresh, evidence-based ideas on effective communication for security leaders seeking to present their message to their board and raise cyber awareness within the organisation.

Key Takeaways:
1. Innately, the brain is an optimist.
2. Optimism bias has implications for the business community.
3. Present bias means that people care more about now than the future.
4. Data is key, and pairing anecdotes with data can be more effective.

Tune in to hear more about:
1. Sharot’s research about how emotion affects memory (0:28)
2. Optimism bias has implications for the way we evaluate risk (4:25)
3. Sharot considers present bias and how it shows up in organisations (9:39)
4. Why storytelling is so effective when paired with data (15:30)

Standout Quotes:
1. “It turns out that in behavioral economics, there was quite a lot of research about this thing called the optimism bias, which is our tendency to imagine the future as better than the past and the present. And that's exactly what I was seeing in this experiment. And that was really the first experiment that I did looking at what goes on inside the brain that causes us to have these kind of rose-colored glasses on when we think about the future.” -Tali Sharot
2. “What we find again and again is that people underestimate the risk. And that's, of course, a problem. And it's not just underestimating risk. People also underestimate how long projects will take to complete and how much they will cost, underestimating budgets. All these are related to this phenomenon of the optimism bias. And so it's really difficult to try to convince people that their estimate is incorrect. Because what we found is that if you give people information to try to correct their estimate, and you tell them actually, it's much worse than what you thought, your risk is much higher than what you're thinking, people don't take that information and change their belief to the extent that they should. They do learn a little bit, but not enough … However, if you tell them actually, you don't have as much risk as you think, you're in a great position, then they learn really quickly.” -Tali Sharot
3. “The immediacy is quite important, because we have what's called a present bias. We care more about the now than the future, in general, even if we're not aware of that.” -Tali Sharot
4. “And what stories do, they do a few things. First of all, we're more likely to attend to stories, right, to listen; they're more interesting, they're more colorful, they're more detailed, and we're more likely to remember them, partially because they usually elicit more emotion than just the data. So it's good to pair the two, to have the anecdote that kind of illustrates the data that you already have in hand.” -Tali Sharot

Mentioned in this episode: Human-centred Security: Positively influencing security behaviour; ISF Analyst Insight Podcast; books by Tali Sharot

Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter
From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:20:32


S24 Ep10: Nina Schick - The Future of Information Integrity

3/12/2024
This week, we’ve got another fascinating conversation recorded at the 2023 ISF Congress in Rotterdam. This time, Steve speaks with generative AI expert Nina Schick. Nina and Steve discuss how AI, along with other technological trends that are evolving at exponential speed, is shaping both geopolitics and individual lives.

Key Takeaways:
1. Generative AI is reshaping the geopolitical landscape.
2. Educating ourselves and others about the implications of quickly evolving tech in global affairs.
3. Industries struggling to regulate exponential technology.
4. There are more questions than answers as we look to the future in tech.

Tune in to hear more about:
1. AI’s geopolitical impacts (3:13)
2. Learning about how tech is impacting global affairs (9:53)
3. Regulation challenges (11:55)
4. Nina Schick’s take on the economics of generative AI (16:27)

Standout Quotes:
1. “As the oil economies of Saudi Arabia and UAE seek to diversify away from oil and energy, one of the things that they're doing is trying to become very high tech economies, with artificial intelligence absolutely leading the way in these strategies. And there's so much money going to be invested in the Gulf in the coming decade when it comes to artificial intelligence. Again, even though these are relatively small countries, they are perhaps going to punch above their weight when it comes to power that is harnessed by artificial intelligence. And that means in a military sense, in an economic sense, and ultimately, you know, a geopolitical sense.” -Nina Schick
2. “I think the harder thing also are the non-technical solutions – you know, education, literacy – how do people get upskilled in terms of understanding the new capabilities of artificial intelligence and how they will be deployed in their respective domains? So I think it's not only that there are technical solutions, there are also societal and learning solutions which perhaps we're going to have to get on top of very, very quickly.” -Nina Schick
3. “Regulators have to work with industry. There's no way they can do this themselves. And already in many of the kind of more promising areas with dealing with some of the challenges, such as information integrity, when you come to questions like provenance, you see industry championing the way and supporting regulators.” -Nina Schick
4. “Will there be economic value associated with AI? I think, absolutely. But the question is, how's that going to be distributed? And is it going to be monopolized? So that's going to happen with regards to the tech giants, who I think will become very, very, very powerful. I think this will continue to be a priority of utmost importance to governments. I think this challenge, or this kind of race between China and the US with regards to artificial intelligence will continue to play out. I think the Middle East is going to become a strong contender. And I suspect Europe might fall behind a little bit … And actually, I think that this technology is also going to be in the hands of millions of people.” -Nina Schick

Mentioned in this episode: Threat Horizon 2024: the Disintegration of Trust; ISF Analyst Insight Podcast

Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter
From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:20:07


S24 Ep9: Peter Hinssen - The Never Normal

3/5/2024
This week, we have a rare repeat guest on the podcast. Listeners may remember innovator and thought leader Peter Hinssen from the 2019 ISF Congress in Dublin. We had him back this year at ISF’s 2023 Congress in Rotterdam. He and Steve had a chance to talk about the future of work post-pandemic. Their conversation offers lots of practical tips for leaders working to prevent workforce burnout and how boards can approach adopting new technologies like AI.

Key Takeaways:
1. COVID has made lasting impacts on the future of work.
2. Annual budgets and other commonly used business practices are in the process of evolving into ones that are more malleable and adaptive.
3. Companies will need to reinvent themselves to thrive in the new “never normal.”
4. With the rise of AI and large language models, organisations do have a lot of work ahead.

Tune in to hear more about:
1. COVID’s impact on the future of work (1:40)
2. Sunsetting pre-pandemic business norms while imagining new ones (2:47)
3. Companies in every sector will be reinventing themselves in order to thrive in the new “never normal.” (6:33)
4. As AI and large language models are integrated into global business, what’s next for business leaders? (14:12)

Standout Quotes:
1. “One of the good things I think that we've gotten back from COVID or that we've retained from COVID is that I think the acceleration of the future of work has really happened. I think we're now seeing companies that clearly see that the way we did HR, employment, and work pre-pandemic, we can't just hope that that is going to come back. And I think that is a fundamental change that I think was really something that the pandemic helped us with.” -Peter Hinssen
2. “But you know, that stronghold, that idea, that framework of an annual budget that we've held on to for such a long time is very difficult to actually give up. But I think it's exciting, because I think in this never normal, we're going to see new mechanisms and new ideas and new concepts and new governance ideas that are going to come to fruition. But at this moment, we're still very much in that transition.” -Peter Hinssen
3. “I really believe after a decade of unicorn applause, we're now going to have a decade of potential phoenixes out there. And I think a big part as a leader in such a phoenix transformation is to actually give your workforce, your people that sense of we're going to do this together.” -Peter Hinssen
4. “We're going to have to deal with that governance of content, unstructured flows, and that's a whole new kettle of fish that we have to understand. New technologies, new mechanisms, new ways of dealing with that. And I think it's going to open up a huge opportunity in terms of thinking about risk and thinking about security in that context. So I don't think you can ignore this. This is the biggest thing that I have ever seen in 30 years in IT. And if you're not on top of this, you're gonna be behind … And I think honestly, more and more boards are going to need to figure out how to build the skills and the mechanisms and the place to discuss these things that are not just compliance with the tsunami, but also the innovations that you cannot afford to miss. And I think, honestly, that's going to change the dialogue in the board quite a bit.” -Peter Hinssen

Mentioned in this episode: Remote Working and Cyber Risk; Threat Horizon 2025: Scenarios for an uncertain future - full report (ISF Live); ISF Analyst Insight Podcast; The Phoenix and the Unicorn

Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter
From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:24:15


S24 Ep8: Christy Pretzinger - Leadership Empathy and the Cultural Balance Sheet

2/27/2024
Today, Steve is in conversation with Christy Pretzinger, founder, president and CEO of WriterGirl. Over the past 20 years, Christy has grown the company from a modest freelance writing business into a healthcare content consultancy. She speaks with Steve about some of the practical tools she has implemented in order to grow the company’s culture, the role of leadership in training and retaining emotionally intelligent employees, and the impact her focus as a leader on the company’s cultural balance sheet has had on their financial balance sheet. March 8th is International Women’s Day and we want to mark the occasion and make sure you haven’t missed our many valuable episodes with Steve in conversation with women in leadership. So we’ve put together a specially curated playlist featuring the best of women in leadership, and we want to give you special access. All we ask in return is this: just rate and review the ISF podcast on Apple Podcasts, Spotify, or wherever you listen, send a screenshot of that rating and review to tavia.gilbert@securityforum.org, and I’ll send you back special access to the curated playlist. Key Takeaways: Tune in to hear more about: 1. Pretzinger’s story of growing her business (1:45) 2. The cultural balance sheet and how leaders can create a corporate culture based on emotional intelligence (2:40) 3. Preventing employee turnover (9:09) 4. Implementing new technological solutions with sensitivity to employee experience and client needs (11:26) 5. The need for human connection in business even was we advance technologically (15:46) 6. Building a team that works from home (16:34) 7. Intentionality when building culture (17:10) Standout Quotes: 1. “Anyone who looks at a balance sheet knows that employee turnover is a hidden cost. It doesn't show up on a balance sheet. And I can count on one hand the number of people that have left our organization. And in fact, I don't even need the whole hand. 
And many people who leave continue to work with us on a contracted basis, so there is very, very little turnover. And even our younger employees expressed interest in retiring from this organization, which is really great.” - Christy Pretzinger 2. “We had everybody do a day-long workshop. And it was incredibly revealing. It took a lot of time. And it was very … I guess the things that I look for when we do these things are what Brené talks about: what every human wants is love and belonging. They want love and belonging, and they want to know that they matter.” - Christy Pretzinger 3. “About retention: I think about, obviously, a hidden cost on the balance sheet. But what I think about too, is all of that intellectual property walking out your door. You know, you've got, we have people who have been here, I think, my longest employee is 13 years, I think. She started right after she got married, and now she has five kids. So I've literally watched her grow up. If she walked out that door, and we were so much smaller, she literally built the sales department and built the CRM tool, and still worked very heavily in contributing to that — if she walked out the door, it would be devastating. But yet, that's not going to show up on a balance sheet.” - Christy Pretzinger 4. “So I still think that there is a tremendous place for — and not only a place but a need and a yearning for true human connection. And because I own a virtual organization, I know that you can have true human connection virtually, but it does require a camera.” - Christy Pretzinger Mentioned in this episode: ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:27:41


S24 Ep7: Empowering your team: Lessons from a sports coach

2/20/2024
Today, Steve is speaking with American football coach Randy Jackson. During his 30-year tenure coaching high school football in Texas, Randy earned a reputation for transforming struggling programs. In 2022, in a move reminiscent of Ted Lasso, he moved to Germany to coach the Potsdam Royals, and with Jackson as the offensive coordinator, the Royals went all the way to the German Bowl. When he’s not coaching football, Randy is a business consultant. Today, he and Steve talk about how he applies his experience as a football coach in the business world. They go beyond sports clichés and dig into some concrete ways leaders can build the culture of their organisation. Key Takeaways: 1. At its inception, any organisation can benefit from building relationships and establishing a shared vision. 2. Leaders will do well to speak up frequently, reminding teams of shared aims. 3. When something goes wrong (or right!) it can be a good time to reflect, or as Randy puts it, perform an autopsy. Tune in to hear more about: 1. Establishing a shared vision, charting a collective course. (3:50) 2. Staying vocal as a leader. (6:05) 3. Whether something goes to plan or not, an autopsy of the scenario can be a helpful way forward. (10:06) Standout Quotes: 1. “So this is an activity I always do, and I did this in Germany, but close your eyes and then turn around three times, and then point true north. Well, I don't know how many people are in the room, but let's say I had 50. You're going to have 50 fingers pointing in all different directions. And so what we're going to do is, people will point in the same direction if you give them something to point at. And what you're in on, you're in with.” -Randy Jackson 2. “And if you'll talk about it, you can achieve it, but you can't talk about it once a week – you must talk about it. So whatever you want, I think every leader should say, here are the three things I want. You got to talk about those three things every day.” -Randy Jackson 3.
“And the autopsy is about improvement, right? It's not about finger pointing, it's about trying to figure out how the collective can, if they hit that situation again in the future, can adapt or behave differently.” -Steve Durbin Mentioned in this episode: Building Tomorrow’s Security Workforce, ISF Analyst Insight Podcast, Titles by Randy Jackson Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:21:57


S24 Ep6: Mo Gawdat - Rethinking the Paradigm of Artificial and Human Intelligence

2/13/2024
Today's installment of the ISF Podcast revisits an earlier episode, originally published in February 2023. In this episode, ISF CEO Steve Durbin is speaking with author and former Chief Business Officer of Google X Mo Gawdat. Mo and Steve discuss the complicated relationship humans have with technology, particularly AI, and how both individuals and businesses can navigate that wisely. Mentioned in this episode: ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:22:40


S24 Ep5: Quantum Computing: The promise and the threat

2/6/2024
Steve is in conversation with quantum computing expert Konstantinos Karagiannis. Konstantinos is the Director of Quantum Computing Services at Protiviti, where he helps companies prepare for quantum opportunities and threats. He talks to Steve about how this nascent technology is already a security concern and what security leaders can do now to prevent problems down the road. He also offers ideas for overcoming the skills shortages that both the security and quantum computing fields face. If you’re interested in discovering more about the technological implications of automation, machine learning and quantum computing, download the ISF’s Threat Horizon 2025: Scenarios for an uncertain future report, available to members on ISF Live. Not a member? Get in touch with your regional director today at https://www.securityforum.org/contact/. Research: Threat Horizon 2025: Scenarios for an uncertain future - full report (ISF Live) Key Takeaways: 1. It’s a big year for compliance. Per NIST, companies are asked to start their plans for migration in 2024. 2. Konstantinos sees a need for quantum programs at the university level. 3. Where quantum is today is just a glimpse of where it’s going. Tune in to hear more about: 1. The future is now! (4:38) 2. What can be done at the university level to resource the industry (7:45) 3. Quantum computing speeds as an advantage (12:17) Standout Quotes: 1. “It'll be time for companies, starting in 2024, to start their plans for migration. In the US, the White House has already telegraphed what's going to be expected of federal agencies. They published the NSM-10 memo, which states that once the finalists are out, you have to have a plan for migration, the timeline for deprecation of ciphers, all these steps are going to kick in.” -Konstantinos Karagiannis 2. “I don't see any university have that set for a quantum program. Like, you can't just go, come out, and like, we know that we can hire you to like, implement algorithms.
There's no such thing. And I'd like to see that kind of preparation, so within a few years, we've got a whole crew of folks ready to at least implement algorithms. They might not be able to create a brand new one, but there's only a few dozen of them in the world anyway.” -Konstantinos Karagiannis 3. “Quantum works well on simulations. You could simulate up to like, 50 qubits, let's say, and you can make sure your algorithm works right. And you could torture test it. And then when you're ready to actually run it, that's when you pay for what we call shots, which is just runs on a quantum computer. So yeah, you might work on this, tweak it all month, and then you spend $1,000, let's say, and you do your runs, and you're good. You're done.” -Konstantinos Karagiannis Mentioned in this episode: Threat Horizon 2025: Scenarios for an uncertain future – executive summary, ISF Analyst Insight Podcast, Chicago Quantum Exchange, Recent work on Quantum Portfolio Optimization Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:20:42


S24 Ep4: The World Economy: Politics & regulations intertwined

1/30/2024
Today’s episode is the second part of journalist Nick Witchell’s conversation with Steve at the 2023 ISF Congress in Rotterdam. As organisations become increasingly data-driven, technologies like artificial intelligence and quantum computing will have a huge impact on data security. Today, Steve looks at how security professionals can help their organisations adopt these technologies safely and smartly. Key Takeaways: 1. Trade policy is feeling the effects of geopolitical conflicts. 2. Major technological advancements are not without environmental impact. 3. Business leaders would do well to remember that data in any quantity can be faulty or tampered with, making regulation and collaboration all the more important. Tune in to hear more about: 1. Conflicts such as the war in Ukraine shine a particular light on organisations’ areas of vulnerability. (2:42) 2. In the context of global warming, quantum computing poses major challenges. (5:50) 3. As quantity of data increases exponentially, so does the importance of quality. (9:33) Standout Quotes: 1. “I think that the situation in the Ukraine, in particular, was a huge wake up call for a lot of organisations and a lot of individuals. I think very few people actually understood the way in which complex supply chains today actually operate. We do take things for granted, don't we?” -Steve Durbin 2. “Quantum computing requires immense computing power. Immense computing power requires a huge amount of electricity and generates a huge amount of heat. So if you think about all of those things in the environmental context, we really do need to figure out how we're going to exist in a world where global warming is a reality, where we are really driving as hard as we can in pursuit of different technological answers.” -Steve Durbin 3. “My biggest concern, the biggest threat that I see is data that has been tampered with. Because you or I may look at something and think that doesn't look quite right, so we'll dig into it.
A machine doesn't necessarily do that.” -Steve Durbin Mentioned in this episode: Securing the Supply Chain During Periods of Instability, Threat Horizon 2025: Scenarios for an uncertain future – executive summary, ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:21:05


S24 Ep3: Geopolitical Conflict: Are CISOs ready to tackle the impact?

1/23/2024
Today’s episode is the first of two interviews with Steve Durbin in discussion with journalist Nick Witchell. Today they discuss cybersecurity in the current geopolitical moment. Steve looks at the current security landscape in that context, and touches on how security leaders can help guide their organisations in these turbulent times. Key Takeaways: 1. Boards and CISOs need to be ready to step in with the necessary mitigation measures when increased cyber risks related to geopolitical tensions manifest themselves. 2. Social media presents real advantages, but when it comes to information, users must diligently consider the source. 3. Business leaders have many opportunities to learn from one another and gain support as they move into the future. Tune in to hear more about: 1. Nick Witchell asks Steve Durbin about companies’ overall readiness to address cyber risks in a global context. (4:07) 2. Steve Durbin reflects on misinformation and disinformation in the age of social media. (7:19) 3. Where business leaders can find support. (11:00) Standout Quotes: 1. “There is, I think, probably two things that give me real comfort that we're moving in the right direction. The first thing is that there is an understanding now in the boardroom, that these things are material, and that they have to pay attention to them. And secondly, there is an enthusiasm in the boardroom to be involved in that, because they understand the implications on the things that they measure: risk, market cap, shareholders, and so on. So I think we're in probably a much better place to deal with some of these challenges this year than perhaps when we last spoke 12 months ago.” - Steve Durbin 2. “Personally, what I like to do is to take a number of different data points. So don't become over reliant on one particular feed, because again, within the social media space, if you think about it, you tend to lead always to people who are perhaps of a similar mind to yourself.
And I think in the sorts of times that we're in at the moment, it's very important for everybody to try and get a balanced perspective, a balanced view.” - Steve Durbin 3. “I think if I were to sum up the major role of the ISF at the moment, it's in that one word, support.” - Steve Durbin Mentioned in this episode: Threat Intelligence: React and prepare, Rehearsing Your Cyber Incident Response Capability During Periods of Instability, CISOs Role During Periods of Instability, ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:14:27


S24 Ep2: Beau Lotto - Thriving in Uncertainty

1/16/2024
At the 2023 ISF Congress in Rotterdam, Steve sits down with keynote speaker Beau Lotto. Beau holds a PhD in Cellular and Molecular Developmental Neuroscience, and he teaches organizations how to apply scientific truths about perception to adapt and thrive in an ever-changing world. He has helped brands like Cirque du Soleil, Microsoft and L’Oréal gain valuable, science-backed insights into their businesses and customers. Beau talks to Steve about how security leaders can change their way of being to effect change at their organisations, and he offers practical ways of incorporating play and diversity to improve team outcomes. Key Takeaways: 1. Business leaders have a unique role when it comes to establishing business culture. 2. What you do is your function; what you are about is your business. 3. Authenticity is critical for buy-in, and buy-in is critical for success. 4. Seemingly small changes in initial conditions are powerful and can yield massive results. 5. Embracing uncertainty is a winning strategy, and it can be fun. Tune in to hear more about: 1. The Host Effect (1:40) 2. Measuring Relevance (3:15) 3. Leading with Authenticity (8:55) 4. Transforming Initial Conditions (12:00) 5. Play as a Mindset (14:30) Standout Quotes: 1. “... have you noticed that the personality of the party is very much the personality of the host? … it's because the brain infects and is infected by other people.” 2. “Your business is how and why are you relevant … what we do with my Lab of Misfits is we then measure their actual relevance on the audience. So we, in that case, we would measure the brain activity of the people during the performance, what happened to people before and after. So now what they can do is take ownership of what we call the human truth of value that they're actually in the business of.” 3. “You can be authentic in any situation … You don't need others to shape that for you; that's intrinsic within you.
And that gives you that sense of being proactive, which is essential in times of uncertainty, which is what we're facing all the time.” 4. “...if you look at, say, the initial conditions of the solar system, you have Mars, let’s take Mars. If you were to alter its proximity to the Sun by one millimeter, make it a little one millimeter closer, in 10% of models the whole solar system collapses. If you take Mars and put it one millimeter further away, in 10% of the models the whole solar system explodes and goes off into space … so small change in the initial conditions can have massive transformative effects.” 5. “... play is actually an evolved brain state where we actually choose uncertainty. We don't avoid it. We actually want it. It's not that we hate it, but we're going to turn down our loathing of it. We actually seek it out. Right? And you know, not knowing who's going to win the Rugby World Cup is why it's fun to watch.” Mentioned in this episode: Leadership Insights: Unlocking the business value of security, ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:27:11


S24 Ep1: A sneak peek into Season 24

1/9/2024
Today, we’re previewing some of the best moments from the episodes you’ll be hearing from the podcast this season. Most of these were recorded at ISF Congress 2023 in Rotterdam this past October, with a few others in the mix. Mentioned in this episode: ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:15:51


S23 Ep8: Emerging Threats for 2024

12/19/2023
Today’s episode features Steve’s recent presentation to ISF members on Emerging Threats for next year. He offers a picture of the evolution of cyber, and the resulting challenges and opportunities for security professionals, in 2024. Mentioned in this episode: ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:21:01


S23 Ep7: AI and US: The path ahead

12/12/2023
Today’s episode features an interview Steve gave for Infosys, the Indian multinational IT consulting firm. Steve addresses how organisations can adopt AI securely, considers how this new technology could change the way we work, and looks at how businesses can make themselves resilient in the face of emerging threats. Mentioned in this episode: ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:37:58


S23 Ep6: Data Dilemmas: Outsmarting the perils of AI

12/5/2023
ISF CEO Steve Durbin and producer Tavia Gilbert discuss Artificial Intelligence and the Board — what they need to know, updates on evolving regulations in the EU and the US, and how security professionals can best communicate with organisational leadership on this topic. Mentioned in this episode: ISF Analyst Insight Podcast, European Union AI Act, President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Duration:00:27:53