
Beneficial Intelligence

Technology Podcasts

A weekly podcast with stories and pragmatic advice for CIOs and other IT leaders.

Location:

Denmark

Language:

English


Episodes

Other People's Failures

12/10/2021
In this episode of Beneficial Intelligence, I discuss other people's failures. They can affect you, as the recent Amazon Web Services outage showed. Cat owners who had trusted the feeding of their felines to internet-connected devices came home to find their homes shredded by hungry cats. People who had automated their lighting sat in darkness, yelling in vain at their Alexa devices for more light. More serious problems also occurred: students couldn't submit assignments, Ticketmaster couldn't sell Adele tickets, and helpless investors watched their stocks tank while being unable to sell.

On a personal level, this dependency is an occasional inconvenience. But for companies, it is a problem. When you buy cloud services directly from Amazon, Microsoft, or Google, at least you know what you depend on and can take your own precautions. But your SaaS vendors also depend on one of the big three cloud providers, and you will find that most of them consider using two different data centers with the same cloud vendor to be plenty of redundancy. It isn't.

Another problem is your "smart" devices, which all communicate via the internet with a server controlled by the device vendor. The vendor is running that server in one of the three big clouds. That means an Amazon outage can lock you out of your own building.

Some of your systems are business-critical. For these, you need to find out what your vendors depend on. Otherwise, you will be blindsided by other people's failures.

------ Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:07:35


People Shortage

11/26/2021
In this episode of Beneficial Intelligence, I discuss the people shortage. It isn't real. Complaining about a lack of people is what is known as a "half argument": you say what you want, but not what you are willing to give up. That's like a politician promising to build a new public hospital but refusing to say where the money will come from. The full argument for missing people is "we cannot get the people we want on the terms we are willing to offer."

If you had a crucial project that would make the business millions of dollars, you would be able to find the resources you need. You could simply offer three times the market rate, full benefits, and a 40-hour workweek with no overtime.

Allocating resources is a basic leadership task. You rank your tasks and projects in order of descending business value and allocate available resources to the most valuable. It doesn't make sense for a CIO to say that the organization is "missing" a hundred programmers. A full argument would be that if we had a hundred extra programmers, we could build a specific IT system that is less valuable than all the current projects.

There might be a real shortage of money or copper or clean water. But there is no shortage of people.

--- Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:05:43


Data Hoarding

10/29/2021
In this episode of Beneficial Intelligence, I discuss data hoarding. Gathering too much data costs money and doesn't add value. We think we need all this data to train our AI, but hoarding data is the wrong place to start. Some say that "data is the new oil," but that is a counterproductive and dangerous metaphor with no fewer than four problems. Gathering data in the hope of extracting value is putting the cart before the horse.

The right way to work with data is to start with a business goal and a hypothesis about which data might provide insight. Gather the data, run the experiment, and evaluate. Don't just hoard data.

Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:07:28


Monoculture

10/15/2021
In this episode of Beneficial Intelligence, I discuss monoculture. Just like in farming, monoculture is efficient and dangerous. Modern farmers will plant hundreds or thousands of acres with the same crop. That gives efficiency, because the entire crop will respond identically to fertilizer and pesticides. It also means that the entire harvest will be lost if some new pest or disease suddenly appears. Monoculture cost more than a million lives in Ireland in the Great Famine of the 1840s.

There is also monoculture in your IT landscape. If all your systems have the same hardware and run the same software, they will all be vulnerable to the same bugs and malware. Your servers are probably many different types because they have been added over the years. But if you run the same virtualization software on most of them, your entire infrastructure is vulnerable to a bug in your virtualization. Your workstations are a monoculture, and if something takes out Microsoft Windows, you are dead in the water.

But the really dangerous monoculture is found in your network equipment. You probably buy all your gear from one vendor so your network people only need one skill stack. But that means that a single vulnerability can expose your entire network. You don't want to put all your eggs in one basket.

If you are concerned with robustness and business continuity, beware of monoculture.

Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:09:04


Trust, but Verify

10/1/2021
In this episode of Beneficial Intelligence, I discuss trusting your vendors. You trust them to make their best effort at producing bug-free code. You probably trust that their software will perform at least 50% of what they promise. You might trust them to eventually build at least some of the features on their roadmap. But can you trust them not to build secret backdoors into the software they give you?

Snowden showed we cannot trust any large American tech company, because they send our data straight into the databases of the National Security Agency. Apparently, you cannot trust Chinese smartphone vendor Xiaomi either. The Lithuanian National Cyber Security Centre just published the results of their investigation, and they recommend that people with such phones replace them with non-Xiaomi phones "as fast as reasonably possible." It turns out these phones send some kind of encrypted data to a server in Singapore, and that they have censorship built in: phrases such as "Free Tibet" simply cannot be rendered by the browser or any other app. Right now, that feature is not active in Europe, but it might be enabled at any time.

During the nuclear disarmament discussions between the United States and the Soviet Union in the 1980s, Ronald Reagan was fond of quoting a Russian proverb: Doveryay, no proveryay - trust, but verify. The ability of both parties to verify what the other was doing became a defining feature of the eventual agreement.

In software, we can verify open source. If you cannot find open source software that does what you need, many enterprise software vendors will make their source code available to you under reasonable non-disclosure provisions.

In your organization, there should be both trust and verification. Don't simply trust your software vendors. Trust, but verify.

Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:09:34


Time to Recover

9/17/2021
In this episode of Beneficial Intelligence, I discuss time to recover. The entire network of the justice ministry of South Africa has been disabled by ransomware, and they don't know when they'll be back. Do you know how long it would take you to recover each system your organization is running?

When you have an IT outage, what the business wants most is a realistic timeline for when services will be back. If IT can confidently tell them that it will take 72 hours to restore services, the business knows what they are dealing with. They can inform their stakeholders and make informed decisions about the areas in which manual procedures or alternative workflows should be implemented. The worst thing IT can do in such a case is to keep promising "a few hours" for days in a row.

In the 1980s, I was working for Hewlett-Packard. They had a large LED scrolling display mounted over their open-plan office. The only time it was ever used was when their main email and calendar system was unexpectedly down, telling everyone when it would be back up. In the 1990s, I was doing military service in the Royal Danish Air Force as a Damage Control Officer. After an attack, I had to tell the base commander how much runway we had available. I had planned our reconnaissance and could confidently say that I would know in less than 28 minutes after the all-clear. In the early 2000s, I was working with database professionals. These people spent much of their time preparing to recover their databases. They had practiced recovery many times and knew exactly how long it would take.

As the CIO, take a look at the list of your systems. It needs to include the expected time to recover for every system. The technical person for each system should verify that this time has been tested recently, and the business owner should verify that this time is acceptable. If you don't have a documented time to recover per system, you need to put your people to work to create it.
Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:08:28


Goal Fixation

9/3/2021
In this episode of Beneficial Intelligence, I discuss goal fixation. Richard Branson almost didn't make it back from space. His pilots had a problem and flew very close to the limit. They should have aborted. But the future of commercial spaceflight was resting on their shoulders. They were fixated on the goal, and that causes problems.

We only know this because the authorities noticed the flight was outside its designated airspace: stronger winds than expected had given the flight a different profile. The pilots got a red ENTRY GLIDE CONE WARN light. That means the spacecraft is so far from the planned course that it might not reach the place it has to be to glide to the landing site. The correct checklist approach is to abort the mission. But this was a highly billed first commercial flight with the founder on board. The pilots pressed on. They managed to go to space and return safely. But they were dangerously close to the edge.

People die because they get fixated on the goal and push on. Mountaineers continue towards the summit after the safe turnaround time, and pilots fly into bad weather. Some of the well-known people we've lost to pilot goal fixation include basketball legend Kobe Bryant and Polish President Lech Kaczyński.

We see the same thing in failed IT projects. Multi-year, million-dollar projects keep collapsing ignominiously without anything to show for all the effort. This happens due to goal fixation. Tragically, the problem is completely invisible to project sponsors, who feel part of their reputation is on the line as sponsors of the project. It is also invisible to the program management and project leaders inside the project.

There are two solutions. One is to listen to the people on the ground. The programmers and testers know when a project will fail. The other is to get an independent outside opinion. That's what I provide to my customers.
If you don't have a process for gathering in-the-trenches information, or an outside advisor, or preferably both, you are likely to fall prey to goal fixation. Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:09:10


Narrow Focus

8/20/2021
In this episode of Beneficial Intelligence, I discuss the narrow focus of IT professionals. This is an unavoidable consequence of the complexity of the technology we use. We've had to learn to give our computers very exact instructions, and that informs our thinking.

The app from my local supermarket is obviously built by people with a narrow focus. If I search for "sugar," the first hit is "pickled cucumber (sugar-free)." The Amazon app, on the other hand, is built by people with a wider focus: whatever you search for, Amazon will always give you a suggestion.

When IT organizations try to hire, they will come up with a long list of technologies and programming languages. Unfortunately, nobody matches the entire list, and no one is hired. Successful organizations instead use recommendations and interviews to find people with energy and a willingness to learn.

In the IT industry, we call something Artificial Intelligence if it can succeed at some very narrow task like recognizing cats in videos. Unfortunately, the word "intelligence" means something much wider to everyone else. When Tesla talks about "autopilot," they mean something that can stay on the road at a constant speed. In a narrow sense, a Tesla has an autopilot. In the wider sense, drivers expect that word to mean a car that drives itself.

A narrow focus is a quality in an IT professional. There is no need to change these people, and they do not get a wider focus by being sent to a User Experience (UX) boot camp or a three-day Product Owner course. Your teams need people with a wider focus, and that's something UX professionals and real product owners from the business can give you. That's why the best IT organizations employ anthropologists to study users. It is your job as an IT leader to ensure you have people with both narrow and broad focus.

Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders.
To get in touch, please contact me at sten@vesterli.com

Duration:00:08:28


Back to the Office

8/6/2021
In this episode of Beneficial Intelligence, I discuss whether you should force people back to the office. This will be your most important leadership decision this year.

Apple told everyone to report back to the office. Apple CEO Tim Cook says that "in-person collaboration is essential to our culture." Google is expecting 20% of employees to work from home in the long term, while Facebook is expecting 50% remote work. The big Wall Street banks, on the other hand, require everyone back in their New York offices five days a week.

Remote working presents two problems: culture and promotion. Deciding on a remote working policy is your most important leadership task right now. Not making a decision and letting people work it out for themselves is the worst option. It will mean that experienced employees stay at home while new hires wander the halls of empty offices, quickly quitting again. And your leadership team will become less diverse.

Your job as a leader is not to be popular. Your job is to make decisions that ensure your organization meets its goals. This year, that is likely to involve forcing some people back to the office.

Beneficial Intelligence is a bi-weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:08:38


Humans and Computers

7/23/2021
In this episode of Beneficial Intelligence, I discuss humans and computers. Jeff Bezos went to space in a fully autonomous, computer-controlled rocket. Richard Branson went to space last week, and he had humans flying his spacecraft.

The Silicon Valley mindset is that you can program or train computers to do anything. However, as the continuing struggle to build truly self-driving cars has shown, some things are still very, very hard for computers. Even Elon Musk, who claims his Teslas are self-driving, has manual controls on his spacecraft, the SpaceX Crew Dragon.

Jeff Bezos remains fully committed to the power of computers, and computers will fire Amazon workers automatically if they don't perform as the algorithm expects. Richard Branson, on the other hand, is an entrepreneur. He has founded dozens of companies and made them successful by believing in humans. He hires good people, gives them resources and direction, and lets them do their thing.

The first human spaceflight program of the United States was Project Mercury. NASA initially subscribed to the computer-centric school of thought. But the highly trained astronauts rebelled and demanded a window and manual controls so they could fly the spacecraft if needed. Fortunately, they got their way. On the last Mercury mission, astronaut Gordon Cooper saved his life and the U.S. space program by hand-flying his craft back to Earth after multiple equipment failures.

You can implement IT systems in two ways. Either the computer is in charge, and the human can intervene. Or the human is in charge, and the computer assists. What's your approach?

Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:06:42


Competition

7/9/2021
In this episode of Beneficial Intelligence, I discuss competition. Billionaires Jeff Bezos and Richard Branson are competing over who gets to space first, with both likely to blast off within the next two weeks. Competition is one of the great forces propelling the world forward.

Richard Branson's Virgin Galactic spacecraft is based on SpaceShipOne, which won the Ansari X Prize back in 2004. That prize was for a private spacecraft that could go to the edge of space twice in two weeks. It seemed impossible, but aerospace genius Burt Rutan, with funding from Microsoft billionaire Paul Allen, claimed the prize. In the early part of the 20th century, the Schneider Trophy similarly spurred innovation in aviation. The 1931 winner became the basis of the Spitfire fighter aircraft that won the Battle of Britain in 1940. Self-driving cars come from the DARPA Grand Challenge. In 2004, no car could autonomously drive more than 7 miles. The next year, competition, especially between Stanford University and Carnegie Mellon University, resulted in their two cars completing the 150-mile route within 9 minutes of each other.

If you have clear competitors in your space, identify them. Have someone examine your competitors' products, and share that knowledge with the entire team. Making sure that everyone knows where the bar is can release energy and creativity that will allow you to leapfrog the competition. If you don't have a good external competitor to benchmark yourself against, commission two competing products inside your organization. That costs more money, but it releases energy and gives you speed and creativity. Once a winner has been declared, incorporate the best ideas from the losing project into the winning one.

Competition has been a great force for progress all through human history. Use it in your organization for increased creativity, energy, and speed.

Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders.
To get in touch, please contact me at sten@vesterli.com

Duration:00:10:18


Pseudo-Security

6/25/2021
In this episode of Beneficial Intelligence, I discuss pseudo-security. The lock on your front door is not secure. It takes an experienced locksmith an average of 7.1 seconds to manually pick an average door lock, and it's even faster with a "pick gun." If locks are so bad, why don't we have even more burglaries? Because your total security does not depend on the lock alone. A would-be burglar has to contend with the risk of somebody being home, of neighbors noticing, of a camera on someone else's house recording the break-in, and of the police catching and jailing him.

Like locks, passwords also do not protect you. At least one of your thousands of users has re-used the company password somewhere else. That means it will end up in one of the large hacker databases where identities can be bought for pennies. Then a hacker can sit comfortably in a basement in Moscow and run software to try thousands of username/password combinations with zero chance of being caught.

In the military, I learned that barbed wire that was not constantly observed was dangerous pseudo-security. You think you are protected, but when the enemy attacks, you will discover that your wire has long since been cut.

Barbed wire cannot stand alone. Your door lock cannot stand alone. Your passwords cannot stand alone. You need to complement password security with two-factor authentication, IP address verification, time restrictions, network segmentation, anomaly detection, and all the other tools in the IT security toolbox. Passwords alone are pseudo-security.

Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:07:53


Good Enough

6/18/2021
In this episode of Beneficial Intelligence, I discuss how to choose what is good enough. How do you know when something is good enough? That requires good judgment, which is unfortunately in short supply. IT used in aviation, pharma, and a few other life-and-death industries is subject to strict standards. There we can lean on standards like the GxP requirements that everyone in the pharma industry loves to hate. In the general IT industry, however, we have lots of standards, but none of them are mandatory. That's why each week seems to bring a new horror story of an organization that believed their IT was good enough and found out it wasn't.

Southwest Airlines learned that first-hand this week. On Monday, they couldn't fly because the connection to their weather data provider was down. On Tuesday, they couldn't fly because the connection from airports to the central reservation system was down. If you don't know who is supposed to be on the plane, you can't fly. They ended up canceling more than 800 flights over two days.

Obviously, the CIO of Southwest Airlines decided that a single network was good enough. That can be a valid business decision. But you need to make a full comparison. On one side is the cost of redundant network connections and data sources. On the other side is the loss resulting from canceling 800 flights and delaying thousands more. This outage probably cost them around $20 million. If you believe the risk of a $20 million network outage is 0.1%, standard risk calculation says you can only spend $20,000 to avoid it. But if the risk of an outage is 5%, it is worth spending $1 million on redundant connections or other alternatives.

Everybody in your IT organization who makes major architectural decisions has to know what constitutes "good enough." There might be hard regulatory requirements about data security, privacy, and access control. But there are also judgment calls based on estimates of risk probability and impact.
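The expected-loss reasoning above can be sketched in a few lines. The dollar amount and probabilities are the illustrative figures from the episode, not real Southwest data:

```python
def justified_spend(loss: float, probability: float) -> float:
    """Standard risk calculation: expected loss = potential loss x probability.
    Spending more than this on mitigation is not justified by this risk alone."""
    return loss * probability

outage_cost = 20_000_000  # illustrative cost of a two-day outage

print(justified_spend(outage_cost, 0.001))  # at 0.1% risk: about $20,000
print(justified_spend(outage_cost, 0.05))   # at 5% risk: about $1,000,000
```

The same two-line function also works in the other direction: if a mitigation costs more than the expected loss, the risk may be acceptable as-is.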
As CIO or CTO, it is your job to teach your organization how to determine what is good enough. Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:07:55


Unnecessary Roadblocks

6/4/2021
In this episode of Beneficial Intelligence, I discuss unnecessary roadblocks. Amazon has a problem finding enough workers, and they have decided to get rid of an unnecessary roadblock: they will no longer test people for marijuana use. As marijuana becomes legal in more and more states, Amazon decided they only need to test truck drivers and forklift operators, not everyone.

IT organizations are also always complaining that they can't find the people they need. There are three reasons for this: bad business cases, unrealistic requirements, and unnecessary roadblocks. If you don't have a good business case, you can't pay what talent costs. In that case, it's better for the world that IT professionals go somewhere they can create more value. If you are requiring a laundry list of database architectures, programming languages, and architecture patterns, you are signaling to prospective applicants that you don't really know what you want. That's a turnoff for most professionals. Finally, you might have set up roadblocks that keep people from applying. Mandatory drug testing is one, requiring security clearance for everyone is another, and requiring a certain education is a third. Requiring a college degree for an IT position is simply an outdated practice. Many good IT professionals are self-taught, and spending two years working for a scrappy startup teaches you much more than four years of college does.

The problem with talent roadblocks is that they are glaringly obvious to the potential applicant but invisible inside the organization. If you have a hard time finding the talent you need, have someone external identify your unnecessary roadblocks.

Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:09:08


Expectation Management

5/28/2021
In this episode of Beneficial Intelligence, I discuss expectation management. I was doing a small renovation project in our summer cottage, and I needed a special type of hinge. I found it on the website of our local building supplies store, but when I got to the store, it wasn't there. It turned out that this store was part of a co-branded chain. They had an aspirational website showing all the items a shop could potentially carry, but each shop actually sold only its own idiosyncratic collection of items. The store did not meet my expectations, and I will not go back.

You also want to meet or exceed the expectations of the users of your IT systems, whether they are internal users, external partners, or customers. The problem with achieving that is that IT professionals are notoriously bad at putting themselves in the users' place. The secret to meeting user expectations is to ask real users. You don't need a fancy usability lab to do that. Usability guru Jakob Nielsen has popularized the term "discount usability engineering," where you grab five random people in the hallway (outside the IT department) and show them your system. His research backs his claim that these five people will find almost as many issues as a much larger and more professional study.

As CIO or CTO, you have the ultimate responsibility for the success of all projects. That means you have to remind each project to communicate continually to the entire organization what the project will achieve. In that way, you can manage expectations and make your projects successful.

Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:07:50


Gaming the Metrics

5/7/2021
In this episode of Beneficial Intelligence, I discuss gaming the metrics. We measure things to be able to manage them. But when we start using metrics to reward individual employees and teams, people will start gaming them. Newton's third law for business says that for every system the organization implements, the employees will implement an equal and opposite workaround that negates the system.

Amazon is managing a huge workforce of delivery drivers. To ensure they drive safely, Amazon requires drivers to be logged in to a mobile phone app. The app uses the accelerometer to measure acceleration, braking, and other parameters and gives each driver a score. But because Amazon is also ruthlessly pushing their small subcontractors to deliver a lot of packages very quickly, the delivery companies have started instructing their drivers to game the metrics. Drivers say they are instructed to drive very carefully for the first two hours each day to achieve a good score. After that, they are instructed to put their phones into airplane mode and drive like the devil for the rest of their 10-hour shift to achieve the number of deliveries required.

Andy Grove, who used to be the CEO of Intel back when they were successful, was known for understanding productivity. He formulated the rule that for every metric, there should be another "paired" metric that addresses the adverse consequences of the first.

As an IT leader, getting your measurements right is one of the most important parts of managing your IT organization. If your metrics are used in any way to praise or blame individuals and groups, you can be sure people will try to optimize for them. If you are not carefully establishing paired metrics, you can be sure your metrics are being gamed.

Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:10:31


Accidental Publication

4/30/2021
In this episode of Beneficial Intelligence, I discuss accidental publication. There are two ways organizations lose data: through break-ins and through carelessness. It is hard to protect your systems against determined hackers, but it should not be hard to protect yourself against carelessness. Strangely, carelessness is just as big a source of data leaks as determined hacker attacks.

Some accidental losses are the result of individual failures to follow procedures. The British MI6 is famous for losing classified laptops in taxis and having them stolen from unattended cars. In Denmark, the health authorities produced two unencrypted CD-ROMs with data on every Danish citizen and their illnesses. They were accidentally sent to the Chinese embassy instead of the national statistics authority.

Other losses happen because organizations accidentally publish data to the entire world. By now, every tech journalist who sees a ?id=48375 in a web address will try to change the number to something else. That's how the State of California accidentally published information about all donations Californians made to NGOs and political organizations. Another way is through badly secured APIs. A 19-year-old college student shopping for student loans found he could check whether he qualified for a loan by simply entering his name, address, and date of birth. Looking at the web page source, he quickly discovered that the website was calling an unsecured API at credit scoring company Experian.

As a CIO or CTO, you can no longer allow the security strategy of your IT organization to depend on a lack of IT skills in the general public. Are you sure every system your organization rolls out has been subject to a security review? If not, you might be the next organization to find that you have accidentally published confidential data.

Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders.
To get in touch, please contact me at sten@vesterli.com
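The ?id=48375 problem is known as an insecure direct object reference: the server trusts the identifier in the URL instead of checking who is asking. A minimal sketch of the vulnerable pattern and its fix, using an invented in-memory record store (all names and data here are illustrative only, not from any real system):

```python
# Invented in-memory store standing in for a real database.
RECORDS = {
    48375: {"owner": "alice", "donation": "recipient A"},
    48376: {"owner": "bob", "donation": "recipient B"},
}

def get_record_insecure(record_id):
    # Vulnerable: anyone who changes the id in the URL sees someone else's record.
    return RECORDS[record_id]

def get_record_secure(record_id, current_user):
    # Fixed: the server verifies that the authenticated requester owns the record.
    record = RECORDS[record_id]
    if record["owner"] != current_user:
        raise PermissionError("not authorized for this record")
    return record
```

The essential point is that authorization must be enforced server-side on every request; obscuring or randomizing ids is not a substitute.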

Duration:00:07:55


Irrational Optimism

4/23/2021
In this episode of Beneficial Intelligence, I discuss irrational optimism. IT people are too optimistic. It is a natural consequence of our ability to build something from nothing. Our creations are not subject to gravity or other laws of physics. A builder cannot decide halfway through a construction project to swap out the foundation, but IT regularly changes frameworks mid-project.

Similar optimism informs our project plans. For some reason, we assume that everything will go the way we plan it. Fred Brooks first wrote about programmer optimism in his classic "The Mythical Man-Month" back in 1975. He points out that there is indeed a certain probability that each task will be completed on schedule. But because modern IT projects consist of hundreds of tasks, the probability of every one going right is low. Even with an unrealistically high 99% chance of success per task, just 100 tasks reduce the probability that everything finishes on schedule to 37%.

Sadly, our irrational optimism also extends to the business cases we present to management for our projects. I am regularly presented with drafts of investor presentations that hopeful startups want to pitch. The optimism is palpable, but there is never any realistic consideration of all the things that can go wrong.

As a CIO or CTO, you need to make sure you have some pessimists on your team. Not the kind of pessimists you find in Legal and Compliance, who fight tooth and nail to ensure no new project ever gets off the ground, but the kind of pragmatic pessimist who can look at your projects and business plans and tell you what might go wrong. These people are rather rare in IT organizations, which is why this is one of the things I help my customers with. Unless you add a counterweight to your IT organization, your projects will continue to fail due to irrational optimism.
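The 37% figure follows from simple compounding of per-task probabilities. A minimal sketch of the arithmetic, assuming (as Brooks does) that tasks succeed or slip independently:

```python
def on_schedule_probability(per_task_success: float, num_tasks: int) -> float:
    """Probability that every task finishes on schedule, assuming
    independent tasks with the same per-task chance of success."""
    return per_task_success ** num_tasks

# 100 tasks, each with a 99% chance of finishing on time:
# 0.99 ** 100 is roughly 0.37, i.e. a 37% chance the whole plan holds.
print(round(on_schedule_probability(0.99, 100), 2))
```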
------ Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:08:05


Risk Aversion

4/16/2021
In this episode of Beneficial Intelligence, I discuss risk aversion. The U.S. has stopped distributing the Johnson & Johnson vaccine. It has been given to more than 7 million people, and there have been six reported cases of blood clotting. Here in Denmark, we have stopped giving the AstraZeneca vaccine because of one similar case. That is not risk management, that is risk aversion.

There is a classic short story from 1911 by Stephen Leacock called "The Man in Asbestos." It is from the time when fire-resistant asbestos was considered one of the miracle materials of the future. The narrator travels to the future to find a drab and risk-averse society where aging has been eliminated together with all disease. People can only die from accidents, which is why everybody wears fire-resistant asbestos clothes, railroads and cars are outlawed, and society has become completely stagnant.

We are moving in that direction. Large organizations have departments of innovation prevention, often called compliance, risk management, or QA. They point out all the risks, and it takes courageous leadership to look at the larger benefit and overrule the objections of the naysayers. Smaller organizations can out-innovate larger ones because they spend their leadership time on innovation and growth instead of on fighting organizational units dedicated to preserving the status quo. As an IT leader, it is your job to make sure your organization doesn't get paralyzed by risk aversion.

------ Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com

Duration:00:05:23


Biased Data

4/9/2021
In this episode of Beneficial Intelligence, I discuss biased data. Machine learning depends on large data sets, and unless you take care, ML algorithms will perpetuate any bias in the data they learn from. The famous ImageNet database contains 14 million labeled images. However, 6% of these have the wrong label. The labels are provided by humans who are paid very little per image, so they work very fast. Unfortunately, as Nobel Prize winner Daniel Kahneman has shown, when humans work fast, they depend on their fast System 1 thinking, which is very prone to bias. Thus, a woman in hospital scrubs is likely to be classified as "nurse," while a man in the same clothes is likely to be classified as "doctor."

Google Translate showed its bias when translating from Hungarian. Hungarian has only a gender-neutral pronoun, but the English translation was given a gendered one. The original gender-neutral phrases became "she does the dishes" and "he reads" in English.

As CIO or CTO, you need to make sure somebody ensures the quality of the data you use to train your machine learning algorithms. If you don't have a Chief Data Officer, maybe you have a Data Protection Officer who could reasonably be given this purview. But you cannot foist this responsibility on individual development teams under deadline pressure. It is your responsibility to ensure that any machine learning system is learning from clean, unbiased data.

------ Beneficial Intelligence is a weekly podcast with stories and pragmatic advice for CIOs, CTOs, and other IT leaders. To get in touch, please contact me at sten@vesterli.com
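One cheap sanity check against labeler bias is to audit label frequencies against a sensitive attribute before training. A minimal sketch with invented toy data (the records and field names are illustrative only, not from any real dataset):

```python
from collections import Counter

# Toy annotated samples; in a real audit these would be drawn from your training set.
samples = [
    {"attire": "scrubs", "gender": "f", "label": "nurse"},
    {"attire": "scrubs", "gender": "f", "label": "nurse"},
    {"attire": "scrubs", "gender": "m", "label": "doctor"},
    {"attire": "scrubs", "gender": "m", "label": "doctor"},
    {"attire": "scrubs", "gender": "m", "label": "nurse"},
]

def label_counts_by(samples, attribute):
    """Count how often each label occurs for each value of the given attribute."""
    return dict(Counter((s[attribute], s["label"]) for s in samples))

# Identically dressed people labeled differently by gender is a red flag.
print(label_counts_by(samples, "gender"))
```

A skew like this doesn't prove the labels are wrong, but it tells you where a human reviewer should look before the data reaches a training pipeline.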

Duration:00:07:29