Heuristics and Biases
This week, we are going to look at how individual human beings make decisions in general, and how we make decisions in cybersecurity situations specifically.
To help us understand these decisions, we are going to talk about cryptography and encryption as a specific example of a cybersecurity decision. We will also talk about ways people use encryption to protect themselves, and why it hasn’t solved all of our cybersecurity problems. If you aren’t familiar with encryption, then read this basic primer and the Wikipedia pages on encryption and cryptography. They are an interesting read. If you are familiar with encryption, feel free to skip those readings.
In general, encryption works, and it works well. When used properly, it is very difficult, effectively impossible, to break; it is one of the biggest successes in modern cybersecurity technology. So why hasn’t it solved all of our problems? The basic reason is that encryption needs to be used by human beings, and human beings don’t always “use it properly”. Let’s spend some time looking at human beings and cryptography as a way of thinking through the challenges of having humans involved in cybersecurity.
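To make “used properly” concrete, here is a minimal sketch of symmetric encryption done by the book, assuming Python and the third-party cryptography package (this sketch is my illustration and is not part of this week’s readings). Notice that the mathematical steps are one-liners; everything fragile is on the human side of the keyboard.

```python
# A minimal sketch of symmetric encryption with the "cryptography" package
# (pip install cryptography). Fernet provides authenticated encryption, so
# a tampered message or a wrong key fails loudly instead of silently.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                   # a fresh random key
token = Fernet(key).encrypt(b"meet at noon")  # ciphertext, safe to transmit
assert Fernet(key).decrypt(token) == b"meet at noon"

# The failure modes are human ones: losing the key, sending it alongside
# the ciphertext, or using a guessable key. Without the right key, the
# ciphertext is useless:
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("Wrong key: decryption refused.")
```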
There is a paper from 25 years ago called “Why Johnny Can’t Encrypt” that looks at a technology called PGP (Pretty Good Privacy) for secure email. Read the first 7 pages (up until “Cognitive Walkthrough”), which are a good discussion of the relationship between usability and security, and help explain a lot of core ideas about why people might not use encryption properly. Then briefly skim the rest of the paper to see what issues the authors found 25 years ago. Are things better now?
To understand how most people think about encryption now, read this academic paper about Mental Models of Encryption. You can skim as much as you want, but focus on section 3, which describes how people think about encryption and how they think encryption is used. Which mental model do you have? Or do you have a better understanding of encryption?
Next, we are going to look more generally at how human beings make decisions, and why they don’t always make the most “rational” decisions. We are going to start by reading an article or two that I wrote about how people make cybersecurity decisions. Start by reading this article in IEEE Security and Privacy magazine about Folk Security. That article summarizes some research I did about how people make security decisions in general. I also recommend you read, or at least skim, the original research paper that describes these eight mental models of security.
Bruce Schneier is another author who has written multiple books on cybersecurity and the psychology of cybersecurity. He summarizes some of the key background on how people think, and its relationship to cybersecurity, in a two-part series of blog posts. Please read both posts this week: Post 1 and Post 2.
These two essays describe a large number of “heuristics and biases” in the way we (human beings) think about security and risk. Some of these have names, such as the “availability heuristic” or the “control bias”. Others don’t have clear names but are still important, like the idea that we are especially attuned to risks involving people. As people make security decisions, all of these heuristics and biases come into play and complicate the way we make security decisions.
To think about how these heuristics and biases affect security decisions, I want you to think about another common cybersecurity decision made by end users: responding to a popup warning. Imagine you are designing a computing system that has to issue a warning to the user: the user is visiting a website that you suspect is dangerous, and you want to create a popup warning them. You don’t want to prevent the user from visiting the website, since you might be wrong about how dangerous it is, but you do want to warn them that it might be dangerous. What heuristic(s) and/or bias(es) might be important as you write your warning? A sketch of what such a warning might look like in code follows below.
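As a purely hypothetical illustration (the function name, wording, and defaults below are invented, not a recommended design), here is what a minimal non-blocking warning might look like using Python’s built-in tkinter dialogs. Even this toy version forces the design choices the assignment asks about: what the text says, which button is the default, and how alarming the framing is.

```python
# A hypothetical sketch of a non-blocking "suspicious site" warning using
# Python's built-in tkinter dialogs. All wording and defaults are invented
# for illustration; each choice interacts with some heuristic or bias.
import tkinter as tk
from tkinter import messagebox

def warn_and_ask(url: str) -> bool:
    """Warn about a suspicious site; return True if the user proceeds."""
    root = tk.Tk()
    root.withdraw()  # no main window, just the dialog
    proceed = messagebox.askyesno(
        title="Possible risk",
        message=(f"The site {url} resembles pages that have stolen "
                 "passwords from other people.\n\nVisit it anyway?"),
        default=messagebox.NO,   # the safer choice is the default
        icon=messagebox.WARNING,
    )
    root.destroy()
    return proceed

if warn_and_ask("example.test"):
    print("User chose to continue to the site.")
else:
    print("User backed away.")
```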
For this week’s summary, identify at least two of these heuristics and/or biases that might help you decide what to say in the warning, or when and how to present the warning to the user. Then explain why you think they are relevant and how they might help.