After a long delay, I am back on track. I still keep returning to the same question: how can one resolve conflicts when we have no exact measures of them?
But to be less boring and repetitive, I’d like to discuss some of what I learned from a great paper by Bruce Schneier, titled The Psychology of Security.
He treats security as a subjective, personal feeling that involves trade-offs with other requirements and needs. Security engineers or cryptography specialists may not feel the need to look at security from this viewpoint. But as a software developer or designer, I want users to actually use the security mechanisms built into the software, so the psychology of security is of real interest and importance.
The first interesting argument in the paper is about the difference between actual security and the feeling of security. As humans, we may mislead ourselves into feeling secure when no real security measure is in place, or we may live in constant fear of unknown attacks. “The feeling and reality of security are certainly related to each other, but they're just as certainly not the same as each other.”
Another discussion in the paper concerns the way people make security decisions. Schneier concludes that approaches for designing security systems and deciding on security trade-offs need to take the feeling and psychology of security into account. Humans do not evaluate security trade-offs mathematically, by examining the relative probabilities of different events. Instead, they use shortcuts, stereotypes, and biases, generally known as heuristics.
Empirical studies of trade-off decisions show that, contrary to utility theory, people assign subjective values to gains and losses; this behavior is described by prospect theory. I repeat an example from the paper here:
Subjects were divided into two groups. One group was given the choice of these two alternatives:
- Alternative A: A sure gain of $500.
- Alternative B: A 50% chance of gaining $1,000.
The other group was given the choice of:
- Alternative C: A sure loss of $500.
- Alternative D: A 50% chance of losing $1,000.
These two trade-offs aren't the same, but they're very similar. And traditional economics predicts that the difference doesn't make a difference. Traditional economics is based on something called "utility theory," which predicts that people make trade-offs based on a straightforward calculation of relative gains and losses. Alternatives A and B have the same expected utility: +$500. And alternatives C and D have the same expected utility: -$500. But experimental results contradict this. When faced with a gain, most people (84%) chose Alternative A (the sure gain) of $500 over Alternative B (the risky gain). But when faced with a loss, most people (70%) chose Alternative D (the risky loss) over Alternative C (the sure loss).
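The expected-utility arithmetic in the quoted example is easy to verify. Here is a minimal sketch (the function name is my own, not from the paper):

```python
def expected_utility(outcomes):
    """Expected utility: probability-weighted sum over (probability, value) pairs."""
    return sum(p * value for p, value in outcomes)

# Alternative A: a sure gain of $500
a = expected_utility([(1.0, 500)])
# Alternative B: a 50% chance of gaining $1,000 (otherwise nothing)
b = expected_utility([(0.5, 1000), (0.5, 0)])
# Alternative C: a sure loss of $500
c = expected_utility([(1.0, -500)])
# Alternative D: a 50% chance of losing $1,000 (otherwise nothing)
d = expected_utility([(0.5, -1000), (0.5, 0)])

print(a, b, c, d)  # 500.0 500.0 -500.0 -500.0
```

Under utility theory A and B are equivalent (+$500 each), and so are C and D (-$500 each), which is exactly why the observed preferences contradict it.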
In addition, people's choices are affected by how a trade-off is framed. When a choice is framed as a gain, people tend to be risk-averse; conversely, when it is framed as a loss, people tend to be risk-seeking. This is called the framing effect.
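Prospect theory captures this asymmetry with a value function that is concave for gains and convex (and steeper) for losses. A small sketch, using the standard Kahneman–Tversky parameter estimates (α ≈ 0.88, λ ≈ 2.25) and deliberately ignoring probability weighting for simplicity:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains,
    convex and steeper for losses (loss aversion factor lam)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# Gain frame: sure $500 vs. 50% chance of $1,000
sure_gain = prospect_value(500)
risky_gain = 0.5 * prospect_value(1000)
assert sure_gain > risky_gain   # sure gain feels better: risk-averse

# Loss frame: sure -$500 vs. 50% chance of -$1,000
sure_loss = prospect_value(-500)
risky_loss = 0.5 * prospect_value(-1000)
assert risky_loss > sure_loss   # gamble feels less bad: risk-seeking
```

With the same dollar amounts on both sides, the curvature of the value function alone reproduces the 84%/70% pattern from the experiment: prefer the sure gain, prefer the risky loss.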
These are interesting psychological results. As humans, we carry a very different model of the world in our minds: we keep mental budgets for security and privacy, and we make decisions in ways that machines would not.
Now, another question: should we incorporate these aspects into decision-making machines and algorithms?