UNDOING A RULE OF THUMB
40 Books in 2018 / #7
THE UNDOING PROJECT
368pp Allen Lane 2016
This is Draft 6. Perhaps it’s number seven. I’ve lost count.
Why go to so much effort to write a blog post that will be read by a small audience?
Because I’ve been struggling to find the best way to capture Michael Lewis’s book in 1500 words, and to do it justice. Here are some options:
1. The Undoing Project (TUP) is a biography of two brilliant thinkers who helped to re-invent twentieth-century psychology;
2. TUP is the insider view of how creative minds can work together, becoming greater than the sum of their parts;
3. TUP explores a series of academic papers that stretched the boundaries of many other disciplines, and won the Nobel Prize for Economics for good measure;
4. TUP is a snapshot of Israel’s emergence in 1948, and the subsequent Six-Day War (1967) and the Yom Kippur / Arab-Israeli War (1973);
5. TUP contains a collection of cognitive experiments that give us the opportunity to reflect on our own thought processes;
6. TUP explains the foundation of behavioural economics and the UK’s ‘nudge’ Behavioural Insights unit.
The two academics in this story are Daniel Kahneman and Amos Tversky. Lewis describes them as The Outsider and The Insider.
Kahneman spent his childhood as a Jew in Nazi-occupied France, sometimes hiding in a chicken coop, sometimes going to school, careful not to say too much or seem too clever.
From a certain distance, he witnessed a lot of interesting, apparently contradictory, behaviour. Like the SS Officer who saw Kahneman in the street and beckoned him. The soldier showed a photo – probably of his own son – then lifted Kahneman in his arms and hugged him.
Lewis notes that, even at time of writing, Kahneman’s defining emotion is doubt.
By comparison, Tversky is described as incredibly agile and physically courageous. During his Israeli national service, he was awarded a medal by military leader Moshe Dayan, who said:
“You did a very stupid and a very brave thing, and you won’t get away with it again.”
Tversky loved people but had no liking for social norms. He had a gift for doing only precisely what he wanted to do – displayed with beautiful simplicity. As an example, Tversky took his wife to the cinema, decided that he didn’t like the film (she did) so left after five minutes. He went home to watch his favourite TV drama, then returned to collect his wife, saying:
“They’ve already taken my money, should I give them my time too?”
Tversky’s intelligence was astonishing. His peers thought so highly of him that they devised a tongue-in-cheek test for mental capability: Whatever your qualifications, whatever your discipline, the faster you realised that Tversky was smarter than you, the smarter you were.
The Outsider and The Insider. The Holocaust Kid and the swaggering Sabra (slang for a native Israeli). The constant they shared was discovering what made people tick. Or as Tversky said later in his career:
“My colleagues, they study artificial intelligence; me, I study natural stupidity.”
To be less brutal, what interested the pair was people’s inability to face the evidence of their own folly. When we become attached to a theory, we fit the evidence to the theory rather than re-fitting the theory to the evidence.
The best working theory in social science – especially in economics – was that people are rational, or at least decent intuitive statisticians.
Example: What’s more likely in the birth order of children in a family: B-G-B-B-B-B or G-B-G-B-B-G? Obvious, right?
They’re equally likely (assuming boys and girls are equally probable), but because the first sequence doesn’t reflect the proportion of boys and girls in the population, we regard it as less representative. It doesn’t help that the second sequence also looks more ‘random’ than the first.
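Not from the book – just a minimal sketch of my own, assuming each birth is an independent 50/50 event, showing why the two sequences have identical probability:

```python
from fractions import Fraction

# Simplifying assumption: boys and girls are equally likely,
# and each birth is independent of the last.
HALF = Fraction(1, 2)

def sequence_probability(seq: str) -> Fraction:
    """Probability of one exact birth-order sequence, e.g. 'BGBBBB'."""
    return HALF ** len(seq)

print(sequence_probability("BGBBBB"))  # 1/64
print(sequence_probability("GBGBBG"))  # 1/64 – exactly the same
```

Any specific six-birth sequence has probability (1/2)^6 = 1/64; the ‘representative-looking’ one earns no bonus from the laws of chance.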
Kahneman and Tversky published a paper called ‘Subjective Probability: A Judgment of Representativeness’. ‘Subjective Probability’ meaning the odds each of us assigns to a situation when we are more-or-less guessing; ‘Representativeness’ meaning the comparison we make between whatever we’re judging and some model we have in mind.
We replace laws of chance with our own rules of thumb.
I’m no statistician (I wish I knew a lot more than just the likelihood of a coin landing heads or tails), and I only learned about Kahneman and Tversky in the past couple of weeks (seeing the name Michael Lewis on the book’s cover was the only reason for requesting it for my Christmas list).
But I do have a very, very small example of a rule of thumb – and of having it challenged.
A couple of careers ago, I worked in an executive search firm, and had the great good fortune to work with a fast-growth tech Client. On a path to global domination, they were creating and filling senior roles like cells multiplying in a petri dish.
With my ‘vast experience’ in two Fortune 500 tech companies, there was an unspoken assumption – on both sides – that I knew what I was doing.
We had a couple of excellent researchers who were very efficient at uncovering potential candidates. My responsibility was to meet the longlist, interview them using my sector insight and management experience, and create high-impact profiles for a shortlist to present to my hiring Client.
The quality of the candidates was the highest of the highest. But after our first slam-dunk VP appointment (always a good start to a new Client relationship), the next few shortlists didn’t really set anyone alight. Where was I going wrong?
The Client (European HR Director) and I debriefed the process. We reached an uncomfortable truth: I was presenting candidates I liked. That was my unconscious rule of thumb (what Kahneman and Tversky would call a ‘heuristic’). If four candidates had the same skills and knowledge as the brief, and the hiring executive only wanted to see three, the one I didn’t like would be bounced back to the longlist.
Head hunters make the case that they represent the Clients’ values and brands. One of the reasons for outsourcing the work to a third party was to save time by finding people who will ‘fit’. It’s such a subjective decision point.
A year later, the HRD and I gave a presentation to the company’s annual Leadership Conference. Although we still lacked concrete statistics or an algorithm for hiring success, we presented six hard-learned stages to a successful hire. How to get sign-off from finance and the hiring committee; how to build your assessment criteria; how to on-board the person who’s going to solve all your problems…
My singular contribution was another, different heuristic: once you know that a potential candidate’s CV matches the job requirements, you’ll have a point of view within five minutes of meeting. Either way – like or dislike – spend the next 55 minutes trying to prove yourself wrong.
Only then decide whether they go to the next stage. Even if the candidate is (accidentally) John Roberts, Chief Justice of the United States.
All this came flooding back when reading about Kahneman’s early experience as a psychologist in the army during his national service. All those who were ‘called up’ had to be interviewed, to assess where they should be deployed and the likelihood of their success.
Kahneman wanted to avoid having to rely on human judgment. He taught army interviewers how to put questions to minimize “I like him” decisions (now labelled “the halo effect”: the you-went-to-the-right-school-you’ll-be-fine effect). Or as Kahneman says:
“How do we prevent the intuition of interviewers from screwing up their assessment of army recruits? Remove their gut feelings, and their judgment improves.” The interviewers hated it – and yet the Kahneman process, created 60 years ago, is still in place.
Just when I’m feeling a little smug about my ‘insight’ interviewing story – and perhaps impressing you (even a little) – I notice a TUP note about a 1973 paper referenced halfway through Lewis’ book: ‘The Psychology of Prediction’. As Kahneman and Tversky say:
“In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or statistical theory. Instead they rely on a limited number of heuristics.”
In the Prediction paper, they highlight how political analysts, historians, sport commentators and pundits all impose false order upon random events. The headhunting experience and the Leadership Conference presentation did happen, but are my rationale and persuasive narrative real?
Mea culpa. Then and now.
I’ll close this post with a Lewis summary of perhaps Kahneman and Tversky’s finest work together. (Tversky died in 1996; Nobel rules meant he could not share Kahneman’s 2002 Prize for Economics, since the award is never made posthumously.)
By the early 1980s, Tversky had heard repeatedly from economists and decision theorists that he and Kahneman had exaggerated human fallibility. A lot of people had significant investment (aka careers, consulting, and commentary) in the idea that people were rational.
Tversky wanted to demonstrate, once and for all, how rules of thumb can mislead. Here’s the set up for the ‘Linda case’:
Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
The participants were then presented with eight scenarios and asked to what degree Linda resembled the typical member of each one. The outcome was not surprising, but to make his point, Tversky made the test more extreme:
Which of these two alternatives is more probable?
1. Linda is a bank teller
2. Linda is a bank teller and is active in the feminist movement.
(You have reached a view already, even if you’re only skim reading to reach the end of this post.)
They asked 142 undergraduates at the University of British Columbia, Vancouver (where Kahneman was based at the time). 85% chose alternative #2.
Tversky wrote: “This violated the fundamental rule of logic.”
To spell it out: the probability of two events occurring in conjunction is always less than or equal to the probability of either one occurring alone.
Here’s a less emotive example.
Which is more likely:
1. You will have a flat tyre tomorrow morning
2. You will have a flat tyre tomorrow morning and a driver in a silver car will stop to help you.
In this case, it should be evident that (2) is not the more likely outcome.
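To put the conjunction rule in numbers, here’s a quick simulation of my own (the 5% flat-tyre chance and 30% helpful-silver-car chance are invented purely for illustration):

```python
import random

random.seed(42)

# Invented probabilities, for illustration only:
# 5% chance of a flat tyre; given a flat, 30% chance a driver
# in a silver car stops to help.
TRIALS = 100_000
flats = 0
flats_with_help = 0

for _ in range(TRIALS):
    flat = random.random() < 0.05
    helped = flat and random.random() < 0.30
    flats += flat
    flats_with_help += helped

# Every 'flat AND helped' morning is also a 'flat' morning,
# so the conjunction can never be the more frequent event.
assert flats_with_help <= flats
print(f"P(flat) ≈ {flats / TRIALS:.3f}")
print(f"P(flat and help) ≈ {flats_with_help / TRIALS:.3f}")
```

Whatever probabilities you plug in, the combined event is counted only when the single event also occurs – which is the whole of Tversky’s point.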
The 85% error rate at UBC in the Linda-the-bank-teller case showed that people are blind to logic when the information is presented as a story.
The students chose the more detailed description, even though it was less probable, because Option #2 was more ‘representative’.
Kahneman calls that instant form of conclusion 'System 1', in his book Thinking, Fast and Slow. That's my reading for next week.
(1751 words. Thanks for reading.)
For Rutherposts direct to your inbox, subscribe *here*