
Why personal stories trump numbers in global development

In global development, we are always looking for the most scientific way to show what works. Academics, for example, heavily criticised the Millennium Villages Project for its lack of a control group. Simply put, we demand rigorous proof of what does and doesn’t work in development. We want hard numbers backing up studies. Anecdotal evidence is not enough.

These criticisms are valid. Yet there are major reasons why anecdotal evidence of what does and doesn’t work in development still gets traction.

The first reason is simple: human beings are not good at evaluating statistics, percentages and probability. Dan Gardner’s Risk: The Science and Politics of Fear opens with a telling figure: after September 11, an estimated 1,595 Americans were killed because they switched from flying to driving, perceiving the latter to be safer. Our comprehension of numbers, it seems, is poor.

Numbers can be confusing.

A second and more complex reason lies in our desire to make sense of the world through stories rather than numbers. This desire is outlined in the brilliant book Thinking, Fast and Slow by the Nobel Prize-winning psychologist Daniel Kahneman.

In 1975, the social psychologist Richard Nisbett and his student Eugene Borgida, at the University of Michigan, ran a study on a cohort of psychology students built around the famous helping experiment. The original experiment was set up like so: six participants were placed in individual isolation booths, where they took turns talking for two minutes at a time about their personal lives and problems. Only one microphone was active at any one time. Importantly, one of the participants was a stooge, covertly instructed by the researchers before the experiment.

The stooge spoke first, talking about adjusting to life in New York and admitting that he was prone to seizures which could be set off by high-stress situations. Each of the other five participants had their own turn, then it came back to the stooge again. This time he became agitated and incoherent, told the five others that he felt a seizure coming on, and asked for someone to help him, gasping “C-could somebody-er-er-help-er-uh-uh-uh [choking sounds]. I… I’m gonna die-er-er-er I’m… gonna die-er-er-I seizure I-er [chokes, then quiet]” as he fell to the ground. Not a further sound was heard from him.

How many of the other people would you expect to rush to the aid of the possibly dying man?

The answer is disturbingly low. Only four of the 15 participants (27%) responded immediately. Six never got out of their booths, and five others came out only after the “victim” had apparently choked. This is known in psychology as the bystander effect: a diffusion of responsibility occurs when there are other people around to take action. Decent people, like you and me, are less likely to help someone in need when others are present who might spare us from dealing with an unpleasant situation.

After describing this experiment, Nisbett and Borgida showed the psychology students video interviews of two people who had supposedly participated in the New York study. The interviews were deliberately bland: the interviewees talked about their hobbies, their plans for the future, and so on. They were designed not to reveal anything about the interviewees’ propensity to help or not.

Students were then asked to guess whether the two interviewees had helped the person in distress. This addressed a pressing question: given that the students knew how statistically unlikely it was for participants in the helping experiment to come to the aid of the person in distress, would that knowledge affect their guesses about the two interviewees?

The answer is both worrying and surprising: the students had learnt nothing at all. The entire class predicted that both interviewees had helped immediately, despite knowing that the probability of any given participant helping was only 27% (if the two cases were independent, the chance that both had helped immediately would be roughly 0.27 × 0.27, or about 7%).

This shows that statistical knowledge of human behaviour has very little bearing on our ability to apply that knowledge in predicting human behaviour.

However, all is not lost. The researchers took another class of students, showed them the two video interviews of participants in the helping experiment, and simply told them that these two had not immediately helped the person in distress. They then asked them to predict the global results for the rest of the participants in the helping experiment. The predictions were surprisingly accurate.

This tells us that teaching people a surprising statistic and then asking them to predict behaviour is futile. Yet when people are surprised by individual cases and then asked to generalise from those cases, they do so with relative ease.

Nisbett and Borgida brilliantly summarised the results of this experiment by stating:

“Subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular.”

This telling illustration of human behaviour, one of many within Kahneman’s book, speaks volumes about our desire to prove what works and what doesn’t work in development. We need stories (the particular) to infer information about the world (the general).

Sachs’ Millennium Villages Project, heavily criticised for being unscientific, still centres the narrative around people. Photo credit: Nina Munk

This is why ideas like Sachs’ Millennium Villages Project, although easy to criticise, received so much support from the United Nations and other funders: not because they are rigorous and scientific, but because they are case studies involving people. Until we recognise that human beings have a bias towards particular stories they can identify with, we will not be able to convince people of where resources should be allocated.

In a famous study, Paul Slovic, Deborah Small, and George Loewenstein asked people to donate to African relief. One appeal presented statistical evidence of the extent of the problem, another profiled a seven-year-old girl, and a third combined the statistics with the profile. Unsurprisingly, the profile generated more donations than the statistics; more surprisingly, it also generated more giving than the combination of profile and statistics. It was as if the mere presence of numbers had turned people off the idea of giving.

We still need statistical information, rigorous trials, and solid data in development. The more we can show that development is a science, as opposed to guesswork, the better.

But there is something to be learnt from all of this. Even as we push for statistical information to demonstrate to the public the net effect of what works and what doesn’t, or talk about need in terms of numbers of people, we still need to keep the message centred around human beings. Without a human story, our ability to empathise and understand is severely hampered.


Weh Yeoh

Weh is a disability development worker currently based in Cambodia. He is a professionally trained physiotherapist who has completed an MA in Development Studies at the University of NSW. He has a diverse background, having spent years travelling through remote parts of Asia, volunteering in an orphanage and adult shelter for people with disabilities in Vietnam, interning in India, and studying Mandarin in Beijing. He has experience in the NGO sector both in Australia and internationally in China, through Handicap International. He is an obsessed barefoot runner, wearer of Lycra, and eats far too much for his body size. You can view his LinkedIn (www.linkedin.com/in/wmyeoh) and follow him on Twitter @wmyeoh.


This work, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

15 thoughts on “Why personal stories trump numbers in global development”

  1. Obviously, both President Obama and his GOP respondent last night know the value of this style of appeal! But I find it somewhat problematic, given that one can always find an anecdote to support or defend absolutely any theory or opinion, and (again, just like the GOP congresswoman did last night) so can the other side.
    Sorry to be so cynical; it’s one of those days.

  2. Really interesting piece, Weh. It shows why people like human interest stories over straight news, etc. And it resonates with the discomfort I’ve had over the past couple of years in my advocacy for ethical agriculture. I want the figures on animal consumption, fertiliser and pesticide use, lost forests and species, etc. to speak for themselves, but the public (& media) want the personal stories – the ‘why I went from vegetarian to free-range pig farmer’ story being everyone’s favourite, and the one that seems to affect people’s personal decision making far more than any stats do. My academic training has made the transition to this very personal mode of advocacy difficult (it feels so self-indulgent!), but I reckon if it’s effective then so be it!

  3. Whether you lean qualitative or quantitative, you’ve got to have stories to make information stick. Regardless of time or place, stories tap into deep human psychological processes of perception, learning and memory. The human mind has evolved a narrative sensemaking faculty: we perceive and experience the chaos of reality, and the brain then reassembles the various bits of experience into a story in an effort to understand and remember. Stories balance the logical (sequence) and the emotional (empathy) aspects of our brains.

    My question is: Is our sector’s over-reliance on “killer facts” and numbers (e.g. the “data dash,” obsessive measurement disorder) just a reflection of our fear and thus our unhealthy relationship with risk? In many development programs, precise ways of measuring results in order to make consequential judgments about how to help people and affect social change remain elusive. But that’s hard for us do-gooders to admit.

    1. Substantially more eloquent than my rant below. The top paragraph is great and I wholly agree.

      On the second point, the ‘over-reliance on killer facts and numbers’ comment is a slippery slope of logic, as we probably don’t want to rely on the opposite of facts and numbers (noting that you are pointing out the extreme case). I think risky activities should be just as much, if not more, subject to this scrutiny, so as to capture the leverage and reap the rewards when successful (and know when to cut losses when they fail). This means the critical policy challenge is setting up soft and hard institutions where engaging in risky activities isn’t deterred by a culture of data and evaluation, and where program and policy flexibility and adaptability are encouraged rather than deterred. I also think there is a lot of frontier research evaluating the impact of programs once deemed impossible to evaluate, which keeps me optimistic.

    1. Nice one Weh. I also like this post a lot, packed full of interesting insights and a fun read. Thanks for sharing.

      I was about to have a whinge along the lines of “I’m sick of people framing this stuff in mutually exclusive terms: one or the other”, which is probably the most pointless ‘debate’ (read: flogged dead horse) around (yep, even more than last week’s..), and isn’t confined only to development. I thought you were above this, Weh. Then I read the article and, well, you certainly are, despite the title!

      I’m a skeptic of anecdotes and personal stories (for the standard academic reasons you allude to in the post), but communicating ideas, results, advocacy, etc. in both a language and a framing that compels the reader/listener is critical to gaining any traction. There is nothing worse than seeing the most rigorous evaluations of interesting policies and programs hide in journals, buried beneath jargon, never to be discussed by decision-makers. Actually, seeing non-evidence passed off as evidence at the centre of policy discussions because it’s communicated well is worse. Fortunately, the former is now much less common, and I think the discourse around these issues is in a pretty healthy place now, with great thanks to the fantastic outreach and communication programs of certain organisations and individuals, which I won’t name.

      I need to get around to reading that Kahneman book gathering dust on my office shelf too… thanks.

  4. Great post Weh. But do note that an awful lot of the psychology literature is more particular than it looks. So much research is conducted with WEIRD undergrad students (Western, educated, industrialized, rich, democratic), and there are some good papers on response variances outside that group for some classic tests in the field.

    Also, as you note, it is one thing to understand the evidence base on what makes a persuasive technique for advocacy, and quite another to determine real causal links. Our duty is to always try to check our anecdotal intuitions at the door … even as we deploy them in the next public pitch.

    1. I’m also having the same trouble with “should” language lately. Advocacy organizations especially rely on this, e.g. “The government must respond to…,” “The international community has a responsibility to…” If you as a constituent are constantly bombarded with messages about those in power disregarding the common good or the most vulnerable (an age-old human story), does that really make you want to sign yet another petition?
