I recall my first experience sharing the results of a behaviour change program I was working on with my organisation’s donors (all of whom are businessmen). I was presenting the changes in the 30+ indicators used to measure the success of the program, when I arrived at the “47% increase in women’s recognition of the danger signs of diarrhoea,” a fact we were very proud of. It was one of our many changes that was statistically significant at the 95% confidence level.
Suddenly, the donors asked why 100% of women did not know how to recognise the danger signs of diarrhoea. I was speechless. It hadn’t occurred to me that they wouldn’t see the program’s result as an achievement. We explained the complexity of behaviour change and the difficulty of teaching new information. The meeting ended well enough, but I began ruminating on what had gone wrong because I was peeved by our donors’ unenthusiastic reaction.
Slowly, it dawned on me how little our donors understood of the issues they were financing. They often compared their own lives to those of the women on the program, not understanding, for instance, why the women wouldn’t use a latrine. In reality, people who have never used a latrine or toilet are quite content and comfortable defecating in the open air. It can be hard to convey this to people who have never lived any differently; they don’t understand how complicated it can be to teach people to use a latrine. Human behaviour is extremely complex, but to businessmen used to cold numbers, figures and sales, falling short of the 100% mark looked like failing to reach our goals. Our donors, though well-intentioned, didn’t really understand the intricacy of human behaviour and the difficulty of changing it. And, as the presenter, I hadn’t been able to make them understand. Only later did I come up with a good comparison, which I wish I had used in that meeting.
I should have asked them what they would think if every child in their children’s classroom got 100% on every assignment, always. Most people would immediately discredit whatever scoring system was being used, because we know from experience that classrooms have a mix of students including higher and lower achievers. Of course, every child has the potential to succeed, but some may need additional mentoring, or may have trouble at home or might be in the wrong school system. Yet, when it comes to development programs, we seem to forget human fallibility. We want every farmer to increase their yields by 200%. We want every woman entrepreneur to pay back her debt and build a successful business. What is worse, and I have seen this over and over again, is that we sell this perfect image to our current and potential donors.
I’ve seen hundreds of promotional videos for NGOs. You finish watching these videos and feel elated. You think, “That program sure is fixing the problem. It seems they just ended the terrible (name your crisis) in that village.” We keep selling miracles. I’ve seen microfinance organisations swear that only 2% of microfinance participants ever default on payments. However, on the ground I hear of organisations hiding numbers or twisting them around to meet that brutal, self-imposed goal. I see organisations shoot themselves in the foot by promising things they can’t possibly make happen because the situations in which they work are just too complex.
As NGO staff, we need to start holding our tongues and promising boring yet reasonable results. Not all farmers’ lives will turn around magically after one year of using such-and-such seeds. Maybe 60% of them will improve in some way, and only while they’re getting the seeds. Not all children who receive scholarships will graduate and become wealthy professionals. Donating an ambulance will not eradicate maternal mortality if the hospital still doesn’t have the supplies or staff necessary to provide basic medical services.
Unfortunately, it seems we’ve set ourselves up in a promotional arms race where, in order to gain funding, every organisation promises more and more results for fewer and fewer cents spent. I believe the lack of program evaluation and monitoring in some organisations is often tied to the fear of not finding the results they are selling. It’s easier to find one individual beneficiary who was successful, interview them and extrapolate from there. If this person did so well, then surely everyone benefitting from the program did just as well.
To some this may sound extreme, but I have encountered numerous people who claim the success of a program, even an entire organisation, on anecdotal evidence. They fear that if they really got into a statistical analysis, more likely than not, the gaps would appear. Their success rate would drop and they would have to face the facts: the results being touted are far from accurate. In fact, the program might have no effect whatsoever, and then a deep-seated fear we all harbour would spring up: “What if our donor leaves us for someone else selling something better?” (even if that other organisation may also have blown their results out of proportion).
We need to educate our audiences, especially donors, on what is reasonable to expect. We need to speak in realistic terms and change the way programs are promoted. More importantly, we need to teach our donors and employees to be sceptical of perfection. It does not exist. People who say they have the solution to poverty or that their program has a 100% success rate are exaggerating. If a solution like that existed we would all be doing it; if it worked there would be hard evidence to prove it.
Until everyone in the aid and development sector comes to terms with reality, we will likely continue to oversell, under-achieve and hide from the truth. This will be to the detriment of our organisations, our employees, and even our programs’ participants.