Thousands of scientists have rigorously studied the causes of and risk factors for heart disease over the last half century, but a single longitudinal study has revealed more than any other approach.
In 1948, researchers began tracking the health records of participants in the town of Framingham, Massachusetts. This was an observational study; they did not formulate causal theories or test specific hypotheses, but simply let nature take its course and observed what happened.
In 1960, they found a link between smoking and heart disease. In 1961, they found a link with cholesterol. And in the decades that followed, they also found correlations with obesity, exercise, high blood pressure, stroke, diabetes—virtually everything that now matters to clinical treatment.
So why aren’t we in the philanthropy world copying this approach—observing what’s out there and looking for patterns over time?
As a neuroscientist, I have a confession to make. My kind has helped propagate a lie still taught in schools: that scientists always devise a hypothesis and test it in controlled experiments. This is simply not true. The Human Genome Project mapped three billion base pairs before anyone understood what variation in the genetic code meant. The drugs you take were “discovered” in massive drug-discovery libraries: the structures of those compounds were already known, but the context for each chemical—its effect on a part of the body in treating a disease—was determined by a screening process that quickly conducts millions of tests, not by hypothesis.
We already create more information every two days than existed in the first two millennia of human civilization, and this pace is accelerating. However, the rate at which we convert all this “information” into useful “knowledge” is slowing down.
It was with this problem in mind that we started the GlobalGiving Storytelling project. We needed to satisfy two distinct requirements: to collect rich information about development in a flexible, easily re-structurable way, and to turn these stories into data so we can interpret and contextualize what we see. We’ve come up with a survey design tool you can use to run a custom evaluation and compare your results to stories told by others, with the overall aim of helping everyone share knowledge and improve project design. The approach will save you time, but it will also let you get more back than you could ever put in.
So why storytelling, you wonder? It turns out that managing this process with metrics, indicators, spreadsheets, and a numbers-only mindset is far more difficult and time-consuming. Narratives plus a few survey questions are enough for common patterns to emerge from many perspectives.
And how does it work? Our pool of survey questions is used to map the ambiguous story elements in the narratives people choose to share, and to detect and correct bias using methods described by statistician Frank L. Schmidt in his work on meta-analysis and by James Pennebaker in The Secret Life of Pronouns. This approach addresses the statistical “power problem” in the aid world by making meta-analysis and data aggregation possible (essentially allowing you to pool results together) in ways that number-centric monitoring has not yet achieved. Narratives are noisy, but collecting a lot of them (57,000 and growing) and adding a little more structure to the listening process allows the community to speak above the noise of individuals.
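The “speak above the noise” claim is just the statistics of pooling: individual stories are noisy, but averaging many of them shrinks the error roughly as one over the square root of the sample size. Here is a toy simulation of that idea—not the project’s actual pipeline, and the “true rating” and noise level are made-up values for illustration:

```python
import random
import statistics

random.seed(0)  # reproducible simulation

TRUE_EFFECT = 3.5   # hypothetical underlying community view of a project
NOISE_SD = 1.0      # each storyteller's individual perspective adds noise
N_STORIES = 57_000  # roughly the size of the story pool mentioned above

# Each story is modeled as one noisy observation of the underlying view.
stories = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STORIES)]

# Pooling: the mean of many noisy stories, with a standard error that
# shrinks like NOISE_SD / sqrt(N_STORIES) -- about 0.004 here, versus
# a full 1.0 of noise in any single story.
pooled = statistics.fmean(stories)
stderr = statistics.stdev(stories) / N_STORIES ** 0.5

print(f"pooled estimate: {pooled:.3f} (standard error {stderr:.4f})")
```

Any one storyteller may be off by a point or more, but the pooled estimate lands within a few thousandths of the underlying value—which is why aggregation works where individual anecdotes fail.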
Everything is available online, for free, and has been extensively tested:
The “brontobyte era” of big data is coming. We hope to help organisations join together and share knowledge so that we can learn more, faster.