One of the resounding themes of this morning’s plenary at the SVRI Forum was how to deal with high standards for statistical significance when working in extremely complex, sometimes dangerous and impoverished communities. In particular, the presentation on SASA!, a GBV intervention in Sub-Saharan Africa, presented very large effects of the program from a very small sample, with results that fell short of statistical significance. Charlotte Watts discussed at length how we shouldn’t “throw the baby out with the bathwater,” and how, if we’re missing statistical significance by “a hair’s breadth,” we shouldn’t ignore the results.
It grated on me that the discussion took this particular path: there is a real opening for researchers to rethink the different ways we use and interpret quantitative evidence, yet most of those involved still leaned heavily on the semantics of statistics. There is a strong argument for mixing qualitative and quantitative research. Each method can uncover patterns, trends, and events that are not evident in the other, and qualitative work can provide justification for small-scale quantitative work with small samples or insufficient power. I don’t think this is the only way of dealing with the problem, however.
We know that RCTs are expensive. Lori Heise, of the London School of Hygiene and Tropical Medicine, made an excellent point: if RCTs are to be the standard, there has to be more funding for them, and recognition of that cost, from the donor community.
There’s a lot of work and questioning to be done around this issue. Insufficient power or sample size to detect a statistically significant effect doesn’t mean we shouldn’t ask the question, but it does affect how we can present the results. And presenting the results as having a particular magnitude (especially when that magnitude is large), when all we can really claim is that they point in the expected direction, is not sufficient and, worse, perhaps disingenuous.
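To make that last point concrete, here is a minimal simulation sketch in Python. The numbers are entirely hypothetical (this is not SASA!’s data or design): it assumes a two-arm trial with a modest true standardized effect of 0.3 and only 20 people per arm, and asks how often such a study reaches significance and what the estimated effect looks like when it does.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.3   # assumed true standardized effect (illustrative only)
N_PER_ARM = 20      # deliberately small sample per arm
N_SIMS = 10_000     # number of simulated trials

estimates, pvalues = [], []
for _ in range(N_SIMS):
    control = rng.normal(0.0, 1.0, N_PER_ARM)
    treated = rng.normal(TRUE_EFFECT, 1.0, N_PER_ARM)
    estimates.append(treated.mean() - control.mean())          # estimated effect
    pvalues.append(stats.ttest_ind(treated, control).pvalue)   # two-sample t-test

estimates = np.array(estimates)
sig = np.array(pvalues) < 0.05

print(f"power at n={N_PER_ARM}/arm: {sig.mean():.2f}")                   # ~0.15
print(f"mean estimate, all runs: {estimates.mean():.2f}")                # ~0.30
print(f"mean estimate, significant runs: {estimates[sig].mean():.2f}")   # ~0.80

In this hypothetical setup the study is significant only about 15% of the time, and in the runs that do clear the significance bar, the estimated effect averages roughly two and a half times the true one, simply because with so few observations only exaggerated estimates can reach significance. This inflation is sometimes called the winner’s curse, or a Type M (magnitude) error. The direction of the estimate is far more robust than its size in this regime, which is exactly why quoting a large magnitude from an underpowered study can mislead.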