Friday, June 3, 2016

Causal Estimation and Millions of Lives

This just in from a fine former Ph.D. student.  He returned to India many years ago and made his fortune in finance.  He's now devoting himself to the greater good, working with the Bill and Melinda Gates Foundation.

I reminded him that I'm not likely to be a big help, as I generally don't do causal estimation or experimental design. But he kindly allowed me to post his communication below (abridged and slightly edited). Please post comments for him if you have any suggestions. [As you know, I write this blog more like a newspaper column, neither encouraging nor receiving many comments -- so now's your chance to comment!]

He writes:

One of the key challenges we face in our work is that causality is not known, and while theory and large-scale studies, such as those published in the Lancet, do provide us with some guidance, it is far from clear that they reflect the reality on the ground when we are intervening in field settings with markedly different starting points from those used in the studies. However, while we observe the ground situation imperfectly and with large error, the inertia in the underlying system that we are trying to impact is so high that it would perhaps be safe to say that, unlike in the corporate world, there isn’t a lot of creative destruction going on here. In such a situation it would seem to me that the best way to learn about the “true but unobserved” reality, and how to permanently change it and scale the change cost-effectively (such as nurse behavior in facilities), is to go on attempting different interventions structured in such a way as to allow rapid convergence to the most effective ones (similar to the famous Runge-Kutta iterative methods for rapidly and efficiently arriving at solutions to differential equations to the desired level of accuracy).
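[A brief aside from me: one concrete family of methods that matches this "iterate and converge rapidly" intuition is adaptive, bandit-style allocation, in which each new observation shifts effort toward the interventions that appear to be working while still probing the alternatives. The sketch below is purely illustrative, not his method or the Foundation's: the intervention names, success rates, and simulation are assumptions of mine, and Thompson sampling is just one example of such a scheme.]

# A minimal, hypothetical sketch of adaptive allocation across candidate
# interventions via Thompson sampling with Bernoulli outcomes.
# The intervention names and "true" rates below are made up for illustration.

import random

# Hypothetical interventions and their true (unknown) success rates,
# e.g. the fraction of facility visits where the targeted nurse behavior occurs.
true_rates = {"checklist": 0.30, "peer_mentoring": 0.45, "sms_reminders": 0.35}

# Beta(1, 1) priors over each intervention's success rate.
posterior = {arm: {"successes": 1, "failures": 1} for arm in true_rates}

def choose_arm():
    """Thompson sampling: draw one rate from each posterior, pick the largest."""
    draws = {
        arm: random.betavariate(p["successes"], p["failures"])
        for arm, p in posterior.items()
    }
    return max(draws, key=draws.get)

def observe(arm):
    """Stand-in for a noisy field observation of the chosen intervention."""
    return random.random() < true_rates[arm]

for _ in range(2000):
    arm = choose_arm()
    if observe(arm):
        posterior[arm]["successes"] += 1
    else:
        posterior[arm]["failures"] += 1

# Over time, allocation concentrates on the better-performing intervention,
# while the posteriors quantify remaining uncertainty about the others.
for arm, p in posterior.items():
    trials = p["successes"] + p["failures"] - 2
    mean = p["successes"] / (p["successes"] + p["failures"])
    print(f"{arm}: {trials} trials, posterior mean {mean:.2f}")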

However, while the need is for rapid learning, the most popular methods proceed by collecting months or years of data in both intervention and control settings, and at the end of it all, if done very, very carefully, all they can tell you is whether there were some links between the interventions and the results, without giving you any insight into why something happened or what can be done to improve it. In the meantime one is expected to hold the intervention steady, almost discard all the knowledge that is continuously being generated, and be patient even while lives are being lost because the intervention was not quite designed well. While the problems with such an approach are apparent, the alternative cannot be instinct or gut feeling and a series of uncoordinated actions in the name of “being responsive”.

I am writing to request your help in pointing us to literature that can act as a guide to how we may do this better. ... I have indeed found some ideas in the literature that may be somewhat useful, ... [and] while very interesting and informative, I’m afraid it is not yet clear to me how we will apply these ideas in our actual field settings, and how we will design our Measurement, Learning, and Evaluation approaches differently so that we can actually implement these ideas in difficult on-ground settings in remote parts of our country involving, literally, millions of lives.
