By Eike Luedeling, Lutz Göhring, Keith Shepherd
This post was first published on ICRAF's blog.
Big decisions with limited information
Deciding on big issues without enough information has just gotten a little bit easier. This step forward is facilitated by the release of the decisionSupport R package by the Land Health Decisions group at the World Agroforestry Centre (ICRAF), supported by the CGIAR Research Program on Water, Land and Ecosystems.
One of the main reasons why many decisions are difficult to make is that we don’t know enough to be absolutely sure about their consequences. We also don’t normally have enough time or money to collect a lot of information. But even where we’re able to ‘do the research’, some uncertainty still remains, because there are many things about the world and the future that we simply can’t know.
How can we make decisions in such an environment? Of course we can rely on our gut feeling alone, but this isn’t always acceptable. Especially for big decisions in international development, we normally expect a more robust approach, since such decisions can affect millions of people, vast areas and enormous amounts of money. Unfortunately, many of these same decisions are made in areas that are particularly data-scarce, such as rural Africa.
Lessons from the corporate world
Of course, development is not alone in facing such challenges. Businesses around the world grapple with similar situations all the time. Decisions must be made without perfect knowledge, and profits and jobs depend on managers getting these decisions right.
This is why at ICRAF, we have adopted business analysis approaches for aiding decisions in international development. Douglas Hubbard’s Applied Information Economics concept served as a blueprint for our first stab at this.
In this concept, business analysts develop mathematical impact models for a decision together with decision-makers, who are encouraged to put into the model everything they find important. Rather than going off on a multi-year multi-million dollar research journey, analysts work together with technical experts to quantify the current state of uncertainty on all the unknown variables in the model. What do we know now, before we take any measurements?
When we do an honest appraisal of what we really know for certain, there isn’t a lot – at least when we try to pin precise numbers to things. But what we can always do – at least after learning a few estimation techniques and maybe doing a quick web search – is come up with two numbers that are very likely to bracket the real value. We can find one value that is certainly lower and one that is certainly higher than the value we’re looking for.
For some mathematical techniques, this is sufficient. These have somewhat obscure names, such as Monte Carlo simulation or Bayesian networks, and they are a bit more difficult to use than standard Excel spreadsheets. But if we manage to apply them to our decision problems, they can produce for us a range of decision outcomes that we should prepare for. They can give us an impression of the worst and the best decision results that can realistically happen.
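To make the Monte Carlo idea concrete, here is a minimal sketch. The decisionSupport package itself is written in R; this illustration uses plain Python, and the variable names, numbers and the interpretation of each expert range as a 90% confidence interval of a normal distribution are all illustrative assumptions, not the package's actual API or defaults.

```python
import random
import statistics

random.seed(42)

def sample_from_range(lower, upper):
    """Draw from a normal whose 90% interval spans [lower, upper]."""
    mean = (lower + upper) / 2
    sd = (upper - lower) / (2 * 1.645)  # 1.645 ≈ z-value for a 90% interval
    return random.gauss(mean, sd)

N = 10_000
net_benefits = []
for _ in range(N):
    # Hypothetical expert ranges for an intervention's annual benefit and cost
    benefit = sample_from_range(50_000, 200_000)
    cost = sample_from_range(30_000, 120_000)
    net_benefits.append(benefit - cost)

net_benefits.sort()
print("5th percentile: ", round(net_benefits[int(0.05 * N)]))
print("median:         ", round(statistics.median(net_benefits)))
print("95th percentile:", round(net_benefits[int(0.95 * N)]))
```

Even with only bracketing ranges as inputs, the simulated distribution shows both the realistic downside (the 5th percentile, here negative) and the upside of the decision.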
Maximizing expected welfare
Every decision requires us to answer the question ‘What shall be done?’ We can only give an answer if we have clarity on what it actually is that we’re trying to accomplish. Often, we may simply want to maximize our financial profits. In most cases in development, however, things are more complicated, and the success or failure of a project is determined by a wider range of measures, which are often interrelated and may even be in conflict with each other. Economists call the compound measure that combines all these variables the ‘welfare function’. According to this principle, decision-makers strive to maximize their expected welfare, which is determined by their values and preferences. Decision models are mathematical representations of all the variables and processes that determine the welfare arising from a decision.
Sometimes knowing the range of plausible welfare outcomes is enough. If all conceivable consequences of making a decision a certain way are better than those of the alternative course of action, we can safely go ahead and decide. Things get a bit trickier when the results are not so clear – if our first analysis tells us that the results of choosing decision alternative A over B can have both positive and negative welfare outcomes.
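In simulation terms, comparing alternative A with alternative B means looking at the distribution of their welfare difference. A small hedged sketch (plain Python, with entirely hypothetical welfare models and numbers, not the package's code) of the tricky case where the difference straddles zero:

```python
import random

random.seed(1)

def welfare_A():
    # Hypothetical: an intervention with a high but uncertain payoff
    return random.gauss(120, 60)

def welfare_B():
    # Hypothetical: business as usual, lower but more predictable
    return random.gauss(80, 10)

N = 10_000
diffs = [welfare_A() - welfare_B() for _ in range(N)]

p_A_better = sum(d > 0 for d in diffs) / N
expected_gain = sum(diffs) / N
print(f"P(A better than B): {p_A_better:.2f}")
print(f"Expected welfare gain of A over B: {expected_gain:.1f}")
```

Here A is better in expectation, yet in a substantial share of simulated futures B comes out ahead; this is exactly the situation where further analysis pays off.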
The value of information
We’re always smarter about what would have been the right decision, after we observe the actual outcome. We often wish that we had decided differently, but this is never an option. Our decision models attempt to reduce the frequency of such frustrations, but a certain chance of negative welfare outcomes often remains. If the decision is important and we don’t feel like the analysis so far has given us enough guidance, we may want to take measures to reduce our uncertainty even further. But how? Let’s turn to the next step in our analysis framework: the value of information.
The value of information gives an impression of just how much our uncertainty about different variables limits our confidence that we’re making the right decision. It tells us for which variables measurements would produce the greatest gain in certainty that we’re making the right choice. The value of information normally varies greatly between the variables in a model, and there are often a few that stand out clearly from the rest. Measurements of these factors can be very effective for vastly improving the decision.
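A common way to quantify this is the expected value of perfect information (EVPI): the difference between the expected payoff if we could decide knowing the uncertain variable, and the expected payoff of the best decision under current uncertainty. A minimal sketch, again in plain Python with hypothetical numbers rather than the package's implementation:

```python
import random

random.seed(7)

N = 10_000
# Hypothetical two-option decision: adopt an intervention (uncertain
# net payoff) versus do nothing (payoff 0)
payoffs = [random.gauss(10, 50) for _ in range(N)]

# Best choice under current uncertainty: pick the option with the
# higher expected payoff across all simulated futures
ev_adopt = sum(payoffs) / N
best_under_uncertainty = max(ev_adopt, 0.0)

# With perfect information, we would adopt only when the payoff
# turns out to be positive
ev_perfect_info = sum(max(p, 0.0) for p in payoffs) / N

evpi = ev_perfect_info - best_under_uncertainty
print(f"EVPI: {evpi:.1f}")
```

A large EVPI for a variable signals that measuring it before deciding is likely worth the cost; an EVPI near zero says further measurement would not change the decision.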
Making methods accessible
We have used this framework in a few cases already, for example for developing a Global Intervention Decision Model, or for analyzing the prospects for building a freshwater pipeline in Kenya.
So far, the methods we have used, and are continuing to improve, have been useful only to us and a few of our stakeholders. But they can only have the desired effect of improving decisions on a larger scale if everyone can use them. A first step towards this is the publication of our methods as an open-source extension package for the R programming language.
The package decisionSupport has just become available on the CRAN repository, the standard download hub for R packages. In addition to the code, we have added detailed documentation of all functions, and we hope the tools will find a wide audience. We’ve also attempted to make their use as convenient as possible. Please check decisionSupport out and let us know what you think. This package is only one step on our journey towards establishing the use of decision analysis tools in international development. We will therefore be grateful for any comments that help us make the tools more user-friendly.
A constructive approach to knowledge gaps
Admittedly, some residual difficulty remains in making big decisions without adequate information, even now that our tools are available. But things get easier if we shift the focus from what we don’t know to what we actually do know. Systematically assessing this and then using this knowledge in a structured way that honestly acknowledges our uncertainties can go a long way to making better and more transparent decisions. This is what our tool aims to achieve, and we hope that it can make a contribution to overcoming the decision paralysis often caused by lack of data.