The new age of big data was supposed to make decisions easier, and in some ways it has. Data tells you how much you’re going to like a new film on Netflix. Data lets you know when you can stop running on that treadmill. Data helps predict hurricanes, financial markets, and the impact of policies on the health and prosperity of entire nations. But new research indicates that while policymakers may have more data than ever at their fingertips, they may not know how best to use it to make sound decisions.
Asad Liaqat, a PhD candidate in public policy, first saw this in action as a research associate at the Center for Economic Research in Pakistan (CERP).
“Pakistan is very concerned about the population growth rate and contraception,” says Liaqat. “You can look at the numbers showing how much access there is to contraception, and say ‘OK, access is a problem; if we provide more access then things will figure themselves out.’”
Liaqat knew that the data told a more complicated story, one that went beyond simple access. Research showed that even in areas with high access to contraception, a power differential between men and women in Pakistani households prevented many families from using, or even talking about, contraception. Despite this finding, many NGOs and policymakers held on to the simple—but wrong—idea that they could solve the problem by increasing access.
“That’s very lazy thinking—but precisely the kind of thinking that happens in the policy world,” says Liaqat.
The Communications Gap
Asim Khwaja, Sumitomo-FASID Professor of International Finance and Development at Harvard Kennedy School, co-director of Evidence for Policy Design (EPoD), and co-founder of CERP, says that poorer countries, like Pakistan, lack the appropriate infrastructure to translate data into policy that will improve the lives of citizens.
“In rich, Western countries this happens without people even noticing: Governments use large-scale studies to optimize their welfare programs, health ministries use data to anticipate the paths of disease, tax departments use behavioral insights to increase compliance so the government can pay for social programs,” says Khwaja.
As a PhD student, Liaqat wanted to address the gap he’d seen at CERP between the data that scientists had produced and the information that policymakers could easily understand, so he joined research directed by EPoD. Khwaja, Liaqat, and their collaborators set out to learn how bureaucrats use quantitative data to make decisions.
The team surveyed more than 1,500 early and mid-career civil servants in Pakistan and India to assess how they use data. The respondents struggled to analyze quantitative data. When asked to interpret a 2 x 2 table, their answers were no more accurate than if they had guessed randomly. All civil servants in Pakistan must take a competitive exam to get their first job in civil service, but the exam doesn’t focus on mathematical literacy.
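The survey questions themselves are not public here, but the kind of misreading a 2 x 2 table invites is easy to illustrate. The sketch below uses entirely hypothetical numbers: the intuitive mistake is to compare raw counts across rows, when the table only makes sense once each row is converted into a rate.

```python
# Hypothetical 2 x 2 table of the sort used in such assessments:
# rows are people who did or did not receive a program,
# columns are whether their outcome improved. (Numbers invented for illustration.)
table = {
    "program":    {"improved": 60, "not_improved": 40},  # 100 people total
    "no_program": {"improved": 30, "not_improved": 10},  # 40 people total
}

def improvement_rate(row):
    """Share of a row's total that improved -- the comparison that matters."""
    total = row["improved"] + row["not_improved"]
    return row["improved"] / total

rate_with = improvement_rate(table["program"])        # 60/100 = 0.60
rate_without = improvement_rate(table["no_program"])  # 30/40  = 0.75

# The common misreading compares raw counts (60 improved vs 30 improved)
# and concludes the program helped; comparing rates points the other way.
print(f"with program: {rate_with:.2f}, without: {rate_without:.2f}")
```

Because the two rows have different totals, the raw counts and the rates can point in opposite directions, which is exactly the trap that makes untrained readers perform no better than chance.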
“These people are highly educated, but they’re not necessarily trained in the tools you need to make decisions based on data,” says Liaqat. “Without statistical training, you are more affected by behavioral biases.”
These biases were the next crucial barrier for the civil servants surveyed by EPoD. The bureaucrats were presented with two sets of data, one from a survey with a large sample size, and one with a small sample size. Scientists rely on the law of large numbers: the larger an experiment's sample, the closer its results tend to come to the true values in the population. The civil servants in the EPoD survey believed the data from the large sample size, but they also believed the data from the small sample size almost as much.
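The law of large numbers is easy to demonstrate by simulation. The sketch below, with an arbitrary "true" population rate chosen for illustration, repeatedly runs a small survey and a large one and measures how far each typically lands from the truth.

```python
import random

random.seed(0)

TRUE_RATE = 0.40  # hypothetical true rate in the population

def estimate(n):
    """Estimate the population rate from n random yes/no responses."""
    hits = sum(random.random() < TRUE_RATE for _ in range(n))
    return hits / n

def avg_error(n, trials=2000):
    """Average distance between a size-n survey's estimate and the truth."""
    return sum(abs(estimate(n) - TRUE_RATE) for _ in range(trials)) / trials

small = avg_error(30)    # a small survey is often far off
large = avg_error(3000)  # a large survey rarely strays far

print(f"n=30: typical error {small:.3f}   n=3000: typical error {large:.3f}")
```

The small survey's typical error is roughly ten times the large survey's, which is why weighting both sources of evidence almost equally is a mistake.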
Equally worrisome, the civil servants didn’t believe that data applied to their policymaking decisions if it came from districts other than the ones they were making decisions about. In other words, a health department official making decisions about Lahore was less likely to believe data if it had come from a study conducted in Islamabad or Karachi.
“That speaks directly to the way we do development economics,” Liaqat says. “We run very sophisticated experiments in specific areas, and we’ll find other areas that are of interest as well. But this says they don’t believe our large sample size as much as a few stories from their own area.” Half-joking, he adds, “If our objective is to convince policymakers, we should be sending people to collect stories to share with policymakers instead of showing them the data.”
Liaqat believes that if academics and policymakers want to work together, they have to step outside their comfort zones and turn their backs on some of their deepest-held biases. Academics also need to engage with policymakers to help them build their capacity for understanding complex, data-driven research.
Khwaja’s work to educate and train bureaucrats in Pakistan through the Building Capacity to Use Research Evidence, or BCURE, program is already following this model. Funded by UK Aid, BCURE is a collaborative program that conducts trainings to help civil servants learn to analyze data. Khwaja is also working to change the way data is presented to civil servants.
“We invest a lot in communications and data visualizations in our countries of focus, because we want to influence the entire policy environment and advance a ‘culture of evidence’,” says Khwaja. “We’re starting to see how government officials there are using the language of data and evidence with each other.”
Liaqat’s research has led him to believe that academics, particularly those working in the developing world, need to establish long-term, mutual relationships with the NGOs and government agencies that rely on their research. “One thing we shouldn’t be doing is producing our research papers and not contributing back.”
Khwaja says that his work in Pakistan has been successful because he and his team are committed to forging those long-lasting relationships with bureaucrats.
“In the countries where we work—particularly Pakistan, India, and Indonesia—members of our team have worked alongside government officials so long that they have, in effect, become part of the ‘furniture’ of the ministry,” says Khwaja. “That is the way to create change.”
A Global Problem
The gap between data and decision-making is not exclusive to the developing world. Allan Brandt, the Amalie Moses Kass Professor of the History of Medicine, says that in the US, there is an increasing divide between the kind of information put out by academics and the kind of information that is digestible by the average policymaker.
“When we get to a certain level of quantitative skill in math, it’s very difficult to translate that work into its most important policy and social implications,” says Brandt. “Those of us in universities and colleges have very scrupulous requirements for convincing our immediate colleagues about the importance of our work, and we find ourselves writing for smaller groups of people who think like us. That creates a big problem in terms of communicating the implications of our work to a wider audience.”
If academics wish to reach the wider community, they must do so in language that community can understand. Liaqat thinks that one way to get past academic jargon is to meld data with a good story.
“When I started working as a CERP research associate, I thought a large part of my job was to write long field notes, very detailed stories of engagement with individuals,” says Liaqat. But the stories didn’t make it into the CERP publications. Liaqat knows that no economic journal will publish an anecdote, but he thinks that there is room for storytelling within the discipline.
Brandt sees the same problem in his own work: “When we dichotomize ‘quantitative and qualitative’ or ‘stories versus numbers,’ we’re really not reflecting the reality of problems.”
“There’s bias against qualitative research because we don’t understand its value,” Liaqat explains. “But it is important not only because it adds to our research, but also because people believe stories.”
No matter how well-versed they are in data science, policymakers are, first and foremost, people. People who might trust a movie reviewer more than a Netflix star rating. People who might trust a trainer to pick their workout routine rather than an app. In a culture flooded with data, human connections stand out. Finding the human story in all those numbers might be the best way to bridge the gap between research and policy.