Evidence Based Practice

Guessing vs. Knowing


Only in very rare circumstances will you choose to guess rather than know. The rise of evidence-based policy (EBP) and practice is driven largely by this idea: that it is better to have as much evidence as possible and to be optimally informed in order to produce, in theory, the most effective outcomes. EBP is often understood, particularly within the academic community, as involving a systematic and comprehensive understanding of high-quality, robust evidence. While this may be an ideal type of evidence use in policy and practice, evidence that is systematically collected and analyzed is not necessarily the most relevant, nor the most useful, given either the problem or the stakeholders involved.


In most cases, evidence-informed practice starts with understanding what is effective and what is not. Measuring outcomes is important because it can be used to make programs more effective and sustainable, show donors and other stakeholders that a project delivers the best possible outcomes, and contribute to a broader body of knowledge on a particular subject. To accomplish this, monitoring and evaluation (M&E) consists of regularly collecting information about a project’s activities and then assessing whether the project is achieving its original goals and why and how the intervention operates. M&E can provide guidance on future activities and constitutes an integral part of accountability to intervention partners.


The general steps for evaluation consist of developing objectives, creating indicators, collecting data, analyzing the data, applying findings to implementation, and communicating the relevant information. Indicators should be specific and measurable, and they should communicate what has changed and for whom. You might choose to use a logic model, or the analysis of annual work plans, to evaluate the efficiency and effectiveness of a program.


For example, a specific and measurable indicator for the microfinance scenario discussed below might be the percentage of participating households with at least one child engaged in labor, measured at baseline and at each annual review, so that the evaluation communicates both what has changed and for whom.
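To make this concrete, here is a minimal sketch in Python of how such an indicator might be computed from household survey records. The data, field names, and the prevalence() helper are all invented for illustration; they are not drawn from any particular M&E toolkit, and a real survey would involve sampling and data-quality checks that are omitted here.

```python
# Minimal sketch: computing a child-labor prevalence indicator from
# hypothetical household survey records. All values are illustrative.

baseline = [
    {"household": "A", "children_working": 2},
    {"household": "B", "children_working": 0},
    {"household": "C", "children_working": 1},
    {"household": "D", "children_working": 1},
]

followup = [
    {"household": "A", "children_working": 1},
    {"household": "B", "children_working": 0},
    {"household": "C", "children_working": 0},
    {"household": "D", "children_working": 1},
]

def prevalence(records):
    """Share of surveyed households with at least one child engaged in labor."""
    affected = sum(1 for r in records if r["children_working"] > 0)
    return affected / len(records)

baseline_rate = prevalence(baseline)
followup_rate = prevalence(followup)

print(f"Baseline prevalence:  {baseline_rate:.0%}")
print(f"Follow-up prevalence: {followup_rate:.0%}")
print(f"Change: {followup_rate - baseline_rate:+.0%}")
```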


Types of Evidence


It is increasingly recognized that policy and practice decisions are the result of numerous inputs and judgements, many of which extend beyond, and can outweigh, scientific evidence (Head, 2008). Most proponents of EBP argue that there are reasons why research has minimal influence on policy, including the competing interests with which policymakers are faced. These may take the form of political and electoral incentives, economic concerns, and other types of competing evidence such as personal experience and expert opinion. Evidence may also be regarded as irrelevant if it is not applicable locally or if there is a lack of consensus about its quality (Black, 2001).


In light of the challenges of EBP, social scientists have proposed strategies for encouraging the uptake of evidence in policy while taking into account the additional factors that decision-makers face. Cairney and Oliver suggest that researchers can be most effective when combining the principles of evidence and governance (2017). They argue that value-driven arguments can carry as much weight with policymakers as the evidence itself, if not more, and that evidence should therefore be packaged to accommodate policymakers’ social, political, and ideological predispositions and motives (ibid.). Head argues that rather than recognizing one ‘base’ for evidence, it is more useful to identify three types of evidence that can inform policy, namely systematic or scientific research, practical experience, and political judgement (2008). Additional recommendations for improving the uptake of evidence include pursuing systematic examinations of research that more holistically identify past lessons and experiences (Pawson, 2002); using research that targets multiple stages of the policy process, for instance to inform agenda setting, the examination of alternatives, and the evaluation of outcomes (Solesbury, 2001); and evaluating policies in a way that accounts for political considerations and the need to identify and isolate the effects of pilot programs from exogenous factors (Sanderson, 2002).


Say you are implementing a microfinance group for the purpose of reducing child labor, and you have a group of women whose children are all employed and who are interested in participating. You probably wouldn’t use a systematic review, at least not in its original form, to convince the women, their children, or their families that microfinance projects have (in some situations) reduced the incidence of child labor in similar contexts. You wouldn’t even use a systematic review to communicate the projected effectiveness of microfinance to donors, with whom you would likely use more anecdotal and less scientific language. I can almost guarantee that if you handed out a systematic review to a group of donors and used it as the reason why they should fund you, in the absence of any other information or ‘evidence’, you would have a difficult time filling the budget that you need.


Indeed, many other kinds of evidence exist and are used and communicated across a wide range of development programs. These include experiential knowledge (information gained from previous experience), stakeholder opinions, and beliefs.


Evidence or Belief? Belief or Budget?


It has been the norm for practitioners (as well as academics, policymakers, etc.) to argue for a particular base of evidence that is likely to produce the most efficient processes or effective results. This naturally results in one base being pitted against another, evidence vs. belief, like opponents in a match.


The question we should be asking, however, is not which base of evidence is the ‘best’ (by whichever standards are defined for that purpose). Rather, we should be asking how and why those bases produce the outcomes they do. In this sense, we are moving away from an evidence vs. stakeholder vs. belief approach, and moving towards recognizing that in most organizations, groups, and political associations, all bases are present, and all have the propensity to produce both positive and negative outcomes.


There is no guarantee that a program based solely on the results of a meta-analysis will have better outcomes for a community than a program based solely on the religious beliefs of the participants. Let’s return to our example. After a couple of years, the microfinance group has conducted an evaluation and has determined that the incidence of child labor has remained unchanged. They hold a meeting to decide how to change their approach, given that they are still committed to reducing child labor. One participant suggests that, given the number of studies showing that microfinance works in conjunction with health interventions to reduce child labor, the group should incorporate health insurance into its program. Another participant notes that the project’s financial supporter wants to improve his company’s corporate social responsibility (CSR) image in the wider community by conducting an awareness campaign that exposes industries employing children, and suggests that the group should instead strengthen its awareness efforts. A third participant suggests that the group should seek to change community members’ attitudes and beliefs towards child labor, arguing that child labor should be understood within a human rights framework and addressed through religious cooperation.


Should the group choose to rely on scientific evidence by pursuing a joint health intervention, on stakeholder opinion through awareness and local CSR promotion, or on belief, through a religious movement aiming to alter mindsets? None of these bases of evidence are contradictory, unless they are defined as such, and each approach, or several combined, may reduce the prevalence of child labor.


The question should be, therefore, ‘What are the outcomes produced, and how can different types of evidence contribute to those outcomes most effectively?’, rather than simply, ‘Evidence or belief?’ or ‘Belief or budget?’. 


Improving Practice


In many cases, taking an evidence-based approach is characterized more by evaluation than by incorporating outcomes into practice (see USAID, 2016 as an example). What we need to be doing is placing just as much emphasis, if not more, on putting what we know is effective into practice, rather than only measuring what is effective and what is not.


If you conduct ten evaluations of the microfinance group over the course of five years, you might find that in the two years the microfinance program was combined with a health and life insurance program there was a significant reduction in the prevalence of child labor, whereas in the years the microfinance program ran on its own the reduction was insignificant. You may then preliminarily conclude (before you check for confounding variables and apply appropriate statistical measures of significance) that combined programs are more effective.
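As a rough sketch of what ‘appropriate statistical measures of significance’ could involve, the Python snippet below compares child-labor prevalence between combined-program years and microfinance-only years using a standard chi-square test of independence. The household counts are invented for illustration, and a real analysis would also need to address confounding variables rather than rely on this test alone.

```python
# Minimal sketch: testing whether child-labor prevalence differs between
# years with the combined program and years with microfinance alone.
# The counts are hypothetical; a real analysis would also control for
# confounding variables (e.g., local economic shocks, school availability).
from scipy.stats import chi2_contingency

# Rows: program type; columns: households with / without a working child.
observed = [
    [30, 170],  # combined microfinance + health/life insurance
    [55, 145],  # microfinance only
]

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"chi-square = {chi2:.2f}, df = {dof}, p-value = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is unlikely to be due to chance alone (at the 5% level).")
else:
    print("Difference could plausibly be due to chance.")
```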


This information is, at best, interesting, until it is used to improve practice.


While you may have read books and guides on the collection of evidence, what we should be asking is how the utilization of such evidence can improve practice. Evidence might be used to design a strategy, make corrections, ensure accountability, make funding decisions, and increase knowledge about the project. To start, you would make a list of key questions that would help inform the application of evidence to practice. These may include, ‘What are the current strengths of the project?’, ‘What are the gaps or weaknesses of the project?’, ‘What does the evidence tell us?’, ‘How much time and resources would it take to improve?’, ‘Would taking these steps bring value to this project or this community?’, and ‘Do we have the capacity or comparative advantage to move forward?’.
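One lightweight way to keep such questions from remaining rhetorical is to record them as a structured review checklist that the team revisits after each evaluation cycle. The sketch below is purely illustrative: the questions are the ones listed above, the sample answers draw on the running microfinance example, and the unanswered-question check is an invented convention rather than an established M&E tool.

```python
# Minimal sketch: the key questions above as a review checklist,
# flagging any that the team has not yet answered. Sample answers are invented.
review = {
    "What are the current strengths of the project?": "Strong group attendance and repayment rates.",
    "What are the gaps or weaknesses of the project?": "Child-labor prevalence unchanged in standalone years.",
    "What does the evidence tell us?": "Combined microfinance and insurance years show larger reductions.",
    "How much time and resources would it take to improve?": None,  # not yet answered
    "Would taking these steps bring value to this project or this community?": None,
    "Do we have the capacity or comparative advantage to move forward?": None,
}

unanswered = [q for q, answer in review.items() if answer is None]
print(f"{len(unanswered)} of {len(review)} questions still need answers:")
for q in unanswered:
    print(" -", q)
```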

 

Does Evidence Equal Positive Outcomes?


Not necessarily. As we discussed above, there are numerous types of evidence to take into consideration when designing an intervention. It is less prudent to debate the base of evidence than it is to understand which bases, and which combinations of them, produce positive outcomes and which do not. Contrary to the ideal type of EBP, an intervention designed from a collection of systematic reviews does not necessarily result in positive outcomes. Similarly, an intervention designed by a church or a mosque that is grounded in religious and doctrinal ideals may not be effective.


Once we recognize that there are many types of evidence, we can seek an evidence-based approach that is founded upon known and projected outcomes. We would not advise the microfinance group to choose the health, awareness, or mindset approach without first consulting what they know about the outcomes of these approaches and, based on that knowledge, what they project the outcomes of such interventions to be when implemented in that context.
