Evidence Based Practice
Guessing vs. Knowing
Only in very rare circumstances will you choose to guess rather than know. The rise of evidence-based policy (EBP) and practice is driven largely by this idea: that it is better to have as much evidence as possible and to be optimally informed in order to produce, in theory, the most effective outcomes. EBP is often understood, particularly within the academic community, as involving a systematic and comprehensive grasp of high-quality, robust evidence. While this may be an ideal type of evidence use in policy and practice, evidence that is systematically collected and analyzed is not necessarily the most relevant, nor the most useful, given either the problem or the stakeholders involved.
In most cases, evidence-informed practice starts with understanding what is effective and what is not. Measuring outcomes matters because it can be used to make programs more effective and sustainable, to show donors and other stakeholders that a project delivers the best possible outcomes, and to contribute to a broader body of knowledge on a particular subject. To accomplish this, monitoring and evaluation (M&E) consists of regularly collecting information about a project's activities and then assessing whether the project is achieving its original goals, as well as why and how an intervention operates. M&E can provide guidance on future activities and constitutes an integral part of accountability to intervention partners.