One question I am frequently asked about examining the effectiveness of educational interventions is: what is an impact evaluation? In the most general sense, impact evaluation is a research process whereby a researcher conducts a study to determine the effect of a particular intervention. Studies are typically distinguished by their designs rather than their outcomes (e.g., a randomized design, a quasi-experimental design (QED), or a pre-post design). In my work, I conduct impact evaluations and look forward to contributing to the body of academic knowledge regarding both the techniques and the conclusions drawn from the research. This post serves as a general introduction to the concept of impact evaluation and the areas to consider when planning one.
Any high-quality research endeavor starts with a well-thought-out research plan and scientific process. That plan should specify the research questions and the confirmatory analyses that will be reported, prior to the start of any outcome data collection, and should cover procedures and techniques for sample identification, selection, and assignment; the inference space; attrition; locations; data collection; statistical analysis; and dissemination of findings.
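Sample size decisions belong in that plan as well. Below is a minimal sketch of an a priori power analysis using statsmodels; the effect size, alpha, and power targets are illustrative assumptions rather than recommendations, and a clustered (e.g., school-level) assignment would require a different calculation.

```python
# A priori power analysis for a simple two-group, student-level comparison.
# The inputs (effect size, alpha, power) are placeholder assumptions used
# only to illustrate the calculation.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.25,  # assumed minimum detectable effect (Cohen's d)
    alpha=0.05,        # two-sided Type I error rate
    power=0.80,        # desired statistical power
    ratio=1.0,         # equal allocation to treatment and control
)
print(f"Students needed per group: {n_per_group:.0f}")
```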
An impact evaluation aims to determine whether the intervention has succeeded in some way and therefore deserves expansion within the setting. This research is typically conducted in pilot programs, or even at the very start of a line of inquiry, to evaluate results on a small scale before time and money are spent on a larger study.
- Logic model for the intervention: A clear understanding of the theoretical basis for the intervention is paramount to developing a plan to study its impact. Also known as the theoretical framework, the logic model lends credibility and provides a path to understanding how the “intervention as planned” should work.
- Research questions for the evaluation: Clear research questions for the impact evaluation are essential to moving forward with the design. Research questions should include the following information: (1) the name of the intervention, (2) the counterfactual condition (e.g., business as usual or another condition to which the treatment will be compared), (3) the outcome domain, and (4) the educational level of the sample. For example, a research question that includes all of this information might read as follows: “What is the impact of the SEL Academy on the reading achievement of fourth grade students compared to the business-as-usual condition?”
- Measuring fidelity of implementation: When working out the plan for an impact evaluation, the implementation of the intervention must be well documented. Were instructions clearly followed during implementation? What was modified? Your job is to assess the effectiveness of the intervention, but also to map a strategy should the intervention be used again or brought to scale. This includes highlighting the conditions under which the intervention was implemented as well as explaining any modifications (a fidelity sketch follows this list).
- Data collection and analysis plan: Your data collection plan should describe the measures (outcome and baseline) and a detailed analysis plan. The following should be described for each measure: name of the measure, including subtest; measure citation; domain addressed; validity and reliability information; the measure’s alignment with the intervention; how the outcome is collected for the treatment and control groups and whether data collectors are aware of group membership (i.e., whether they are “blinded”); scale and score type; and other independent variables. With any impact evaluation you are searching for evidence to support claims about what would have occurred (or not occurred) had the intervention not been available, so a clear plan for that process, as well as for triangulation, is important (an analysis sketch follows this list).
- Fidelity reporting plan: Your system for measuring fidelity of implementation and for reporting fidelity findings should be described. Dissemination of information is a critical component of any research plan. After all, completing a high-quality research study is a wasted endeavor if you do not share your results and build on the body of academic literature. Journal articles, conference presentations, and how you propose to share the information digitally should all be explained.
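For the fidelity measurement mentioned above, a minimal sketch is shown below. It assumes a simple checklist-style fidelity log with one row per classroom per planned component; the column names and structure are illustrative assumptions, not a prescribed instrument.

```python
# Summarize fidelity of implementation from a hypothetical checklist log.
# Columns (classroom, component, delivered, modified) are assumed for
# illustration; real fidelity instruments will differ.
import pandas as pd

log = pd.DataFrame({
    "classroom": ["A", "A", "A", "B", "B", "B"],
    "component": ["lesson_1", "lesson_2", "lesson_3"] * 2,
    "delivered": [True, True, False, True, True, True],
    "modified":  [False, True, False, False, False, False],
})

fidelity = log.groupby("classroom").agg(
    adherence=("delivered", "mean"),    # share of planned components delivered
    modifications=("modified", "sum"),  # count of documented modifications
)
print(fidelity)
```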
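For the confirmatory impact analysis itself, here is a minimal sketch. It assumes a student-level dataset with a posttest reading score, a treatment indicator, and a baseline (pretest) covariate; the file and column names are hypothetical, and the model ignores the clustering of students within schools that a real evaluation would need to account for.

```python
# Baseline-adjusted impact estimate (ANCOVA-style OLS) for a two-condition
# comparison such as the SEL Academy example above. Column names are
# hypothetical; standard errors here do not account for clustering.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_outcomes.csv")  # assumed file with the columns below

model = smf.ols("reading_posttest ~ treatment + reading_pretest", data=df).fit()
print(model.summary())
print("Estimated impact (treatment coefficient):", model.params["treatment"])
```

Adjusting for the baseline measure typically improves precision; in a cluster-randomized design, the same model would instead be fit with cluster-robust standard errors or a multilevel specification.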
As with all high-quality research, an impact evaluation requires a plan that is followed in order to uncover both the intended and unintended consequences. Depending on where the intervention is in its development and use, your impact evaluation may report on a first step or perhaps on a new direction or market for its use. Be clear about the intention and direction of your evaluation.