

EVALUATION PLAN GUIDANCE
SOCIAL INNOVATION FUND
Specific Guidance: Impact Evaluation Design Selection
Describe the characteristics of the impact evaluation proposed. Impact evaluations provide statistical evidence
of how well an intervention works and what effect it has on participants or beneficiaries. The type of
information that should be provided in the impact evaluation section will differ according to the type of design
proposed (e.g., randomized control group, matched comparison group, single case with repeated
measurements, a pre-experimental evaluation using pre- and post-testing). However, all proposals should
include a clear description of the design, including its strengths and limitations. Where possible, the proposed
design should draw upon previous examples of the design type from the literature and explain why the
proposed evaluation design was selected over alternative designs.
The subsections below describe the different types of research designs most commonly used in impact
evaluations. The evaluation plan should describe in detail the selected research design, noting the specific
requirements from the checklist for the particular design type. This list covers the most common designs but
is not exhaustive; other research designs may be appropriate depending upon program specifics.
Randomized Between-Groups Design
The strongest evaluation design available for establishing causality is random assignment of program
participants (or groups of participants, program sites, etc.) to either a program participation group or a
control group that is not exposed to the program (often referred to as the treatment or intervention). If
individuals are randomly assigned to the program and control groups, the groups are statistically equivalent
on measured and unmeasured characteristics—including unmeasured characteristics that evaluators may not
have considered when designing the evaluation (Boruch, 1997). Random assignment allows evaluators to infer
that changes in the participants are due to the intervention, regardless of individual characteristics, whether
easily recorded (such as race or gender) or less easily recorded (such as motivation or beliefs).
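As a minimal illustration (not part of the SIF guidance itself), the unbiased group formation described above can be sketched in Python. The participant list, the equal split, and the function name are assumptions for this sketch only:

```python
import random

def randomly_assign(participants, seed=None):
    # Hypothetical helper: split participants into treatment and
    # control groups by shuffling, so assignment is unbiased and
    # unrelated to any measured or unmeasured characteristic.
    rng = random.Random(seed)  # seed only to make the sketch reproducible
    pool = list(participants)
    rng.shuffle(pool)
    midpoint = len(pool) // 2
    treatment = pool[:midpoint]   # exposed to the program
    control = pool[midpoint:]     # not exposed to the program
    return treatment, control

treatment, control = randomly_assign(range(100), seed=42)
```

Because every participant has the same chance of landing in either group, any one draw may still be imperfectly balanced; it is the assignment mechanism, not a particular sample, that guarantees statistical equivalence in expectation.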
This statistical equivalence comes from the treatment and control groups being formed in an unbiased way.
If the evaluation were, theoretically, replicated a large number of times, the groups would be perfectly
balanced in terms of individual characteristics. However, in any one sample, the groups may not be perfectly
balanced on all characteristics. Even so, when groups are formed before individuals start the program, they
are assumed to be statistically equivalent on measured and unmeasured characteristics prior to program
participation. If the groups remain the same throughout the evaluation (i.e., there is minimal attrition, or
those who drop out of both groups are similar in terms of key characteristics), then the difference in the
average outcome between the intervention and control groups can be attributed to the program without
reservations. However, issues (e.g., the number of randomized units being too small, participants dropping
out differentially from the intervention and control groups, or unequal participation rates in the treatment
and control groups) may create systematic differences in the groups during the study.

Additional Resources
See Boruch (1997) for information on conducting experimental design evaluations.
See Song and Herman (2010) for guidance on methods of random assignment for experimental designs.
See McMillan (2007) for a discussion of threats to internal validity in RCT designs (available here: 15.pdf).
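The impact estimate described above is simply the difference in average outcomes between the two groups. A minimal sketch, with hypothetical outcome scores chosen only for illustration:

```python
def mean(values):
    # Average of a list of outcome scores.
    return sum(values) / len(values)

def estimated_impact(treatment_outcomes, control_outcomes):
    # Under random assignment with minimal differential attrition,
    # this difference in group means estimates the program's effect.
    return mean(treatment_outcomes) - mean(control_outcomes)

# Hypothetical post-program outcome scores for each group
impact = estimated_impact([72, 68, 75, 70], [65, 60, 67, 64])  # 71.25 - 64.0 = 7.25
```

If attrition or unequal participation has introduced systematic differences between the groups, this raw difference can no longer be attributed to the program alone and statistical corrections may be needed.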
Specific Guidance: Randomized Between-Groups Design
Discuss steps that will be taken to ensure that the groups remain equivalent, or steps to impose statistical
corrections for non-equivalence.
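One concrete step for monitoring equivalence is to compare group means on baseline covariates at assignment and again after any attrition. A minimal sketch; the covariate values and the function name are hypothetical:

```python
def mean(values):
    # Average of a baseline covariate (e.g., age at enrollment).
    return sum(values) / len(values)

def balance_gap(treatment_covariate, control_covariate):
    # Absolute difference in group means on one baseline covariate.
    # A large gap, especially one that grows after dropout, flags
    # possible non-equivalence needing statistical correction.
    return abs(mean(treatment_covariate) - mean(control_covariate))

# Hypothetical baseline ages for each group
gap = balance_gap([24, 30, 27, 29], [25, 28, 26, 33])  # |27.5 - 28.0| = 0.5
```

In practice, evaluators typically examine several covariates this way (often with formal tests) and prespecify the corrections, such as covariate adjustment, they will apply if gaps emerge.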
nationalservice.gov/SIF