Specific Guidance: Non-Randomized Group Designs – Groups Formed by Matching
For a proposed quasi-experimental design where the comparison group is formed by matching, clearly
describe the proposed comparison group and each step in the matching procedure. List all measures to be
used in the matching, and provide details for these in the measures section, below. Provide information (e.g.
from existing literature) that justifies the inclusion of all measures in the matching procedure.
If possible, describe any matching characteristics used that were drawn from previous evaluations. Further,
this evaluation should include all variables that are typically included in matching procedures in similar
evaluations. To anticipate potential threats to internal validity (the certainty that the evaluation results accurately account for program impact on documented outcomes), be sure to discuss reasons why the comparison group might differ from the treatment group and the ways in which the proposed methods adjust for those differences.
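For illustration only, the sketch below shows one common way a matched comparison group can be constructed: nearest-neighbor matching, without replacement, on standardized baseline measures. The variable names, the two baseline measures, the Euclidean distance, and the without-replacement rule are all hypothetical choices made for the example; an actual evaluation would justify its own matching variables and procedure as described above.

```python
# Hypothetical sketch of nearest-neighbor matching on two standardized baseline
# measures; all names and numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Simulated baseline measures (e.g., pre-test score, household income in $1000s)
# for 50 program participants and a pool of 500 potential comparison individuals.
treated = rng.normal(loc=[60.0, 30.0], scale=[10.0, 8.0], size=(50, 2))
pool = rng.normal(loc=[65.0, 35.0], scale=[12.0, 10.0], size=(500, 2))

# Standardize with the pooled mean and standard deviation so each matching
# variable contributes comparably to the distance.
all_units = np.vstack([treated, pool])
mean, std = all_units.mean(axis=0), all_units.std(axis=0)
treated_z = (treated - mean) / std
pool_z = (pool - mean) / std

# For each participant, take the closest unused comparison unit (matching
# without replacement) by Euclidean distance in the standardized space.
available = np.ones(len(pool), dtype=bool)
matches = []
for unit in treated_z:
    dist = np.linalg.norm(pool_z - unit, axis=1)
    dist[~available] = np.inf
    j = int(np.argmin(dist))
    available[j] = False
    matches.append(j)

# A basic balance check: after matching, the group means on each baseline
# measure should be close; large gaps signal remaining differences to discuss.
print("Program group means:     ", treated.mean(axis=0))
print("Matched comparison means:", pool[matches].mean(axis=0))
```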
Between-Groups Designs - Groups Formed by a Cutoff Score
Another way of assigning individuals into groups is by examining a quantifiable indicator related to the key outcome collected prior to study participation, such as reading ability measured by standardized test scores (for a tutoring program) or income per household member (for a program addressing economic opportunity). If the indicator is reliably and validly measured, a well-defined cutoff score can be used to form the intervention and comparison groups, with those in more need (as measured on the indicator) assigned to the program group and those with less need assigned to the comparison group. The difference (or discontinuity) in the average outcome between those in the intervention group just below the cutoff score (e.g., those with lower test scores) and those in the comparison group just above the cutoff score (e.g., those with higher test scores) can be attributed to the program, albeit with reservations. This design is known more formally as a regression discontinuity design (RDD).

Additional Resources
For more information on RDD, see the Resources for Further Reading list in Appendix B: Resources.
A description of discontinuity designs is given by Imbens and Lemieux (2008), available here: t/classes/eco7377/papers/imbens%20lemieux.pdf
A clear outline of the method is also given here: hods.net/kb/quasird.php
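For illustration only, the short simulation below walks through the mechanics just described on made-up data: individuals below a hypothetical cutoff receive the program, straight lines are fit to the outcome within a bandwidth on each side of the cutoff, and the gap between the two fitted lines at the cutoff is read as the impact estimate. The cutoff of 50, the bandwidth of 10, the true effect of 5 points, and the outcome model are all invented for the example.

```python
# Hypothetical sketch of a regression discontinuity estimate on simulated data;
# every number here is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

n = 2000
pretest = rng.uniform(0, 100, n)   # assignment indicator measured before the program
cutoff = 50.0
served = pretest < cutoff          # those in more need (lower scores) receive the program

# Simulated outcome: a smooth relation to the pre-test plus a true program effect of 5 points.
outcome = 0.6 * pretest + 5.0 * served + rng.normal(0, 3, n)

# Straight-line fits within a bandwidth on each side of the cutoff.
bandwidth = 10.0
below = (pretest >= cutoff - bandwidth) & (pretest < cutoff)    # program side
above = (pretest >= cutoff) & (pretest <= cutoff + bandwidth)   # comparison side

fit_below = np.polyfit(pretest[below], outcome[below], 1)
fit_above = np.polyfit(pretest[above], outcome[above], 1)

# The impact estimate is the discontinuity: the gap between the two fitted
# lines evaluated exactly at the cutoff score.
impact = np.polyval(fit_below, cutoff) - np.polyval(fit_above, cutoff)
print(f"Estimated discontinuity at the cutoff: {impact:.2f} (true simulated effect: 5.00)")
```

In practice such an estimate is sensitive to the bandwidth and to the assumed functional form near the cutoff, which is one reason the guidance below asks applicants to delineate and justify the cutoff and the measures involved.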
Specific Guidance: Between-Groups Designs - Groups Formed by a Cutoff Score
If this type of design is proposed, clearly delineate and justify the cutoff point (thereby defining the program group versus the comparison group), including whether an estimated or exact cutoff score is to be used. The unit of measurement used for the cutoff score (e.g., an individual student's score) should correspond to the outcome measure and to the unit of assignment (e.g., individual students were measured and students were the unit of assignment). The scores around the cutoff should span a sufficient range to constitute meaningful differences between the two groups and protect internal validity. The indicator on which the cutoff is set should have a relatively linear relation to post-test measures, because a non-linear relation can erroneously be interpreted as a regression discontinuity; it is important not to mistake the continuation of a non-linear trend in the data for a break at the cutoff score.
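To make that caution concrete, the simulation below (illustration only, with invented numbers) generates data in which the program has no effect at all but the outcome has a smooth, curved relation to the assignment score. Fitting straight lines on each side of a hypothetical cutoff misreads the curvature as a jump at the cutoff, while fits that allow curvature do not.

```python
# Hypothetical sketch: a smooth non-linear trend mistaken for a discontinuity.
# There is no true program effect in these simulated data.
import numpy as np

rng = np.random.default_rng(2)

n = 4000
score = rng.uniform(0, 100, n)
cutoff = 30.0                                      # hypothetical cutoff; those below it are served
outcome = 0.01 * score**2 + rng.normal(0, 2, n)    # curved relation, zero true effect

below = score < cutoff
above = ~below

def gap_at_cutoff(degree):
    """Fit polynomials of the given degree on each side; return the gap at the cutoff."""
    fit_below = np.polyfit(score[below], outcome[below], degree)
    fit_above = np.polyfit(score[above], outcome[above], degree)
    return np.polyval(fit_below, cutoff) - np.polyval(fit_above, cutoff)

# Straight-line fits report an apparent "impact" well away from zero even though
# none exists; allowing curvature (degree 2) shrinks the gap back toward zero.
print(f"Apparent impact with straight-line fits: {gap_at_cutoff(1):+.2f}")
print(f"Apparent impact allowing curvature:      {gap_at_cutoff(2):+.2f}")
```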
Single-Group Designs - Single Subject (or Case Study Designs)
The single case design is recognized in the literature and by the federal Department of Education's What Works Clearinghouse as one that enables evaluators to attribute changes in an outcome to the intervention on the basis of a single case. Single case