EVALUATION PLAN GUIDANCE
SOCIAL INNOVATION FUND
Moderate Evidence: Evaluation designs that have strong internal validity but weaker external validity are anticipated to produce moderate levels of evidence. Moderate evidence comes from studies that can show that a program produces changes among participants (or groups or sites) but cannot demonstrate how well the program would work for groups other than those included in the study, or from studies that leave a very limited number of threats to internal validity unaddressed.
Different types of evaluation designs may produce moderate evidence, such as the following:
Randomized control group designs that include small numbers of respondents or draw participants from a group that is not representative of the target population as a whole;
Cut-off score matched group designs;
Interrupted time series designs drawn from representative samples of the target population; or
Single case study designs that involve frequent data collection across time.
Preliminary Evidence: Evaluation designs that address no, or only a few, threats to internal validity produce preliminary evidence. Preliminary evidence comes from studies that cannot demonstrate a causal relationship between program participation and measured changes in outcomes, although in some cases they may be able to show a strong statistical association between program participation and measured changes.
Different types of evaluation designs may produce preliminary evidence, such as:
Any study (even an otherwise well-designed randomized controlled trial [RCT] or quasi-experimental design [QED]) without sufficient sample size/statistical power;
Any study (even an RCT or QED) that fails to address threats to validity due to instrumentation or experimenter effects;
Interrupted time series designs with insufficient pre- and post-measurements;
Non-randomized two-group post-test or pre- and post-test comparisons without adequate matching or statistical controls; or
Pre- and post-test or post-test-only designs with a single group.
These designs are unable to provide strong or moderate evidence because they cannot sufficiently reduce other
possible explanations for measured changes.
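The "sufficient sample size/statistical power" criterion above can be made concrete. As a rough illustration (not part of the SIF guidance itself), the standard normal-approximation formula gives the per-group sample size needed for a two-sided, two-group comparison of means to detect a given standardized effect size at a chosen significance level and power:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means (normal approximation; effect_size is Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, ~1.96 at alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 at 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at the conventional alpha = 0.05 and
# 80% power requires roughly 63 participants per group; smaller
# effects require substantially larger groups.
print(n_per_group(0.5))  # 63
```

A study whose groups fall well below the size this kind of calculation implies cannot rule out chance as an explanation for observed differences, which is why it yields only preliminary evidence regardless of design quality.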
Design Types
This section outlines the two major categories of impact evaluation: (1) experimental or quasi-experimental evaluation designs, and (2) pre-experimental designs. Experimental or quasi-experimental evaluations can be either between-group impact studies or single-group impact studies. Between-group designs compare at least two groups of individuals who differ in terms of their level of program participation on one or more outcomes.
The people (or groups of people) who receive services are referred to as the treatment group, the group of
program participants, or the group that receives the intervention, while the group that does not participate is
referred to as the control group (when people are randomly assigned to that group) or the comparison group
(when people are assigned to that group through non-random matching). Single subject/group designs collect
data on only one group (or respondent) for multiple time points pre- and post-intervention, or at different
points during the program intervention. All of these designs are explained in greater detail below.
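To illustrate the between-group logic described above (this example is not part of the guidance), the simplest impact estimate is the difference in mean outcomes between the treatment and comparison groups, paired with a test statistic indicating whether that difference exceeds what chance would produce. A minimal sketch using Welch's t statistic, with a normal approximation for the p-value that is reasonable only for larger samples:

```python
from statistics import mean, stdev, NormalDist

def between_group_impact(treatment, comparison):
    """Difference in mean outcomes between two groups, Welch's t
    statistic, and a two-sided p-value from a normal approximation
    (hypothetical helper; large-sample use only)."""
    n_t, n_c = len(treatment), len(comparison)
    var_t, var_c = stdev(treatment) ** 2, stdev(comparison) ** 2
    diff = mean(treatment) - mean(comparison)
    se = (var_t / n_t + var_c / n_c) ** 0.5   # standard error of the difference
    t = diff / se
    p = 2 * (1 - NormalDist().cdf(abs(t)))    # two-sided p-value
    return diff, t, p

# Hypothetical outcome scores for a treatment and a comparison group:
impact, t_stat, p_value = between_group_impact([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
print(round(impact, 2), round(t_stat, 2), round(p_value, 3))  # 2.0 2.0 0.046
```

Whether such an estimate supports a causal claim depends on how the groups were formed: random assignment (control group) addresses most threats to internal validity, while non-random matching (comparison group) leaves some unaddressed, which is what separates strong from moderate evidence.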
nationalservice.gov/SIF
