EVALUATION PLAN GUIDANCE
SOCIAL INNOVATION FUND
Correlation: The degree to which two characteristics are associated, measured by a correlation coefficient that ranges from -1 to +1. However, the observation that two characteristics are correlated does not imply that one caused the other (correlation does not equal causation).
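As a brief illustration of this definition, the sketch below computes a Pearson correlation coefficient from invented data; the variable names and values are hypothetical and not drawn from the guidance itself.

```python
import statistics

# Hypothetical data: hours of tutoring and test scores for five students.
x = [1, 2, 3, 4, 5]        # hours of tutoring
y = [52, 55, 61, 64, 70]   # test scores

mean_x, mean_y = statistics.mean(x), statistics.mean(y)

# Covariance numerator and the two sum-of-squares terms.
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
ss_x = sum((a - mean_x) ** 2 for a in x)
ss_y = sum((b - mean_y) ** 2 for b in y)

# Pearson correlation coefficient: always between -1 and +1.
r = cov / (ss_x ** 0.5 * ss_y ** 0.5)
print(round(r, 3))
```

A value near +1 here would indicate a strong positive association, but, per the definition above, it would not by itself show that tutoring caused the higher scores.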
Counterfactual: A term used in evaluation to denote a hypothetical condition representing what would have
happened to the intervention group if it had not received the intervention. The counterfactual cannot be
directly observed, so it is usually approximated by observing some group that is “nearly identical,” but did not
receive the intervention. In random assignment studies, the counterfactual is the control group formed by random assignment, which is, on average, equivalent to the intervention group in every way except for receiving the intervention.
Covariates: A statistical term for characteristics of study participants that are typically correlated with the outcome. These characteristics could explain differences observed between program participants and the control or comparison group. As such, these variables are often used as statistical controls in models that estimate the impact of the intervention on study participants’ outcomes.
Effect Size: A way of statistically describing how much a program affects outcomes of interest. Effect size is the difference between the average outcomes of the intervention and control groups, expressed in standard deviation units; it is calculated by dividing that difference by the standard deviation.
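As an illustration of this calculation, the sketch below computes a standardized effect size from invented group outcomes, dividing the difference in means by a pooled standard deviation (one common choice of standardizing unit); the data are hypothetical.

```python
import statistics

# Hypothetical outcome data for equal-sized groups; not from the guidance itself.
intervention = [70, 74, 78, 82, 86]  # mean outcome 78
control = [60, 64, 68, 72, 76]       # mean outcome 68

diff = statistics.mean(intervention) - statistics.mean(control)

# Pooled standard deviation for two equal-sized groups.
pooled_sd = ((statistics.stdev(intervention) ** 2
              + statistics.stdev(control) ** 2) / 2) ** 0.5

# Effect size: the mean difference expressed in standard deviation units.
effect_size = diff / pooled_sd
print(round(effect_size, 2))
```

Here the two groups differ by 10 points with a pooled standard deviation of about 6.3, giving an effect size of roughly 1.58 standard deviations.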
Evidence Base: The body of research and evaluation studies that support a program or components of a
program’s intervention.
Experimental Design: A research design in which the effects of a program, intervention, or treatment are
examined by comparing individuals who receive it with a comparable group who do not. In this type of
research, individuals are randomly assigned to the two groups to try to ensure that, prior to taking part in the
program, each group is statistically similar in both observable (e.g., race, gender, or years of education) and unobservable ways (e.g., levels of motivation, belief systems, or disposition toward program participation). Experimental designs differ from quasi-experimental designs in how individuals are assigned to program participation: quasi-experimental designs use non-random assignment, usually based on observable characteristics, so evaluators cannot be confident that the two groups are similar on both observable and unobservable characteristics.
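The random assignment step that distinguishes an experimental design can be sketched as follows; the participant names, seed, and group sizes are hypothetical and purely illustrative.

```python
import random

# Hypothetical recruited sample of ten participants.
participants = [f"participant_{i}" for i in range(1, 11)]

# A fixed seed makes this illustration reproducible; real studies would
# document their randomization procedure rather than reuse a toy seed.
rng = random.Random(2024)
shuffled = list(participants)
rng.shuffle(shuffled)

# Split the shuffled list in half: first half intervention, second half control.
half = len(shuffled) // 2
intervention_group = shuffled[:half]
control_group = shuffled[half:]
print(len(intervention_group), len(control_group))  # 5 5
```

Because assignment depends only on the shuffle, not on any participant characteristic, the two groups are expected to be similar on both observable and unobservable characteristics, on average.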
Exploratory Research Question: In contrast to a confirmatory research question, an exploratory research
question is posed and then addressed to inform future research rather than to inform policy. This question
type includes questions that examine, for example, which specific subgroups respond best to an intervention;
such questions are less likely to be answered with strong statistical certainty, but they may be helpful for program implementation and future evaluation. A question that arises from analyzing the data, rather than being posed before data collection as a question about the program’s fundamental impact, is categorized as exploratory.
External Validity: The extent to which evaluation results are statistically applicable to groups other than
those in the research. More technically, it refers to how well the results obtained from analyzing a sample of
study participants from a population can be generalized to that population. The strongest basis for applying