III. Research Questions and Contribution of the Study
Research Questions
The evaluation plan should include both the questions that the evaluation hopes to answer and a description
of how answering these questions will contribute to a greater understanding of programs, outcomes, and/or
policies. Research questions are inquiries that are measurable, and thus empirically answerable. SIF evaluation
designs frequently include both impact and implementation components.
Impact evaluations pose questions about the outcomes of the program for beneficiaries/participants and, more generally, about the impact of program services or participation relative to a comparison group, a control group, or the participants' own pre-participation baseline.
Implementation evaluations pose questions related to the process of developing, running, or expanding a
program, and potentially also about participants’ experiences of program participation. Questions should
focus on the process by which the program operates, rather than the outcomes among beneficiaries or impacts
on them in relation to non-participants.
For example, for a program that delivers job training services, an implementation question might be:
Do participants receive all components of the job training program upon completion of the program?
In contrast, a comparable impact question might be:
Do participants have more job-related skills upon completion of the program than members of the control group?
Most SIF-funded evaluations contain both types of research questions.
Specific Guidance: Confirmatory and Exploratory Impact Questions
The evaluation plan should describe the impact evaluation questions and note whether they are confirmatory or exploratory in nature. Exploratory questions include those that are posed during the design phase and implied by, or stated in, the logic model, but cannot be answered with adequate statistical power. If you have multiple confirmatory questions related to the same outcome, consider prioritizing them to avoid difficulties with multiple statistical comparisons during the analysis phase.
Additional Resources
For more information on addressing questions about multiple comparisons, see Peter Schochet's (2008a) "Technical Methods Report: Guidelines for Multiple Testing in Impact Evaluations" (NCEE 2008-4018).
For example, a confirmatory question might examine changes in knowledge levels among program participants from the beginning of program participation to the end of the program. An additional exploratory question about the effectiveness of exposure to all program components may be posed if it is unclear whether all participants follow the same path through the program.
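To make the multiple-comparisons concern above concrete, the short sketch below is one way to illustrate it; the outcome names and p-values are hypothetical, and the Bonferroni adjustment shown is only one of the approaches discussed in Schochet (2008a). It shows how testing several confirmatory outcomes at the conventional 0.05 level inflates the chance of at least one false positive, and how dividing the significance level by the number of tests restores the familywise error rate.

```python
# Illustrative sketch: familywise error inflation and a simple Bonferroni
# adjustment. The outcome names and p-values below are hypothetical examples,
# not drawn from any SIF evaluation.

alpha = 0.05

# Hypothetical p-values for five confirmatory outcomes in the same domain
p_values = {
    "job_knowledge": 0.012,
    "certification": 0.030,
    "employment_rate": 0.049,
    "hourly_wage": 0.210,
    "job_retention": 0.004,
}

k = len(p_values)

# With k independent tests each run at alpha, the chance of at least one
# false positive is 1 - (1 - alpha)^k, well above the nominal 5 percent.
familywise_error = 1 - (1 - alpha) ** k
print(f"Chance of at least one false positive across {k} tests: {familywise_error:.2f}")

# Bonferroni adjustment: compare each p-value to alpha / k instead of alpha.
adjusted_alpha = alpha / k
for outcome, p in sorted(p_values.items(), key=lambda item: item[1]):
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{outcome}: p = {p:.3f} -> {verdict} at adjusted alpha = {adjusted_alpha:.3f}")
```

Prioritizing a single confirmatory question per outcome, as recommended above, keeps the adjusted significance threshold from becoming overly strict.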
Questions should be based on outcomes specified in the logic model. Developing the logic model and the
evaluation questions in concert can help ensure that the impact evaluation measures the right outcomes and