To do this, describe exactly how participants in the sample will be selected from the entire possible population of participants. Participants might be selected through one of several techniques: simple random sampling, stratified random sampling, convenience sampling, or some other sampling procedure. The sampling plan, which describes the technique and specifies the size and composition of the sample, should reflect the budget and timeline of the project.
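For illustration only, the following Python sketch shows how a simple random sample and a stratified random sample might be drawn from a roster of eligible participants. The roster, sample size, and site strata are hypothetical assumptions, not requirements of this guidance.

import random

random.seed(42)  # fix the seed so the draw is reproducible and documentable

# Hypothetical population: 500 eligible participants spread across four sites.
population = [{"id": i, "site": f"Site {i % 4}"} for i in range(500)]

# Simple random sampling: every participant has an equal chance of selection.
simple_sample = random.sample(population, k=100)

# Stratified random sampling: draw proportionally within each stratum (site)
# so the sample's composition mirrors the population's.
strata = {}
for person in population:
    strata.setdefault(person["site"], []).append(person)

stratified_sample = []
for members in strata.values():
    n_from_stratum = round(100 * len(members) / len(population))
    stratified_sample.extend(random.sample(members, k=n_from_stratum))

print(len(simple_sample), len(stratified_sample))  # 100 100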
Power Analysis and Minimum Detectable Effect Size
The evaluation plan should include a power analysis and minimum detectable effect size (MDES) calculations for the proposed research design and estimated sample sizes. These calculations should show whether the sample used in the evaluation will be large enough for the impact analysis to provide meaningful results. A power analysis and MDES calculation should be presented for each analysis in the evaluation plan, including all sub-group analyses.
Power analyses can be conducted using the free programs R and G*Power, as well as SPSS, SAS, and Stata. For designs that have multiple levels of analysis, or that assign treatment and control at the group rather than the individual level, the free Optimal Design Software (based/optimal_design_software) can be used to estimate power and MDES.
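As a hedged illustration of one such calculation, the Python sketch below uses the statsmodels package to solve for the MDES of a simple two-group, individual-level design. The group sizes, significance level, and power target are illustrative assumptions; evaluators should substitute the values from their own design, and clustered designs should instead use software such as Optimal Design.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the smallest effect size (in standard deviation units) detectable
# with 80% power at a 5% two-sided significance level, given 150 treatment
# and 150 control participants (illustrative numbers only).
mdes = analysis.solve_power(
    effect_size=None,        # the unknown being solved for
    nobs1=150,               # treatment-group size
    ratio=1.0,               # control group the same size as the treatment group
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Minimum detectable effect size: {mdes:.3f} standard deviations")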
Sample Retention
To ensure that the evaluation is as strong as possible, it is important to try to maximize the number of participants (or groups or sites) who take part in the study. This means not only recruiting participants, but also making sure that as many people as possible take part in data collection efforts during the study and in any follow-up period. This is true for program participants as well as for any control or comparison group members.
Specific Guidance: Sample Retention
The evaluation plan should describe how the study will keep track of participants and their data (ideally planned when the sample specifications and data collection strategy are developed). This plan should include mechanisms for monitoring, tracking, and troubleshooting issues related to keeping participants in the study.
These practices might include systematic checks of data collection activities to ensure that data are being collected on a regular basis from all participants, which helps identify anyone who has not been contacted in a given period of time. They might also include strategies for following up with participants who are not responding to, or participating in, program or data collection activities. It is important to remember that individuals who leave a program before completing all of its components still need to be tracked as part of the data collection effort. Ideally, all participants and control or comparison group members are tracked and provide data for the entire study period, regardless of program participation, location, or ease of access.
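The following Python sketch illustrates one possible form such a monitoring check might take: flagging any participant, in either study arm, whose last successful contact falls outside a follow-up window. The contact log, field names, and 30-day threshold are hypothetical assumptions.

from datetime import date, timedelta

CONTACT_WINDOW = timedelta(days=30)  # assumed follow-up window
today = date(2015, 6, 1)             # fixed date so the example is reproducible

# Hypothetical contact log: participant ID, study arm, last successful contact.
contact_log = [
    {"id": "P001", "arm": "treatment", "last_contact": date(2015, 5, 20)},
    {"id": "P002", "arm": "control",   "last_contact": date(2015, 3, 2)},
    {"id": "P003", "arm": "treatment", "last_contact": date(2015, 4, 15)},
]

# Flag anyone overdue for follow-up, regardless of study arm or whether they
# are still participating in program activities.
overdue = [p for p in contact_log if today - p["last_contact"] > CONTACT_WINDOW]
for p in overdue:
    print(f"{p['id']} ({p['arm']}): last contacted {p['last_contact']}; follow up")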
Measures
Ensuring that the measures selected are reliable, valid, and appropriate to use for the study is a key way to
reduce threats to internal validity caused by the study itself. Selection of poorly validated or unreliable
measures can lead, for example, to an effective program showing no results.
Reliability refers to the consistency of a particular measure. That is, if a measure is used over and over again on the same group of people, it should yield the same results. Validity refers to the extent to which a measure adequately reflects the concept under consideration; if a measure matches the concept in question, it is a valid measure of that concept.
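As a simple illustration of this test-retest notion of reliability, the Python sketch below correlates two administrations of the same hypothetical measure on the same group of people; the scores are invented for demonstration purposes.

import numpy as np

# Scores from the same eight people on two administrations of one measure.
time1 = np.array([12.0, 15.0, 9.0, 20.0, 14.0, 11.0, 18.0, 16.0])
time2 = np.array([13.0, 14.0, 10.0, 19.0, 15.0, 11.0, 17.0, 16.0])

# A high Pearson correlation between administrations suggests the measure
# yields consistent (reliable) results.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")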