EVALUATION PLAN GUIDANCE
SOCIAL INNOVATION FUND
Specific Guidance: Measure Validity, Reliability, and History of Use
Describe validity, reliability, and history of use for all measures used in data collection activities. At a minimum, each survey, test, or interview measure used should have evidence of the following:

- Reliability (e.g., test-retest reliability for measures with individual items, Cronbach's alpha for tests/scaled surveys);
- Face validity (e.g., pilot testing with appropriate populations, review by service providers); and,
- Content validity (e.g., review by content experts, systematic alignment with key areas to be measured).

Additional Resources
See Pett, Lackey, and Sullivan (2003) for a discussion of exploratory factor analysis. See Brown (2006) for more information on confirmatory factor analysis.
In addition, surveys or tests that use scales or generate scores should provide details of exploratory factor
analysis (Pett, Lackey, & Sullivan, 2003), confirmatory factor analysis (Brown, 2006), and validation against
outcomes, as well as history of use, if available.
If the reliability and validity of measures have yet to be determined, present an analysis plan for establishing them. If non-validated, self-developed measures of key outcomes are to be used, justify their use over any pre-existing, validated measures.
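To make the reliability evidence above concrete, the following is a minimal sketch (not part of the SIF guidance itself) of computing Cronbach's alpha for a scaled survey, using only the Python standard library. The function name and the toy data are illustrative assumptions, not a prescribed tool.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k survey items answered by the same respondents.

    items: a list of k lists, each holding one item's scores across
           the same n respondents.
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    """
    k = len(items)
    sum_item_vars = sum(pvariance(col) for col in items)
    # Each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum_item_vars / pvariance(totals))

# Toy data: two 4-point items answered by four respondents.
print(cronbach_alpha([[1, 2, 3, 4], [2, 1, 4, 3]]))  # 0.75
```

Values closer to 1 indicate greater internal consistency; when items track each other perfectly, alpha reaches 1.0.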
Data Collection Activities
A systematic plan for collecting data for the evaluation must be in place to ensure strong results. The measures
indicated in the evaluation plan dictate the data collected. Data for participants and for control or comparison
group members may come from a variety of sources, not all of which may be part of the program.
Data collection ideally starts prior to program participation to best capture participants’ baseline status. In the
same vein, identical data from both participants and control or comparison groups should be collected
whenever possible. Data collection often continues even after participants are no longer part of the program. It
is important to map out the timing of data collection to ensure the best possible evaluation.
Specific Guidance: Data Collection Activities
Identify data sources for all measures, as well as the type of data to be collected along with how they will be
collected. Descriptions should cover data for program participants as well as control or comparison group
members.
Establish a baseline status for program participants and control or comparison group members. Baseline data
are important for assessing change in participants over time. Describe when the collection of baseline data will
occur. Also, discuss whether the baseline measures being used are comparable for both program participants
and control or comparison group members.
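One common way to assess whether baseline measures are comparable across groups is a standardized mean difference. The sketch below is illustrative only, assuming numeric baseline scores for each group; the function name and sample data are hypothetical, and the SIF guidance does not prescribe a specific statistic or threshold.

```python
from math import sqrt
from statistics import mean, pvariance

def standardized_mean_difference(group_a, group_b):
    """Difference in group means, scaled by the pooled standard deviation.

    Magnitudes near zero suggest the two groups start from similar
    baselines; larger magnitudes flag baseline differences that the
    analysis plan should address.
    """
    pooled_sd = sqrt((pvariance(group_a) + pvariance(group_b)) / 2)
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Toy baseline scores: program participants vs. a comparison group.
print(standardized_mean_difference([2, 3, 4], [1, 2, 3]))
```

Reporting a statistic like this for each baseline measure documents comparability between program participants and control or comparison group members.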
Data may be collected by the evaluation team, by project staff, or by some other party. Indicate who will collect
all data used in the evaluation. Note the role of project staff members in relation to the data collection process.
If administrative data will be used, specify the source(s) and availability of the data, and the evaluation team’s
experience working with that type of information.
Describe the way in which data are to be collected. Specify if data will be collected through administrative
records, program systems, or through instruments specifically created for the purpose. For example, will
nationalservice.gov/SIF