Survey Methodology #2006-02 – Use of Dependent Interviewing Procedures to Improve Data Quality in the Measurement of Change – U.S. Census Bureau

their impact on transitions in income amounts. We note that our analysis is limited to a subset of
all characteristics included in SIPP that are captured at a monthly level, chosen with some eye
toward breadth and importance, but primarily because their results could be analyzed reasonably
easily with preliminary, internal data files. Our use of preliminary files also means that the
results presented here may differ from those obtained from any future analyses using final, edited
data. We use the best method available to us – a comparison of the 2004 seam bias results with
those of the immediately preceding 2001 panel – recognizing that drawing conclusions from a
“natural experiment,” as opposed to a designed one, requires additional strong assumptions.
Although we acknowledge these limitations, we have no evidence to suggest that any of the
potential confounds actually did influence our findings. From the strength and consistency of our
findings we also draw confidence that any differences between our results and those of later
investigations looking at other characteristics, or using other data files, will be at the margins, and
are unlikely to affect overall conclusions.
4.1. Seam bias analysis for program participation and other “spell” characteristics
Our analysis uses data from the first four interview waves of the 2001 and 2004 SIPP panels.
Each panel started its wave 1 interviewing in February of the panel year, and thus the first four
waves of the two panels cover the exact same calendar months, three years apart. We carried out
three separate seam bias investigations, one for each successive pair of waves – waves 1-2, 2-3,
and 3-4 – in effect treating each pair of waves as if it provided an independent set of eight
months’ worth of data, with one seam in the middle. We chose this approach for its simplicity,
for the ease it offered with regard to linking sample cases across waves, and also to avoid as much
as possible the loss of otherwise useful analysis cases due to their absence from only one or two
of the four waves (e.g., attritors and in-movers). In each analysis we exclude cases for which an
interview was obtained in only one of the two waves, and within each characteristic we further
exclude cases for which data are missing for either of the seam months.
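The case-exclusion rules above can be expressed as a short filter. The sketch below is purely illustrative (it is not the authors' code, and the field names are hypothetical): a case survives a wave-pair analysis only if it was interviewed in both waves of the pair and has non-missing data for both seam months.

```python
# Illustrative sketch only -- not the SIPP production code.
# Each "case" is a dict with hypothetical fields:
#   "interviewed": {wave_number: bool}
#   "data":        {month_number: value or None for missing}

def eligible_cases(cases, wave_a, wave_b, seam_months):
    """Keep cases interviewed in BOTH waves of the pair, then drop
    any case missing data for either of the two seam months."""
    kept = []
    for case in cases:
        # Exclude cases with an interview in only one of the two waves.
        if not (case["interviewed"].get(wave_a)
                and case["interviewed"].get(wave_b)):
            continue
        # Exclude cases missing data for either seam month.
        if any(case["data"].get(m) is None for m in seam_months):
            continue
        kept.append(case)
    return kept
```

Applying the filter separately to each of the three wave-pairs, rather than requiring presence in all four waves, is what preserves attritors and in-movers who are absent from only one or two waves.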
We summarize the results of these analyses in Table 1, which presents the simple average of the
three separate estimates we computed for each statistic. Use of the simple average of the three
estimates gives them all equal weight, ignoring the fact that the number of cases from which they
are derived generally declines slightly across the three tests, due primarily to attrition. We opted
for this approach primarily to avoid giving extra weight to the results for waves 1 and 2, the
only wave-pair that includes a completely non-dependent interview (wave 1) in the entire panel.
Another consideration was to accord equal weight to all periods of
the calendar year, so as not to over- or under-weight any seasonal effects just because of SIPP’s
arbitrary interview schedule. Although we have done no formal sensitivity analysis, we doubt
that the decision to treat the three wave-pairs in this manner has any important impact on the
results and conclusions presented here, primarily because the differences among the three for any
of the characteristics examined appear to be minimal.
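The averaging rule described above is simple arithmetic: each Table 1 statistic is the unweighted mean of the three wave-pair estimates, ignoring their differing case counts. A minimal sketch, with made-up numbers (the estimates below are not from Table 1):

```python
# Illustrative arithmetic only; the three estimates are hypothetical.

def simple_average(estimates):
    """Unweighted mean: each wave-pair estimate gets equal weight,
    regardless of how many cases it was derived from."""
    return sum(estimates) / len(estimates)

# Hypothetical seam-transition percentages for wave-pairs 1-2, 2-3, 3-4:
pair_estimates = [62.0, 58.0, 60.0]
table1_value = simple_average(pair_estimates)  # 60.0
```

A case-weighted mean would instead tilt each statistic toward the earlier, larger wave-pairs, which is exactly what the equal-weight choice avoids.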
Table 1 is subdivided into two parts. Part 1 summarizes the results for characteristics that were
captured with different procedures in the two panels – i.e., where the 2004 questionnaire used the
new DI procedures. Part 1A presents the results for “need-based” public-assistance-type
