Survey Methodology #2006-02 - Use of Dependent Interviewing Procedures to Improve Data Quality in the Measurement of Change - U.S. Census Bureau

2. Seam Bias
Seam bias as a measurement problem in longitudinal surveys began to draw the attention of
survey methodologists in the early 1980s. Czajka (1983, p. 93), for example, describing data from
the survey that was the precursor to the U.S. Census Bureau’s Survey of Income and Program
Participation (SIPP), notes “a pronounced
tendency for reported program turnover to occur between waves more often than within waves.”
Moore and Kasprzyk (1984) provide a quantitative assessment of the extent and magnitude of the
seam bias effect in the same dataset. Soon the phenomenon was identified in the SIPP itself
(Burkhead and Coder, 1985; Coder et al., 1987), and in other ongoing longitudinal survey
programs such as the University of Michigan’s Panel Study of Income Dynamics (Hill, 1987),
and the U.S. Census Bureau’s quasi-longitudinal Current Population Survey (Cantor and Levin,
1991; Polivka and Rothgeb, 1993). In its subsequent panels, SIPP has continued to provide much
evidence of seam bias (Hill, 1994; Kalton and Miller, 1991; Martini, 1989; Ryscavage, 1993;
Weidman, 1986; Young, 1989 – see Jabine, King, and Petroni (1990), and Kalton (1998) for
summaries of SIPP seam bias research), so much so that Weinberg (2002) lists it as a key
unresolved research issue for the survey. Michaud and colleagues have produced numerous
papers documenting seam bias and its attempted amelioration in Statistics Canada’s longitudinal
surveys (e.g.: Brown, Hale, and Michaud, 1998; Cotton and Giles, 1998; Dibbs et al., 1995;
Grondin and Michaud, 1994; Hale and Michaud, 1995; Michaud et al., 1995; Murray et al.,
1991); and in recent years researchers on the other side of the Atlantic have demonstrated that
European longitudinal surveys are by no means immune (Holmberg, 2004; Hoogendoorn, 2004;
Lynn et al., 2004). LeMaitre (1992) provides an excellent general review; his summary (p. 5) of
the first decade of seam bias research results still seems apt: “seam effects would appear to be a
general problem with current longitudinal surveys, regardless of differences in design and the
length of the reference period.” Marquis and Moore (1990) confirm that seam bias severely
compromises the statistical utility of estimates of change.
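The defining symptom of seam bias — reported change piling up at the boundary between interview waves rather than within them — can be made concrete with a small, hypothetical calculation. The sketch below assumes a SIPP-style design in which each wave collects four monthly status reports; the data, function name, and seam position are illustrative, not drawn from the paper.

```python
# Illustrative sketch of how a seam effect shows up in panel data.
# Each record holds 8 monthly program-participation indicators:
# months 1-4 reported in wave 1, months 5-8 in wave 2, so the
# "seam" is the boundary between months 4 and 5 (index 3).

def transition_counts(records):
    """Count status changes at each of the 7 month-to-month boundaries."""
    counts = [0] * 7
    for rec in records:
        for i in range(7):
            if rec[i] != rec[i + 1]:
                counts[i] += 1
    return counts

# Hypothetical reports exhibiting classic "constant wave responding":
# status is flat within each wave, with all change piled up at the seam.
reports = [
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],  # no change at all
    [0, 0, 0, 0, 1, 1, 1, 1],
]

counts = transition_counts(reports)
seam = counts[3]             # transitions at the wave boundary
within = sum(counts) - seam  # transitions at all within-wave boundaries
print(seam, within)          # prints "3 0": change concentrated at the seam
```

In real panel data the contrast is less extreme, but the same tabulation — transitions at the seam boundary versus transitions at within-wave boundaries — is how the excess change at the seam is typically exhibited.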
Since the very beginning, seam effects researchers have considered it almost axiomatic that the
amount of change measured between interview waves is overstated. Collins (1975), for example,
speculates that between two-thirds and three-quarters of the observed change in various
employment statistics (as measured in a monthly labor force survey) was spurious; Polivka and
Rothgeb (1993) estimate a similar level of bias. Michaud et al. (1995, p. 13) describe apparent
change in income across successive survey waves as “grossly inflated;” similarly, Sala and Lynn
(2004, p. 8) label the amount of change they observe from one survey wave to the next in various
employment characteristics as “implausibly high;” see also Cantor and Levin (1991), Hill (1994),
Hoogendoorn (2004), and Stanley and Safer (1997). Other researchers have focused on the other
side of the equation – the understatement of change within an interview wave – sometimes called
“constant wave responding” (Martini, 1989; Rips, Conrad, and Fricker, 2003; Young, 1989).
Moore and Marquis (1989), using record check methods, confirm that both factors – too little
change within the reference period of a single interview, and too much at the seam – operate in
concert to produce the seam effect. Kalton and Miller (1991) offer supporting evidence for that
assessment, as does LeMaitre (1992). Rips, Conrad, and Fricker (2003) tie these phenomena to a
combination of memory processes – specifically, memory decay over time – and strategies that
respondents invoke to simplify a difficult reporting task. In support of these positions they cite