Basic Impact Assessment At Project Level Page 31
similar manner, and careful design of forms may help to some extent, but it is impossible to overcome this problem completely.
Qualitative Approach
The qualitative approach attempts to resolve some of the problems described above by
seeking to provide an interpretation of the processes involved in intervention and of the
impacts that have a high level of plausibility. It recognises that there are usually different,
and often conflicting, accounts of what has happened and what has been achieved by a
program. The validity of specific IAs adopting this approach has to be judged by the
reader on the basis of:
(i) the logical consistency of the arguments and materials presented;
(ii) the strength and quality of the evidence provided;
(iii) the degree of triangulation used to cross-check evidence;
(iv) the quality of the methodology; and
(v) the reputation of the researcher(s).
Although such work has been common in development studies for decades, it was only during the 1980s that its relevance for IA was recognised. This recognition has arisen
partly because of the potential contribution of qualitative approaches (especially in
understanding changes in social relations, the nature of program staff-beneficiary relations
and fungibility) and partly because of the widespread recognition that much IA survey work
was based on inaccurate information collected by questionnaire from biased samples.
Low-budget, low-rigour IAs claiming to adopt the scientific method were at best pseudo-science, but more often simply bad science, despite the sophisticated analytical tools applied to poor datasets.
However, IAs with their roots in the humanities have considerable difficulties with regard to
the attribution of cause and effect. Such studies cannot usually demonstrate the causal
link as they are not able to generate a ‘without program’ control group (although at times
some researchers neglect to mention this to the reader and simply assume causality).
Instead, causality is inferred from the information about the causal chain collected from
intended beneficiaries and key informants, and by comparisons with data from secondary
sources about changes in out-of-program areas. Problems also arise because the labels ‘rapid appraisal’, ‘mini-survey’ and ‘case study’ are not infrequently applied to work that has been done in an ad hoc manner and does not achieve a minimum professional standard in terms of informant selection and the rigour of data collection and analysis. Examples include: (i) basing data collection only in program areas that are performing well, and surveying only the best clients; and (ii) inferring that the data collected in one area apply to all clients without explaining this assumption.
While such studies cannot provide the degree of confidence in their conclusions that a fully resourced scientific-method approach can yield, in some cases their conclusions may be more valid than those of survey-based IA work that masquerades as science but has not collected data with scientific rigour.
Nevertheless, it is becoming increasingly common to combine ‘scientific’ and ‘humanities’ approaches so as to check the validity of information and provide added confidence in the findings. In the future, dealing with attribution through multi-method approaches seems the way forward.