=__Politics of Evidence__=

The politics of evidence refers to the debate over the ways in which qualitative evidence can advance the goals of scholarly research in light of scientifically based research models. To understand this debate, one must first understand [|different ways of knowing] (epistemology), the differing paradigms of quantitative and qualitative research, [|theoretical frameworks], qualitative research methods, what constitutes [|evidence in qualitative research], and scientifically based research models such as that promoted by the [|National Science Foundation].

Music for the Soul
A brief narrative to parallel the divisions that manifest the Politics of Evidence

=What is Evidence?=

What is the relationship between data, evidence, and knowledge in qualitative research? How is scientific rigor established and maintained in qualitative research? The political side of the issue, at this point, seems to relate not so much to how evidence is used but rather to what constitutes evidence. In their article, [|The Nature of "Evidence" In Qualitative Research Methods], Miller and Fredericks (2003) explore how qualitative research data becomes evidence. Their discussion parallels Yvonna Lincoln's (2002) article, [|On the Nature of Qualitative Evidence], in describing how data becomes evidence: data does not itself constitute evidence; rather, data becomes evidence when it can be used to support a "claim."

Much of what we call "data" is itself phenomenological -- that is, socially constructed and "there" only because we are attuned to looking for it. No "evidence" is evidence until we see it from some theoretical, paradigmatic, or metaphysical framework ... Thus what constitutes evidence, and therefore, what justifies it, is the result not only of what questions are posed, but of the framework within which they are posed.

Collins (1983) discusses the paradigms we use to interpret and measure "reality" (as cited in Lincoln, 2002, p. 2). That needs to be the crux of the conversation: shaping the paradigms by which we understand and speak about our "reality." What evidence is there for holding on to, or surrendering, the paradigms that help us make sense of the world?

A paradigm for knowledge (epistemology) coexists with the rules of seeing, measuring, and thinking that give it its physical objects and scientific laws. Different natural objects are precipitated by different paradigms; what may be true in one may be false in another (H. M. Collins, 1983). New paradigms will be necessary that can help bring these various "ways of knowing" together instead of pushing them apart (as cited in Lincoln, 2002).

Introducing different ways of knowing into research may provide the opportunity for new paradigms to emerge. In a series of discussions concerning graduate education and the nature of evidence, Pellegrino and Goldman (2001) propose that providing new researchers with a broad introduction to modes of inquiry may prove invaluable:

This commitment sets the stage for graduate training that is substantially broadened. Some suggest that a methods core include a balance of quantitative and qualitative methods. Although we are concerned that trading depth for breadth may lead to less methods expertise, the notion that qualitative and quantitative methods are complementary is an important improvement in how the educational community talks about differences. We propose that enabling a mutual respect and regard for different methods is the greatest priority. We suggest this can best be accomplished in coursework devoted to differing epistemologies and philosophies of inquiry [|(Pellegrino and Goldman, 2001).]

[|Laucken] (2002) uses two modes of thinking, the physical and the semantic, to illustrate a disturbing trend in the evaluation of the educational quality of German universities: quality criteria are structurally benefiting certain branches of science to the financial detriment of others. Laucken discusses three quality criteria: the extent of economic usability, the breadth of scientific impact, and a beneficial environment in the mass media. He centers his discussion of the extent of economic usability on the idea that university research should produce information or technology that offers some immediate financial benefit to the community at large. Universities, and colleges within them, that produce financially beneficial information and technology are rated higher on the quality-criteria scale. Because the physical mode of thinking produces devices, improvements to devices, and information associated with devices, it receives more attention than the semantic mode of thinking.

The discussion of the breadth of scientific impact centers on a comparison of which mode of thinking produces the greater quantity of research publications and citations. Again, the physical mode of thinking has the advantage. Research associated with devices produces more publications because of ongoing improvements to devices, processes, and software. The publications are usually short, and the information is so transient that Laucken states the average citation is usually less than two years old. The semantic mode of thinking produces longer articles and books that are the result of long periods of research, yielding fewer articles and fewer citations. This criterion does not consider the quality of the research but looks at volume.

The physical mode of thinking also benefits more from a beneficial environment in the mass media. Discoveries and developments in the fields of devices and technology are often spectacular and easily presented by the mass media, giving them a high public-interest factor. "The mass-media eligibility of discoveries is increasingly becoming a significant quality of research" ([|Laucken], 2002, p. 64).

Who nominates evidence for consideration? Ray and Mayan (2001) ask, "Who decides what counts as evidence?" The real questions "become 'Whose account is being privileged?' and 'What knowledge remains unprivileged, unworthy of being characterized as a relevant outcome?'" (as cited in Lincoln, 2002, p. 17). Lincoln illustrates this situation with her comments on the recent decisions by the Department of Education to limit the type of educational research made available on its website and the move to limit funding for ERIC (p. 19).

In their article, [|Critically appraising qualitative research for systematic reviews], Attree and Milton (2006) propose a critical appraisal process for research evidence, organized around the following questions:

* __Aims and objectives__: Is there a clear statement of the aims of the research?
* __Sampling__: Is the sampling strategy appropriate to address the research aims?
* __Data analysis and findings__: How was the analysis carried out? Are sufficient data presented to support the findings? How were data selected for inclusion in the report? Does the research privilege subjective meaning? What steps were taken to demonstrate the trustworthiness of findings? Have the limitations of the study and their impact on the findings been taken into account?
* How valuable or useful is the research? Does the research add to knowledge or increase the confidence with which existing knowledge is regarded? What are the implications?

Two Melodies
We began with a musical metaphor; we close with another. The Politics of Evidence group struggles with dueling banjos...

=References and Resources=

Attree, P. and Milton, B. (2006). Critically appraising qualitative research for systematic reviews. //Evidence and Policy//, //2//(1), p. 109-126. Retrieved on October 22, 2006 from http://docserver.ingentaconnect.com/deliver/connect/tpp/17442648/v2n1/s7.pdf?expires=1165469392&id=33915255&titleid=10934&accname=Guest+User&checksum=FA1DB4AB5688F48DECE2B1585128B6DA

Collins, H. M. (1983). An empirical relativist programme in the sociology of scientific knowledge. In K. D. Knorr-Cetina & M. Mulkay (Eds.), //Science observed: Perspectives on the social study of science// (pp. 85-113). London: Sage Publications.

Laucken, U. (2002). Quality criteria as instruments for political control of sciences. //Forum: Qualitative Social Research//, //3//(1), Article 6, January 2002. Retrieved December 3, 2006 from [|www.qualitative-research.net/fqs-texte/1-02/1-02laucken-e.htm]

Lincoln, Y. (2002). //On the nature of qualitative evidence//. A paper for the Annual Meeting of the Association for the Study of Higher Education, Sacramento, California, November 21-24, 2002. Retrieved December 3, 2006 from http://www.usc.edu/dept/chepa/pdf/ASHE_lincoln.pdf

Lincoln, Y. (2004, November/December). Scientific research in education/evidence matters: Randomized trials in education research. //Academe.// Retrieved October 22, 2006 from http://findarticles.com/p/articles/mi_qa3860/is_200411/ai_n9470394

Miller, S. & Fredericks, M. (2003, Winter). The nature of "evidence" in qualitative research methods. //International Journal of Qualitative Methods//, //2//(1). Retrieved October 22, 2006 from http://www.ualberta.ca/~iiqm/backissues/2_1/html/miller.html

National Research Council. (2002). //Scientific research in education//. R. J. Shavelson & L. Towne (Eds.), Committee on Scientific Principles for Education Research. Washington, DC: National Academies Press.

Pellegrino, J. and Goldman, S. (2001). Be careful what you wish for-you may get it: Education research in the spotlight. //Educational Researcher//, //31//(8), p. 15-17. Retrieved December 3, 2006 from http://www.aera.net/uploadedFiles/Journals_and_Publications/Journals/Educational_Researcher/3108/3108_CommentPellegrino.pdf