The current global political climate (in the West, anyway) demands evidence for what seems like everything. We need evidence for climate change, evidence for new medical treatments, evidence of the best ways to raise children, evidence of the best ways to teach, evidence that what you are doing works, and even evidence to find a person guilty!
Evidence in and of itself is not bad. Do I want to take a medicine that has not been proven safe? Not particularly (although there may be times when it is necessary). Do I want to remove the requirement of evidence from the justice system or the education system? Absolutely not. Evidence has its place in many facets of life. I become frustrated when I feel required to find evidence for evidence’s sake, or when there is no obvious point to it, now or in the envisioned future.
Denzin (2009, p. 142) puts it quite well:
“And evidence is never morally or ethically neutral. But, paraphrasing Morse, who quotes Larner (2004: 20), the politics and political economy of evidence is not a question of evidence or no evidence. It is rather a question of who has the power to control the definition of evidence, who defines the kinds of materials that count as evidence, who determines what methods best produce the best forms of evidence, whose criteria and standards are used to evaluate quality evidence?”
It is amazing how much comes back to power – how much frustration powerlessness (perceived or real) causes.
Denzin’s article is written in the context of the qualitative vs quantitative paradigm debate. Quantitative data (think quantity: numbers) is generally lauded as good evidence. Numbers are easy to manipulate. This data is usually gathered from an objectivist perspective (I am observing; there is one truth and I can capture that truth). Qualitative data, on the other hand, comes from an interpretivist paradigm (we see things from different perspectives, therefore there are multiple truths, and the researcher cannot be objective because their presence affects the reality).
In education, statistics count. But statistics don’t tell the full story. The shift in Johnny’s score from 4 to 10 looks fantastic – but why did that shift happen (come to think of it, I’ve never taught a Johnny)? Indeed, are these statistics questioned when they are in your favour? But what if Johnny went from a 10 to a 4? Why did that happen? And what about the fact that Johnny may have dropped in score on that one test, but has learned to manage himself and socialise positively? How can that be quantified, and does anybody in power really want to know?
The challenge with evidence is that it needs to be quality evidence: valid, reliable and trustworthy. Here is where the OTJ (overall teacher judgement of where a child sits on the curriculum) is important, and absolutely awesome to have. You can take a myriad of other factors into account when making that judgement. But it still comes back to a measurable statistic. Every year, NZ schools are required to review the targets they have set themselves in literacy and numeracy, justify why these have or have not been met, and explain what they have done to work towards them. This is called an analysis of variance. It is another great thing to have when you have to present statistical data.
Perhaps the issue with evidence is not always the evidence itself, but what is done with the evidence once it is out of your hands. When the National Standards were brought in, we were assured that these results would not be put in tables and compared with other schools in a public forum, as NCEA results (our high school assessments) are. Unsurprisingly, years down the track, you can now go to a website and look up primary school statistics to compare them with one another. But these are just percentages regarding performance in literacy and numeracy. What about the myriad of factors that affect these results? What about the results showing the tremendous gains in areas like socialising, self-control and oral language, which are not recorded in the statistical fields put up there? Where are the other stories of growth and development? Perhaps this is just not the place for that. Maybe that’s a purpose of the school website.
By the same token, evidence is looked at not just through different eyes, but for different purposes. I look at my students’ data to inform what I teach and how I teach it effectively to them. Others look at the data to compare cohorts and find trends across the school. Still others look at the (now much larger) pool of data for funding and evaluation purposes.
Quantitative data is far more efficient to analyse than qualitative data. Both have their strengths and weaknesses, yet one holds greater weight than the other.
Denzin (2017, p. 9) says: “Today, we are called to change the world and to change it in ways that resist injustice while celebrating freedom and full, inclusive, participatory democracy.”
We live in a numbers dominated world. We know after a decade of critique in the health, welfare, and educational fields that the evidence-based measures of quality and excellence rely on narrow models of objectivity and impact. Researcher reputation, citation, and impact scores are not acceptable indicators of quality. They should not be the criteria we use to judge our work, or one another. They should not be allowed to shape what we do. To these ends, we must create our own standards of evaluation, our own measures of quality, influence, excellence, and social justice impact. These are moral criteria. They celebrate resistance, experimentation, and empowerment. They honour sound, partisan work that offers knowledge-based critiques of social settings and institutions.
Ultimately, though, it comes back to this: what evidence are you gathering? Do you know why you are gathering it and what you will do with it? What evidence are you disregarding, and what evidence are you noticing?
So perhaps evidence is not the new four-letter word. But, like many four-letter words, it is not the word itself that causes the upset, but the intent and manner with which it is used.
Denzin, N. K. (2009). The elephant in the living room: Or extending the conversation about the politics of evidence. Qualitative Research, 9(2), 139–160. https://doi.org/10.1177/1468794108098034
Denzin, N. K. (2017). Critical qualitative inquiry. Qualitative Inquiry, 23(1), 8–16. https://doi.org/10.1177/1077800416681864