Reflecting on … critical use of student data.

“Data is like garbage. You’d better know what you’re going to do with it before you collect it.” – Mark Twain

As I prepared for the new term, I was struck by how much information I had about my pupils before I had even met many of them in person. Increasingly, the expectation is that students come to us much as food from the supermarket: pre-packaged, with catch-all information progressively simplified into little coloured boxes.

As with many things, one broadly assumes the benefits outweigh the costs, although I’m unsure whether any rigorous research has been conducted to test this.  However, I do wonder whether the data we are given come with enough health warnings to help teachers avoid some of the dangers they present.  I have been reflecting on a few of these as I prepare to meet my new classes:

  • The Pygmalion and Golem Effects: In the mid-1960s Rosenthal and Jacobson experimented with the impacts of labelling by convincing teachers that a (non-existent) test had identified certain students who were on the verge of an intellectual ‘growth spurt’ and whose progress would accelerate dramatically over the course of the next year. They found that the children labelled in this way profited from their teachers’ high expectations and made greater progress than those not so labelled. There are various provisos to this “Pygmalion Effect”, including that the impact was largely limited to younger children, and their paper explores a range of possible explanations. However, one implication is the possibility that low expectations can lead to poor outcomes for the student – the Golem Effect. As a teacher seeing my students for the first time through the filter of data, it is sobering to remind myself of the potential for my expectations to shape their outcomes.


  • Confirmation Bias: Along with the risk of self-fulfilling prophecies acting on the students, confirmation bias is a well-documented psychological tendency worth teachers familiarising themselves with. Essentially, it is where we interpret new evidence in light of our existing theories. This can happen in a variety of ways: we look for evidence that supports our beliefs, disregard contradictory evidence as anomalous, and give greater weight to information that fits comfortably with our current world-view. It is not always a conscious process and can be hard to avoid, even when we are aware of the phenomenon. The risk in education is that it can be easy to find confirmation of low expectations, even without realising we are looking for it. All teenagers miss the point sometimes, rush a piece of homework, submit an essay far below their best standard, fail to revise for the odd test, or just have bad days. If each instance of underperformance accumulates in the teacher’s mind as ‘objective’ evidence that the student “can’t” or “won’t” do it, that their targets are too high, their ‘ability’ too low, or their skill-set mismatched to the subject, it is hard to see how the teacher can avoid forming low expectations.


  • Reliability and validity issues: As a general rule, the data we use are derived from large data sets carefully tested to be as reliable as possible. Mass testing and standardised methodologies help to ensure that reliability is as high as possible. Self-fulfilling prophecy and confirmation bias may also help our systems to achieve this! However, we all have students who perform exceptionally well on exam day, confounding our expectations and our careful, reliable testing over the previous years of teaching. Unfortunately, we have probably all experienced the reverse, where underperformance strikes. The same can be true of any of the testing which generates the data we receive at the start of the relationship: perhaps that student doesn’t test well, or had an off-day, or was distracted during those tests. Additional data can help (CAT scores, reading ages, KS2 SATs) but only so far.


Then there is the question of whether the data actually measure what they are supposed to – and the related question of whether I am using them for that purpose. Many teachers can talk for hours about how the data we’re given are of questionable validity, so I won’t explore this too much here. However, coupled with the Golem Effect, confirmation bias and reliability issues for the individual student, it is worth at least noting. We are often very sensitive to targets or information we consider ‘too high’ or ‘inflated’. This can certainly happen, and the drive in all parts of the system towards high expectations may well make it more likely than low targets. But I’m not sure I’ve always spent enough time looking for data flaws that go the other way: cases where, for some reason or combination of reasons, my students have been given scores and targets that are too low, which I need to challenge and raise rather than find confirmation for, however inadvertently.

This is not to suggest that the data are pointless and can never be relied upon; big-picture and over time they can certainly be valuable. If my reflections sound as though I think the data aren’t useful or should be ignored, that is not the case. Without some very convincing research to show otherwise, I’m operating on the assumption that for most students and most teachers the availability of data is a positive thing that can be well used to support learning. If I think back to the start of my teaching career, when I had very little information about most students, I feel much better equipped going into a new class now. Having seen that I have a GCSE student with a reading age of 9, I have been able to think carefully about the range of Anglo-Saxon source material I make available in my first lesson.

But I would make a case for caution and critical evaluation of the data from the very beginning. Too often, if we allow our aspirations to be limited by the data in front of us and confirmation bias kicks in, we risk contributing to students’ challenges. Of course, many other factors come into play. However, my goal as a teacher is to be a positive factor and not one of the hurdles my students have to overcome, and this is enough to give me pause. The principle of falsifiability is a useful one here: to ask myself how I would know if these data were flawed, whether they hide strengths in the area they measure, such as literacy levels, or in other useful assets that aren’t directly measured, such as motivation, emotional maturity or resilience. I find myself asking the following questions as I reflect on the data I’m looking at:

  • Am I reading too much into these data, and forming judgements that may limit my expectations too far?
  • If any initial expectations based on the data are misguided, how will I identify that this is the case and not fall into the trap of confirmation bias? What should I be looking for in this student’s contributions, work, ethos and attitude to learning that challenges the previous data and suggests the student may be capable of more?
  • Were these data to be fundamentally misleading for this student, understating their full potential, how would I know?
  • Is the AfL, teaching and questioning in my classroom giving all students opportunities to excel – to overcome the low expectations they may have of themselves or others may have of them?

Rosenthal, R., & Jacobson, L. (1968). Pygmalion in the classroom. The Urban Review, 3(1), 16-20.
