This week’s colleague-author is Tina Herringshaw whose research has focused on Growth Mindsets.  Here she reflects on empty praise, meaningful responses and the power of videoing yourself as a teacher.

As a reflective practitioner I have been part of numerous research groups at JMS looking at a wide variety of educational research; however, I still found that the main issue in my lessons was that my students would constantly ask for reassurance: ‘Miss, is this ok?’

Teaching a visual art subject has its difficulties, as we each see art differently. However, we teach a set of fundamental rules that students use to develop skills and analyse their work so they know how to improve. Why, then, did I still have endless students needing confirmation that their work was ‘correct’ or ‘nice’?

We always teach our first Year 7 art lesson to dispel the myth that you are either born ‘good’ at art or not.  By teaching students to use their powers of observation to improve their first effort at drawing a common object (say, a tomato), we show them that, just as we learn how to walk, talk or ride a bike, we can learn how to get better at art.  Through further discussion with an ex-colleague of mine, Lucy Dasgupta, I made the link that the ideas behind this lesson are at the core of Growth Mindset. I wanted to find out more. How can I build resilience in students so they ‘know’ they are doing well, and can use feedback to help them improve?

To start my reading, Lucy shared with me her MSc dissertation ‘Exploring strategies that foster a growth mindset rather than a fixed mindset in previously high attaining secondary school mathematics students’.  I then started to read more, including two articles by Carol Dweck, the leading researcher in this field: ‘The Perils and Promises of Praise’ and ‘Carol Dweck Revisits the ‘Growth Mindset’’.  Dweck states:

‘Students’ mindsets – how they perceive their abilities – played a key role in their motivation and achievement, and we found that if we changed students’ mindsets, we could boost their achievement.’

I became interested in how teachers’ feedback can limit students’ mindsets, potentially driving their need for approval of their work.  Working with a group of interested teachers, we observed each other teach and then discussed the feedback we gave.  We used an observation template from Lucy Dasgupta’s research to help us focus our observations.  Our reading and observations helped us to identify a goal: we wanted to remove “empty praise” (‘yes’, ‘good’, ‘great’, ‘well done’) from our practice and replace it with meaningful feedback that promoted progress. Dweck explains why this matters:

‘Many (educators) believe that praising students’ intelligence builds their confidence and motivation to learn, and that students’ inherent intelligence is the major cause of their achievement in school. Our research has shown that the first belief is false and the second can be harmful – even for the most competent students.’

We began videoing ourselves and analysing the key phrases we used. This was a very useful reflective tool.  Even after a year of aiming to use only growth mindset responses, I still found that at times I gave empty praise; it is a hard habit to break.  I tried different ways of giving meaningful praise with useful feedback to build resilience, for instance: ‘You have managed to sew accurately around that shape; now see if you can add a second layer to build more detail into your design.’ I found it hard not to add the phrase ‘well done’ at the end. It felt unfinished, so I often used ‘Ok’. I am still practising removing all empty praise from my feedback. Evaluating my practice regularly through videos helps to make me aware of where I use filler words and empty praise.

In summary, being part of the R&I group on Growth Mindset has helped me to build resilience by highlighting and altering my practice. It is hard to say whether the number of students seeking validation and asking ‘Is this ok?’ has dropped; they are encouraged into this way of thinking in lots of ways over a long period of time.  What I do know is that I am more aware of my responses, so if my students do ask me, I guide them towards the success criteria so they can self-evaluate, rather than giving them an unproductive answer. I aim to give them the tools to work out the answers so they don’t need me to tell them if it’s ‘ok’. I try to ensure my interventions will help to boost their motivation, resilience and learning.

Questions to help me reflect on the impact of my responses to student questions:

  • Is the student seeking meaningful feedback or praise and reassurance?
  • If I have offered praise, have I explained what they have done well and why the work is successful, or just offered ‘empty’ praise?
  • Why do they need me to tell them if the work is ‘good’ or ‘ok’? What would help them to work this out for themselves?
  • What will help me identify and change my classroom habits when I want to develop new teaching strategies?

 

If you are interested in reading more, Carol Dweck’s ‘The Perils and Promises of Praise’ can be found here:  http://www.helpwithteaching.co.uk/wp-content/uploads/2013/01/The-Perils-and-Promises-of-Praise-Carol-S-Dweck.pdf  and ‘Carol Dweck Revisits the ‘Growth Mindset’’ can be found here:  http://www.edweek.org/ew/articles/2015/09/23/carol-dweck-revisits-the-growth-mindset.html

 

The closely related ideas of triple impact and dialogic marking have been heavily criticised recently for a variety of reasons.  Some of these criticisms raise important questions and highlight issues of evaluating impact within teaching.  One key criticism has been that too much focus was placed on impact without enough attention to the ‘hidden cost’ of generating that impact, particularly in terms of teacher workload and stress levels.  Seeking to evidence quality dialogue was also a challenge, and systems of different coloured pens quickly evolved.   For many, the “purple pen” marking system has become a symbol of much that is wrong with teaching.  The use of different coloured pens in marking symbolises lack of autonomy, a focus on appearance and evidence rather than meaningful impact, and bureaucratic assessment policies run amok.  Criticism has even come from the top, with School Standards Minister Nick Gibb bemoaning teachers “wasting time” marking in coloured pens (October 2016).  Nicky Morgan also expressed concern (March 2016), and Ofsted, worried they may have started the whole thing, distanced themselves from the phenomenon and withdrew their guide to marking in 2015.

Some of the criticisms are entirely legitimate.  A huge amount of investment goes into training teachers to develop their professional judgement.  This is necessary because teaching is infinitely complex and varied, which is one of the things that makes it such an amazing job, as well as an awesome responsibility.  Policies which prevent teachers from using their judgement as professionals fundamentally undermine the profession and the work teachers do.  A specific pen colour isn’t going to turn someone who doesn’t understand quality assessment (teacher, parent or student) into someone who does, although it may mask some of the conceptual weaknesses that need to be addressed with supportive CPD.  Using a purple pen to correct their work isn’t going to “fix” students’ problems with learning if those problems lie outside a very narrow range of issues; it is unlikely to increase motivation, address conceptual misunderstandings or make up for a rushed job.

Our school has recently revised its formative assessment policy quite radically, removing most of the directives that had crept in over the last few years including those about pen colour, regularity of marking and specific symbols and codes to address specific work issues.  The drive behind this was to restore teacher autonomy and allow teachers to use their professional judgement when giving feedback.  Different subjects, students and pieces of work might call for different systems and the best person to make this judgement is the teacher on the ground.

However, with greater freedom comes greater responsibility, and it is sometimes hard to know what to do for the best amid all the noise and fiercely held opinions.  Elliott et al’s “A Marked Improvement” is an incredibly useful document for teachers looking to understand what the research really says.  The review is thorough, well-organised and raises as many questions as answers, which is a fair reflection of where we are.  Some things we know work, some things we know don’t work and most things are … well, complicated.  For anyone looking for a quick-cheat guide to tell you how to mark, this isn’t it.  However, if you think judgement and experience count for something, it strikes a good balance between research and open questions.

When it comes to purple pen, there are three key conclusions in Elliott’s report that have driven my thinking:

  1. “A key consideration is clearly the act of distinguishing between errors and mistakes.”
  2. “Unless some time is set aside for pupils to consider written comments it is unlikely that teachers will be maximising the impact of the marking.”
  3. “No high-quality studies appear to have evaluated the impact of triple impact marking … [although] there does appear to be some promise underpinning the idea of creating a dialogue, further evaluation is necessary.”

These ideas individually and collectively have shaped my thinking about marking in many ways over recent years.  Specifically I have learned that it is important that I do the following:

  1. Address fundamental misconceptions through re-teaching and ensure that students have time to work with and assimilate this new information. This may be through redrafting, correcting or a new piece of work but it involves not just ‘new’ learning but unpicking old learning and rethinking – this has to be done carefully.
  2. Make pupils correct their own mistakes.  Not only does it save me time, but it might help them remember to slow down and check their work next time.
  3. Build workload-friendly systems and habits especially where pupils are responding to input. I want to easily see what they have done, check that they now understand and move forwards appropriately.

And that is where the purple pen comes in and does a beautiful job for me.  When my classes are used to using it, they know what it means and time is saved by having it as part of an established routine.

  • Students can use it to correct mistakes and those corrections stand out clearly in the work.
  • For short pieces, or responses to questions I’ve raised, purple is easy to find in their files or books; I can instantly zoom in on their responses and redrafted work.  This in itself saves time and allows me to focus on what is needed: checking that the work is now in line with my expectations.
  • If the corrections stand out for me, purple also stands out for the students – perhaps quite some time in the future, when they need to revisit the work and I’m very keen for them to revise the corrected material, not the original errors or mistakes; or when I want them to think about how they improved that type of answer last time and apply that thinking without me having to repeat the feedback.

I’m not saying purple pen should be used for every piece of work.  An entire redrafted essay in purple is just painful to read.  I’m not saying that it should be used every day, in every subject – that is exactly what has been wrong with too many policies.

But I am saying that it is not the evil devil-child of bureaucratic teaching.  In fact, what came out of our old policy was that I was forced to try a new thing and it helped.  What came out of spending time reviewing the research is a better understanding of why it worked and how to use it.  Not all the time, in every scenario, with every child.  But enough that even given more freedom I intend to continue to mark primarily in red and have pupils redraft in purple.  Not to mention that I have a stock cupboard full of purple pens and someone has to use them!

Questions to help me reflect on my assessment and feedback:

  • How confident am I that I have correctly worked out which are mistakes and which are fundamental misconceptions? Is further dialogue needed to pin this down?
  • How will I know that this is having impact and that the student is now moving forward?
  • Is the method I am using the most time-efficient way to achieve the desired impact?

The research

‘A Marked improvement? A review of the evidence on written marking’ can be accessed here:

https://educationendowmentfoundation.org.uk/public/files/Publications/EEF_Marking_Review_April_2016.pdf

 

In this week’s guest blog, Oonagh Fairgrieve reflects on what she learned when given her disaggregated INSET time to focus on a research project that interested her.  She picked ‘talk-less teaching’ as her starting point but ended up thinking about teaching as a much more integrated whole.

 

“Never become so much of an expert that you stop gaining expertise. View teaching as a continuous learning experience.” Denis Waitley.

 

One of the things about reflective practice is that you begin to reflect on your own reflections. As a Social Science teacher, I sometimes feel that I overanalyse my behaviour in the classroom: reflecting on what I should have said at a certain moment in time, what I could have phrased differently; the list goes on.

 

As part of my own continuous professional development I chose to look at the concept of “talk-less” teaching. It made sense to me to think that the more time we spend talking, the more time students are passive and the less learning happens in our classroom. In my initial research on talk-less teaching, I found similar results when interviewing and observing teachers and students: too much time was taken up with explanation and a “talk and chalk” approach, and students felt that more individual guidance and collaborative learning made for an engaging and stimulating learning environment.

 

However, my research suggested that there was an important difference between reducing the amount of teacher talk and changing the quality of teacher talk. This change needs to start with the teacher themselves, but may be guided by continuous professional development or by mentoring from another reflective practitioner.   As Nunan (1996) states: understanding “has to begin with the teachers themselves, considering the ways in which the processes of instruction are illuminated by the voices of the teachers.” By focusing on whether teacher talk matches our intentions at any given stage of a lesson, rather than the time it takes, I hoped to enable learners to achieve more in a lesson and for learning to be more impactful… in theory, at least.

 

Walsh (2005) argues that for teachers and learners to work effectively together, both need to acquire competence in language communication, making use of a range of appropriate interactional resources in order to promote active and engaged learning. By putting interaction firmly at the centre of teaching and learning, and by reflecting on the quality of their talk, teachers will immediately improve learning and opportunities for learning.  This fitted with my focus, and I spent a few weeks reflecting on what I needed to say and when I needed to say it.

 

I found it helpful to use the principle of modes (developed from Walsh’s framework).  Although designed for use in a MFL or EAL classroom, I was able to apply this to a Social Science/Humanities lesson:

  • Skills and Systems: I used DIRT at the start of the lesson to give feedback or check previous knowledge and understanding.
  • Managerial: I thought carefully about when and how to give an instruction or explain a new concept to the whole classroom.
  • Classroom Context: I used questioning rather than talk to capture opinion, check knowledge and spark discussion.

 

By reflecting on what I wanted to say before I said it, I began to create a reflective running dialogue, almost like a verbal lesson plan.   To break out of my normal habits, I used tools such as Google Docs to provide iterative feedback and trialled interventions such as muted lessons.  Being open about what I was trying meant that capturing students’ responses was straightforward, and colleagues also supported me in reflecting on the importance and nature of talk in a lesson.  Interestingly, my results found little impact on progress, but a definite impact on student attitude towards the subject and towards the learning itself.  It is possible that with longer-term development there will be more impact on student progress.

 

Crucially, this led me to my key reflection: the importance of the quality of teacher talk.

 

Interestingly, reflecting on teacher talk – on what I wanted to say and what I wanted students to learn – led me to reflect on independent learning. Through the use of effective teacher talk, we create an environment where words are like gold and are meaningful, and where students begin to understand the importance of collaborative work. This in turn linked to students’ mindset and attitude, because by doing this I could instil confidence and esteem, encouraging a growth mindset where students feel confident to reflect on their own abilities through the use of talk.

 

To summarise, this project showed me the power of effective talk but also how focusing on one part of my teaching leads to almost a “web” of continuous professional development that is interconnected. By starting with what we say, who knows where we will end up?

“Data is like garbage.  You’d better know what you’re going to do with it before you collect it.”  Mark Twain

As I prepared for the new term, I was struck by how much information about the pupils I had available – and this before I had even met many of them in person.  Increasingly, the expectation is that students come to us much as food from the supermarket: pre-packaged, with catch-all information progressively simplified into little coloured boxes.

As with many things, one broadly assumes the benefits outweigh the costs, although I’m unsure whether any rigorous research has been conducted to test this.  However, I do wonder whether the data we are given come with enough health warnings to help teachers avoid some of the dangers they present.  I have been reflecting on a few of these as I prepare to meet my new classes:

  • The Pygmalion and Golem Effects: In 1965 Rosenthal and Jacobson experimented with the impacts of labelling by convincing teachers that a (non-existent) test they had run on their students had identified certain students who were on the verge of going through the intellectual equivalent of a ‘growth spurt’ and whose progress would accelerate dramatically over the course of the next year.  They found that children profited from their teacher’s high expectations and made greater progress than those not so labelled.  There are various provisos to this “Pygmalion Effect”, including that the impact was limited to younger children, and their paper explores a range of possible explanations.  However, one implication is the possibility that low expectations can lead to poor outcomes for the student – the Golem Effect.  As a teacher, seeing my students for the first time through the filter of data, it is sobering to remind myself of the potential for my expectations to shape their outcomes.

 

  • Confirmation Bias: Along with the risks of self-fulfilling prophecy acting on the students, confirmation bias is a well-documented psychological tendency that is worth teachers familiarising themselves with.  Essentially, this is where we interpret new evidence in light of our existing theories.  It can happen in a variety of ways: we can look for evidence that supports our beliefs, disregard contradictory evidence as anomalous, and give greater weight to information that fits comfortably with our current world-view.  It is not always a conscious process and can be hard to avoid, even when aware of the phenomenon.   The risk in education is that it can be easy to find confirmation of low expectations, even without realising we are looking for it.  All teenagers tend to miss the point sometimes, rush a bit of homework and submit an essay that is far below their best standard, not revise for the odd test or just have bad days.  If each instance of underperformance adds up in the mind of the teacher as an accumulated wealth of ‘objective’ evidence that they “can’t” or “won’t” do it, that their targets are too high, their ‘ability’ too low, or their skill-set mismatched to the subject, it is hard to see how that teacher might avoid low expectations.

 

  • Reliability and validity issues: As a general rule, the data we use are worked out with large sets, carefully tested to be as reliable as possible.  Mass testing and standardised methodologies help to ensure that reliability is as high as possible.  Self-fulfilling prophecy and confirmation bias may also help our systems to achieve this!  However, we all have students who perform exceptionally well on exam day, confounding our expectations and careful, reliable testing over the previous years of teaching.  Unfortunately, we have probably all experienced the reverse, where underperformance strikes.  The same can be true of any of the testing which generates the data we bring into the relationship: perhaps that student doesn’t test well, or had an off-day, or was distracted during those tests.  Additional data can help (CAT scores, reading levels and KS2 SATs) but only so far.

 

Then there is the question of whether the data actually measure what they are supposed to – and the related question of whether I am using them for that purpose.  Many teachers can talk for hours about how the data we’re given are of questionable validity, so I won’t explore this too much here.   However, coupled with the Golem Effect, confirmation bias and reliability issues for the individual student, it is worth at least noting.  Sometimes we’re very sensitive to targets or information that we consider ‘too high’ or ‘inflated’.  This can certainly happen, and the drive in all parts of the system towards high expectations may well mean it is more likely than the reverse.  But I’m not sure I’ve always spent enough time looking for data flaws that go the other way: where, for some reason or combination of reasons, my students have been given scores and targets that are too low, which I need to challenge and raise rather than find confirmation for, however inadvertently.

This is not to suggest that the data are pointless and can never be relied upon; big-picture and over time they can certainly be valuable.  If it sounds like my reflections suggest the data aren’t useful or should be ignored, that is not the case.  Without some very convincing research to show otherwise, I’m operating on the assumption that for most students and most teachers the availability of data is a positive thing that can be well used to support learning.  If I think back to the start of my teaching, when I had very little information about most students, I feel much better equipped going into a new class now.    Having seen that I have a GCSE student with a reading age of 9, I have been able to do some careful thinking about the range of Anglo-Saxon source material I am making available in my first lesson.

But I would make a case for caution and critical evaluation of the data from the very beginning.  Too often, if we allow our aspirations to be limited by the data in front of us and confirmation bias kicks in, we are at risk of contributing to students’ challenges.    Of course, there are a lot of other factors that come into play.  However, my goal as a teacher is to be a positive factor and not one of the hurdles my students have to overcome, and this is enough to give me pause.  The principle of falsifiability is a useful one here: to ask myself how I would know if these data were flawed, if they hid strengths either in the area they measure, such as literacy levels, or in other useful assets that aren’t directly measured, such as motivation, emotional maturity or resilience.  I find myself asking the following questions as I reflect on the data I’m looking at:

  • Am I reading too much into these data, and forming judgements that may limit my expectations too far?
  • If any initial expectations based on the data are misguided, how will I identify that this is the case and not fall into the trap of confirmation bias? What should I be looking for in this student’s contributions, work, ethos and attitude to learning that challenges the previous data and suggests the student may be capable of more?
  • Were these data to be fundamentally misleading for this student, understating their full potential, how would I know?
  • Is the AfL, teaching and questioning in my classroom giving all students opportunities to excel – to overcome the low expectations they may have of themselves or others may have of them?

Rosenthal, R., & Jacobson, L. (1968). Pygmalion in the classroom. The Urban Review, 3(1), 16-20.

For many, if not most, teachers what originally inspired their choice of vocation was a love of subject and a desire to share this passion with a new generation.  Despite the negativity that can be prevalent on some parts of the web, most teachers I know retain this passion to a high degree.  Why else would PE teachers organise and enthuse about so many sporting fixtures, language teachers put so many hours into organising trips and cultural experiences and geography teachers spend days wading hip-deep in rivers in the middle of nowhere?

However, it can be challenging to stay in touch with academic developments in your field, which was often tough enough as a student, let alone as a full-time teacher.

Which is why I found a one-day INSET organised by Jason Todd of the Oxford University Department of Education to be a particularly inspiring event when I first attended in 2016.  Although research critiques one-off INSET as low impact, this was perfectly timed in the post-exam period for reflection and implementation.  As well as material on the new specifications and teaching advice, it included academics presenting on their historical work, particularly a talk by Steve Gunn on his work analysing coroners’ reports of accidental death in Tudor England for an understanding of both life and death in that period.

The talk fell at a perfect time for us, when we were revising Key Stage 3 with a particular emphasis on students’ feedback that they found the Tudor unit lacked challenge, having already “done” the Tudors in primary school.  Gunn’s work was engaging and relevant, and showed an innovative use of sources to draw inference with which my students could engage.  The lessons I devised based on this material were some of the most well received I have delivered, based on feedback from the students.

At this year’s conference my eyes were especially opened by a talk on the delivery of black history in secondary schools by Abdul Mohamud and Robin Whitburn.  Their book “Doing Justice to History” challenges the teaching of slavery and the historical misconceptions they have found perpetuated, including: slavery as an economic phenomenon; the trade triangle as just part of a long history of slavery (as opposed to the terrible and dehumanising innovation it was); and the supposed ‘shared guilt’ of African nations in this exploitation.  Next year’s Year 8s are going to have a radically rewritten Scheme of Learning in this area, drawing on their scholarship and the source material and life stories they shared with us.

Another talk on research into women in Oxford’s history and an accompanying website with podcasts and interviews with historians has already found its way into our year 7 scheme of learning.

This blog is not about history teaching specifically but about the fresh inspiration that can come from getting back in touch with the academic side of your subject specialism.  I am always excited to hear new teaching ideas or learn about new educational research, but subject scholarship can be just as great an inspiration.  Teachers who retain a perspective beyond A-level standard often find they have a better picture of the full development journey of their students and are better able to structure challenge work at all levels.  And academics are often very willing, even keen, to give up their time and share some of their work with teachers.  I am very grateful to those who did so through the Oxford History Teachers’ Network; they have reminded me what is exciting about my subject and inspired me to revamp some tired lessons.

Questions that helped me reflect upon subject scholarship:

  1. What is new that is happening in this academic field and why is it exciting?
  2. What resources exist to help me develop this for my students in a workload-friendly way?
  3. Which area of this year’s teaching did students find least inspirational; where can I look to find support developing this?

 

For any historians interested in the specific projects referred to, find more information below:

Death in Tudor England:  http://tudoraccidents.history.ox.ac.uk/

Women in Oxford’s History:  https://podcasts.ox.ac.uk/series/women-oxfords-history

 

“Education is on the brink of being transformed through learning technologies; however it has been on that brink for some decades now.”  Diana Laurillard

As a history teacher who still has a blackboard in my classroom, I have always been a cautious, if not downright reluctant user of technology in lessons.  My early attempts were characterised by patchy wireless, crashing computers, duplication of work and the need for a good back-up plan “just in case”.  Having long embraced the label of a confirmed Luddite, I was recently intrigued to learn that my experiences were perhaps more typical than I had realised.  At a seminar by Dr James Robson I was introduced both to the Laurillard quote above and Larry Cuban’s book “Teachers and Machines” which traces the continual failure of technology to live up to its promise in the classroom since the introduction of educational radio in the 1920s.  The experience is beautifully summarised in one simple quote by Cuban “Computer meets classroom: classroom wins.”

There are, of course, lots of reasons why technology has not had more impact that are outside of individual teachers’, and many schools’, control.  The money needed to invest in infrastructure and the difficulties of managing the ‘digital divide’ so as not to advantage those families with high cultural capital and access to the latest technology are two that need a lot of thought.

However, I have not always reflected enough as a teacher to ensure that I got the full potential from technology.  One reason for this is suggested in the SAMR model: Substitution, Augmentation, Modification, Redefinition.  Very often when technology comes into the classroom teachers use it as a substitute for what they would have previously been doing, or at best to augment what they would have done anyway. Thus I replaced the whiteboard with PowerPoint, a substitute or at best augmentation of the presentation with some flashy graphics.  Interactive Whiteboards, at least in secondary schools, rarely redefined learning but augmented the PowerPoint with a little interactivity.

When we piloted giving Chromebooks to a whole Year 7 class for a term and, alternatively, giving sets of Chromebooks to some teachers for a term, we found very similar results.  They were often used as a substitute for other resources, e.g. textbooks, or essays written by hand.  Sometimes they were used to augment learning, e.g. conducting research using a number of sources of information rather than just one, but there was rarely significant change (modification), let alone a redefinition of the learning experience.  In slightly over half of lessons they weren’t used at all.  If this is all they are needed for, they are a very high-cost resource!

However, when we offered better support for teachers to understand the potential, based on peer observation and team teaching with those more experienced with the tools, teachers and students did find them transformative and became very excited about their potential to impact upon learning.  The communication tools supported joint planning and the creation of shared work, producing an immediate and ongoing dialogue between peers and teachers that I have never found a way to achieve on paper.  Iterative feedback loops, which research shows to have high impact but which our students were less engaged with in lower years because they found them ‘boring’, became more accessible and faster paced, securing student engagement.  Online tools such as Quizlet and Socrative allowed for anonymous discussion and quizzes, engaging more students in low-stakes testing and maximising contributions.  Both are known to contribute to effective learning but can be hard to achieve in a normal, full classroom.

The crucial reflection for us, though, was the importance of investing fully in development time, shared planning and peer observation in order to maximise the impact of technology.  Teachers need support to modify or even redefine learning in their classrooms, and changing teachers’ practice takes investment in training, support and the opportunity to experiment without judgement.  In that regard introducing technology works like any other teaching development, but sometimes this is perhaps overlooked in the hype and expense.

It is certainly true that technology has often promised more than it has delivered and has rarely been as transformative as the hype suggested it would be.  However, in recent years, I have found technology to be more usable than ever before, with better connections, the “back-up” being students’ phones rather than a whole other lesson plan, and certain tools such as Google Classroom, Socrative and, of course, access to a wide range of “Edublogs” contributing to transforming my practice.  But the biggest driver for me has been colleagues willing to share their excellent practice and innovative uses, who were patient with my clumsiness and willing to listen to what I needed in my teaching and support me to deliver it, rather than imposing new tools from above.  About a year and a half ago I realised I would now be more devastated to lose my Chromebooks than my blackboard.

Questions that have helped me reflect on whether I am getting the most out of technology:

  • Was the learning experience of my students fundamentally any different than it would have been without this tool? What did it deliver for the cost?
  • Where is this technology being used really well? If I can’t find examples, is it likely that I will have the time and skills to use it to redefine my teaching … or will it just be an expensive augmentation?
  • What one tool would I like to master and integrate into my teaching? Am I making the best use of this before moving onto the next tool?
  • What makes this more than a trick or novelty? How does it shape learning?

The JMSReflect research project into the use of Chromebooks mentioned in this post was led by David Bate in conjunction with the Oxford Deanery, Oxford Department of Education.

One great article that helped me see the potential of technology in the history classroom was:

Moonen, L. (2015) ‘Come on guys, what are we really trying to say here?’ Using Google Docs to develop Year 9 pupils’ essay-writing skills, Teaching History, 161, pp. 8-14.

And for anyone looking for a longer read and some of the pitfalls, I do recommend:

Cuban, L. (1986) Teachers and Machines: The Classroom Use of Technology Since 1920. New York: Teachers College Press.

The growing demand for teachers to be engaged with and in research seemed daunting at first.  I was unsure whether I would be able to access the educational research out there, understand it and apply it.  And as for conducting my own practitioner research…  Visions of large-scale projects with complicated control groups and statistical analysis of reams of data to offset the many variables filled my mind, and I don’t think I am the only person to hold this misconception.  “Research” spoke of EEF-scale projects and analytical and data skills I don’t possess.  Over the last two years I’ve learned to be much more realistic about what practitioner research can achieve and how to use it to have tremendous impact upon my teaching.

BERA (2014) concluded that “a research literate and research engaged profession” would positively support student progress, but warned about the risk of this becoming a demand or “burden” placed on teachers.  Two of the main ways they identified for research to support teachers were:

  • Equipping them to be discerning consumers of research
  • Equipping them to conduct their own research.

I’ve found both to be true for me.

Accessing and Using Educational Research

The first thing I learned was that research into education very rarely yields simple answers.  As I’ve become more engaged myself, I’ve learned to be increasingly sceptical of anyone who glibly insists that “research says…”.  A more ‘discerning consumer’, if you will.  Despite claims to the contrary, most research raises more questions than it answers and, even when conclusions are reasonably clear-cut, that doesn’t mean they apply to every context and every sub-set of students.  As we’ve worked on assessment this year, I read some fascinating material on an iterative feedback loop by Barker and Pinard (2014), essentially showing how powerful a redrafting process can be in building students’ understanding.  Although this focused on students in higher education, it seemed to offer a lot for me as a secondary teacher.  Until I spoke to students.  They find redrafting “boring”, and this was a tremendous, but not insuperable, block to impact.

Our starting point has been to identify an ‘issue’ or area of pedagogy we’d like to develop or learn more about.  With the support and guidance of Dr Katharine Burn from the Oxford Deanery, we have been helped to identify relevant research and reading.  This has been hugely important for us as working teachers, in order to pinpoint the best articles and original research to access without a lot of wasted time.  I recommend that any teacher or school engaging in practitioner research build a good relationship with their local university and take advantage of its expertise and support.

Having read some original research, I found I was in a better position to engage with the active and exciting online community to trawl for ideas and suggestions that might have impact.  Never has it been more important for teachers to be critical consumers; there are so many ‘solutions’ on offer, how do you select the best ones for your students?  The reading gave me some context and a basis for evaluating and sorting ideas and picking ones that might work.

Conducting My Own Research

Nonetheless, I still faced the daunting prospect of engaging in actual ‘research’: trying something out and measuring the impact.  Once again, the Oxford Deanery was the greatest support I found.  The best advice I received was two-fold:

  • Plan how you’re going to assess impact before you start – this helps keep you objective when assessing the intervention you’ve planned and carefully nurtured into the classroom.
  • This (literally) isn’t rocket science: you do this every day in every lesson as a teacher and know how to assess impact; it is just a slightly more formal process for capturing your reflection.

One project involved looking at teacher workload.  There are various ways to measure this, some more scientific than others: having teachers keep detailed logs of their work before and after the intervention would be one measure.  However, to achieve this would only drive up the very workload we were trying to control!  In the end, we just asked teachers to report how they felt; after all, “workload” is in many ways quite subjective.  Few teachers literally count the hours, and I’ve yet to meet one who isn’t willing to go the extra mile for something they feel is valuable for their students.  “Workload” is a catch-all term that relates to how teachers feel about their working week as much as a measure of hours, and so their self-reported judgement was measure enough.

Student voice is another tremendously powerful tool for assessing interventions.  Of course, like any data, this can be interpreted in different ways.  My students’ views that redrafting is “boring”, and that they particularly don’t want to do it in history when they only have a few lessons a fortnight, could be interpreted to support a range of next steps.  It could mean that I need to better explain the value, or that I need to find new, more time-effective ways to do it.  It could mean that I should reduce the frequency, or that it is a task better suited to homework than classwork.  But it has still yielded a valuable response that helps me understand the impact of the intervention and their reaction to it.

Sometimes cross-referencing this with other data (whether assessment results or behavioural) is also powerful, or reviewing students’ work for key ideas and evidence of progress… But at this point I am probably teaching you to suck eggs.  Because that is exactly the point: small-scale teacher-led research turned out to be neither as scary nor as daunting as I first thought.  In fact, it mostly involved thinking about a lot of things that I reflect on anyway as a teacher: did that lesson work, did they enjoy it, did they ‘get it’, how do I feel, how do they feel, what does the assessment show they understood or misconceived about the work, and so on.

Overall

Overall, the impact has been powerful.  I do indeed feel better equipped to discern good advice from bad and to take a less ‘trial-and-error’ approach to teaching.  I feel more confident evaluating my ideas and interventions and more willing to abandon those that are not working, however much I might like the idea or have invested in bringing it to fruition.  At first practitioner research seemed scary.  Right now, I don’t know how I ever taught without it.

The following questions have helped me reflect on research and how to use it to develop my teaching:

  • What is the issue I’m trying to address or the area I’m trying to develop?
  • What research exists and what specific questions would I like it to answer? Where can I access research on this?  [For this, our university links have been hugely helpful.]
  • What will I try now to move forward with this? Does that fit with what I learned from my reading?
  • What will success look like here? Who will feel or behave differently and how? How will I check that this is working?


The full report can be accessed here:

BERA (2014) Research and the Teaching Profession: Building the Capacity for a Self-Improving Education System. https://www.thersa.org/globalassets/pdfs/bera-rsa-research-teaching-profession-full-report-for-web-2.pdf