
The closely related ideas of triple impact and dialogic marking have been heavily criticised recently, for a variety of reasons.  Some of these criticisms raise important questions and highlight the difficulties of evaluating impact within teaching.  One key criticism has been that too much focus was placed on impact without enough attention to the ‘hidden cost’ of generating that impact, particularly in terms of teacher workload and stress levels.  Seeking to evidence quality dialogue was also a challenge, and systems of different coloured pens quickly evolved.   For many the “purple pen” marking system has become a symbol of much that is wrong with teaching.  The use of different coloured pens in marking symbolises a lack of autonomy, a focus on appearance and evidence rather than meaningful impact, and bureaucratic assessment policies run amok.  Criticism has even come from the top, with School Standards Minister Nick Gibb bemoaning teachers “wasting time” marking in coloured pens (October 2016).  Nicky Morgan also expressed concern (March 2016), and Ofsted, worried they may have started the whole thing, distanced themselves from the phenomenon and withdrew their guide to marking in 2015.

Some of the criticisms are entirely legitimate.  A huge amount of investment goes into training teachers to develop their professional judgement.  This is necessary because teaching is infinitely complex and varied, which is one of the things that makes it such an amazing job, as well as an awesome responsibility.  Policies which prevent teachers using their judgement as professionals fundamentally undermine the profession and the work they do.  A specific pen colour isn’t going to turn someone who doesn’t understand quality assessment into someone who does (teacher, parent or student) although it may just mask some of the conceptual weaknesses that need to be addressed with supportive CPD.  Using a purple pen to correct their work isn’t going to “fix” students’ problems with learning if they lie outside of a very narrow range of issues; it is unlikely to increase motivation, address conceptual misunderstandings or make up for a rushed job.

Our school has recently revised its formative assessment policy quite radically, removing most of the directives that had crept in over the last few years including those about pen colour, regularity of marking and specific symbols and codes to address specific work issues.  The drive behind this was to restore teacher autonomy and allow teachers to use their professional judgement when giving feedback.  Different subjects, students and pieces of work might call for different systems and the best person to make this judgement is the teacher on the ground.

However, with greater freedom comes greater responsibility, and it is sometimes hard to know what to do for the best amid all the noise and fiercely held opinions.  Elliott et al’s “A Marked Improvement” is an incredibly useful document for teachers looking to understand what the research really says.  The review is thorough, well-organised and raises as many questions as answers, which is a fair reflection of where we are.  Some things we know work, some things we know don’t work and most things are … well, complicated.  For anyone looking for a quick-cheat guide to tell you how to mark, this isn’t it.  However, if you think judgement and experience count for something, this strikes a good balance between research and open questions.

When it comes to purple pen, there are three key conclusions in Elliott’s report that have driven my thinking:

  1. “A key consideration is clearly the act of distinguishing between errors and mistakes.”
  2. “Unless some time is set aside for pupils to consider written comments it is unlikely that teachers will be maximising the impact of the marking.”
  3. “No high-quality studies appear to have evaluated the impact of triple impact marking … [although] there does appear to be some promise underpinning the idea of creating a dialogue, further evaluation is necessary.”

These ideas individually and collectively have shaped my thinking about marking in many ways over recent years.  Specifically I have learned that it is important that I do the following:

  1. Address fundamental misconceptions through re-teaching and ensure that students have time to work with and assimilate this new information. This may be through redrafting, correcting or a new piece of work but it involves not just ‘new’ learning but unpicking old learning and rethinking – this has to be done carefully.
  2. Make pupils correct their own mistakes.  Not only does it save me time, but it might help them remember to slow down and check their work next time.
  3. Build workload-friendly systems and habits especially where pupils are responding to input. I want to easily see what they have done, check that they now understand and move forwards appropriately.

And that is where the purple pen comes in and does a beautiful job for me.  When my classes are used to using it, they know what it means and time is saved by having it as part of an established routine.

  • Students can use it to correct mistakes and those corrections stand out clearly in the work.
  • For short pieces or responses to questions I’ve raised, purple is easy to find in their files or books; I can instantly zoom in on their responses and redrafted work.  This in itself saves time and allows me to focus on what is needed: checking the work is now in line with my expectations.
  • If the corrections stand out for me, purple also stands out for the students.   Perhaps quite some time in the future.  Perhaps when they need to revisit the work and I’m very keen for them to revise the corrected material, not the original errors or mistakes.  Or when I want them to think about how they improved that type of answer last time and apply that thinking without me having to repeat the feedback.

I’m not saying purple pen should be used for every piece of work.  An entire redrafted essay in purple is just painful to read.  I’m not saying that it should be used every day, in every subject – that is exactly what has been wrong with too many policies.

But I am saying that it is not the evil devil-child of bureaucratic teaching.  In fact, what came out of our old policy was that I was forced to try a new thing and it helped.  What came out of spending time reviewing the research is a better understanding of why it worked and how to use it.  Not all the time, in every scenario, with every child.  But enough that even given more freedom I intend to continue to mark primarily in red and have pupils redraft in purple.  Not to mention that I have a stock cupboard full of purple pens and someone has to use them!

Questions to help me reflect on my assessment and feedback:

  • How confident am I that I have correctly worked out which are mistakes and which are fundamental misconceptions? Is further dialogue needed to pin this down?
  • How will I know that this is having impact and that the student is now moving forward?
  • Is the method I am using the most time-efficient way to achieve the desired impact?

The research

‘A Marked Improvement? A review of the evidence on written marking’ can be accessed here:


In this week’s guest blog, Oonagh Fairgrieve reflects on what she learned when given her disaggregated INSET time to focus on a research project that interested her.  She picked ‘talk-less teaching’ as her starting point but ended up thinking about teaching as a much more integrated whole.


“Never become so much of an expert that you stop gaining expertise. View teaching as a continuous learning experience.” Denis Waitley.


One of the things about reflective practice is that you begin to reflect on your own reflections. As a Social Science teacher, I sometimes feel that I overanalyse my behaviour in the classroom, reflecting on what I should have said at a certain moment in time, what I could have phrased differently; the list goes on.


As part of my own continuous professional development I chose to look at the concept of “talk-less” teaching. It made sense to me to think that the more time we spend talking, the more time students are passive, and the less learning happens in our classroom. In my initial research on talk-less teaching, I found similar results when interviewing and observing teachers and students: too much time was taken up with explanation and a “talk and chalk” approach, and students felt that more individual guidance and collaborative learning made for an engaging and stimulating learning environment.


However, my research suggested that there was an important difference between reducing the amount of teacher talk and changing the quality of teacher talk. This change needs to start with the teachers themselves, but may be guided by continuous professional development, or by mentoring from another reflective practitioner.   As Nunan (1996) states: understanding “has to begin with the teachers themselves, considering the ways in which the processes of instruction are illuminated by the voices of the teachers.” By focusing on whether teacher talk matches our intentions at any given stage of a lesson, rather than the time it takes, I hoped to enable learners to achieve more in a lesson and for learning to be more impactful…in theory, at least.


Walsh (2005) argues that for teachers and learners to work effectively together, both need to acquire competence in language communication, making use of a range of appropriate interactional resources in order to promote active and engaged learning. By putting interaction firmly at the centre of teaching and learning and by reflecting on the quality of their talk, teachers will immediately improve learning and opportunities for learning.  This fitted with my focus, and I spent a few weeks reflecting on what I needed to say and when I needed to say it.


I found it helpful to use the principle of modes (developed from Walsh’s framework).  Although designed for use in a MFL or EAL classroom, I was able to apply this to a Social Science/Humanities lesson:

  • Skills and Systems: I used DIRT at the start of the lesson to give feedback or check previous knowledge and understanding.
  • Managerial: I thought carefully about when and how to give an instruction or explain a new concept to the whole classroom.
  • Classroom Context: I used questioning rather than talk to capture opinion, check knowledge and spark discussion.


By reflecting on what I wanted to say before I said it, I began to create a reflective running dialogue, almost like a verbal lesson plan.   To break out of my normal habits, I used tools such as Google Docs to provide iterative feedback and trialled interventions such as muted lessons.  Being open about what I was trying meant that capturing student response was straightforward, and colleagues also supported me in reflecting on the importance and nature of talk in a lesson.  Interestingly, my results found little impact on progress, but a definite impact on student attitude towards the subject and to the learning itself.  It is possible that with longer-term development there will be more impact on student progress.


Crucially, this led me to my key reflection: the importance of the quality of teacher talk.


But interestingly, reflecting on teacher talk, on what I wanted to say and what I wanted students to learn, led to reflecting on independent learning. This is because through the use of effective teacher talk, we create an environment where words are like gold and are meaningful. We create an environment where students begin to understand the importance of collaborative work. This in turn linked to students’ mindset and attitude: by doing this, I could instil confidence and esteem, encouraging a growth mindset, where students feel confident to reflect on their own abilities through the use of talk.


To summarise, this project showed me the power of effective talk but also how focusing on one part of my teaching leads to almost a “web” of continuous professional development that is interconnected. By starting with what we say, who knows where we will end up?

“Data is like garbage.  You’d better know what you’re going to do with it before you collect it.”  Mark Twain

As I prepared for the new term, I was struck by how much information about the pupils I had available.  This is before I’ve even met many of them in person.  Increasingly, the expectation is that students come to us much as food from the supermarket; pre-packaged with catch-all information progressively simplified into little coloured boxes.

As with many things, one broadly assumes the benefits outweigh the costs, although I’m unsure whether any rigorous research has been conducted to test this.  However, I do wonder whether the data we are given come with enough health warnings to help teachers avoid some of the dangers they present.  I have been reflecting on a few of these as I prepare to meet my new classes:

  • The Pygmalion and Golem Effects: In 1965 Rosenthal and Jacobson experimented with the impacts of labelling by convincing teachers that a (non-existent) test they had run on their students had identified certain students who were on the verge of going through the intellectual equivalent of a ‘growth spurt’ and whose progress would accelerate dramatically over the course of the next year.  They found that children profited from their teacher’s high expectations and made greater progress than those not so labelled.  There are various provisos to this “Pygmalion Effect”; including that the impact was limited to younger children and their paper explores a range of possible explanations.  However one implication is the possibility that low expectations can lead to poor outcomes for the student – the Golem Effect.  As a teacher, seeing my students for the first time through the filter of data, it is sobering to remind myself of the potential for my expectations to shape their outcomes.


  • Confirmation Bias: Along with the risks of self-fulfilling prophecy acting on the students, confirmation bias is a well-documented psychological tendency that is worth teachers familiarising themselves with.  Essentially this is where we interpret new evidence in light of our existing theories.  This can happen in a variety of ways: we can look for evidence that supports our beliefs, disregard contradictory evidence as anomalous and give greater weight to information that fits comfortably with our current world-view.  It is not always a conscious process and can be hard to avoid, even when aware of the phenomenon.   The risk in education is that it can be easy to find confirmation of low expectations, even without realising we are looking for it.  All teenagers tend to miss the point sometimes, rush a bit of homework and submit an essay that is far below their best standard, not revise for the odd test or just have bad days.  If each instance of underperformance adds up in the mind of the teacher as an accumulated wealth of ‘objective’ evidence that they “can’t” or “won’t” do it, that their targets are too high, their ‘ability’ too low, or their skill-set mismatched to the subject, it is hard to see how the teacher might avoid low expectations.


  • Reliability and validity issues: As a general rule the data we use are derived from large data sets, carefully tested to be as reliable as possible.  Mass testing and standardised methodologies help to ensure that reliability is as high as possible.  Self-fulfilling prophecy and confirmation bias may also help our systems to achieve this!  However, we all have students who perform exceptionally well on exam day, confounding our expectations and careful, reliable testing over the previous years of teaching.  Unfortunately, we have probably all experienced the reverse, where underperformance strikes.  The same can be true of any of the testing which generates the data we bring into the relationship; perhaps that student doesn’t test well, or had an off-day, or was distracted during those tests.  Additional data can help (CAT scores, reading levels and KS2 SATs) but only so far.


Then there is the question of whether the data actually measure what they are supposed to – and the related question of whether I am using them for that purpose.  Many teachers can talk for hours about how the data we’re given are of questionable validity, so I won’t explore this too much here.   However, coupled with the Golem Effect, confirmation bias and reliability issues for the individual student, it is worth at least noting.  Sometimes we’re very sensitive to targets or information that we consider ‘too high’ or ‘inflated’.  This can certainly happen, and the drive in all parts of the system towards high expectations may well mean it is more likely than the reverse.  But I’m not sure I’ve always spent enough time looking for data flaws that go the other way: cases where, for some reason or combination of reasons, my students have been given scores and targets that are too low, and which I need to challenge and raise rather than find confirmation for, however inadvertently.

This is not to suggest that the data are pointless and can never be relied upon; big-picture and over time they can certainly be valuable.  If it sounds like my reflections suggest the data aren’t useful or should be ignored, that is not the case.  Without some very convincing research to show otherwise, I’m operating on the assumption that for most students and most teachers the availability of data is a positive thing that can be well used to support learning.  If I think back to the start of my teaching, when I had very little information about most students, I feel much better equipped going into a new class now.    Having seen that I have a GCSE student with a reading age of 9, I have been able to do some careful thinking about the range of Anglo-Saxon source material I am making available in my first lesson.

But I would make a case for caution and critical evaluation of the data from the very beginning.  Too often, if we allow our aspirations to be limited by the data in front of us and confirmation bias kicks in, we are at risk of contributing to students’ challenges.    Of course, there are a lot of other factors that come into play.  However, my goal as a teacher is to be a positive factor and not one of the hurdles my students have to overcome, and this is enough to give me pause.  The principle of falsifiability is a useful one here – to ask myself how I would know if these data are flawed, if they hide strengths either in the area they measure, such as literacy levels, or in other useful assets that aren’t directly measured, such as motivation, emotional maturity or resilience.  I find myself asking the following questions as I reflect on the data I’m looking at:

  • Am I reading too much into these data, and forming judgements that may limit my expectations too far?
  • If any initial expectations based on the data are misguided, how will I identify that this is the case and not fall into the trap of confirmation bias? What should I be looking for in this student’s contributions, work, ethos and attitude to learning that challenges the previous data and suggests the student may be capable of more?
  • Were these data to be fundamentally misleading for this student, understating their full potential, how would I know?
  • Is the AfL, teaching and questioning in my classroom giving all students opportunities to excel – to overcome the low expectations they may have of themselves or others may have of them?

Rosenthal, R., & Jacobson, L. (1968). Pygmalion in the classroom. The Urban Review, 3(1), 16-20.

For many, if not most, teachers what originally inspired their choice of vocation was a love of subject and a desire to share this passion with a new generation.  Despite the negativity that can be prevalent on some parts of the web, most teachers I know retain this passion to a high degree.  Why else would PE teachers organise and enthuse about so many sporting fixtures, language teachers put so many hours into organising trips and cultural experiences and geography teachers spend days wading hip-deep in rivers in the middle of nowhere?

However it can be challenging to stay in touch with academic developments in your field which were often tough enough to track as a student, let alone a full-time teacher.

Which is why I found a one-day INSET organised by Jason Todd of the Oxford University Department of Education to be a particularly inspiring event when I first attended in 2016.  Although research critiques one-off INSET as low impact, this one was perfectly timed in the post-exam period for reflection and implementation.  As well as material on the new specifications and teaching advice, it also included academics presenting on their historical work, particularly a talk by Steve Gunn on his work analysing coroners’ reports of accidental death in Tudor England for an understanding of both life and death in that period.

The talk fell at a perfect time for us, when we were revising Key Stage 3 with a particular emphasis on students’ feedback that they found the Tudor unit to lack challenge, having already “done” the Tudors in primary school.  Gunn’s work was engaging, relevant and showed an innovative use of sources to draw inference with which my students could engage.  The lessons I devised based on this material were some of the best-received I have delivered, based on feedback from the students.

At this year’s conference my eyes were especially opened by a talk about the delivery of black history in secondary schools by Abdul Mohamud and Robin Whitburn.  Their book “Doing Justice to History” challenges the teaching of slavery and the historical misconceptions they have found perpetuated including:  slavery as an economic phenomenon; the trade triangle as just part of a long history of slavery (as opposed to the terrible and dehumanising innovation it was); and the supposed ‘shared guilt’ of African nations in this exploitation.  Next year’s year 8s are going to have a radically rewritten Scheme of Learning in this area, drawing on their scholarship and the source material and life stories they shared with us.

Another talk on research into women in Oxford’s history and an accompanying website with podcasts and interviews with historians has already found its way into our year 7 scheme of learning.

This blog is not about history teaching specifically but about the fresh inspiration that can come from getting back in touch with the academic side of your subject specialism.  I am always excited to hear new teaching ideas or learn about new educational research, but subject scholarship can be just as great an inspiration.  Teachers who retain a perspective beyond A-level standard often find they have a better picture of the full development journey of their students and are able to structure challenge work better at all levels.  And academics are often very willing, even keen, to give up their time and share some of their work with teachers.  I am very grateful to those who did so through the Oxford History Teachers’ Network; they have reminded me what is exciting about my subject and inspired me to revamp some tired lessons.

Questions that helped me reflect upon subject scholarship:

  1. What is new that is happening in this academic field and why is it exciting?
  2. What resources exist to help me develop this for my students in a workload-friendly way?
  3. Which area of this year’s teaching did students find least inspirational; where can I look to find support developing this?


For any historians interested in the specific projects referred to, find more information below:

Death in Tudor England:

Women in Oxford’s History:


“Education is on the brink of being transformed through learning technologies; however it has been on that brink for some decades now.”  Diana Laurillard

As a history teacher who still has a blackboard in my classroom, I have always been a cautious, if not downright reluctant user of technology in lessons.  My early attempts were characterised by patchy wireless, crashing computers, duplication of work and the need for a good back-up plan “just in case”.  Having long embraced the label of a confirmed Luddite, I was recently intrigued to learn that my experiences were perhaps more typical than I had realised.  At a seminar by Dr James Robson I was introduced both to the Laurillard quote above and Larry Cuban’s book “Teachers and Machines” which traces the continual failure of technology to live up to its promise in the classroom since the introduction of educational radio in the 1920s.  The experience is beautifully summarised in one simple quote by Cuban “Computer meets classroom: classroom wins.”

There are, of course, lots of reasons why technology has not had more impact that are outside of individual teachers’, and many schools’, control.  The money needed to invest in infrastructure and the difficulties of managing the ‘digital divide’ so as not to advantage those families with high cultural capital and access to the latest technology are two that need a lot of thought.

However, I have not always reflected enough as a teacher to ensure that I got the full potential from technology.  One reason for this is suggested in the SAMR model: Substitution, Augmentation, Modification, Redefinition.  Very often when technology comes into the classroom teachers use it as a substitute for what they would have previously been doing, or at best to augment what they would have done anyway. Thus I replaced the whiteboard with PowerPoint, a substitute or at best augmentation of the presentation with some flashy graphics.  Interactive Whiteboards, at least in secondary schools, rarely redefined learning but augmented the PowerPoint with a little interactivity.

When we piloted giving Chromebooks to a whole year 7 class for a term and, separately, giving sets of Chromebooks to some teachers for a term, we found very similar results.  They were often used as a substitute for other resources, e.g. textbooks, or for essays written by hand.  Sometimes they were used to augment learning, e.g. conducting research using a number of sources of information rather than just one, but there was rarely significant change (modification), let alone a redefinition of the learning experience.  In slightly over half of lessons they weren’t used at all.  If this is all they are used for, they are a very high-cost resource!

However, when we offered better support for teachers to understand the potential, based on peer observation and team teaching with those more experienced with the tools, teachers and students did find them transformative and became very excited about their potential to impact upon learning.  The communication tools supported joint planning and creation of shared work, creating an immediate and ongoing dialogue between peers and teachers that I have never found a way to achieve on paper.  Iterative feedback loops, which research shows to have high impact but which our students in lower years were less engaged with because they found them ‘boring’, became more accessible and faster paced, securing student engagement.  Online tools such as Quizlet and Socrative allowed for anonymous discussion and quizzes, engaging more students in low-stakes testing and maximising contributions. Both are known to contribute to effective learning but can be hard to achieve in a normal, full classroom.

The crucial reflection for us, though, was the importance of investing fully in development time, shared planning and peer observation in order to maximise the impact of technology.  Teachers need support to modify or even redefine the learning in their classrooms, and changing teachers’ practice takes investment in training, support and the opportunity to experiment without judgement.  In that regard introducing technology works like any other teaching development, but sometimes this is perhaps overlooked in the hype and expense.

It is certainly true that technology has often promised more than it has delivered and has rarely been as transformative as the hype has suggested it will be.  However, in recent years, I have found technology to be more usable than ever before, with better connections, the “back up” being students’ phones rather than a whole other lesson plan, and certain tools such as Google Classroom, Socrative and, of course, access to a wide range of “Edublogs” contributing to transforming my practice.   However, the biggest driver for me has been colleagues willing to share their excellent practice and innovative uses, who were patient with my clumsiness and willing to listen to what I needed in my teaching and support me to deliver it, rather than imposing new tools from above.  About a year and a half ago I realised I would now be more devastated to lose my Chromebooks than my blackboard.

Questions that have helped me reflect on whether I am getting the most out of technology:

  • Was the learning experience of my students fundamentally any different than it would have been without this tool? What did it deliver for the cost?
  • Where is this technology being used really well? If I can’t find examples, is it likely that I will have the time and skills to use it to redefine my teaching … or will it just be an expensive augmentation?
  • What one tool would I like to master and integrate into my teaching? Am I making the best use of this before moving onto the next tool?
  • What makes this more than a trick or novelty? How does it shape learning?

This JMSReflect Research Project into the use of Chromebooks mentioned in this post was led by David Bate in conjunction with the Oxford Deanery, Oxford Department of Education.

One great article that helped me see the potential of technology in the history classroom was:

Moonen, L. (2015) ‘Come on guys, what are we really trying to say here?’ Using Google Docs to develop Year 9 pupils’ essay-writing skills, Teaching History, 161, pp. 8-14.

And for anyone looking for a longer read and some of the pitfalls, I do recommend:

Cuban, L. (1986) Teachers and Machines: The Classroom Use of Technology Since 1920. New York: Teachers College Press.

The growing demand for teachers to be engaged with and in research seemed daunting at first.  In terms of the educational research out there I was unsure whether I would be able to access it, understand it and apply it.  And as for conducting my own practitioner research…  Visions of large scale projects with complicated control groups and statistical analysis of reams of data to offset the many variables filled my mind and I don’t think I am the only person to hold this misconception.  “Research” spoke of EEF-scale projects and analytical and data skills I don’t possess.  Over the last two years I’ve learned to be much more realistic about what practitioner research can achieve and how to use it to have tremendous impact upon my teaching.

BERA (2014) concluded that “a research literate and research engaged profession” would positively support student progress but warned about the risk of this becoming a demand or “burden” placed on teachers.    Two of the main ways they identified it as supporting teachers were:

  • Equipping them to be discerning consumers of research
  • Equipping them to conduct their own research.

I’ve found both to be true for me.

Accessing and Using Educational Research

The first thing I learned was that there are very rarely simple answers yielded by research into education.  As I’ve become more engaged myself I’ve learned to be increasingly sceptical of anyone who glibly insists that “Research says…”.  A more ‘discerning consumer’, if you will. Despite claims to the contrary most research raises more questions than answers and, even when conclusions are reasonably clear-cut, that doesn’t mean that they apply to every context and every sub-set of students.  As we’ve worked on assessment this year, I read some fascinating material on an iterative feedback loop by Barker and Pinard (2014); essentially showing how powerful a redrafting process can be in building students’ understanding.  Although this focused on students in higher education it seemed to offer a lot for me as a secondary teacher.  Until I spoke to students.  They find redrafting “boring” and this was a tremendous, but not insuperable, block to impact.

Our starting point has been to identify an ‘issue’ or area of pedagogy we’d like to develop or learn more about.  With the support and guidance of Dr Katharine Burn from the Oxford Deanery, we have been helped to identify relevant research and reading.  This has been hugely important for us as working teachers, enabling us to pinpoint the best articles and original research to access without a lot of wasted time.  I recommend that any teacher or school engaging in practitioner research build a good relationship with their local university and take advantage of its expertise and support.

Having read some original research, I found I was in a better position to engage with the active and exciting online community to trawl for ideas and suggestions that might have impact.  Never has it been more important for teachers to be critical consumers; with so many ‘solutions’ on offer, how do you select the best ones for your students?  The reading gave me some context and a basis for evaluating and sorting ideas and picking those that might work.

Conducting My Own Research

Nonetheless, I still faced the daunting prospect of engaging in actual ‘research’: trying something out and measuring the impact.  Once again, the Oxford Deanery was the greatest support I found.  The best advice I received was two-fold:

  • Plan how you’re going to assess impact before you start – this helps keep you objective when assessing the intervention you’ve planned and carefully nurtured into the classroom.
  • This (literally) isn’t rocket science – you do this every day in every lesson as a teacher and know how to assess impact; it is just a slightly more formal process for capturing your reflection.

One project involved looking at teacher workload.  There are various ways to measure this, some more scientific than others: having teachers keep detailed logs of their work before and after the intervention would be one.  However, doing so would only drive up the very workload we were trying to control!  In the end, we simply asked teachers to report how they felt; after all, “workload” is in many ways quite subjective.  Few teachers literally count the hours, and I’ve yet to meet one who isn’t willing to go the extra mile for something they feel is valuable for their students.  “Workload” is a catch-all term that relates to how teachers feel about their working week as much as a measure of hours, and so their self-reported judgement was measure enough.

Student voice is another tremendously powerful tool for assessing interventions.  Of course, like any data, this can be interpreted in different ways.  My students’ views that redrafting is “boring”, and that they particularly don’t want to do it in history when they only have a few lessons a fortnight, could be interpreted to support a range of next steps.  It could mean that I need to better explain the value, or that I need to find new, more time-effective ways to do it.  It could mean that I should reduce the frequency, or that it is a task better suited to homework than classwork.  But it has still yielded a valuable response that helps me understand the impact of the intervention and their reaction to it.

Sometimes cross-referencing this with other data (whether assessment results or behavioural) is also powerful, as is reviewing students’ work for key ideas and evidence of progress… But at this point I am probably teaching you to suck eggs.  That is exactly the point: small-scale, teacher-led research turned out to be neither as scary nor as daunting as I first thought.  In fact, it mostly involved thinking about a lot of things that I reflect on anyway as a teacher: did that lesson work, did they enjoy it, did they ‘get it’, how do I feel, how do they feel, what does the assessment show they understood or misconceived about the work, and so on.


Overall, the impact has been powerful.  I do indeed feel better equipped to discern good advice from bad and to take a less ‘trial-and-error’ approach to teaching.  I feel more confident evaluating my ideas and interventions and more willing to abandon those that are not working, however much I might like the idea or have invested in bringing it to fruition.  At first practitioner research seemed scary.  Right now, I don’t know how I ever taught without it.

The following questions have helped me reflect on research and how to use it to develop my teaching:

  • What is the issue I’m trying to address or the area I’m trying to develop?
  • What research exists and what specific questions would I like it to answer? Where can I access research on this?  [For this, our university links have been hugely helpful.]
  • What will I try now to move forward with this? Does that fit with what I learned from my reading?
  • What will success look like here? Who will feel or behave differently and how? How will I check that this is working?


The full report can be accessed here:

BERA (2014), Research and the Teaching Profession: Building the Capacity for a Self-Improving Education System.

  • “Workload ‘pushing young teachers to the brink’” (BBC News, 15th April 2017).
  • “Teachers ‘wasting time on marking in coloured pens’” (BBC News, 21st October 2016) (quoting Nick Gibb).
  • “Inspectors are still looking for detailed marking despite pleas not to, Ofsted admits” (25th November 2016).

The insidious role of marking in teacher workload and misery has been a growing complaint for some time, and with some justification.  Too often an external auditor, be it a senior leader, an Ofsted inspector or a parent, expects to be able to see evidence of teachers’ work writ clear, ideally in a specific colour of pen.  Of course, simply writing lots of comments on students’ work does not mean that students listen to or follow advice; this is well established as the greatest challenge in giving feedback.  Therefore a clear response from the student is called for.  Along with a third pen colour.  Of course, this does not fundamentally address underlying issues such as the quality of teacher comments, and so many students easily slip into bad response habits such as writing “okay”, “thank you” or simply copying out their targets without any developed understanding of their next step.  This doesn’t lead to progress, and high-quality feedback is known to be the cheapest high-impact intervention schools and teachers can offer… so clearly more marking is called for.  And so it goes on.

This issue has become serious enough to generate responses from unions and Ofsted, and the establishment of a Marking Policy Review Group which specifically addressed the issue of teacher workload, titling its March 2016 report “Eliminating unnecessary workload around marking”.  Two key culprits stand out: “deep” marking, where an extensive quantity of written feedback is given, and triple impact marking, where a written dialogue develops between teachers and students.  Both generate an intense workload for little proven impact.  But both are driven by the same goal – ensuring that feedback has impact, one of the greatest challenges in assessment, as I discussed in my last post.

This is not just an externally imposed problem.  I have found myself adding more and more to my ‘depth’ marking over recent years: seeking to address literacy, give targets, identify what strengths the work shows, model effective answers and give directives for the application of targets… in short, to make each piece of marking the perfect ‘solution’ to student progress.  Too rarely have I stopped to think carefully about what the impact of each piece of feedback was, or which parts of this exhaustive process were actually the ones that best supported students’ learning.  When students made progress, it felt irresponsible to tinker.  When they struggled, it felt dangerous to step back and reduce my input… so I generally added more.

In a blog post on 1st December 2016, David Didau threw down a challenge to school leaders: let teachers reduce their marking time (the time spent actually writing comments for students) and experiment with other ways of giving feedback, particularly giving whole-class feedback and creating model work based on a reading of students’ work.  This seems very similar to the model advocated by the Michaela school and fits well with Elliott et al.’s (2016) finding that dialogic and triple impact marking generate significant workload but lack clear evidence of impact.

At JMS we did indeed pilot this model of feedback across various subjects and key stages in order to reflect on the purpose of feedback and the impact it could have.  There were a lot of positives: once teachers got into the swing, it was a dramatic workload-saver.  It drew my attention to exactly how much time I spend rewriting the same comments on several students’ work.  Instead, using this model we produced a single class feedback sheet, which we started terming the ‘Examiner’s Report’, and then focused on how we would ensure that students took the key messages on board.  As with any feedback model, simply telling students what had gone well and what needed improving was not enough.  Modelling helped, but even combined, both methods rely on students being able to identify which aspects of the general feedback apply to their work.  Those with lower confidence had a tendency to be over-critical of their work and risked focusing on fixing problems which did not apply.  Those with a limited grasp of the assessment criteria could not always see which bits of feedback applied to them.

One-to-one conversations with those students who struggled to apply the feedback were crucial.  I think our openness that we were trying something new and wanted their feedback on it also helped; students seemed more willing to admit early on if they were struggling to understand the feedback.   This may be because ‘problems’ could be safely located with the ‘new’ model, rather than in themselves or the teacher, which facilitated questions and dialogue.

For me, the process has given a new emphasis to the importance of dialogue in feedback.  I am not advocating extended written discussion, or even a specific pen colour.  Workload has to be a consideration, but so does turnaround time if the effort is to pay off for the students.  However, I am convinced of the value to my students of seeing the feedback I give as the first step in a dialogic process, in which we discuss what went well and how that was achieved, what the next steps are and how they will try to meet them.

This does not have to be a laborious written dialogue built in different colours over several weeks, with books and folders passed back and forth.  Sometimes, often, verbal discussion is quicker and more directly relevant to the student or small group with whom I wish to discuss their work.  Tools such as the ‘Examiner’s Report’ can play a valuable part in this by cutting down time wasted marking repetitively, whilst shaping my thoughts on how to move students forward and giving us a clear starting point for dialogue beyond the piece of work itself.  However, I have found whole-class feedback to be very much the start of a process, not sufficient on its own.  In whatever form, I need my students to respond directly to my feedback so that I can be sure it is doing the job.


Questions that helped me to reflect on student responses to feedback:

  1. How widespread is this error and is it something I need to address with the whole class?
  2. Is this something the students can fix themselves? If so, when am I going to give them time to do that?
  3. How will I know if this feedback has ‘sunk in’? What am I expecting students to do with it or how am I expecting their thinking to develop?  When am I going to give them time to do that?
  4. What is the most time efficient way to work with the student on this development point?

Reread Didau’s original post here: