Like over 80% of teachers, I believe that boys and girls should be able to perform equally in any subject and should have equal opportunities to do so.  I don’t use the “gender” column of my class data sheets to adjust expectations.  And, of course, I offer equal support opportunities to both genders.  So gender gaps have very little to do with me or my classroom practice.  And I can relax and breathe easy.  Perhaps there are other colleagues whose attitudes are less modern and who bear a greater share of responsibility for persistent gender gaps in attainment.  Though that is hard to believe, as most overtly echo my own views.  Far more likely it is the children themselves and the attitudes, behaviours and expectations they bring to the classroom.  The way they are programmed by wider “society” into certain gender roles and behavioural patterns that affect their educational outcomes.  And the gender gap exists throughout the education system.  So that must be it … it is society’s problem, nothing to do with me and I can relax.

The only downside to reading “Boys Don’t Try?  Rethinking Masculinity in Schools” by Matt Pinkett and Mark Roberts is that it challenges such assumptions.  This can be discomfiting.  To learn that the 80% of teachers who say boys and girls should perform equally in any subject then went on, in interviews, to display gender-oriented attitudes about writing, behaviour, oracy and mathematics gave me pause.  I began to think about staffroom comments about “boy-heavy” classes and “not your typical” boy/girl (e.g. those making subject choices that did not fit gender stereotypes).  Of course this may just be a drop in an ocean, or behind closed doors, and so it is still possible to think it may not matter.  Not much.

But then you’re challenged to think about all the little choices you make as a teacher, all the little ways in which gender expectations trump individuals; some encouraged by those around and above us.  As I continued reading, I made a list of my own sins in this regard and some of the questions it raised:

  • Using gender to plan lesson seating and as a tool for behaviour management, and thus having different expectations of students’ behaviour before I even meet them. Could this then lead to different reactions for the same behaviour based on gender?
  • Being surprised at the number of boys when walking into a (voluntary) revision class. Did I express this, even inadvertently, and reinforce the expectation that these sessions were not “for” boys?
  • Expecting far more boys in a leadership detention. Did this change the nature and tone of my conversations with students; more of a shrug at the boys’ tales of bad behaviour, greater disappointment and time spent reflecting with the girls?
  • Turning a blind eye to gender-reinforcing “interactions” and “banter”, especially as students get older. Is an ironic eye-roll really enough to challenge the reinforcement of gender norms and the low-level harassment or incursion of their personal space that some students have to endure in the school environment?
  • Adjusting expectations of a “boy-heavy” class both in terms of behaviour and outcome. How far am I therefore driving behaviour and attainment differences?

The list goes on, but the point is that, when I really thought about it, and as I paid close attention to my own practice in subsequent weeks, I began to notice gender-oriented comments and behaviours in my own practice and in that of those around me.

Of course, the question remains how far this matters.  Our students have had a whole lifetime of gender-oriented behaviour training from society, peers and the media.  Is my expectation that there will be more boys in this week’s after school detention really going to do any significant damage?  Especially as it is a prediction that I could bet my life savings will prove to be true, without much fear of going homeless.

But the sheer prevalence of such conditioning is one of the reasons our modelling and expectations do carry such power.  If boys live in a world where academic success is seen as feminine, and where damage to their esteem is repaired by asserting masculinity, their behavioural choices can be individually rational but destructive over the long term.  If tests, rows, anxiety, pressure and stress all create a drive to “masculine” behaviours of messing around, not revising or working hard, and getting into trouble, then every time we reinforce the underlying expectations we are reinforcing these behaviours.  And, in a time of increasing anxiety in all our students, boys and girls, when we reinforce behaviour and study norms in our students, we perpetuate and increase the anxiety levels of those who feel that they don’t, can’t or don’t want to fit in.  Which is probably all of them.

Each time we, as teachers, model, perpetuate or reinforce these behaviours it probably is just a drop in a vast ocean.  But then the first 5 minutes of, say, my lesson next Tuesday period 5 is just a drop in the ocean of their learning.  I still intend to plan it and make it the best I can.

Because by developing ourselves, and consciously, actively challenging these expectations we as teachers have the power to be more than a drop in the ocean and to promote positive change.  Of course, habits are hard to change and I have already made slips.  But we can learn from these and reflect on how to get better.

And of course my list is just the behaviours I have noticed in myself.  One of the most powerful feedback tools for teachers is peer observation.  A peer can help you probe further into your gender practices and expectations.  They can focus their observation on interactions that are very hard to track ourselves whilst teaching.  So my next challenge to myself is to ask a colleague to use our observation time to really probe gender interactions in my classroom.  I want to know:

  • Do my questions fall equally on boys and girls? Not just in terms of number but in terms of challenge level of the questions?
  • How long do I give each gender to answer, think and reflect? Do I move on more quickly or leap in with the correct answer when boys get questions “wrong”?
  • Is my behaviour management consistent – do I notice and address off-task behaviours when they appear in boys and girls?
  • Are my tone and voice adjusted by gender? Am I communicating different expectations through non-verbal cues such as body language?

I suspect some hard answers, but they will be useful because, as Pinkett and Roberts argue so convincingly, this really matters.

As further reading, of course I recommend “Boys Don’t Try? Rethinking Masculinity in Schools” by Matt Pinkett and Mark Roberts.  Now available in our CPD library.

Amanda Spielman (2018):  “Too many teachers and leaders have not been trained to think deeply about what they want their pupils to learn and how they are going to teach it.”

When I became Head of History, over a decade ago, our curriculum offer was quite typical and largely outside of my hands.  We followed a conventional Key Stage 3 covering 1066–1945, in line with the National Curriculum directives, and did a range of units at Key Stages 4 and 5 that were intensively focused on modern world history.  In these courses we included as much overlap as possible in order to, we believed, give our students the best chance of excelling in their final exams.

The freedom offered in recent years is both empowering and exhilarating, but it was not easy to know instantly what to do with it.  At first, I tinkered with the original national curriculum material, cautiously chose a wider range of A-level and GCSE options as directed by the new specifications and, to be fair, invested considerable energy into mastering these new topics.  However, in the last couple of years I have found myself in a position to think much more carefully about what students should be learning in their time at John Mason – what is the “powerful knowledge” I want them to take away, and why.  In developing my thinking in this area, a number of principal ideas from educational research have been tremendously helpful.

Key idea 1:  Learning is about knowledge in long-term memory and builds in schemas of connected ideas.

Too often in the past I have considered units of work, and even lessons, in isolation, judging their “quality” on performance-related outcomes (rather than learning outcomes) such as pupil enjoyment.  Furthermore, I have often failed to be explicit in helping my students to integrate new knowledge into a coherent whole.  The fact that it is easier to teach a course the second time around is a truism that I now relate to the connectedness of knowledge in my own head and my tendency to forward-plan rather than plan from the end.  In light of this I am changing my curriculum planning model: instead of thinking forward from units or even tasks that I like or want students to experience, I am thinking about the key knowledge they need to understand.  In this planning, the knowledge organiser is becoming an indispensable tool.  Knowledge organisers are available to download (and time is not infinite!), but wherever possible I am trying to make my own, or at least edit the ones I download, to ensure that I am thinking carefully about the knowledge we are delivering and how it connects in a meaningful way.

Key idea 2:  Students’ response to teaching will be affected by the knowledge they already have, the mindset they bring to the class or subject and, more than anything else, their peer group. 

However, no matter how carefully designed the knowledge organiser and related teaching activities, each student is going to connect the ideas in their own, unique brain in their own, unique way.  If in doubt they will look to their peers for guidance, which can be helpful, but can also end up building and escalating misconceptions.  The student with the greatest understanding is not necessarily the one the others listen to.  The implications of this for teaching, learning and curriculum planning are vast.  However, one or two key things that I need to think about at the curriculum planning stage strike me particularly strongly.  The first, perhaps, is the need to consider our students’ context very carefully when planning curriculum delivery.  Not knowing doesn’t necessarily mean they’ll ask – and certainly not that they’ll ask the teacher.  They can and will attach ideas to what they do know, and can create powerful and long-lasting misconceptions in the process.  It is therefore imperative that I consider their backgrounds and access to relevant contextual information very carefully when planning learning – that vital cultural capital, or lack thereof, that can make so much difference.  It is important to allow time to explore preconceptions.  Educational visits and other experiences that bring the learning to life can take on a new significance for children who are unable to visualise the content we are describing.  And, vitally, I must share the curriculum map with students (the knowledge organiser comes in handy again here) in order to help them piece together the individual units of learning in a meaningful way.

Key idea 3:  Teaching needs to be responsive and interactive, and the dialogue about learning needs to include assessment.

Even with the most careful planning in the world I cannot entirely escape idea 2: my students will build schemas that are unique to them.  But the better I can understand what they are thinking and how they are assimilating information, the better able I am to shape this and tackle misconceptions.  Again, the implications here are many and varied, and this links closely to the nature of questioning in the classroom, which I have discussed a lot elsewhere.  However, for me, the biggest implications for “bigger picture” curriculum planning have been for assessment.  I’ve put a lot of thought into how and when to assess, how to create assessments that really give me insight into what students have taken from the learning, and how to “break away” from rigid adherence to exam questions when a different assessment model would offer better insight.  It is also the key rationale behind our dialogic marking policy, with its careful emphasis on minimising staff workload and allowing freedom to respond to what the students have understood.  A driving principle behind our approach to formative assessment is to identify and address misconceptions, in whatever form of response is most appropriate.

Key idea 4:  Memory is strengthened by revisiting material and retrieval practice, and learning is maximised when cognitive load is optimised.

Understanding of how memory works has been one of the most important research areas in developing my teaching and planning.  The old model of delivering blocks of material and then revising these at the end of the course (or expecting students to revise the material) was my instinctive approach for many years.  Now, when curriculum planning, I try to consider carefully the principles of interleaving, retrieval practice and cognitive load.  This is not the place to detail research into memory at length, but the key implications I have taken are these:

  1. Students will not remember all the “powerful knowledge” I want them to on the first exposure and so I need to work out how to revisit material over the lifespan of a course. This needs to be carefully planned to ensure that the information itself does not lose internal coherence (e.g. in history, say, a chronological structure).
  2. Regular, spaced retrieval practice including low-stakes testing will support students’ retention of the core material they have learned. The time for this needs to be worked into my curriculum plan.  I have made the decision to allocate lesson time to this as I am just not convinced that it will be done well at home by all students, thus creating a chasm in their understanding.
  3. The more information they have readily available in long-term memory the less the “cognitive load” is of absorbing new material and more complicated concepts. By revisiting concepts after spaced retrieval practice I maximise the likelihood of students being able to access such material.  Therefore I need to plan this into my curriculum delivery model.

Key idea 5:  Students of all ages and attainment levels can benefit from metacognition.

Too often in the past I have treated the end product as the end of the learning.  My assessment of students’ success has been based on a summative piece of work, and I have used my professional judgement to try to unpick where and how things went wrong and to plan interventions.  I am only just beginning to understand the power of metacognition to help students understand, plan and monitor their own learning.  I am blogging about my experiences with this in other posts.  However, I am already experiencing the implications for my curriculum planning.  The need to leave time and space, and to identify key points at which to model my own approaches and thinking, has further developed my planning around the delivery of new concepts and assessments.  The construction of learning models that support an iterative process, in which students reflect on their past learning strategies and develop them, is vital.  We are going to be doing a lot more work on this in the next year, but already it is feeding into my planning models and we are developing resources to support this at key points throughout the learning in all key stages.

With all this to consider, there is a lot to take on.  Different teams and subjects are at different places and there is work to do in developing these ideas in practice.  Developing and improving a curriculum is not a single or an overnight job but one I am working on all the time.  It will never be perfect.  However, with these research ideas in mind, I feel more confident than ever that I know the right questions to ask of our curriculum and that I am identifying ways to improve it that will have a real impact on students’ learning at John Mason School.

Questions to help reflect on curriculum design:

  • How have you designed your curriculum and what is the rationale behind this?
  • How do your curriculum choices reflect the context of your pupils?
  • How did you choose what to teach and when?
  • How are subject skills developed throughout the curriculum?
  • How do you identify gaps in knowledge and how do you assess skills?
  • What works well and what needs to be developed within your curriculum?

After many, many years of just operating instinctively, I have been thinking a lot about questioning over the last few years.  I have been reading a lot about questioning.  I’ve read about strategies, types of questioning and pauses.  I’ve blogged as my thinking has developed; about distributing questions equitably, or using pauses at different points in the questioning sequence to build students’ responses.  I have learned about hinge questioning and how to construct multiple-choice questions that really probe students’ thinking.  I’ve been introduced to technology that does an amazing job of supporting quiet students to respond and participate, or of randomising my question selection.

I have learned a lot.  To summarise some key thoughts in a few bullet points, I have learned that:

  1. Questioning is very important – perhaps one of the most powerful tools we have as classroom teachers.
  2. Performance and learning are not the same – so questioning needs to be subtle and strategic.
  3. There are many different types of questioning, with many different purposes.
  4. Students respond to questioning in very different ways.
  5. There is a LOT to learn about questioning, and it is very complicated.

Some of the advice I’ve heard and, indeed, repeated to teachers in the past now makes me cringe.  To take one example: whole-class questioning.  I find this a hugely powerful tool, at the right time and in the right place.  If routines are established it can be an efficient way to poll the class.  However, the routines are vital – the equipment being available, the speed with which it can be accessed.  If not, chaos quickly ensues.  Of course students copy each other’s answers; so I’m looking for more than just what is written on the whiteboard.  I’m gauging reaction time and looking to see who is stuck, looking around at their peers or quickly changing their answer to conform with the class.  It matters whether this is a hinge moment, an opinion poll or a quick plenary.  It matters whether the act of correcting their answer is the learning I desire, or whether I really need to know how many students actually know the information … in which case perhaps I should be considering a quick (private, low-stakes) written quiz.  Thus simply to tell teachers to “try whole-class questioning” is remarkably simplistic and probably not going to work without much greater support and guidance.  And yet it happens.  A lot.

One of my roles is to support our early career teachers, who increasingly come from a variety of routes into teaching, with many different levels and types of training.  “Questioning” is a recurring development point, frequently raised by reflective teachers themselves who are always looking to improve the quality and value of their classroom interactions.  It is a hard one to tackle: there are so many things to get right, and so many which can go wrong. There is a lot of reading out there and much great advice, but it can be too specific, or else act like a “menu” of strategies.  It is not always clear what to pick.

With a focus on metacognition in our school this year, I have been thinking a lot more about my own thinking and about how and why I make decisions as a teacher.  To support our early careers teachers I have mapped out this questioning flow diagram that tries to capture some of the decisions I make on a day-to-day basis.  My key thinking boils down to:


This is not meant to be a comprehensive overview of questioning.  It links to pieces that explore the issue with far more subtlety.  It leaves off some big ideas in questioning (e.g. hinge questions), partly because I find them a little complicated for teachers still getting to grips with questioning (although immensely powerful when well planned) and partly because I really wanted it to be a single page for easy reference.  Every time I look at it I tweak it a little more, or question whether I have included or excluded the right things.  Several colleagues have suggested tweaks which have been included here (with thanks to Lucy Dasgupta and Chris Davies).

However, my early career colleagues this year do say they have found it helpful as a starting point, and so I am sharing it here.  Any constructive suggestions would be appreciated, and if any colleagues have their own similar maps and would be willing to share, I’d love to take a look.

Questions that help me to reflect when planning questioning:

  1. Why am I asking the class questions?
  2. Should more students be involved in this questioning sequence/dialogue?
  3. Is this questioning strategy time-efficient for my major goal?
  4. What would this look like if it worked brilliantly?  Where should I go for help with that specific strategy or vision?

The problem with lesson observations

Lesson observations can be uncomfortable for both parties and are difficult to get right.  When being observed it is hard not to feel judged, even defenceless, even though that is not the intent.  The abolition of “judgements”, following Ofsted’s lead in 2014, did not necessarily do enough to change this dynamic – partly because there is also discomfort on the part of the observer.  As an observer, the pressure to offer “constructive suggestions” can force you to look for the negative instead of the positive, and the better the lesson, the more wide-ranging the search for something “useful” to say can become.

Furthermore there is often a divergence of goals between the observer and a teacher.  As a teacher I want to put on my “best face”.  At times this has been quite a fake “show” that didn’t reflect my normal teaching.  As I grew more confident I was happier delivering something that more closely resembled my “normal lesson” (whatever that is!) but was still overly focused on aspects of planning and delivery that were about performance rather than substance.  However as an observer I want to see difficulties, challenges, classes that are struggling and things that I might be able to “help” with.

There are many other issues with the lesson observation model.  To name a few:

  • Judgements are often unreliable, and it is unclear that two observers would focus on or even notice the same things or feedback on the same points. When doing joint observations I have often picked up on very different, sometimes entirely contradictory things from a fellow observer.  Whilst we can normally reach agreement with a short discussion, it has always made me wonder about all those observations with just a single observer…
  • The observation itself is not necessarily a valid tool for analysing a teacher’s pedagogical choices. This holds even assuming the best of conditions (a subject specialist with some knowledge of the students in the room).  Whilst there are clear and helpful principles behind good teaching, we all know that choices about delivery of a particular unit of learning to a particular cohort of students can be personal and highly nuanced.  Whether observing or being observed, I rarely felt that the comprehensive understanding of these things existed between both parties that was required to ensure feedback was relevant and useful.  Various efforts to mitigate for this (extended lesson planning sheets, detailed “context” documents) have tended to add to workload rather than solving the core problems.
  • The observation is unlikely to give a valid picture of learning or progress. You can’t observe learning, only performance, which is a poor proxy for learning under the best of conditions.  This results in most teachers, however resistant to putting on a “show”, having to offer some adaptations in an effort to “demonstrate progress”.  When both the teacher and the class are “performing”, it is at best questionable whether the observation represents normal practice, rendering the feedback of very limited use.


As a result, it is unsurprising that there is little evidence to show that the three lesson observations a year most teachers get have a positive impact on teaching and learning.  Attempts to change this have generally fallen flat.  For example, the University of Bristol’s large-scale Teacher Observation study, trialled in 82 schools, showed the model to be very expensive but with no impact on (English or maths) results.

However, this year, our new Director of Teaching and Learning, Lucy Dasgupta, introduced a new model of developmental lesson observations to John Mason and it has been something of an eye opener.  Her ideas have radically changed how we conduct lesson observations at John Mason and not before time.


Key Components of the Developmental Lesson Observation Model

1:  An agreed and precise focus – The teacher brings an idea for the focus of the observation to the planning meeting and this is agreed with the observer by the end of the meeting.  The aim is that it should be something that is a new strategy or a pedagogical development for the teacher.  In my first observation I sought feedback on my implementation of retention and recall strategies, particularly with regards to the appropriate pacing for different groups of students in the lesson.  In another I asked the observer to focus on my modelling of my metacognitive processes as I modelled an extended analytical thinking task for the students.  In both cases, I selected something I am developing in my teaching this year and sought feedback on this agreed aspect of the lesson.

2:  Joint Planning – before the lesson observation the teacher meets with the observer to discuss their plan for the lesson, their objectives and relevant contextual factors.  This does not require mountains of paperwork (if a teacher is there to explain their planning, why would it?) but it does involve both finding some time together to invest in a discussion about the objectives for the lesson.  During the planning session the strategies the teacher plans to use in relation to the observation focus are reviewed particularly carefully.  The observer’s role is as an active participant in planning, sharing experience and suggestions.  This increases the likelihood that the observation itself will be useful as both parties clearly understand the objective and the choices behind strategy selection.  There is not a sense of “I wouldn’t have done it like that…” as ideas are shared at the planning stage.  I have found both as an observee and an observer that what comes out of this meeting is a shared understanding of the context of the class and a sense of shared ownership for the lesson.

3:  The Observation – During the observation the observer focuses on the agreed development in the manner discussed in the planning meeting.  At the planning stage both parties discuss what the observer might focus on, with the classroom teacher taking an active role in defining what data would be useful to help them evaluate their own strategy.  When being observed the whole process is more comfortable; I know what the observer is looking at and why, and what sort of feedback they are gathering.  It is what I have asked for!

4:  Feedback – This is short and focused, as both parties review the evidence gathered.  The observer’s main role is to provide data to help the teacher (the expert on that class in that subject, let us remember) to reach a judgement about how well the lesson strategy met their goals for the class and how they might develop it further in the future.  Other discussion is off the table; this is not a general, sweeping review of someone’s teaching, with the observer feeling pressured to provide “development points”, however trivial or tangential to the focus.

5:  Considerations for future practice – The final step of reflection is to consider future practice in taking the strategy forward.  This can be led by the observer or the teacher, depending upon the nature of the feedback discussion, and may involve identification of next steps or further support.


This model of lesson observation feels empowering both as a teacher and as an observer.  In both roles I feel more comfortable, and the process feels far more natural and productive than the traditional model.  Obviously there is no way to measure the impact of this specific innovation amongst everything else.  However, my experience has been that the feedback I have received has been much more focused and useful to my development than previously – it is something that fits with my own development goals and helps me reflect effectively on my practice.  Our staff feedback after the first cycle of observations suggests this to be widely the case.  Even if this is not 100% achieved, if lesson observations can be conducted in a way that empowers teachers, respects their professionalism and leaves them in control of the learning in their own classroom, then I’m all for them!

Questions that help me to get the most out of a developmental observation:

What am I currently developing in my own teaching?  What new strategies am I trying to deploy with my classes?

Where am I least confident in my delivery or outcomes, e.g. in what area of my teaching could I most benefit from support and guidance?  Which aspect of content or lesson planning, or which sub-group of students, might make a useful focus?

What would I like to better understand about my own teaching at the end of the observation?  What data could an observer gather that would help me better reflect on my own teaching than just being alone with my class?

Information on the Teacher Observation project can be found here:

If learning is to be truly empowering for our students, they need to understand how to use what they have learned.  I have found that bringing metacognitive reflection into the feedback process can support this.

Since the EEF published its report on the high positive impact of metacognitive strategies last April, I have been reflecting on this a lot.  Metacognition is not really a new concept and there are few techniques I’ve seen suggested that are entirely new.  However, the report did reawaken my interest and drive home the potential value of building metacognitive reflection as a habit in my students.  A number of strategies and suggestions fit well with the idea of developing a “growth mindset”.  And like many good growth mindset strategies, one of the great challenges has been developing metacognitive reflection as a habit – in myself, let alone my students.  I have yet to crack this, but I have found some strategies and areas of teaching where it has had particular impact.  The first of these is in feedback.

With the current focus on knowledge-acquisition (a very important goal) it can sometimes be easy to overlook how important it is that students know what to do with the knowledge they have acquired.  My experience of the new qualifications has been that they seem far from friendly towards rote-learned application strategies and simple, formulaic answers.  The qualifications rightly seem to demand that students can apply domain-specific knowledge to quite complicated problems and challenges, using it flexibly and effectively.  In helping them to develop these skills, metacognitive prompts and questions about their process can be very useful in supporting them to reflect on what they did with their knowledge and how they went about deploying it.

There is some great material out there on ways to support students’ thinking along these lines with major assessments.  I am a particular fan of exam wrappers, which I first learned about from Alex Quigley’s blog.  The metacognitive modelling of exam technique (the ‘walking talking mock’) is another strategy I favour, and one John Tomsett has frequently advocated, not least in his most recent blog.

However, I do think that for something to become a habit of thinking, it needs to be deployed regularly and so I have also been working with prompts I can use in my regular teaching and feedback.  Below are some of the questions I find myself using most regularly to encourage students to think about how they prepared for, planned or researched a task and how effective that process was.  One thing I have learned during the process is that these questions can produce interesting answers that give me better insight into where my students are struggling than simply “marking” an end product.  Another learning point has been that they can be deployed even when students have done well – they don’t always understand why they have done well.  Too often students think the key is about the amount of time spent on the work, rather than strategies used.  With these questions I try to move students’ thinking from focusing on “hard work” to “smart work”.  The last one is therefore particularly important!


What did you read to research for this essay? 

What search terms did you use to find material?

How did you then select material?

Which reading was most influential on your thinking? 

What revision strategies did you use to prepare for this assessment? 

Why did you choose these strategies? 

How effective do you think they were? 

What gaps did they leave?

How did you plan this answer? 

What were your key priorities? 

How effectively do you think the [essay/narrative/work] reflects the plan you created?

Review this suggested content and identify which items you included in your answer.  Did you leave any of the suggested content out?  Why was that?

Did you include anything not on the suggested content list?  Was it more significant than the material on the suggested content list? 

Why did you include X but not Y or Z?

How would you approach this task differently next time, now that you have had feedback?

What strategies helped you to do so well in this task, that you can deploy next time?

These “metacognitive feedback strategies” are not a replacement for all the other feedback and marking strategies I use or have blogged about; they are an additional tool I can deploy to support students’ development.  They can work as part of whole-class feedback or individually.  Often these questions will form the basis of an oral discussion whilst students are working on feedback tasks, avoiding the labour involved in a “purple-pen-style” dialogue, which can take some weeks to complete!  I still give students targets, redrafting work and further reading as forms of feedback.  I wouldn’t only use these questions, as I don’t believe that feedback needs to follow a single format – in fact, that could be detrimental to the main goal of creating a meaningful dialogue.  However, I am increasingly making use of the metacognitive questions above to encourage students to reflect on how they approached the planning or delivery of a task, and how they sought and deployed the knowledge and skills needed to achieve success.  If learning is to be truly empowering for our students, they need to understand how to use it.  I have found that this approach supports that outcome.

Questions that help me to reflect on my feedback choices:

What is it I most wanted the student to learn from this activity and what type of feedback will best help them to understand and reflect on that?

How confident am I that I have understood the process by which the student has ended up at this point?  Is there anything more I need to understand about their work or planning process to help them improve?

How can I support my students to reflect on their own learning journey, rather than simply telling them what to do differently?

How will I know if the feedback has really helped the student to make progress?  What difference will I see in the future?


The EEF’s report on metacognition is well worth reading.

I have come to understand that some of the ways I’ve taught SEND students in the past have not been helpful.  In some cases, I think I have adopted strategies that would have actually hindered learning for some students.  I have had to think carefully about how to develop my practice in this area and challenge some long-held preconceptions about how best to help students.  Here are 5 things I have come to believe I was doing wrong and the changes I’ve made to my teaching.

Differentiated learning objectives… at one stage this was quite standard and my planning would reflect different expectations of students with different needs and prior levels of attainment.  This could be in the form of “Must/Should/Could” or “All/Many/Some will…” or simply in my own planning.  I anticipated SEND students achieving less, thinking less deeply and struggling to access complex tasks.  With more experience, I have increasingly come to understand that if my planning places a ceiling on what my students can achieve they are unlikely ever to excel or to achieve their full potential.  When planning today I aim for all students to achieve the same outcome which (over the course of a unit) includes secure knowledge and the ability to use this to analyse and evaluate the material we’re considering.  If a student has barriers that make this challenging, my main aim is to work out how to scaffold and support them towards the outcome, not how to change the goalposts and give them something easier at which to succeed.

Overloading students with support resources…  When planning challenging activities, I find it very tempting to “support” SEND students with different and additional resources: key word lists, prompt sheets, dictionaries or thesauruses to help with vocabulary, not to mention my own helpful “drop in” to chat to them about what they were doing, normally just as they were getting started.  Perhaps unsurprisingly, they were often more than a little confused… prompting me to offer further “helpful” resources.  I am not saying any of these are inappropriate in and of themselves; each has a valuable place in my classroom and I use them all and more.  But they are workload intensive and do not always seem to do the job.  Reading about cognitive overload has helped me to understand that, far from helping, at times I was making a challenging situation worse, overloading rather than supporting my students.  Sometimes additional resources or support will help them to achieve.  Sometimes an early conversation will help.  At other times, they may need a little extra time to get to grips with instructions and have a try at activities, rather than me leaping in with further support and models, which they may not necessarily need.  My core focus now is on planning exactly what thinking I wish them to engage in during an activity.  I then find it a lot easier to think of ways to strip away barriers to learning.

Oversimplifying reading materials… literacy barriers can be some of the hardest to overcome in the mainstream classroom.  Even relatively small gaps in literacy levels can damage students’ confidence or their ability to access written materials in the time we have.  There is an added challenge in the history classroom, where students often have to grapple with archaic language and unfamiliar sentence structures.  In the past, I would often spend a great deal of time simplifying sources and written documents, or removing several examples so that some students could focus on one or two sources whilst others had more.  Sometimes both.  (I’d then replace much of what I’d removed with other additional resources such as word lists… see above!)  An article in Teaching History helped me to reflect on this, arguing that it was fundamentally unsound to expect students to do more with less… to build a picture of the past, analyse evidence and evaluate interpretations with less evidence upon which to base their judgements.  I am now extremely careful to think about how I support students to access complex text.  Whilst I may trim the overall word count, I now focus on teaching students techniques to access material that, in the past, I would never have shown them: for example, highlighting familiar words and phrases, circling difficult passages, reading to get a “sense” of the document and comparing their understanding with peers.

Expecting students to know how to use extra time… In the past I have often treated “extra time” as a sort of universal panacea without really thinking about what it meant for the student, especially in terms of their cognitive processing.  If asked, I would generally have assumed it meant more time to write and thus bring up the word count.  For many years I diligently gave students their extra time in assessments without ever discussing with them how they used it or whether it was working.  In recent years, colleagues have helped me to understand that “extra time” can mean different things in different subjects and for different students.  Do they need longer to plan?  To process instructions?  To check their work?  Finding out can involve paying close attention to them during class assessments and then discussing with them where the sticking points arose very soon afterwards (while they still remember), to suggest strategies for the next assessment.  A similar approach is needed for other concessions, including rest breaks and access to technology.  Unless students are trained in how and when to deploy these supports, it is wrong to assume that they will know how to use them to best effect.  I now invest a lot more time working with students to understand how they can best use their extra time and training them to deploy it strategically.

Over-marking and unfocused marking… Especially when there is a “gap” to “close”, I have found it very tempting to thoroughly scrutinise the work of some student groups, including SEND students, in a well-meant but ultimately ineffective attempt to fix everything at once.  Some general comments on the skills deployed to reinforce these, reflections on the target set, some new development points and, of course, some literacy feedback… so much to choose from, so why not a good selection?  Positive as well as developmental, of course, to keep up motivation!  In terms of cognitive processing, this approach was doomed to failure.  Much like the overload in lessons, the overload in marking did not help students to focus on a particular area for improvement.  There was often a disconnect between target and feedback; or at least the connection was hard to find in a sea of red pen.  In fact, it makes perfect sense that if you’re struggling with one area, other things might slip.  What you most want is to achieve a level of competence in your target area and then go back to the others, slowly integrating your skills with practice and increased confidence.  I now ensure that my feedback is closely focused on the relevant target for the piece of work I’m looking at, to help create a coherent cognitive experience for my students.

I am sure that there are changes I have yet to make and that my current ideas may yet change further in the future.  If there is one area of teaching that calls for regular and honest reflection on the impact of our actions it must be this:

Hendrick and MacPherson (2017): “it is the one and only [area] where we will depart from our mantra on reducing teacher workload and tell you to up the effort.  It always reaps rewards for those who need it most.”

At the moment, though, I use the questions below to help me reflect on my practice in this area:


What is it that I want my students’ brains to be focused on during this task/section of the lesson/experience, and how do I remove distractions that could cause cognitive overload?

Do I really understand what they’re struggling with at a cognitive level?  If not, am I really ready to start firing out solutions and adjustments?

Did the action I took, however laboriously planned, actually have the desired outcome for students?  If not, do I understand why not?

Where is this student succeeding and what can I learn from the practice of my colleagues in terms of supporting them?


I have read widely in this area over the last two years, but it is not just texts on supporting SEND students that are helpful.  A better understanding of memory, processing and cognitive load helps with my planning and thinking about how to support all students.  However, if there is one recent read I’d recommend, it is:

Carl Hendrick and Robin MacPherson, What Does This Look Like in the Classroom?  Bridging the Gap Between Research and Practice (2017), Chapter 4, ‘Special Educational Needs: Maggie Snowling & Jarlath O’Brien’.

By trying out in the classroom what has been recommended by others, I’ve come to see the power of multiple-choice quizzes to inform my teaching and provide me with rich data on my students’ progress.

There was a time when I would rarely have thought of using pure knowledge tests as a history teacher and would certainly have been highly sceptical of a multiple-choice quiz as an “easy” option.  I was aware of its use in the American education system as a tool for assessing historians, but inclined to be dismissive of its value.  I would not say that my position was well thought-through but, if asked, would have suggested that a substantial piece of writing was of considerably more value, testing both knowledge and deeper understanding.

In recent years I have come to question some long-held assumptions about the nature of knowledge acquisition.  One part of this journey has been the realisation that I can do more to break down the skills involved in becoming an historian and assess my students on different parts of the learning journey.  Thus an open-book essay (another tool I would never have used) can help them focus on building an effective argument with evidence appropriately deployed, whilst a multiple-choice test can help me to see the gaps in their knowledge that could be acting as a barrier to higher level thinking.

In experimenting with the use of such quizzes in class and for homework over the last few months I have discovered the following things about their use, all of which have surprised me and challenged my assumptions:

  • Multiple-choice quizzes can provide rich data, very, very quickly.

I was always sceptical about how much a multiple-choice quiz could tell me, especially as students had a decent chance of hitting upon the correct answer purely by accident.  However, I have discovered that well-designed quizzes can tell me a lot more than whether my students know a simple fact.  As with most assessments, the trick to getting good quality data out is careful planning.  As the old saying goes, “rubbish in, rubbish out.”  However, with a clear idea of what you want to achieve, a multiple choice quiz can yield a huge amount of information.  Take this question, as an example:

  Which were developments in policing after 1829?

  • Introduction of CID
  • Turf scandal
  • Police forces compulsory in all towns
  • Introduction of Bow Street Runners


All of these answers had been discussed in class.  Nearly every student correctly chose “introduction of CID”, which showed they had learned something.  Those who didn’t needed some support with the basic facts, which I was able to provide.  Those who put the turf scandal had generally remembered discussing it in a lesson but were struggling with the concept of a “development”.  Those who didn’t put “compulsory in all towns” had missed a developmental step that I was worried I had run through too fast.  This was over half the class, so after the assessment I retaught that part of the lesson, going over it more carefully.  Those who put the “Bow Street Runners” might be clear on developments but uncertain on the chronology and key dates.

I was able to devise suitable follow-up activities for students which deepened their understanding and addressed misconceptions.  Even more importantly, I was able to do so quickly.  Rather than giving students a week or two to write an essay, aiming to turn it around within another week and then having to trek back to review the whole of policing, the multiple-choice quiz ran through the key concepts each lesson for three lessons whilst we deepened our understanding, addressed misconceptions and prepared for the essay.

  • The format of multiple-choice quizzes can be very flexible, allowing me to test different categories of knowledge in different ways.

I have been delighted by how easy it is to play with the format to test different types of knowledge.  At first I spent a lot of time trying to think of meaningful alternative answers for a four-per-question format.  However, the more I thought about what I was trying to test, the more I realised that no single approach was required.

Thus to test chronology I could use a tick-box approach:

  Match the Event/Person with the Correct Period.  Tick the period with which they are associated.

                          Saxon   Norman   Late Middle Ages   Early Modern   Industrial
  Matthew Hopkins          [ ]     [ ]           [ ]              [ ]           [ ]
  Heresy Laws              [ ]     [ ]           [ ]              [ ]           [ ]
  Metropolitan Police      [ ]     [ ]           [ ]              [ ]           [ ]
  Harrying of the North    [ ]     [ ]           [ ]              [ ]           [ ]
  Bloody Code              [ ]     [ ]           [ ]              [ ]           [ ]

Or a sorting activity:

  Number these key events in the history of crime and punishment 1-5, with 1 being the earliest.

  Introduction of the Bloody Code              ___
  Creation of the Metropolitan Police Force    ___
  Creation of the Bow Street Runners           ___
  Introduction of the Forest Laws              ___
  Abolition of Trial by Ordeal                 ___

Reformatting information in different ways helped me to overcome the “were they just lucky?” question: by repeating demands in new questions, I could ensure that students really were confident with the information they were called upon to deploy.

  • It is possible to test higher-order thinking skills as well as pure knowledge.

Reading various blogs around this convinced me to have a go at pushing the boundaries of this format of assessment.  Some simple tasks in lessons began to generate high-level discussion of great value to the students.  For example, in one A-level lesson we considered possible introductions to the source essay they had just written:


Which introduction?  Select which introduction you think would best begin this essay.


Introduction A:

There is a long-standing historical debate about who was to blame for the split in the Liberal Party in 1916.  Some historians think that it was Asquith’s fault because he was a weak leader, nicknamed “Wait-and-See” Asquith.  Others think that it was Lloyd George because he plotted to become Prime Minister and took advantage of the war situation in 1916 to push Asquith out.  In this essay I am going to look at the sources and draw inferences from them to evaluate who was to blame for the split in the Liberal Party.

Introduction B:

The collapse of Asquith’s premiership in 1916 created a rift in the Liberal Party that contributed to their terminal decline.  However, there is considerable debate over who was responsible, with Asquith’s supporters attributing responsibility to Lloyd George.  They saw him as an untrustworthy, self-aggrandising manipulator who exploited Britain’s needs to fuel his own ambition.  Sources C and B both express such opinions.  On the other hand, Lloyd George’s supporters saw him as a saviour who decisively stepped in to rescue the country from a vacillating Prime Minister.  Source D makes this case, and A similarly highlights weaknesses in Asquith’s leadership.

Introduction C:

By late 1916 the war looked as if it was going badly for Britain.  The Battle of the Somme had been costly in terms both of lives and ordnance and had severely damaged Britain’s morale.  Asquith himself was devastated.  Lloyd George proposed a solution in which he assumed responsibility for the management of the war through a small war cabinet with extensive powers.  However, when Asquith rejected this his government fell, and Lloyd George became Prime Minister of the Coalition in December 1916.
