‘Fake news’ has made us all question what, and who, we believe. Scrolling through social media is not enough to get the facts any more. The ‘Information Age’ doesn’t necessarily make us better informed – if anything, it makes me more confused. Is Brexit a good idea? Trump? Democracy…? If someone were to ask me what is going on in the world, I think my answer would probably be one of up-to-date cluelessness.
Short of having an existential crisis, this was my mental backdrop for the Centre for Market Reform of Education (CMRE) evening panel discussion on the London Challenge at the start of the month. At this event, the unprecedented hyper-progression of London’s secondary school pupils between 2003 and 2011 was put under the microscope, and prodded repeatedly.
Jon Coles, the former director of the London Challenge, ascribed the improvements largely to the project’s bespoke interventions, independent advisers and peer support programmes. Yet it was pointed out that the London School of Economics and Political Science (LSE) doubts this – or indeed, doubts any simplistic narrative about what drove the success of the London Challenge.
Panellists at the event explained that, as well as the improvements made in primary schools in the 1990s feeding into the London Challenge data, the rocketing progress of secondary school pupils could also be down to accelerating changes in ethnic diversity, with knock-on effects on pupil aspiration. What’s more, pupils took fewer qualifications overall at this time, and so could well have improved their scores in each one they did take. And what about the increased pay weighting for teachers in inner and outer London in 2003, attracting more, and better, practitioners into the profession?
So can the London Challenge make a full claim to unparalleled success, or not? It seems we can never know. While I wondered whether I could ever know anything again, the post-presentation discussion highlighted some key questions about the nature of learning from research projects:
- Can we ever conduct an education research project in a “sterile” environment, without competing factors affecting our results?
- Should we be launching school projects that stick to the brief and are measurable, and replicable, afterwards? Or is it better to adjust a project as it goes along, to meet the needs of the pupils acting as ‘subjects’ – even if that means losing the key learning points that could benefit students in the future?
- And are there significant issues with replicating a project anyway, when it has been designed for a specific group or context?
Having taught in schools myself for a few years, my inner teacher is ever-present, and felt the need to pipe up at this point. Every education professional knows to adapt their planning model to their context, be it lesson planning, intervention strategies or just classroom dialogue. Anything designed to work in one context should be reconsidered for the new context – that is, the young people in your charge. They need a tailored, not wholesale, education – one which works for them as individuals.
Teachers spend all day, every day with their pupils. They know them inside out. Registering the fluctuations of their progress, attitudes and unusual fashion preferences is just part of the day job. And behind this is a wealth of qualitative data. Teachers are a gold mine of knowledge about what is impacting their pupils, and how it is manifesting itself. Surely this qualitative information is unspeakably valuable in making data meaningful?
I’m not criticising the value of a quantitatively defined research project – however much over-simplified data analysis can sometimes call pig farming to mind (weighing the pig doesn’t make it any fatter). And this is particularly relevant when the validity of your data is in question – as the London Challenge data was, in the view of the economics academics at the panel event.
But a general lesson can be extracted from talking about these big questions. There is a lot we can learn from measuring the before-and-after, and examining the factors at play in the change. Yet if you can’t find the exact reasons for changes, surely you’ll get a broader, fuller picture by talking to the people on the ground?
Yes, it’s harder to number-crunch, and more time-consuming to gather. And I don’t think we can solve the mystery of the London Challenge over a pot of tea and some chocolate hob-nobs in the staff room*.
But I’m fortunate to have seen this happening in the schools I have worked in. The senior leaders made time to get out of their office and into the school, to see what motivated their pupils, and how they learnt. Teacher input was valued for analysing the impact of new school initiatives, such as for assessment or ability groupings. And some of my more experienced teacher colleagues had been in the profession (and the school) long enough to have overarching perspectives on trends in teaching and learning – invaluable information for school leaders making big decisions. I wonder whether the increasing pattern of teachers moving between schools, or leaving the profession altogether, will mean we lose some of these longer-term standpoints.
Perhaps quantitative data alone doesn’t always cut it. Amanda Spielman’s speech at the Association of School and College Leaders (ASCL) conference last Friday backs me up here: “… as powerful a tool as data is, it also has its limitations”. Yes, we need information, and evaluation. But we also need to listen.
* Chocolate hob-nobs are definitively the best biscuit. Thankfully, there is something we can be sure of.