Saturday, 27 June 2015

Running to stand still

Yesterday I re-read this from @edudatalab and, following an enlightening discussion with @meenaparam, I took the red pill and discovered that the VA rabbit hole goes deeper than I previously thought. 

Much is made of the issue of progress in junior schools and their correspondingly poor Ofsted outcomes. I've tweeted about the problem numerous times and have written a blog post about it, comparing estimates derived from CATS against VA estimates based on KS1 results. The differences can be enormous, with far higher expectations for KS2 attainment when plotted from KS1: the gap between the CATS and VA estimates in junior schools is around 3 points on average, with the former being the more accurate predictor. 

Inevitably the finger of blame points squarely at the infant school, and in some cases this may be justified. I've worked with a number of junior schools where the large proportion of supposedly high-ability pupils is completely at odds with both the school's own assessment of pupils on entry and the context of the area. However, as the Education Data Lab article points out, it may not be as simple as this. Is the issue of poor progress in junior schools really about over-inflation of results in the infant school? Or is the cause more complicated and less direct than that? 

Could it be that the issue of poor progress in junior schools actually relates to the depression of KS1 results in primary schools?

Eh?

To get your head round this you need to understand how VA works.

VA involves the comparison of a pupil's attainment against the national average outcome for pupils with the same start point. 

Now, what happens if primary schools drop their KS1 assessments by a sublevel, so that, for example, 2As become 2Bs? If, on average, all those pupils go on to get a 5C, then it appears that a 5C is the national average outcome for a 2B pupil when in actual fact it's the national average outcome for a 2A. The benchmark for a 2B pupil therefore becomes a 5C.
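
To make that mechanism concrete, here's a minimal sketch in Python. The point scores follow the old national curriculum scale (one sublevel = 2 points) and the benchmark figures are invented purely for illustration:

```python
# Minimal sketch of the VA calculation. Point scores and benchmarks are
# illustrative only (one sublevel = 2 points on the old scale).

def va_score(ks2_points, ks1_sublevel, benchmarks):
    """VA = pupil's KS2 result minus the national average KS2 result
    for pupils recorded with the same KS1 starting point."""
    return ks2_points - benchmarks[ks1_sublevel]

# Hypothetical national benchmarks if KS1 results were reported accurately:
honest = {"2B": 29.0, "2A": 31.0}      # e.g. 4A expected from a 2B, 5C from a 2A

# If schools nationally record 2A pupils as 2B, those pupils still go on to
# get 5C (31 points), so the published benchmark for a '2B' start drifts up:
depressed = {"2B": 31.0, "2A": 33.0}

# A genuine 2B pupil in a junior school who reaches 4A (29 points):
print(va_score(29.0, "2B", honest))     #  0.0 -> bang on expectation
print(va_score(29.0, "2B", depressed))  # -2.0 -> apparently underachieving
```

Same pupil, same progress; the only thing that has changed is where the national benchmark sits.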

The implications of this for a junior school are huge. It is of course highly unlikely that the infant school would depress their results, so even without any grade inflation the junior school is in a tricky position. The benchmark for their 2B pupils is a 5C because that is apparently what is happening nationally. Unfortunately for the junior school, their 2B pupils are real 2B pupils, not bumped-down 2As. 

If we add into this any grade inflation by the infant school then the problem is exacerbated even further. The wholesale depression of baselines by primary schools results in unrealistic expectations for schools whose KS1 data are accurate, and any inflation of results at KS1 pushes the expectation still further out of reach. These are the direct and indirect factors that explain why so many junior schools' RAISE reports have a green half (attainment) and a blue half (progress). Essentially, pupils in junior schools have to make an extra 2 points of progress to make up for the depression of KS1 results by primary schools nationally, and possibly a further 2 points to account for any grade inflation in the infant school. That's 4 extra points of progress just to break even.
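
Putting rough numbers on that "break even" claim, again assuming one sublevel is worth 2 points:

```python
SUBLEVEL = 2  # points per sublevel on the old scale (assumption for illustration)

# KS1 baselines lowered by one sublevel nationally (which raises the benchmark)...
national_depression = 1 * SUBLEVEL
# ...plus KS1 results raised by one sublevel by the infant school.
infant_inflation = 1 * SUBLEVEL

extra_points_needed = national_depression + infant_inflation
print(extra_points_needed)  # 4 extra points of progress just to break even
```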

Running to stand still.

Unfortunately, the only way to solve this problem is to have a universally administered baseline test.

Watch this space.

Wednesday, 17 June 2015

Tracked by the Insecurity Services

Last night @LizzieP131 tweeted this:


Which was followed by this:


In the past week I've been told by headteachers using one particular system that their pupils need to achieve 70% of the objectives to be classified as 'secure', whilst another tracking system defines secure as having achieved 67% of the objectives (two-thirds). The person who informed us of this was critical of schools choosing to adjust this up to 90%, and I'm thinking "hang on! 90% sounds more logical than 67%, surely".

And then this comes in from @RAS1975:

51%?

Really? 

Achieving half the objectives makes you secure? 

It's like a race to the bottom.

So, secure can be anything from 51% upwards. And mastery starts at 81%.

I'm sorry, but how the hell can a pupil be deemed to be secure with huge gaps in their learning? And how can a pupil have achieved 'mastery' (whatever that means) when they have only achieved four fifths of the key objectives for the year?

It makes no sense at all.
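
To see just how arbitrary these cut-offs are, here's a quick sketch using the thresholds quoted above. The system names and the 'developing' band are invented for illustration:

```python
# The same pupil profile run through three different 'secure' thresholds.
# Thresholds are the ones quoted above; system names are made up.
secure_thresholds = {
    "System A": 0.70,  # secure at 70% of objectives
    "System B": 0.67,  # secure at two-thirds
    "System C": 0.51,  # secure at just over half
}

MASTERY = 0.81  # mastery reportedly starts at 81%

def label(proportion_achieved, secure_threshold):
    if proportion_achieved >= MASTERY:
        return "mastery"
    if proportion_achieved >= secure_threshold:
        return "secure"
    return "developing"  # invented name for the band below 'secure'

# A pupil who has achieved 24 of 40 key objectives (60%):
achieved = 24 / 40
for system, threshold in secure_thresholds.items():
    print(system, label(achieved, threshold))
# System A developing
# System B developing
# System C secure   <- same pupil, same gaps, different label
```

Same pupil, same gaps in their learning, three different labels.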

This is what happens when we insist on shoehorning pupils into best-fit categories based on arbitrary thresholds: it's meaningless, it doesn't work and it's not even necessary. 

It's also potentially detrimental to a pupil's learning. Just imagine what could happen if we persist in categorising pupils as secure despite them still needing to achieve a third of the year's objectives.

Ensuring that pupils are not moved on with gaps in their learning is central to the ethos of this new curriculum. Depth beats pace; learning must be embedded, broadened and consolidated. How does this ethos fit with systems that award labels of 'secure' despite large gaps being present in pupils' knowledge and skills?

The more I look at current approaches to assessment without levels, the more frustrated and disillusioned I become. System after system is recreating levels, and we just have to watch it happen. They may call them steps or bands but they are levels by another name, and they repeat the mistakes of the past. Pupils are being assigned to a best-fit category that tells us nothing about what a pupil can and cannot do, and they risk being moved on once deemed secure despite gaps in their learning. This was one of the key reasons for getting rid of levels in the first place.

So, take a good look at your system. Look beyond all the bells and whistles, the gizmos and gloss, and ask yourself this: does it really work?

And please, please, please, whatever you do, make sure you....

Friday, 5 June 2015

The Hitchhiker's Guide to Assessment

One of my favourite scenes from The Hitchhiker's Guide to the Galaxy involves the Golgafrinchans who, having been tricked off their own planet, find themselves on Earth and set about trying to colonise it. The following video clip shows their attempt at inventing the wheel.


It's an amusing and apt analogy that regularly pops into my head whenever I'm stuck in a meeting with people discussing minor details whilst the major problem remains unsolved. It's about skewed priorities and blinkered ignorance; those frustrating facepalm moments when you want to scream. Remember Tim's face in every episode of The Office? It's that.

But what has this got to do with anything? It all comes back to this question of how to measure progress, which is distracting us from the vital task of devising systems that effectively record and make best use of formative assessment. In our desperation to quantify progress we are putting the cart before the horse: devising metrics and setting expected rates of progress, and then working backwards, attempting to pin the curriculum to specific numerical values in order to make it fit a preconceived notion of progress. The big, glaring issue is that we're focussing on the finishing touches before we've got assessment nailed; and we really can't begin to work out what constitutes expected or better-than-expected progress until we've finalised our approach to assessment and have some usable data. The assessment wheel is hexagonal yet we're arguing over its colour, so let's true it up first and then worry about the details. 

Surely if we concentrate on the basics and get assessment right then everything else should fall into place.