Tuesday, 23 September 2014

Improving school improvement: how LAs use FFT to support and challenge schools

Despite the reduction in scale and scope of local authority education services over the past few years, their school improvement remit remains. LAs have a responsibility to support and challenge schools, to intervene where necessary and to help drive up standards; the aim being for all pupils to have access to a good standard of education. 

School improvement teams rely on a number of sources of data to gain insight into school performance, and use that data to identify schools whose performance gives cause for concern. Such schools may be below or close to DfE floor standards, have key groups that are underperforming, or show a downward trend over the past three or more years. Data sources that feed into the school improvement process include those in the public domain, such as the DfE school performance tables, Ofsted reports, the Ofsted data dashboard and statistical first releases, all of which provide vital information. Then there are NCER systems such as Keypas and EPAS, which provide a huge amount of attainment and progress data, from national level down to individual pupil test scores. These systems release data early in the autumn term, giving LAs an early look at standards and an often vital head start. Beyond that, LAs make good use of RAISEonline to gain an 'Ofsted-eye' view of standards; and issues arising from the RAISE report will often form the basis of a conversation with a school.

In Gloucestershire, the School Improvement team goes one stage further, working through every RAISE report as soon as it is published. Whilst the various sources of data mentioned above provide essential information on standards, they are no substitute for getting down to the nitty gritty of pulling a RAISE report apart. If you really want to know what Ofsted thinks about a school's performance then you need to do more than study headline figures. RAISE, however, is somewhat lacking as a school improvement tool, and that's where FFT comes in.

The FFT Governor Dashboard

FFT's simple, intuitive summary report has proved to be a real game changer for both schools and LAs. As its name suggests, it is aimed at school governors, and the report is already an essential element of our data training package for governors. However, the dashboard has also become popular with school improvement colleagues and senior leaders, all of whom appreciate its simplicity, clarity and focus; and it has now woven itself into the fabric of school improvement. For me and many of my colleagues it has become the preferred report for gaining a snapshot of a school's performance. Despite its mere 4 pages (3 if you don't count the title page), you can gain almost as much from an FFT dashboard as from an entire RAISE report.

Key to its usefulness are the progress comparisons, whereby a school's results are compared against estimated outcomes based on the progress made nationally by pupils with the same prior attainment. This is critical when trying to make sense of data from other sources. We may know that a school is above or below the floor standard, but what does that mean in terms of that particular cohort? The FFT dashboard allows us to quickly differentiate between schools that have low attainment due to poor progress and those that have low-ability cohorts. It is always interesting to see a school whose attainment dial is in the red and whose progress dial is in the green. The reverse is perhaps even more interesting from a school improvement point of view. Whilst you can get this information from RAISE by studying attainment and VA data, it is the side-by-side presentation of these data in the FFT Dashboard that makes it such a useful tool for school improvement.
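To make that distinction concrete, here's a minimal sketch of the four combinations you can read off the dashboard's two dials. The function, thresholds and figures are my own invention for illustration; this is not FFT's methodology:

```python
# Hypothetical quadrant classification, in the spirit of the dashboard's
# side-by-side attainment and progress dials. Thresholds are invented.

def classify_school(attainment_gap_pct: float, va_percentile: int) -> str:
    """attainment_gap_pct: points above (+) or below (-) the national figure.
    va_percentile: value-added rank, where the 1st percentile is the best."""
    low_attainment = attainment_gap_pct < 0
    good_progress = va_percentile <= 40   # better than average progress

    if low_attainment and good_progress:
        return "low-ability cohort making good progress"
    if low_attainment:
        return "low attainment driven by poor progress"
    if not good_progress:
        return "high attainment masking weak progress"
    return "high attainment and good progress"

print(classify_school(-8.0, 12))  # attainment dial red, progress dial green
print(classify_school(+5.0, 85))  # the reverse: a school improvement priority
```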

FFT Self Evaluation reports

An extension of the dashboard, these reports give greater detail on the progress of pupil groups, in terms of both VA and CVA, with indicators to identify any significant trends over the last 3 years. Future estimates are given for current cohorts, as well as rankings for past performance. Rather than go further into the detail of these reports, I want to share a case study that illustrates how FFT data was used successfully during a recent Ofsted inspection of a primary school.

The school is in a deprived area and has very low ability intakes. Pupils make excellent progress across KS2 - VA placed the school at the 1st percentile in 2013 - and attainment is high. KS1 results, however, particularly %L2B+ and %L3+, were below average; and it was KS1 that became the focus of the inspection. One of the limitations of RAISE is its lack of KS1 progress data and prior attainment at EYFS, which means that it is the responsibility of the school to demonstrate pupil progress across KS1. This is particularly important where attainment at KS1 is low. The school's tracking data showed pupil progress was good, but the low attainment at KS1 continued to be a stumbling block. This is where the school's FFT KS1 self-evaluation report proved invaluable. Like RAISE, it indicated that the school's KS1 results were significantly below the national average, ranking it below the 90th percentile. VA, on the other hand (i.e. pupil progress compared against that made nationally by pupils with the same EYFS prior attainment), placed the school above the 5th percentile for all indicators, and CVA ranked the school even higher (1st percentile for one measure). These nationally benchmarked progress data and associated rankings, not available in RAISE and not feasible from internal tracking systems, demonstrated that pupils made comparably high progress across KS1 and KS2. The issue of low attainment at key stage 1 was put into context and the school got the outcome it deserved.

Estimates & Target Setting

First, let's clarify something: FFT don't set targets; they provide estimates. These estimates can form the basis of a conversation about target setting, and that's what we tend to recommend when working with schools in Gloucestershire. For simplicity's sake, and with a view to the future (i.e. FFT Aspire), I tend to stick to PA (prior attainment) estimates, as these are more in line with the estimates used for VA in RAISE, and focus on the 50th (average), 20th (high) and 5th (very high) percentile estimates. Whilst I understand that target setting should not be about getting a better RAISE report, I do think that schools value having a clear indication of what constitutes 'average' progress and an idea of what pupils would need to achieve in order for the school's VA to be significantly above average (hint: aim for the 20th percentile). Targets based on FFT estimates are certainly more meaningful, realistic and achievable than those based on the blanket approach, adopted in many schools, of setting 4 points of progress per year. My own analysis of pupil level data in RAISE indicates that the latter amounts to aiming for the 3rd percentile.
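To see why, here's the back-of-the-envelope arithmetic as a sketch, using the old national curriculum point scale (each sublevel worth 2 points; 2B = 15, 4B = 27):

```python
# Comparing 'expected progress' (2 whole levels across KS2) with a blanket
# 4 points per year, for a pupil leaving KS1 with a 2B (15 points).

KS2_YEARS = 4
baseline = 15                        # 2B at the end of KS1

expected = baseline + 2 * 6          # 2 levels = 12 points -> 27, a 4B
blanket = baseline + 4 * KS2_YEARS   # 4 points/year -> 31, a 5C

print(expected, blanket)             # 27 31: nearly 3 levels of progress
```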

FFT estimates can also help school improvement teams identify schools that are at risk of falling below floor standards. By exporting school level data from FFT Live for current year 6 or 11 cohorts, we can filter on those schools whose future estimates (based on prior attainment, context and the last 3 years' results) suggest they may fall below floor standards next year. Whilst these are, of course, only estimates, and will always be considered alongside other sources of information - including the advisor's knowledge of the school - they provide a valuable early indicator of potential standards, and can form part of the conversation with that school.
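The export-and-filter step is simple enough to script. A sketch, assuming a school-level CSV export from FFT Live; the column names and floor threshold here are hypothetical, so substitute whatever headings your export actually contains:

```python
import pandas as pd

# Load a hypothetical school-level export of future estimates from FFT Live.
df = pd.read_csv("fft_y6_estimates.csv")

FLOOR_PCT = 65  # illustrative floor standard threshold

# Keep schools whose estimated headline figure falls below the floor.
at_risk = df[df["estimated_l4_rwm_pct"] < FLOOR_PCT]
at_risk = at_risk.sort_values("estimated_l4_rwm_pct")

print(at_risk[["school_name", "estimated_l4_rwm_pct"]])
```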

The future

This is where things get interesting. There are (at least) three other ways in which LAs can make good use of FFT data, two of which can be done now, and I encourage LAs to make better use of these features. The first is the student explorer. Schools are often shocked when they see it: the entire census history for their pupils at their fingertips, and they didn't know it was there. Not only does it contain pupil characteristics, but also attendance and a complete school history at the click of a button. I showed it to a secondary school data manager recently and he had the facial expression of someone stepping into the TARDIS. At Gloucestershire LA, we are starting to use it to identify potential NEET pupils by exporting the entire county Y10 and Y11 data set, and treating criteria such as attendance, prior attainment and number of school moves as risk factors. There is therefore potential for this data to be used across teams, departments and even agencies.
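As a sketch of the idea (the column names and thresholds are illustrative, not the actual criteria we use):

```python
import pandas as pd

# Hypothetical pupil-level export for the county's Y10 and Y11 cohorts.
df = pd.read_csv("y10_y11_pupils.csv")

# Count how many illustrative risk factors each pupil trips.
df["risk_score"] = (
    (df["attendance_pct"] < 85).astype(int)   # poor attendance
    + (df["ks2_aps"] < 24).astype(int)        # low prior attainment
    + (df["school_moves"] >= 2).astype(int)   # turbulent school history
)

# Pupils flagged on two or more factors warrant a closer look.
flagged = df[df["risk_score"] >= 2].sort_values("risk_score", ascending=False)
print(flagged[["upn", "risk_score"]])
```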

Next up, Collaborate. With increasing numbers of schools forming federations, partnerships and multi-academy trusts, or working closely in clusters, there is increasing demand to analyse data across a number of sites. Schools are interested in producing benchmarking data for their cluster, or identifying strengths and weaknesses in a particular geographical area. These collaborations in FFT need to be set up by an LA administrator and should be encouraged. By creating these groups, LAs can help foster better partnerships between schools and gain insight into issues in certain areas. Collaborate is already a useful feature but is set to become even more relevant as the education landscape evolves.

And finally, Virtual Schools. This is not a feature in FFT Live but it's coming soon to FFT Aspire, giving LAs the power to set up blank schools in FFT, which can be populated with pupils by pulling their data into the school via their UPN. This is exactly what Virtual Schools have been crying out for: the ability to analyse the progress made by all their pupils regardless of location. And if there was ever an argument for the appropriateness and relevance of CVA it is surely here, when dealing with children in care. If this feature can be extended to PRUs, even better.

Summary

So that just about wraps up this blog on how school improvement teams can make use of FFT: how it challenges RAISE and other sources of data, and supplements our own intelligence about schools. It gives us detailed insight into school performance at all key stages, provides benchmarks, guides target setting, helps schools collaborate, and enhances cross-team and multi-agency working. In short, it provides LAs with an indispensable array of intelligent analyses. There is certainly more to FFT than D.


Tuesday, 16 September 2014

2B or not 2B, that is the question

I've visited a number of schools in the past couple of weeks and nearly all of them intend to continue tracking with levels for all cohorts for this term at least (fine for years 2 and 6, of course). This doesn't surprise me - it's the comfort of the familiar - but it's a bodge and I am becoming increasingly concerned. The big problem is that continuing with levels gives the false impression of parity and compatibility of data either side of the old/new NC boundary. This will inevitably invite comparisons, which are unlikely to do the school any favours. It's like comparing currencies pre- and post-decimalisation. By making a fresh start - using an entirely new approach - any such issues can be avoided. A line has been drawn.

The main issue is that pupils are going to appear to have gone backwards. Schools continuing with a levels-based system plan to assign pupils who have met all the key learning objectives for that point in the year the sublevel/point score that historically indicated age-related expectations (ARE) under the old system. So, a pupil who has met all the learning objectives for the end of Y4 will be assigned a 3B (21 points), because that's how it used to work. That's the theoretical equivalent.

Sounds fair enough.

However, the new curriculum does not translate into levels, and those old 'age-related expectations' are not a proxy for having met the key learning objectives of the new curriculum. Implying that they do is going to cause problems. I'll give you an example:

A pupil finishes KS1 with an L3 in reading (that was around 30% of pupils nationally last year). And as you may or may not know, an L3 is treated by the DfE as a secure level 3, i.e. a 3B (21 points). Now, under the old system of levels, a 3B was considered to be age-related expectations for the end of Y4. In the new curriculum, a pupil deemed to be at age-related expectations at the end of Y4 will have met all the learning objectives for that point in the curriculum. So, ask yourself this: has the KS1 L3 pupil done this? The answer is almost certainly no, which means you can't really continue to assign them a 3B. Instead they will have to be assigned a new sublevel; a translated value that reflects their position in the new curriculum, i.e. above expectations, but not 2 years above. Maybe a 2A. Who knows?
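Here's the arithmetic laid out as a sketch. The point scores are the standard old NC values; the 2A translation is, as above, a guess:

```python
# Old NC sublevel point scores (each sublevel worth 2 points).
points = {"2C": 13, "2B": 15, "2A": 17, "3B": 21}

ks1_l3 = points["3B"]       # the DfE treats a KS1 L3 as a secure 3B
old_y4_are = points["3B"]   # and 3B was the old end-of-Y4 expectation

print(ks1_l3 == old_y4_are)          # True: the source of the confusion

# Translated honestly into the new curriculum, the same pupil might sit
# at a 2A -- an apparent 'drop' of 4 points without any loss of learning.
print(points["3B"] - points["2A"])   # 4
```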

In other words, they've apparently gone backwards.

Which is daft.

Some tracking systems have not helped matters by a) allowing users to continue with levels, and b) mapping new values back to old point scores and sublevels, implying there is a simple conversion. 

I suggest schools do themselves a favour: ditch levels now. You'll have to at some point anyway. Adopt a new assessment system and avoid the pitfalls that will inevitably arise by giving the impression of data continuity. A new system will not invite such comparison. You can start afresh. 

So, use your historical data to show progress and attainment up to the end of last year, and then start again this year. Don't attempt to measure progress across the old/new NC boundary by using end-of-last-year assessments as a baseline. Instead, create an early autumn assessment and measure progress from there. Concentrate on tracking the percentages of pupils that are below, at and above ARE, hopefully showing increases in those at and above ARE as the year goes on. Individual progress comes down to books and the percentage of objectives met. That's pretty much all we can do at this point. Next year things get easier because you'll have a compatible baseline for more in-depth and reliable analyses, but producing the 3 year progress data stipulated in the Ofsted guidance is not going to be easy. I just can't see how it can be done with any degree of reliability and I'm not sure they've thought it through. I suspect these issues will become increasingly apparent over the course of this year.
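If it helps, the below/at/above summary suggested above is a one-liner once each pupil has a judgement recorded. A sketch, assuming a pupil-level table with hypothetical column names:

```python
import pandas as pd

# Hypothetical autumn assessment data: one row per pupil, with an ARE
# judgement of 'below', 'at' or 'above' recorded for each.
df = pd.read_csv("autumn_assessments.csv")

# Percentage of pupils below/at/above ARE, broken down by year group.
summary = (
    pd.crosstab(df["year_group"], df["are_judgement"], normalize="index")
      .mul(100)
      .round(1)
)
print(summary)
```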

And finally, I know that many tracking systems are not quite up to speed, and Ofsted make provision for this in the new guidance (see Ofsted handbook p63, para. 191). So, I'm not advocating throwing everything out but do make sure you ask the right questions of your supplier. It's fine to hold on (for a bit) whilst new versions are rolled out but make sure they have solid plans for assessment without levels (hint: they should have already!). It must be very tempting for established systems to stick as closely as possible to levels and APS because it requires a lot less redevelopment. But that doesn't necessarily mean it's right for schools.

Remember: the tracking should fit the curriculum, not the other way round. 

Good luck!

Friday, 5 September 2014

No attainment gaps please, we're intelligent.

If there's one phrase I hear in the course of my professional life that's guaranteed to result in my forehead colliding with a desk multiple times in quick succession (or wall if I'm standing up), it's 'attainment gaps', especially if it's preceded by the words 'can you provide me with some....' and followed by a question mark. I then punch myself in the face and run out of the room.

I just find it extraordinary that we're still dealing with these half-baked measures. Serious decisions are being made based on data that is taken entirely out of context. Recently the DfE sent letters to secondary schools that had 'wide' attainment gaps between the percentages of pupil premium and non-pupil premium students achieving 5A*-C including English and maths. Schools on the list included those where pupil premium attainment was higher than that of pupil premium students nationally, perhaps even higher than overall national averages. Meanwhile, schools with narrow gaps because both groups had similarly low attainment were left off the list. Bonkers!

On the subject of bonkers, I'm a governor of a junior school that failed to get outstanding purely because of the pupil premium attainment gaps identified in the RAISE report. This was despite us pointing out that the VA of our pupil premium pupils was not only higher than that of pupil premium pupils nationally, but higher than that of non-pupil premium pupils nationally, and that it had increased significantly on the previous year. So, the pupil premium pupils enter the school at a lower level than their peers and proceed to make fantastic progress, but that was not enough. The gap hadn't closed. And why hadn't the gap closed? Because of the attainment of the higher ability pupils, particularly with level 6 (worth 39 points) bumping APS even further. It would seem that the only way out of this situation is to not stretch the higher ability pupils (I don't advocate this). And anyway, there's a double agenda: close the gap AND stretch the most able. How does that work? Double bonkers!
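To show the mechanics, here are some toy numbers (entirely invented) demonstrating how a couple of level 6 results can inflate an APS gap even when the pupil premium group is doing well:

```python
# Invented point scores: the PP group achieves solid L4/L5 outcomes; the
# non-PP group includes two level 6 results at 39 points each.
pp = [27, 27, 29, 29, 31]
non_pp = [29, 31, 33, 33, 39, 39]

def aps(group):
    return sum(group) / len(group)

print(round(aps(non_pp) - aps(pp), 1))  # 5.4 point 'gap'

# Cap the same two pupils at level 5 (33 points) and the gap shrinks,
# even though nothing about the PP pupils has changed.
non_pp_capped = [29, 31, 33, 33, 33, 33]
print(round(aps(non_pp_capped) - aps(pp), 1))  # 3.4
```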

And if there's an 8th circle of data hell (the 7th circle already occupied by levels) it should be reserved for one thing and one thing alone:

SEN attainment gaps

Oh yes! Everyone's favourite measure: basically the gap between the percentages of SEN and non-SEN pupils achieving a particular measure. These are obviously really useful because they tell us that SEN pupils don't do as well as non-SEN pupils (that was sarcasm, by the way). Often they're not even split into the various codes of practice; just all SEN grouped together. Genius.

When I'm asked for SEN attainment gaps, I try to calmly explain why it's an utterly pointless measure and that perhaps we should be focussing on progress instead. And maybe we shouldn't be comparing SEN to non-SEN at all. Sometimes my voice goes up an octave and I start ranting. Usually this does no good whatsoever so I punch myself in the face again and run out of the room. 

Ofsted

Buried in the old Ofsted subsidiary guidance was the following table of despair:

[Table not reproduced: the old subsidiary guidance's conversion of APS gaps into time.]
It tells us that a 3 point gap indicates a year's difference in ability, and that a 1 point gap equates to a term's difference. Such language was often found in inspection reports, despite the same document stating, on page 6, that 'the DfE does not define expected progress in terms of APS'.

But the times, they are a-changing. It is encouraging that this table has been removed from the new handbook. I assume this means no more statements about one group being a year behind another based on APS gaps, and no more such data having a negative impact on inspections. Ultimately, I hope that the removal of the above table from Ofsted guidance sounds the death knell for this flawed, meaningless and misleading measure, but I won't hold my breath. I'm sure I'll have to punch myself in the face a few more times yet.