Wednesday, 18 April 2018

5 things primary governors should know about data. Part 4: headlines and trends

This is the fourth part in a series of five blog posts for primary governors. Part 1 covered statutory assessment, part 2 dealt with sources of data, and part 3 explained the progress measures. Here, we will look at the headline measures governors need to be aware of.

Inspection Data Summary Report (IDSR) Areas to investigate
This is an important place to start. The IDSR lists your school's strengths and weaknesses (under the banner of 'areas to investigate'), as well as information relating to floor standards and coasting, on its front page, and governors definitely need to have sight of this. The list of areas to investigate is not exhaustive - your school no doubt has more strengths than those listed (and possibly more weaknesses).

Early Years Foundation Stage
Key measure: % achieving a good level of development
As explained in part 1, pupils at the end of reception are assessed as 'emerging', 'expected' or 'exceeding' in each of the 17 early learning goals (ELGs). If a pupil reaches the expected level of development (i.e. assessed as expected or exceeding) in the 12 main ELGs, this is described as a 'good level of development' (GLD). Our first key measure is therefore:
  • % achieving a good level of development at end of reception
This data is not available in the performance tables (i.e. it is not in the public domain) but can be found in Analyse School Performance (ASP), where the school's result is shown against both LA and national figures, and in Ofsted's IDSR, which shows a 3 year trend against national figures. Pay attention to the trend: is it going up or down, and how does the school compare to the national figure? Always consider the context when comparing the results of different cohorts.
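The GLD rule described above can be sketched in a few lines. This is an illustrative sketch only: the data layout and the ELG labels below are my own shorthand, not the official names, but the logic (expected or exceeding in all 12 main ELGs) follows the definition above.

```python
# Illustrative sketch of the GLD rule. ELG labels are shorthand, not the
# official goal names; the 12 'main' ELGs are the prime areas plus the
# literacy and maths goals.
GLD_ELGS = [
    "listening", "understanding", "speaking",                  # communication & language
    "moving_handling", "health_self_care",                     # physical development
    "self_confidence", "feelings_behaviour", "relationships",  # PSED
    "reading", "writing",                                      # literacy
    "numbers", "shape_space_measures",                         # mathematics
]

def good_level_of_development(pupil):
    """A pupil has a GLD if assessed 'expected' or 'exceeding'
    in all 12 main early learning goals."""
    return all(pupil[elg] in ("expected", "exceeding") for elg in GLD_ELGS)

def percent_gld(cohort):
    """Headline measure: % of the cohort achieving a GLD."""
    return 100 * sum(good_level_of_development(p) for p in cohort) / len(cohort)
```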

Phonics in year 1 (with possible retake in year 2)
Key measure: % attaining the expected standard
The phonics check is carried out in year 1 and if a pupil does not achieve the pass mark - which, since its inception in 2012, has been 32 words correctly decoded out of 40 - then they take it again in year 2. The key measures that governors should be aware of are:
  • % attaining expected standard in year 1
  • % attaining expected standard by end of year 2
Note: % attaining expected standard by end year 2 takes the whole cohort into account, not just those that retake in year 2. 
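That denominator point is worth making concrete. A minimal sketch (field names are assumed for illustration) showing that both measures use the full cohort, not just the retakers:

```python
def phonics_measures(cohort, pass_mark=32):
    """Each pupil record holds 'y1_score' and an optional 'y2_score'
    (None if no retake). Both percentages use the full cohort as the
    denominator -- the end-of-Y2 figure is not just those who retook."""
    n = len(cohort)
    y1_pass = sum(p["y1_score"] >= pass_mark for p in cohort)
    by_y2_pass = sum(
        p["y1_score"] >= pass_mark
        or (p["y2_score"] is not None and p["y2_score"] >= pass_mark)
        for p in cohort
    )
    return 100 * y1_pass / n, 100 * by_y2_pass / n
```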

Again, this data is not in the public domain. ASP provides a comparison against LA and national figures for year 1 results only (no 'end of year 2' measure) and does not provide a trend; IDSR shows a 3 year trend against national figures. Again, note how the school compares to national, and whether or not standards are improving. And again, always take context into account when looking at trends.

Key Stage 1
Key measures: % attaining expected standard, % attaining greater depth
KS1 assessment, made at the end of year 2, is mainly focussed on reading, writing and maths (but don't completely ignore science!). Pupils can be assessed as 'below' or 'pre-key stage' if they are below the standard of the curriculum, but the vast majority of pupils are either working towards the expected standard, working at the expected standard, or working at greater depth. The key measures that governors should be aware of are as follows:
  • % attaining expected standards or above in reading, writing and maths (3 separate measures)
  • % attaining greater depth in reading, writing and maths (3 separate measures)
Unlike at KS2, where the DfE produce a single, combined result for reading, writing and maths (see below), here they are kept separate. However, if your school uses FFT you can get a combined result for KS1 (i.e. the dashboards show the % of pupils attaining the expected standard in all three subjects). 
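That FFT-style combined figure is simply an 'all three subjects' condition. A minimal sketch, with assessment labels assumed for illustration:

```python
def ks1_combined(pupils, threshold="expected"):
    """% of pupils at (or above) the given standard in ALL of reading,
    writing and maths -- the FFT-style combined KS1 figure.
    Assessment labels here are illustrative, not official codes."""
    order = {"working_towards": 0, "expected": 1, "greater_depth": 2}
    need = order[threshold]
    hit = sum(
        all(order.get(p[subj], -1) >= need for subj in ("reading", "writing", "maths"))
        for p in pupils
    )
    return 100 * hit / len(pupils)
```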

ASP provides us with percentages attaining the expected standard in each subject (3 measures) and the same for greater depth; school results are compared against LA and national figures. Note if your school is above or below these comparators, but make sure you consider the prior attainment of pupils (based on EYFS outcomes) when you do this. For this reason, IDSR is more useful because it breaks the results down by prior attainment, namely emerging (low), expected (middle), and exceeding (high), thus providing useful context.

Governors should at least be aware of percentages attaining the expected standard and greater depth in reading, writing and maths at KS1, how those results compare to national figures (note that IDSR will indicate if results are in the top or bottom 10% nationally), and whether or not they have improved on the previous year. Neither IDSR nor ASP currently provides trend data for KS1, due to there being only two years of comparable data, but you can view and download previous years' data from ASP if you have access. We can compare 2017 to 2016 results, but please ensure you consider the context of cohorts (e.g. prior attainment, SEND, EAL etc) when doing this. 

The FFT KS1 dashboard does provide the previous year's data, and the overview page can be particularly insightful. It provides a combined reading, writing and maths result for both the expected standard and greater depth, displayed as a neat, easy-to-understand speed dial. Unlike ASP and IDSR, the data will indicate if results are significantly above or below the national average (green or red dot), and will also show if results are significantly improving or declining (up or down arrow). The right hand side of the report shows how the school's KS1 results compare to estimated outcomes based on pupils' starting points (using EYFS data). This is a form of progress measure, and it will reveal if results are above 'expected' despite being below national, or below 'expected' despite being above national, depending on pupils' development at the end of the foundation stage. 

Key Stage 2
Key measures: % attaining expected and high standards, average scaled scores, progress scores, floor and coasting standards
Let's face it: there are a lot of measures at KS2. The key measures that schools have to display on their websites (and that are shown in the public performance tables) are a good place to start:
  • % attaining expected standard in reading, writing and maths combined*
  • % attaining the higher standard in reading, writing and maths combined**
  • Average progress in reading
  • Average progress in writing
  • Average progress in maths
  • Average scaled score in reading
  • Average scaled score in maths
* score of 100+ in reading and maths test and expected standard in writing
** score of 110+ in reading and maths test and greater depth in writing
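The two starred definitions can be expressed directly in code. A sketch (field names are assumed; the thresholds are the ones given above):

```python
def ks2_combined(pupils):
    """Expected standard: 100+ in the reading and maths tests AND at least
    the expected standard in (teacher-assessed) writing.
    Higher standard: 110+ in both tests AND greater depth in writing.
    Field names are assumptions for illustration."""
    n = len(pupils)
    expected = sum(
        p["reading"] >= 100 and p["maths"] >= 100
        and p["writing"] in ("expected", "greater_depth")
        for p in pupils
    )
    higher = sum(
        p["reading"] >= 110 and p["maths"] >= 110
        and p["writing"] == "greater_depth"
        for p in pupils
    )
    return 100 * expected / n, 100 * higher / n
```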

Unlike EYFS, phonics, and KS1 data, which is not in the public domain, the KS2 data listed above is neatly presented on the main page of the performance tables for each school, and governors are advised to be aware of this. The school results are shown alongside LA and national figures, and previous years' results are now available (just 2016 at time of writing) for comparison. Again, context of cohorts needs to be taken into account when evaluating performance over time. 

The DfE does not categorise attainment data (i.e. it does not indicate if it is significantly above or below average - you'll need FFT reports for that information) but the IDSR will show if results are in the top or bottom 20% nationally (this will be stated on the front page as an 'area to investigate'). Progress scores, however, are categorised (in both ASP and the performance tables) as follows:
  • Well above average (dark green): progress is significantly above average and in top 10% nationally
  • Above average (light green): progress is significantly above average but not in top 10%
  • Average (yellow): progress is broadly in line with national average
  • Below average (orange): progress is significantly below average but not in bottom 10%
  • Well below average (red): progress is significantly below average and in bottom 10%
It is vital that governors are aware of their school's progress category for each subject, and most importantly are able to discuss progress in broad terms, particularly issues that have resulted in low progress scores, or what has led to high scores.
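The five bands above combine just two facts about a school's progress score: whether it is statistically significantly above or below average, and whether it sits in the top or bottom 10% nationally. A minimal sketch of that mapping (the argument encoding is my own, not a DfE format):

```python
def progress_band(sig, decile):
    """Map significance ('above', 'below' or None) and national decile
    ('top10', 'bottom10' or None) onto the five colour-coded bands
    used in ASP and the performance tables."""
    if sig == "above":
        return "well above average" if decile == "top10" else "above average"
    if sig == "below":
        return "well below average" if decile == "bottom10" else "below average"
    return "average"
```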

As at KS1, FFT's KS2 reports show if attainment and progress are significantly above or below average (green and red dots) and indicate if standards have significantly improved or declined (up and down arrows). Again, the overview page of the FFT governor dashboard is incredibly useful for quick reference. 

Other KS2 headlines that governors should know about are:
  • Floor standards: is your school below floor or has it been in the past? (see IDSR front page)
  • Coasting: similar to floor standards but over 3 years. Is your school defined as coasting? (again, see IDSR front page)
  • Absence: it's not an academic measure but it's vital we know about the school's overall and persistent absence figures, how they compare to national figures, and whether they are going up or down
There are of course other KS2 measures including percentages attaining expected and higher standards in individual subjects. It may be that the combined result is low due to underperformance in all subjects or just one, and it's important that we investigate this. Also, don't ignore science and grammar, punctuation and spelling (GPS/SPaG), results for which can be found in IDSR. Also note that FFT dashboards show progress data for grammar, punctuation and spelling, and - as for other subjects - indicate if results are improving or declining.

This is a lot to take in and governors cannot be expected to carry all this information around in their heads. Focus on the headline measures in the bullet point lists above; be aware of how results relate to national figures, and whether or not those results are improving.

And, of course, we also need to know the performance of key groups of pupils, and that's the subject of the last post in this series. 

Saturday, 14 April 2018

The Progress Horizon

Following the release of further details on the proposed reception baseline and future progress measures this week, and the inevitable battle for the soul of primary education already in full swing, I find myself distracted by the possible mechanics of these measures, specifically what will happen to those pupils that change schools.

The issue of measuring the progress of 'mobile' pupils is a murky and complex one. Currently, it is straightforward in design and yet deemed by many to be extremely unfair. When a pupil moves schools they take their baseline - their KS1 or KS2 results - with them (if they have KS1 or KS2 results of course), and they are included in the new school's progress measures. The new school is solely responsible for the progress that pupil makes, even if they arrive late in Year 6 or Year 11. Of course, in some cases a school may benefit by admitting a pupil that does very well in relation to their specific start point, but often pupils that change schools do less well than their more rooted peers.

But what of this new reception baseline to KS2 progress measure? How will it deal with mobile pupils? Will it include them or not? Over the last year or so a number of people have told me that the new progress measure would be 'a cohort-level measure'; that it would not involve the progress of individual pupils and would not take account of movement of pupils in and out of the school. If this were true then it would be a radical departure from the current measure, which does just that. I assumed that this resulted from a misinterpretation of information in the primary assessment consultation, which states that reception baseline data will be used 'to calculate their school's cohort-level progress measures'; that is, it would not be used to evaluate the progress of individual pupils.

This is no different to the guidance on the current progress measure, which is 'a school-level accountability measure. Progress is calculated for individual pupils solely in order to calculate the school’s overall progress scores. There is no need for schools to share individual pupil progress scores with their pupils or parents.'

On the subject of whether or not so-called mobile pupils will be included in future progress measures, we are getting mixed messages even from the experts. In the TES (16th March 2018, p14), Greg Watson, chief executive of GL Assessment, lists 'three key challenges: matching the pupil data accurately in the first place, keeping track of the data as pupils move between schools, and, in the cases where pupils have moved, deciding how much credit each school gets for progress.' This suggests that the issue of mobility is high on the agenda, and the last point - apportioning credit for progress between schools in the case where a pupil moves - is an interesting and new development that deserves some serious consideration.

And yet Professor Robert Coe, director of the Centre for Evaluation and Monitoring at Durham University, is quoted in the same article as saying "Does it make sense to wait seven years from the time children start school to make a punitive judgement about the school based on the performance of whatever proportion of that small number of children are still at the same school? Not remotely".

Putting aside the main point - which I agree with - this implies that the measure will only involve those pupils retained since the start of reception. I assumed that the current methodology would continue, whereby individual pupils' progress scores are calculated and aggregated to generate the school's progress score; and that any pupil that changes school will be matched back to their baseline score and included in the new school's measures - no matter how unfair that seems.

But there is an interesting sentence in the primary assessment consultation response* which is maybe the source of much of the confusion:

In addition, we will work with analytical experts to develop the rules around the new progress measures, for example the minimum cohort size required and the minimum proportion of pupils that need to have been in the same school between reception and key stage 2.

But this can be interpreted in two ways:
  1. Mobile pupils ARE included in the progress measures. The DfE calculate the percentage of pupils retained since reception and do not publish progress data if retention falls below a certain threshold.
  2. Mobile pupils are NOT included in the progress measures. Only pupils retained since reception count towards the school's score, and the DfE do not publish progress data if the number of retained pupils falls below a certain threshold. 
A measure of retention would certainly provide useful contextual information, and perhaps progress measures should be withheld for those schools with high mobility, but I'm not sure I want to see mobile pupils omitted from measures full stop. What percentage of pupils actually remain in a school from reception to KS2 anyway? We could see a lot of pupils excluded from progress measures. 

The way I see it, we have four choices:
  1. Simply compare average attainment at the start of reception to the average attainment at the end of KS2, and ignore any movement in between. This would be a crude and meaningless measure. You only need look at the difference between the KS1 prior attainment of the current year 6 in a school and the KS1 results four years ago to see that such an approach would not work. This measure takes no account of movement in and out of the school.
  2. Measure only the progress for those pupils retained since reception, and not include any new arrivals. Many schools will therefore have small numbers of matched pupils, and, according to statement above, could end up having no published progress measures if retention falls below a certain threshold. This measure removes those that leave but does not add those that arrive.
  3. Carry on as now, including all pupils with a baseline in the school's progress measure, regardless of where that baseline was administered and how long the pupil has been in the school. This measure takes account of those that leave and arrive.
  4. As above but apportioning progress between schools in the cases where pupils have moved. This measure takes account of those that leave and arrive but is proportional (but no doubt complicated).
I know all of this is years away - the first cohort of reception baseliners reach the end of KS2 in 2027 - but I dwell on these things and some clarity, or at least some vague proposals, would be welcome. Otherwise we'll all continue to speculate and worry. I have tweeted the DfE for an answer.

I eagerly await their response.

*Many thanks to Kate Barker (@K8ebarker) for bringing this to my attention.

Tuesday, 10 April 2018

5 things primary governors should know about data. Part 3: progress measures

The key stage 1-2 (KS1-2) progress measure is a value added (VA) measure, and this is nothing new. We have had VA measures for years, both at KS2 and at KS4. But previously these VA measures - which took up pages of the old RAISE reports - played second fiddle to the levels of progress measure. This was for a number of reasons:
  1. Levels of progress was a key measure with floor standards attached
  2. It was in the same language as everyday assessment
  3. It made target setting easy (just add 2 levels to KS1 result)
  4. It was simple and everyone understood it
But levels have gone, and for good reason: they labelled children; they were best-fit, so pupils could have serious gaps in learning but still be placed within a level; and progress became synonymous with moving on to the next level rather than consolidating learning and developing deeper understanding. Plus, they were never designed to be split into sublevels or points and used for progress measures anyway. 

Most confusing of all, the two progress measures - VA and levels of progress - often contradicted one another. It was possible, for example, for a school to have all pupils make 'expected' progress of 2 levels, and yet have a VA score that was significantly below average. This was because - contrary to popular belief - the VA measure had nothing to do with levels; it was all to do with average KS2 scores from KS1 start points. 2 levels might be enough from one start point but nowhere near enough from another. 

But this is all rather academic now because levels have gone and we are left with a single progress measure: VA.

So, what is VA? 

VA involves comparing a pupil's attainment score at KS2 to the average score for pupils with similar prior attainment. There are a few myths we need to bust first, before we continue:
  1. We do not need data in the same format at either end of the measure to calculate VA. Currently we have KS1 (sub)levels at the beginning and KS2 scaled scores at the end. These data are not in the same format. We needed compatible data for the levels of progress measure but not for VA. This misconception is a hangover from levels, and it's something that is better understood in secondary schools where they have KS2 scores at one end and GCSE results at the other.
  2. We do not even need the same subjects at either end. Again, this is better understood in secondary schools, where the baseline comprises KS2 scores in reading and maths (note: no writing) and the end point is any GCSE the student sits. VA can be measured from KS2 test scores in reading and maths to GCSE result in Russian or Art, for example. 
  3. KS1-2 VA has nothing to do with that magic expected standard score of 100. Plenty of pupils get positive progress scores at KS2 without achieving a score of 100 in KS2 tests. They just need to exceed the national average score of pupils with the same prior attainment, and scoring 92 might be enough, depending on start point. And pupils that achieved 2b at KS1 (often referred to as 'expected' in old money) do not have to achieve 100 to make 'good' progress; in 2017 they had to exceed 102!
Each pupil's KS1 result - their prior attainment or start point - is therefore crucial to this process. Each p-scale, level and sublevel in reading, writing and maths at KS1 has a point value, which enables the DfE to calculate a KS1 average point score (APS) across the three subjects for every child that has a KS1 result (note: pupils without a KS1 result are excluded from progress measures). Their KS1 APS is then used to place pupils into a prior attainment group (PAG), of which we currently have 24, ranging from pupils that were on p-scales at KS1 (pupils with SEND) up to pupils that were Level 3 in all subjects. There is even a PAG for pupils that were Level 4 at KS1, but there aren't many pupils in that group. 

All pupils with KS1 results are therefore slotted into PAGs alongside thousands of other pupils nationally. The DfE then take in all the KS2 test scores and calculate the average KS2 score for each PAG. Let's look at two examples for reading at KS2 (the process is the same for maths):
  • We have two pupils in a class that have KS1 prior attainment of 16 APS (2b in reading and writing and 2a in maths at KS1). They are placed into the same PAG as thousands of other children nationally with 16 APS at KS1. The DfE take in all the thousands of reading test scores for all the pupils in this PAG and calculate the average score, which for this PAG is 105 (note: in reality benchmarks are to 2 decimal places e.g. 104.08). 105 therefore becomes the benchmark for this group. Our two pupils scored 108 and 101 in their KS2 tests and both have met the expected standard. However, only one pupil has a positive progress score. The pupil scoring 108 has beaten the national benchmark by 3 whilst the other has fallen short by 4. These pupils' VA scores are therefore +3 and -4 respectively.
  • We have two other pupils in our class who have KS1 prior attainment of 10 APS (2c in reading and Level 1 in writing and maths). They are in the same PAG as thousands of other children nationally with 10 APS at KS1. The DfE collect the reading test scores for all pupils in the group nationally and calculate the KS2 average score, which in this case is 94 (again, in reality this would be to 2 decimal places). 94 therefore becomes the benchmark for this group. Our two pupils scored 98 and 88 in their KS2 tests. Neither has met the expected standard but the first pupil has beaten the national benchmark by 4 whilst the other has fallen short by 6. These pupils' VA scores are therefore +4 and -6 respectively.
This process is repeated for each pupil that has a KS1 result. All pupils are placed into PAGs and their scores in KS2 tests are compared to the national average score (the benchmark) for pupils in the same PAG. If a pupil beats the benchmark, they have a positive progress score; if they fall short, their progress score is negative. Pages 17-18 of the primary accountability guidance contain a table of all PAGs with their corresponding KS2 benchmarks in reading, writing and maths. 
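The two worked examples above can be reproduced in a few lines. The benchmarks here are the rounded figures used in the text (in reality, benchmarks are published to two decimal places for each PAG):

```python
# Reproducing the two worked reading examples above. In reality the
# benchmark lookup covers all 24 PAGs; here we only need the two
# (rounded) benchmarks from the text.
BENCHMARKS = {16.0: 105.0, 10.0: 94.0}  # KS1 APS -> KS2 reading benchmark

def va_score(ks1_aps, ks2_score):
    """Pupil-level VA: KS2 test score minus the national average score
    for pupils in the same prior attainment group."""
    return ks2_score - BENCHMARKS[ks1_aps]

# Our four example pupils: (KS1 APS, KS2 reading score)
pupils = [(16.0, 108), (16.0, 101), (10.0, 98), (10.0, 88)]
scores = [va_score(aps, ks2) for aps, ks2 in pupils]

# The school's progress score is simply the cohort average of these.
school_va = sum(scores) / len(scores)
```

Note that the pupil scoring 98 gets a positive score despite missing the expected standard of 100, and the pupil scoring 101 gets a negative one despite reaching it: VA is relative to the start point, not to 100.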

What happens next?

The DfE take all progress scores for all pupils in the year 6 cohort in your school, and calculate the average. In our example above we have four pupils (two with prior attainment of 16 APS and two with 10 APS). Let's imagine that is our entire Y6 cohort (it's a small school!). We add up the progress scores (3 + -4 + 4 + -6 = -3) and calculate the average (-3 / 4 pupils = -0.75). This school's VA score is therefore -0.75, and you will see these aggregated progress scores presented in the performance tables and ASP (where they are colour coded and categorised), in Ofsted's IDSR (where they inform the areas to investigate), and in FFT reports (where they are shown to be in line with, or significantly above or below, average). 

And what does -0.75 mean? Putting it crudely, it tells us that, on average, pupils scored 0.75 fewer points in their tests than similar children nationally. And when the DfE say 'similar children', they are basing this on prior attainment alone, not contextual factors. A progress measure that takes context into account is called Contextual Value Added (CVA), which the DfE scrapped in 2011, but which FFT still offer. CVA is an attempt to create a like-for-like progress measure but is not favoured by government.

Are there any issues with KS1-2 progress measures? Whilst VA is preferable to levels of progress, there are numerous problems:
  1. Writing! There is no test for writing at KS2 but there is still a progress measure. As in reading and maths, pupils are set benchmarks in writing that are fine graded to decimal points (see p17-18 here), but because pupils do not have a test score, these benchmarks are essentially unachievable. Instead, the DfE have assigned 'nominal' scores to teacher assessments for writing, which makes for a very clunky measure. The vast majority of pupils are assessed as either working towards the expected standard, working at the expected standard, or working at greater depth. These attract values of 91, 103, and 113 respectively. In reading and maths, pupils can achieve test scores in the range of 80-120; in writing, they get 91, 103 or 113. It doesn't work.
  2. Pupils below the standard of the tests/curriculum are also assigned nominal scores, which range from 59 for the lowest p-scales, up to 79 for the highest of the pre-key stage assessments. These pupils often have SEND and tend to end up with big negative progress scores, which can have a detrimental impact on a school's overall progress scores. The system is therefore punitive towards those schools that have large groups of pupils with SEND (or towards small schools with just one such pupil). The DfE plan to mitigate this issue by capping negative scores this year. 
  3. It can't be predicted. The benchmarks change every year (they are the national average scores for each PAG that year), and we don't know what they are until after pupils have left. This is a headache for many headteachers and senior leaders.
  4. It relies on the accuracy of KS1 results. I’ll say no more about that. 
Now you know how these progress measures are calculated, and what the issues are. But what do they mean in terms of school accountability? 

That's the subject of the next post in this series: headlines and trends.

Saturday, 7 April 2018

5 things primary governors should know about data. Part 2: data sources

In part 1 we dealt with statutory assessment - the data that the DfE collects from primary schools. In part 2 we're going to look at the main sources of data that governors should be aware of: the key reports, when they're made available, who has access, and what they contain. Here we will focus on four main sources: the Performance Tables, Analyse School Performance (ASP) system, Ofsted's Inspection Data Summary Report (IDSR), and Fischer Family Trust (FFT) dashboards, plus a couple of other important points in the calendar.

1) Results day
Published first week in July
Availability: Secure NCA Tools website. Login required. No access for governors.
Schools receive pupils' KS2 test scores and a summary of results via the NCA Tools website. Not in the public domain.

2) Checking exercise
Published 31st August/1st September
Availability: Secure Data Checking website. Login required. No access for governors.
Data checking website opens for schools to check and query results. Also contains pupils' progress scores and summary sheet of all results to be shown in the performance tables. Not in public domain.

3) The Performance Tables
Published annually in December
Availability: Public domain. Requires no login.
Often referred to as the league tables, it is important to note that this is the only publicly available data source on the list. Information on most schools in England can be found here.

For secondary school data it is a fantastic resource, and contains nearly as much as ASP. For primary schools it is more limited and only contains data for key stage 2 (KS2). There is no school level data for early years foundation stage (EYFS), phonics (PSC), or key stage 1 (KS1) contained in the performance tables. 

It is also important to note that data is only released in the performance tables once it has been validated. After it has been collected, school data goes through a checking exercise in the autumn term to ensure it is a true reflection of a school's results. Pupils may be discounted from results if they are recent arrivals from overseas for example, and these will be removed during the checking exercise. Also, a school may have had some test scripts successfully re-marked. All of these changes will be taken account of in the validated data, and this is why the DfE do not publish school data in the public domain until it is deemed 'clean'. Even after this process, some schools may still find errors which can be corrected before data is finalised. The other (non-public) sources, detailed below, publish data prior to validation, during the autumn term (referred to as unvalidated data), and governors need to be aware that these reports may contain errors (which will be corrected in future releases).

The performance tables contain the following data for KS2:
  • % attaining expected standard in reading, writing and maths (single combined measure)
  • % attaining higher standard in reading, writing and maths (single combined measure)
  • Average scaled scores (two separate measures)
  • Average progress scores in reading, writing and maths (three separate measures)
The school results are shown alongside the national and local authority figures in all cases except for progress (note: progress is a relative measure and schools are compared to 0; this will be explained in the next blog). The performance tables now contain the past two years' results (2016 and 2017) for all key measures - scroll down to the dropdown links below the main results.

The performance tables also provide data for certain pupil groups: disadvantaged (pupils on free school meals in past 6 years (FSM6), looked after children (CLA/LAC), or pupils adopted from care), gender, prior attainment (low, middle, high prior attainment groups defined by pupils' level at KS1), English additional language (EAL), and mobility (pupils that joined during years 5 or 6). There is no data for SEN pupils in the performance tables - they are included in overall figures but results are not shown for that group specifically. It is important that governors have an awareness of data for particular pupil groups, especially disadvantaged pupils, and this will be the subject of the final blogpost in this series.

The main page of performance tables is essential information for governors and it's worth printing it out.

4) Analyse School Performance (ASP)
Published annually in October/November and updated with later released data (eg EYFSP and attendance) and validated data (December)
Availability: Secure site requiring login.
ASP replaced RAISEonline and is not publicly accessible. Governors can be granted access but - please take note - only to the anonymised version of the system, which contains no pupil-level data. It is a fairly simple system and, apart from extended data on pupil groups and individual subjects, it does not provide much more than the performance tables in terms of KS2 data. It does however contain results for EYFSP, phonics and KS1, and presents them in the familiar, performance tables-style format with national and LA comparators. From a governor's point of view, ASP does not have a great deal to offer, and governors are advised to focus more on the Ofsted Inspection Data Summary Report (IDSR). 

5) Inspection Data Summary Report
Published annually in October/November
Availability: Download via ASP. Login required.
The IDSR, which replaced the Ofsted Dashboard, is an inspector's key source of school data and is therefore essential reading for governors. It is a PDF document downloaded from ASP and is not in the public domain. The front page of the report lists areas to investigate - which rather confusingly may be positive or negative statements - and shows if the school is below floor standards or deemed to be 'coasting'. The following pages contain contextual information about the school including absence, exclusions, deprivation, numbers of pupils in certain key groups, and prior attainment of cohorts in reading, writing and maths.

The report shows a breakdown of progress and attainment (% attaining the expected standard and the higher standard) in reading, writing and maths at KS2. Scatter plots are used to reveal outliers. Results in the grammar, punctuation and spelling test, and science assessments at KS2 are also provided. Green and red boxes are drawn around data that is significantly above or below the national average, although there is inconsistency here: governors should be aware that in some parts of the report this highlighting indicates that the school is in the top or bottom 10%, whereas elsewhere it does not (hint: read the small print). Statistical significance indicators are only used for KS2 progress data. A 3 year trend is provided for progress in each subject (based on the school's national ranking each year) but no trends are provided for KS1 or KS2 attainment (because there are only 2 years of comparable data).

KS1 results (% attaining the expected standard and greater depth) are presented in the same format as KS2. Note: KS1 data is attainment only; there are no progress measures for KS1. Phonics results (% attaining the expected standard in Y1, and by the end of Y2) are shown as three year trends against national figures. The IDSR also provides 2 pages of EYFSP outcomes, and governors should pay particular attention to the percentage reaching a good level of development (GLD), which is also presented as a three year trend against national figures. One little quirk: on other pages of the IDSR, light blue bars indicate attainment of the expected standard, and dark blue bars show attainment of the higher standard. For the EYFSP, light blue bars indicate the result for the whole cohort whilst dark blue bars are used for FSM pupils.

In all cases, comparisons are made against national figures. Unlike in ASP and performance tables, local authority comparators do not feature in IDSR.

Data on the performance of pupil groups is very limited in the IDSR. The report focusses on four key groups: disadvantaged pupils, and low, middle and high prior attainment pupils. At KS2, prior attainment is based on levels achieved at KS1 (low = below L2, middle = broadly L2, high = broadly L3). At KS1, prior attainment is based on development in the EYFSP and is defined as emerging, expected, or exceeding, but these can be viewed as low, middle and high prior attainers.

To sum up, the IDSR is not the most user friendly report, but it is essential that governors are familiar with it. Schools need to devote time to IDSR training for governors.

6) FFT Dashboards
Published September and updated with validated data.
Availability: Secure site. Subscription to FFT required. Governor login available. 
FFT data is published earlier than other reports and is presented in a clear, accessible format, which makes it an attractive option for many schools. FFT dashboards provide analysis of KS1 and KS2 data; they do not provide analysis of phonics or EYFSP. FFT compare results to national figures (attainment) but also to an estimated outcome based on pupils' starting points (progress). This means that a low prior attaining cohort may have results that are below the national average but above the estimated outcome (low attainment, high progress). Equally, a high prior attaining cohort may have results that are above the national average but below the estimated outcome (high attainment, low progress). FFT provide trends for both progress and attainment (they have converted pre-2016 results into 'new money'), and indicate where results are significantly improving or declining, and where data is significantly above or below the national average. Reports also show where the school ranks nationally for both attainment and progress. The overview page is particularly useful for governors, with its clear 'speed dial' format and a table showing higher and lower performing groups in each subject. The pupil groups page is also extremely valuable in that it ranks groups in order of progress from lowest to highest - using a combined reading and maths progress measure - and provides red and green indicators to show if any group's progress is significantly below or above average. This clarity is something that is missing from ASP. If your school has an FFT subscription, it's definitely worth taking a look at the dashboards.

Coming up in the next blog.....

KS1-2 progress measures in reading, writing and maths are a critical part of the accountability machine, and it's important that governors have some understanding of how they are calculated, what they mean, and what the flaws are in these measures. This is the subject of the next blog post. 

Friday, 6 April 2018

5 things primary governors should know about data. Part 1: statutory assessment

Data is a minefield. It’s hard enough for head teachers and senior leaders to find their way through it all, but for many governors it is a minefield in a swamp, with ditches filled with spikes, and pits of burning tar. And crocodiles.

It is vital that governors have a sound working knowledge of school data. They need to know what data is collected and when, where they can find data for their school, and what - if anything - it tells them. They need to be able to separate the important stuff from the noise, pull out the key messages, ask the right questions, and understand the limitations of data. They need to understand how their school tracks the progress of its pupils but also understand that they must not seek to influence this process. They also pretty much need to learn a new language in order to get by, and this means knowing 1001 acronyms, which we'll deal with as we go along. 

Data is a minefield and governors need to be able to navigate it. So let’s begin our 5 things governors need to know about data.

I originally intended this to be a single blog post, by the way, but it would have been huge, so I've decided to make it bitesize. Hope it's useful.

1) Know what data is collected (statutory assessment)
Before we move on to those all important sources of data, we first need to know what data the DfE collect. This data is derived from statutory assessment, and there are four statutory assessment points in the primary phase: Early Years Foundation Stage Profile, Phonics, Key Stage 1, and Key Stage 2.

Early Years Foundation Stage Profile (EYFSP) is carried out at the end of the reception year (YR). Pupils' development is assessed against 17 Early Learning Goals (ELGs) spread across 7 areas of learning. In each of these ELGs, a pupil's development is assessed as either emerging, expected or exceeding. If a pupil reaches the expected level of development in the 12 key ELGs - those that make up the prime areas of communication and language, physical development, and personal, social and emotional development, and the specific areas of numeracy and literacy - then they are deemed to have achieved a 'good level of development' (GLD). The percentage reaching GLD at the end of reception is a key measure and it's worth governors being aware of it. This data is collected by the DfE but is not available in the public domain (i.e. in performance tables).
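The GLD rule above boils down to a simple all-or-nothing check across the 12 key ELGs. A minimal sketch in Python (the function name and data structure are illustrative, not anything official):

```python
# Illustrative sketch of the 'good level of development' (GLD) rule:
# a pupil achieves GLD if they reach at least the expected level of
# development in all 12 key ELGs (prime areas plus literacy and numeracy).

KEY_ELG_COUNT = 12

def achieved_gld(key_elg_outcomes):
    """key_elg_outcomes: 12 outcomes, each 'emerging', 'expected' or 'exceeding'."""
    assert len(key_elg_outcomes) == KEY_ELG_COUNT
    return all(outcome in ('expected', 'exceeding') for outcome in key_elg_outcomes)

# A pupil at 'expected' in 11 goals but 'emerging' in one does NOT achieve GLD:
print(achieved_gld(['expected'] * 11 + ['emerging']))  # False
print(achieved_gld(['expected'] * 12))                 # True
```

Note the consequence for governors: a single 'emerging' judgement in any one of the 12 key goals is enough to keep a pupil out of the GLD figure, however strong they are elsewhere.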

Phonics Screening Check (PSC) is carried out at the end of Year 1 (Y1). The assessment involves pupils attempting to decode 40 words, half of which are real and half made-up. The pass mark has been 32/40 since the assessment was introduced in 2012, but this may change at some point. If pupils do not achieve the pass mark in Y1, they are assessed again in Y2. The percentages achieving the expected standard a) at the end of Y1 and b) by the end of Y2 are key measures, and again governors are advised to know - or at least be aware of - these figures. This data is collected by the DfE but is not available in the public domain.

Key Stage 1 (KS1) assessments are made at the end of year 2 (Y2). Pupils sit tests in reading and maths (there is no test for writing). These tests are marked internally and marks are converted into a scaled score in the range 85-115. A score of 100 or above equates to the expected standard (unlike at KS2, there is no definition of a high score). KS1 scaled scores are only used to inform the final teacher assessment; the scores are NOT collected by the DfE and are therefore not used in accountability measures. Pupils receive a teacher assessment in reading, writing, maths and science. In science, pupils are simply assessed as having met or not met the expected standard (science does not form part of the key measures). In reading, writing and maths, the vast majority of pupils are assessed as either working towards the expected standard (WTS), working at the expected standard (EXS), or working at greater depth within the expected standard (GDS). These are all pupils that are working within the KS1 curriculum. Your school may also have a small number of pupils assessed as pre-key stage (PKS) or working below (BLW). These are usually pupils with special educational needs (SEN) but may also be pupils for whom English is an additional language (EAL). The percentages of pupils attaining the expected standard and greater depth in reading, writing and maths are key measures that governors should be aware of. Again, this data is collected by the DfE but is not available in the public domain.

Key stage 2 (KS2) assessment is made at the end of year 6 (Y6). With the exception of writing, in which there is only teacher assessment, at KS2 the test is king. There are tests in reading, maths, and grammar, punctuation and spelling (GPS, commonly referred to as SPaG). These tests are externally marked and scores are in the range 80-120, with a score of 100 or above indicating that the pupil has attained the expected standard, and a score of 110+ classified as a high score. If pupils score below 100, they are deemed to have not met the expected standard (schools may challenge this if the pupil misses the standard by a mark). Note that a 110+ score is defined as a 'high score', not 'greater depth' (greater depth applies to writing only). Pupils working below the standard of the reading and maths tests - due to SEN or EAL - will be assessed as pre-key stage (PKS) or working below (BLW). Note: BLW/PKS assessments do not apply to GPS.
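The KS2 scaled-score bands translate directly into a three-way classification. A minimal sketch (the function name and return labels are mine for illustration, not DfE terminology):

```python
# Illustrative classification of a KS2 test scaled score.
# Bands: below 100 = expected standard not met; 100-109 = expected
# standard; 110+ = 'high score' (note: NOT called 'greater depth').

def ks2_test_outcome(scaled_score):
    """Classify a KS2 reading/maths/GPS scaled score (range 80-120)."""
    if not 80 <= scaled_score <= 120:
        raise ValueError("KS2 scaled scores run from 80 to 120")
    if scaled_score >= 110:
        return "high score"
    if scaled_score >= 100:
        return "expected standard"
    return "expected standard not met"

print(ks2_test_outcome(99))   # expected standard not met
print(ks2_test_outcome(100))  # expected standard
print(ks2_test_outcome(110))  # high score
```

The hard edge at 100 is worth noticing: a pupil one mark either side of the threshold counts very differently in the headline figures, which is why schools may challenge a near-miss.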

In writing, where there is no test, pupils are assessed by the teacher. Assessments are in the same format as KS1, with pupils assessed as either working towards the expected standard (WTS), working at the expected standard (EXS), or working at greater depth within the expected standard (GDS). As in reading and maths, pupils can also be assessed as working below (BLW) or pre-key stage (PKS) if they are below the KS2 assessment criteria.

In science, pupils are assessed as either meeting or not meeting the expected standard. There are no pre-key stage standards for science.

KS2 results are collected by the DfE and are published in the public domain via the performance tables. The key measures are as follows:
  • Percentage of pupils attaining the expected standard in reading, writing, and maths combined
  • Percentage of pupils attaining a high standard in reading, writing and maths combined
  • Average scaled score in reading and maths tests
  • Average progress in reading, writing and maths
These are the measures that schools must publish on their websites, and governors should certainly know these figures. There is of course a lot more detail in the various sources of data, which governors should be aware of.
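The combined reading, writing and maths measure catches people out: a pupil only counts if they reach the standard in all three subjects, so the combined figure is always at or below the lowest single-subject figure. A minimal sketch using invented pupil records (field names are illustrative):

```python
# Illustrative calculation of the combined reading, writing and maths
# (RWM) expected standard measure. Pupil records are invented: test
# scaled scores for reading and maths, teacher assessment for writing.

pupils = [
    {'reading': 104, 'maths': 101, 'writing_ta': 'EXS'},  # meets all three
    {'reading': 98,  'maths': 112, 'writing_ta': 'GDS'},  # misses reading
    {'reading': 100, 'maths': 100, 'writing_ta': 'WTS'},  # misses writing
]

def meets_expected_rwm(pupil):
    """Expected standard in RWM combined: 100+ in both tests AND EXS/GDS in writing."""
    return (pupil['reading'] >= 100
            and pupil['maths'] >= 100
            and pupil['writing_ta'] in ('EXS', 'GDS'))

pct_rwm = 100 * sum(meets_expected_rwm(p) for p in pupils) / len(pupils)
print(f"{pct_rwm:.1f}% attained the expected standard in RWM combined")  # 33.3%
```

Here two of the three invented pupils reach the standard in two subjects each, yet the combined figure is only 33.3% - a useful thing for governors to remember when the combined measure looks low next to the individual subject results.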

And it's those sources of data that we'll deal with in part 2. 

Thursday, 29 March 2018

Data and the Minotaur

Accountability is a dark, inescapable labyrinth of tunnels. No matter how bright the torch, fear always lurks in the shadows and the walls press close. At the labyrinth's heart is a monster - the Minotaur - but not one with the body of a man and the head of a bull. This monster is a multi-faceted, shape-shifting, and disorientating beast, no less terrifying than that of Greek legend, and with an appetite no more satiable. This Minotaur feeds on data. And some schools have to feed the Minotaur more than others.

What or who is the Minotaur? For some it is Ofsted, for others it is the Local Authority, or maybe the MAT. It could be the Regional Schools Commissioner, a consultant, or even the governors. Whatever the Minotaur is, the response is the same: feed it more data. Not just statutory assessment data but data in other forms too: teacher assessment, test scores, comments, written feedback, marking, lists of highlighted objectives, photographic evidence. All is consumed by the Minotaur.

It has to be said that Ofsted have shifted a long way on data with a specific statement in the handbook relating to tracking:

Ofsted does not expect performance and pupil-tracking information to be presented in a particular format. Such information should be provided to inspectors in the format that the school would ordinarily use to monitor the progress of pupils in that school.

But follow any thread on Twitter regarding assessment, data and tracking (a regular topic, and one that has dominated the last 24 hours) and it won't be long before someone claims that an inspector asked for data in a certain form, usually to prove progress. Last week I spoke to two headteachers who were adamant that their inspections would have been a lot more difficult without the progress tracking points generated by their particular system. Yes, this is anecdotal, but these anecdotes appear in vast numbers. Have they all got the wrong end of the stick? Or are they being disingenuous? And besides, if the data is not generated for Ofsted, then it's done to appease another face of the Minotaur. There is no escape.

The greatest tragedy is that there is now general awareness that much of the data we feed the Minotaur is inaccurate - meaningless even. Vast amounts of time and money are wasted and we have come to accept this as unavoidable collateral damage. A necessary sacrifice. And in those schools that are most vulnerable - perhaps already judged RI or told that they are at risk of being so - the response is depressingly familiar: rather than decrease the amount of data that is collected in order to concentrate more effort on teaching and learning, the demand for data increases, as does the number of 'data drops'. 'Measure more, more often' becomes the school's mantra, in the mistaken belief that data can prove anything if you collect enough of it; that it will somehow result in improvement and, for a while at least, sate the Minotaur.

Saddest of all are those schools that openly admit that their systems are no longer fit for purpose, that what they are doing is wrong, and what they are asking their teachers to collect and record is a waste of time. And yet they continue on that path, often increasing the demands, watching as teachers and senior leaders buckle under the pressure, seemingly unaware that learning will suffer and all efforts are ultimately in vain. In these situations a common theme has emerged. In response to the question "why not change?", the schools state that they can't until they are out of the situation they are in. Think about that for a minute: schools feel that they can't begin to get things right until the pressure is off. Until that day - when they finally have some breathing space - they will just have to carry on feeding the Minotaur in full knowledge that what they are doing is quite likely damaging every aspect of the school.

It is time to tackle this monster. In the legend Theseus killed the Minotaur by entering its lair with a sword. Our Minotaur is perhaps too big to be dispatched by a single person, and its labyrinth far too complex.

But we can starve it.

Monday, 5 February 2018

The Progress Obsession

Despite my best efforts to convince people of the futility of the exercise, probably the most common question I get asked is:

"How do I show progress?" 

Why is this futile? Because what they are really asking is: "How do I use data to 'prove' that pupils have made 'good' progress?"

The reason for the inverted commas is because data does not really 'prove' anything - especially when it's based on something as subjective as teacher assessment - and what constitutes 'good' progress varies from pupil to pupil. What is regarded as 'good' for one pupil, may not be enough for the next. One pupil's gentle stroll is another pupil's mountain to climb. Progress is a multi-faceted thing. It is catching up, filling gaps, deepening understanding, and overcoming those difficult barriers to learning. It can be accelerating through curriculum content, or it can be consolidating what has been learnt; it can mean no longer needing support with fundamental concepts, or it can be about mastering complex skills. Different pupils progress at different rates and get to their destination in different ways.

Progress is not simple, neat or linear - there is no one-size-fits-all pathway - and yet all too often we assume it is for the sake of a convenient metric. We are so desperate for neat numbers - for numerical proxies of learning - that we are all too willing to overlook the fact that they contradict reality, and in some cases may even shoot ourselves in the foot by presenting an average line that no one actually follows. Rather than a line that fits the pupil, we make pupils fit the line.

Basically, we want two numbers that supposedly represent pupils' learning at different points in time. We then subtract the first number from the later one and, if the numbers go up - as they invariably do - this is somehow seen as evidence of the progress that pupils have made. Perhaps if they have gone up by a certain amount, this is defined as 'expected' progress, and if they have gone up by more than that, 'above expected'. We can now RAG rate our pupils, placing them into one of three convenient boxes, ready for when Ofsted or the LA adviser pays a visit. Some pupils are always red, and that frustrates us because it doesn't truly reflect the fantastic progress those children have actually made, but what can we do? That's the way the system works. We have to do this because we have to show progress.
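Part of the appeal of this approach is how trivially simple the arithmetic is. A sketch of the kind of calculation many tracking systems perform - the point values and thresholds below are invented, which is rather the point, since in most systems they are arbitrary:

```python
# The kind of 'progress' arithmetic being described: two tracking points
# subtracted, then RAG-rated against an arbitrary threshold. All numbers
# here are invented for illustration - no real system is being quoted.

def rag_rate(autumn_points, summer_points, expected_gain=3):
    """Subtract the earlier number from the later one and put the pupil in a box."""
    gain = summer_points - autumn_points
    if gain < expected_gain:
        return 'red'    # 'below expected' progress
    if gain == expected_gain:
        return 'amber'  # 'expected' progress
    return 'green'      # 'above expected' progress

print(rag_rate(12, 14))  # red
print(rag_rate(12, 15))  # amber
print(rag_rate(12, 16))  # green
```

Note how everything the post criticises is visible in ten lines: the pupil's learning is reduced to a single subtraction, and one point either side of an arbitrary threshold moves them into a different box.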


First, let's get one thing straight: data in a tracking system just proves that someone entered some data in a tracking system. It proves nothing about learning - it could be entirely made up. The more onerous the tracking process - remember that 30 objectives for 30 pupils is 900 assessments - the more likely teachers are to leave it all to the last minute and block fill. The cracks in the system are already beginning to show. If we then assign pupils into some sort of best-fit category based on how many objectives have been ticked as achieved (count the green ones!) we have recreated levels. These categories are inevitably separated by arbitrary thresholds, which can encourage teachers to give the benefit of the doubt and tick the objectives that push pupils into the next box (depending on the time of year of course - we don't want to show too much progress too early). Those cracks are getting wider. And finally, each category has a score attached, which now becomes the main focus. The entire curriculum is portioned into equal units of equal value and progress through it is seen as linear. Those cracks have now become an oceanic rift with the data on one side and the classroom on the other.

Assessment is detached from learning.

This rift can be healed but only if we a) wean ourselves off our obsession with measuring progress, and b) sever the link between teacher assessment and accountability. Teacher assessment should be ring-fenced: it should be used for formative purposes alone. Once we introduce an element of accountability into the process, the game is lost and data will almost inevitably become distorted. Besides, it's not possible to use teacher assessment to measure progress without recreating some form of level, with all their inherent flaws and risks.

Having a progress measure is desirable but does our desire for data outweigh the need for accuracy and meaning? Do our progress measures promote pace at the expense of depth? Can they influence the curriculum that pupils experience? And can such measures lead to the distortion of data, rendering it useless? It is somewhat ironic that measures put in place for the purposes of school improvement may actually be a risk to children's learning.

It's worth thinking about.