2016 KS2 SATs results – experts give their reactions to maths & SPaG

The provisional results of the KS2 SATs have been published. This year pupils sat new tests in reading, mathematics, and spelling, punctuation and grammar, with writing assessed by teachers. They show that the majority of children have achieved the new, higher standard expected of them. Results from previous years were produced under an entirely different system of assessment and are not directly comparable.

The results show:

  • 53% of pupils met the new expected standard in reading, writing and mathematics combined
  • 66% of pupils met the new expected standard in reading
  • 70% of pupils met the new expected standard in mathematics
  • 72% of pupils met the new expected standard in grammar, punctuation and spelling
  • 74% of pupils met the new expected standard in writing

Here two experts give their analysis of the tests and the results.

Expert comment and analysis on the KS2 mathematics SATs results from Anne Watson, Emeritus Professor of Mathematics Education, University of Oxford:

“The aim of the new curriculum was to raise the standard of mathematics and make sure pupils were ready for secondary mathematics, and the tests had to adhere to a framework that related closely to the curriculum’s aims and content.

It is terrific news that teachers and children have worked hard to achieve some success in these tests, especially as the tests cover the whole of the Year 6 curriculum, of which pupils have had only two terms by the time they are tested.

The aim of raising standards has resulted in a new way of measuring performance, so no comparative judgements can be made. This means we do not know from the data alone whether the government has done a good job or a bad job, or whether the test designers and score-scalers have done a good job or a bad job. All we know is that 30% of children might now be labelled as ‘failures’ in these tests. We know that to ‘pass’ a pupil had to achieve 60 of the available 110 raw marks, but we do not know how these marks were achieved and whether this means adequate performance across all three papers or exceptional performance on two of them. Of course the ‘pass mark’ can have some meaning in terms of mathematical knowledge and achievement, or it can be a notional figure arrived at by some algorithm. We do not know what it means, or whether it indicates preparedness for the secondary curriculum, without looking at how marks were gained across the three mathematics papers. It would be helpful if teachers could have specific feedback about areas that need further attention before the next school year so they can plan their teaching.

But is it the right test? It is not enough merely to be a harder test; it must also be an appropriate test: mathematically coherent and good preparation for secondary school.

It is not just the teachers who need to work smarter, but also the test-writers, as there are some flaws in the tests.

1. These tests were not graduated, so some pupils will have found it impossible to demonstrate all their knowledge. The mark scheme does not distinguish sufficiently between pupils who can do the whole of a question correctly, pupils who have some understanding but make an error in the calculation, and pupils who have no understanding. In some of the reasoning questions there were only two marks available for three or four reasoning steps. The range of raw scores could have been increased to give more credit to pupils who can complete parts of questions successfully. I suspect that many children know more than these results imply.

2. The tests are geared strongly towards rewarding pupils who can use formal written methods in arithmetic. However, most of the questions in paper 1 could be done very quickly by mental methods, which would indicate strong knowledge of number, strong conceptual understanding, appropriate flexibility in problem-solving and high levels of numeracy. The provision of squared paper for showing ‘workings’, and the emphasis on formal methods in the test-preparation literature, may have led some pupils to embark on written methods where mental methods and number knowledge would have been quicker and more appropriate. An unnecessarily high level of ‘test behaviour’ is needed to make these choices. For example, the question 326 ÷ 1 should not have had squared paper provided.

3. On the plus side, the tests establish an appropriate standard of progress towards proportional reasoning, numerical fluency and understanding of place value, which are important foundations for secondary mathematics.

4. Papers 2 and 3 contain many interesting questions that require a flexible understanding of content. However, it does not seem fair that pupils who have some understanding of, for example, angles and area can only show that understanding if they can correctly decipher a complex situation. If they fail in their deciphering, nobody knows whether or not they have the basic knowledge which would be a foundation for secondary mathematics. Only calculation is tested in a way that shows graduated knowledge and competence. Again, I suspect there are children who know more than these results imply.

And are the tests the best they can be to develop mathematical competence, and do they make mathematical sense?

1. Given that the tests influence what is taught at KS2, the implied progression ignores international research on the development of algebraic understanding, on competence and efficiency in calculation, on the development of geometrical reasoning, and on the processes of problem solving. The framework for test development did take account of these, but I am unconvinced that these tests adhere fully to the published testing framework.

2. The tests use the convention of separating groups of digits in numbers over 999 with a comma. The International Organization for Standardization requires that digits be separated into groups of three by spaces, with the comma used only as the decimal sign; under that convention, 12,345 denotes a number a little greater than twelve, not twelve thousand three hundred and forty-five. No justification has been given for the internationally unacceptable use of the comma in these tests.

3. The curriculum, the sample papers, the supportive material for teacher assessment, and all leading mathematical dictionaries and authorities describe a ‘formula’ as a representation of a relationship between quantities, or a process for generating particular values. In the test, there was an algebra question which tested the abstract use of an expression and an equation. These are not in the curriculum for KS2, where symbolism is confined to relationships that have some mathematical meaning, such as area of a rectangle = length × width. However, because algebra questions are set in this more abstract way, teachers may be tempted to teach algebra in the traditional, meaningless way that in the past has led to confusion and dislike.”

Expert reaction to the results of the 2016 KS2 SATs for Spelling, Punctuation and Grammar from Professor Richard Hudson, Fellow of the British Academy and Emeritus Professor of Linguistics at UCL:

“The Department for Education released a provisional summary of the results for these tests on July 5th 2016. As the DfE explains, these results are not directly comparable with any of the results from previous years because

  • they were based on a different National Curriculum.
  • for the first time, they did not use the attainment levels defined by the previous curriculum; instead, each pupil was given a score on a numerical scale out of 120, on which the expected standard is 100.
  • the expected standard was more ambitious than in previous years.
  • all pupils took the same test (in contrast with previous years, when there was a separate test for the most able pupils).

Consequently there is little to say about underlying trends. However, one important difference, noted by the DfE, is that the scores for SPaG are the highest of all the test scores, i.e. higher than the scores for Reading and Mathematics (in contrast with these three externally assessed subjects, Writing was assessed internally): 72% of pupils achieved the expected standard for SPaG, compared with 70% for Mathematics and 66% for Reading. Moreover, the average score for SPaG was 104, compared with 103 for both Mathematics and Reading.

In this respect, it is possible to compare 2016 with previous years, when SPaG scores were the lowest of all the subjects; e.g. in 2015, only 80% of pupils reached the expected level (Level 4) in SPaG, compared with 89% in reading, 87% in writing and 87% in mathematics. The reasons for this change are a matter for conjecture (e.g. it may be due to the changes in the expected standards), but regardless of the explanation, the relatively high scores for SPaG suggest that most schools are teaching SPaG at least as effectively as the other subjects.

The scores for SPaG are particularly encouraging because the 2016 test was much more challenging in its use of technical terms for grammatical concepts. For example, the 2015 test for the most able pupils used 17 grammatical terms, but the 2016 test for all pupils used 34. (The terms are listed in the document attached).”

SPaG comparison table
