Do we still need national tests in primary schools in England?
I asked that question at the end of last week's column on the Sats fiasco and I want to return to it.
Judging by your responses, many of you think England should be like Scotland, Wales and Northern Ireland and simply have tests that are marked in schools by the children's own teachers.
These "teacher assessments" would then not be used for league tables.
Many see the current problems with the marking of the Sats as symptomatic of an overloaded testing system.
The swell of opinion opposed to externally marked, national tests has been growing.
The Conservatives, who of course introduced the national tests and league tables almost 20 years ago, have just set up an inquiry into the future of testing.
Indeed, I detect a sea change.
Not many years ago, there was a widely held view that children in England were under-tested; now that seems to have swung right round to a belief that they are over-tested.
We have certainly been through big changes. The first national curriculum tests were introduced at age seven in 1991 and at age 11 in 1994.
Before that primary school pupils took no national tests.
Indeed, after the 11-plus tests for grammar school selection were phased out in most parts of England, Wales and Scotland, there were no externally set and marked tests at all for primary schools.
The argument for the introduction of national testing was that it would drive up standards.
Tests would serve several purposes: provide teachers with information on pupils' progress, bring public accountability for teachers and schools, and offer governments essential data on national standards.
However, there is now growing doubt about the wisdom of using one set of tests for so many different measures.
Because of their use as an accountability measure, the tests have taken on high stakes.
This can skew the behaviour of teachers who feel pressurised into spending a lot of time on test preparation.
The recent Schools Select Committee report on testing concluded that the current national tests served "too many purposes".
'Teaching to the test'
In particular, the MPs' committee argued that the use of test results for league tables often distorted teaching and prevented a rounded education.
We have just seen the response of the schools' inspectorate, Ofsted, and the government to this report.
Ofsted has confirmed there is a strong element of "teaching to the test".
It agreed that in some schools the emphasis on the tests in English, mathematics and science "limits the range of work" in these subjects.
However, neither Ofsted nor the government accepts the argument that the tests are serving too many purposes.
The government argues that the problem of "teaching to the test" can be dealt with through guidance to schools.
Experts are also divided on this. Sir Michael Barber, former advisor to Labour Education Secretary David Blunkett, denies that English primary school pupils are over-tested.
He believes the tests have helped raise standards. However, as he told the Select Committee, he did accept that some of the improvement in test results is down to "teaching to the test".
But in the same evidence session at the Select Committee, Professor Peter Tymms from Durham University argued that there was too much testing at the top end of primary schools.
Interestingly, Professor Tymms did not claim that having national tests at seven and 11 was too much in itself; the problem was caused by all the subsidiary testing imposed by schools as preparation for these high stakes tests.
The government's response to this is to advise teachers to stop this over-preparation.
But this rings rather hollow to teachers and head teachers.
They know their reputation, and sometimes their jobs, depend on the results of the tests, which determine league table position and the attitude of Ofsted inspectors.
However, Professor Tymms had even more damaging criticism of the tests. He does not believe they have brought a rise in standards.
Indeed, on the basis of several different studies, he concludes that reading standards have not changed at all since tests came in.
The rise in the results, he argues, is down to two things.
Firstly, children are simply getting better at doing the tests because they practise so much. Secondly, the exam authorities failed to set the standards properly.
The latter issue has now been corrected and this, he argues, is why standards have flat-lined again.
He cites a fascinating study in which a sample of pupils in Northern Ireland sat both the 1996 and the 1999 English tests at the same time.
If the standard of the two tests was identical then the children should have scored the same on both. In fact, they did better in the later test, suggesting it was easier.
Because the national tests have to change every year, it is a challenge to ensure the standard remains the same.
As Professor Tymms argues, if the marks are adjusted even slightly (for example, changing the marks needed for Level 4 from 30 to 31), the percentage of pupils achieving that result will move by 2-3 percentage points, enough to cause headlines.
By contrast, Professor Tymms did trials with tests that were the same every year and found no change in reading standards.
His arguments appear to have carried weight with the Select Committee, which floated the idea of using a sample of pupils to establish national standards.
With sampling, tests can stay the same year after year. There would also be no need for the huge annual marking process, which has proved so difficult to achieve, especially this year.
Children could still be tested by their own teachers, with the results of these teacher assessments used to guide teaching and to inform parents.
This way, at least two of the purposes of national testing would be achieved without the costly system of externally marked, high stakes testing that we have now.
For now, though, it seems the government is not convinced by the argument and the current accident-prone system of external testing is here to stay.