By Gary Eason
Education editor, BBC News website
Question: Are A-levels easier than they used to be?
Answer: They're different.
A few years ago the exams regulator in England, the Qualifications and Curriculum Authority (QCA), invited an international panel of assessment experts to tackle the thorny question.
Their report said that to answer the question of comparability of standards over time, certain conditions would need to have been met:
- the specifications and syllabuses had remained constant;
- the examinations were common over time or could be equated;
- there had been no changes in educational policy or practice intended to raise performance.
But in fact there had been numerous changes to the system.
The QCA tries to track these changes and their impact, with periodic reviews by experts.
Changes in the subject of geography, for example, illustrate the complexity of the task.
Comparisons are complicated by the fact that there are different exam boards.
Their syllabuses have also changed, generally becoming longer and more explicit about what is required.
Key events included the introduction of a voluntary "common core" in 1983 and a required "subject core" in 1995 - after which there was an upward jump in the results.
In 1980, A-level geography candidates followed a two-year course of study covering physical, human and regional geography.
By 2000, the period covered by the QCA review, things were organised around themes and there was a much greater focus on the interaction of people with their environments.
There was much less call for an understanding of physical processes - the way landscapes are formed - but greater emphasis on such things as flood control and pollution.
From 1980 regional geography was phased out. But after 1997 detailed locational knowledge came in, with students having to show understanding through specific case studies.
The required range of skills was broadened, including fieldwork enquiry and the interpretation, analysis and synthesis of a "very wide range" of data and visual resources along with more problem solving and decision making.
The reviewers concluded that "the loss of more challenging skills was compensated for largely by an increase in the breadth of skills required".
The candidates in 1990 and particularly 1995 seemed to have faced the greatest range of requirements.
By 2000, geography was "similar, in terms of the overall level of demand" to that in 1980.
In 1980 there were two or three examination papers of relatively short questions requiring essay-type answers, totalling seven hours.
The reviewers felt questions were intellectually demanding but some could have been answered by "well-rehearsed, regurgitated responses".
In 2000, total exam time was a little longer and comprised five modules of written exams plus a personal fieldwork study of about 4,000 words.
Again, any reduction in required knowledge was said to have been generally balanced by a need for wider skills and the application of knowledge.
During the 1990s there were more short-answer questions using a range of "stimulus material".
Some of this material involved large amounts of text and often complex data - which perhaps meant students were being credited for their comprehension skills rather than their geographical understanding.
Some syllabuses included what is known as "pre-release" material which students could read before the exam.
The time allowed for such study, and the predictability of the questions that then appeared in the exam, varied - but the reviewers felt this could have reduced the demand on students.
Crucially they then looked at the responses that had been required to gain different grades, in particular a pass and a grade A.
They found that in 2000, candidates awarded an A grade often "gained high marks for skill and opinion without necessarily demonstrating geographical understanding".
Overall, performance "did not quite match the performance descriptions" - that is, it did not come up to what was supposed to have been required.
For earlier years the reviewers were hampered by a lack of evidence but what there was "showed that performance was at least as good as the performance descriptions" - that is, it was better than required.
Results - not referred to in the official review - rose steadily.
In 1992, for example, the pass rate was 81.6% and 11.7% achieved grade A.
In 2001, 93.2% passed and 19.5% achieved an A.
Then the whole structure of A-levels changed significantly, splitting the qualification into two parts: AS and A2.
Students typically study four subjects to AS-level in their first year, then focus on three in the second, for their A2 exams. Each stage involves three units.
In a QCA analysis of chemistry across this watershed year, the reviewers felt that subject specifications and exam papers had become highly detailed.
There were also "substantial supporting materials and detailed published marking schemes" so schools could not be in any doubt about precisely what was needed.
The reviewers considered this meant schools might "concentrate on delivery of the specification, relating it heavily to the anticipated assessment and giving candidates a narrower learning experience".
This relentless focus on "teaching to the test" might be expected to produce better results.
Moreover, because students are formally assessed for the AS-level, they know how well they are doing and tend to drop their weakest subject.
They get two opportunities to take exams, in January and in June.
After these A-level changes, QCA experts warned that comparisons with earlier years could not be made accurately.
And they unashamedly predicted in 2002 that a 100% pass rate would soon be achieved - which seems to have been forgotten in the renewed debate about standards.