Rank ordering – a comparability method based upon expert judgement – has the potential to lead to exciting innovations in several aspects of the assessment process. Cambridge Assessment has been at the forefront of these developments and the use of rank ordering in operational aspects of examinations is being explored and validated.
But, as with all approaches, it has not been and will not be adopted in specific settings without testing its suitability – principally its validity and utility. This requirement for validation is in line with the standards and criteria laid down in The Cambridge Approach.
What is rank ordering?
In paired comparison or rank-ordering exercises, experts are asked to place two or more objects into rank order according to some attribute. The ‘objects’ can be examination scripts, portfolios, individual essays, recordings of oral examinations or musical performances, videos and so on – or even examination questions. The attribute is usually ‘perceived overall quality’, but in the case of examination questions it is ‘perceived difficulty’. Analysis of all the judgements creates a scale on which each object is represented by a number – its ‘measure’. The greater the distance between two objects on the scale, the greater the probability that the one with the higher measure would be ranked above the one with the lower measure.
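As a rough illustration of how such a scale can be constructed, the sketch below fits a simple Bradley-Terry-style model to a handful of invented paired-comparison judgements and estimates a measure for each object. The object names, the data and the use of Python with scipy are assumptions made for this example; this is not Cambridge Assessment's operational analysis code.

```python
import numpy as np
from scipy.optimize import minimize

# Each judgement records (winner, loser): the object judged to show higher quality.
# In a rank-ordering exercise these pairs would be extracted from judges' rank orders.
judgements = [
    ("script_A", "script_B"),
    ("script_A", "script_C"),
    ("script_B", "script_C"),
    ("script_B", "script_A"),
    ("script_C", "script_B"),
    ("script_A", "script_C"),
]

objects = sorted({name for pair in judgements for name in pair})
index = {name: i for i, name in enumerate(objects)}

def neg_log_likelihood(theta):
    # Bradley-Terry / Rasch pairwise model:
    # P(i ranked above j) = exp(theta_i - theta_j) / (1 + exp(theta_i - theta_j))
    total = 0.0
    for winner, loser in judgements:
        diff = theta[index[winner]] - theta[index[loser]]
        total += diff - np.log1p(np.exp(diff))
    return -total

# Estimate the measures by maximum likelihood, then centre them so they sum to zero
# (only differences between measures are identified by the judgements).
fit = minimize(neg_log_likelihood, np.zeros(len(objects)), method="BFGS")
measures = fit.x - fit.x.mean()

for name, measure in sorted(zip(objects, measures), key=lambda item: -item[1]):
    print(f"{name}: {measure:+.2f}")
```

In this formulation a difference of zero between two measures corresponds to a 50% chance of either object being ranked higher; the larger the difference, the closer that probability gets to certainty.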
The idea of using expert judgment to link the mark scales on two (or more) tests has been the subject of a great deal of research at Cambridge Assessment, leading to several conference papers and publications.
Rank ordering is now used extensively in Cambridge Assessment's comparability research. A detailed evaluation of rank ordering as a method for maintaining standards, or for investigating comparability of standards, can be found in Bramley & Gill (2010).
A more radical use of paired comparisons or rank ordering is to replace marking (Pollitt, 2004), the idea being that the resulting scale is more valid than the raw score scale that results from conventional marking.
Innovation and openness to new ideas are fundamental to the core values of Cambridge Assessment, and the use of paired comparisons and rank-ordering in the assessment process appears to hold considerable potential. However, we are also committed to providing good evidence to support any innovations we introduce.
We are continuing to investigate both the technical/statistical aspects of the methods, and the underlying psychology of expert judgment that they depend upon. Our current position is that they are best deployed in standard-maintaining contexts, when the assessments being compared are as similar as possible (e.g. examinations from the same board in the same subject in consecutive examination sessions). We are actively exploring their applicability to more general investigations of comparability and to mainstream qualifications and assessments.
Importance of rank ordering:
- The main theoretical attraction of the method, from the point of view of comparability of examination standards, is that the individual judges’ personal standards ‘cancel out’ in the paired comparison method (Andrich, 1978); a sketch of this cancellation is given after this list.
- Black & Bramley (2008) have argued that it is a better (more valid) use of expert judgment than the method that is currently used as part of the regulator-mandated grade boundary setting process in GCSEs and A levels, and that it could have a role to play in providing one source of evidence for decisions on where to set the grade boundaries.
- However, further research is needed in order to evaluate the quality of assessment outcomes based entirely on paired comparison or rank-order judgments, and to identify the circumstances in which these outcomes are ‘better’ than those produced by conventional marking.
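To make the ‘cancelling out’ point in the first bullet concrete, here is a minimal sketch, assuming a simple pairwise model in which judge j perceives the quality of object A as its true measure θ_A shifted by the judge’s own personal standard c_j (the notation is illustrative, not taken directly from Andrich, 1978):

```latex
P(\text{judge } j \text{ places } A \text{ above } B)
  = \frac{\exp\left[(\theta_A - c_j) - (\theta_B - c_j)\right]}
         {1 + \exp\left[(\theta_A - c_j) - (\theta_B - c_j)\right]}
  = \frac{\exp(\theta_A - \theta_B)}{1 + \exp(\theta_A - \theta_B)}
```

Because c_j appears in both perceived qualities, it subtracts out, so the probability of the comparison outcome depends only on the difference between the two objects’ measures and not on how severe or lenient the individual judge happens to be.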
References:
Andrich, D. (1978). Relationships between the Thurstone and Rasch approaches to item scaling. Applied Psychological Measurement, 2, 449-460.
Black, B. & Bramley, T. (2008). Investigating a judgmental rank-ordering method for maintaining standards in UK examinations. Research Papers in Education, 23(3), 357-373.
Bramley, T. & Gill, T. (2010). Evaluating the rank-ordering method for standard maintaining. Research Papers in Education, 25(3), 293-317.
Pollitt, A. (2004). Let’s stop marking exams. Paper presented at the 30th Annual Conference of the International Association for Educational Assessment, Philadelphia, USA, June 2004.