Dr Simon Child (Head of Assessment Training for the Cambridge Assessment Network) and his twin brother Adam Child (Head of Academic Quality at Bournemouth University) share their thoughts on assessment in Higher Education.
In the last year there has been considerable discussion concerning ‘grade inflation’ in higher education. A series of reports has explored both the statistical trends in the proportion of students achieving upper seconds or above and the input variables that might explain some of these differences, such as entry qualifications.
The Office for Students’ (OfS) own analysis concluded that, in the majority of institutions, there had been increases in the proportion of ‘good honours’ over and above what might have been expected, controlling for background characteristics. These differences have been described as ‘unexplained’. The response from the OfS has been clear: providers have been warned of the need to curb inappropriate increases in the awarding of first class and upper second degrees.
Think tanks, quality practitioners and academic staff have offered numerous hypotheses to explain the ‘unexplained’, ranging from the influence of university league tables and the evolution of degree classification algorithms to flaws in the external examining system. On a more positive note, institutions can point to significant investment in upskilling teaching staff, improving facilities and a much more nuanced understanding of how to support students from underrepresented backgrounds. In a post-2012 era of higher tuition fees, it is also possible that raw student motivation is a factor. Educational researchers able to establish cause and effect between these factors and degree outcomes would be able to take credit for a highly impactful output.
The centrepiece of UK higher education’s approach to quality assurance is the UK Quality Code. In 2018, the Quality Code was renewed in response to the fundamental regulatory changes brought about by the Higher Education and Research Act (2017). There is significant overlap between the expectations laid down in the Code and the OfS’s conditions of registration for quality and standards. Each individual higher education institution, regardless of its size or reputation, has to ensure robust means for meeting these conditions in order to continue recruiting and teaching students.
In the absence of a comprehensive evidence base, we are left with the hanging notion that a good degree is now easier to achieve than it was in the past. This challenge cuts to the heart of quality assurance in higher education. One of the Quality Code’s expectations is that the value of qualifications awarded to students at the point of qualification, and over time, is in line with sector-recognised standards. Anything that throws doubt on the value of a degree erodes public confidence, frustrates employers and devalues the efforts students make to secure the best possible degree outcome.
The sector’s response has been to double down on the infrastructure that assures quality and standards, led by the UK Standing Committee on Quality Assessment (UKSCQA) with the support of the Quality Assurance Agency (QAA). The theory is that sunlight is the best disinfectant: greater transparency will itself drive improvement. The main thrust of recent work has been to improve public understanding of the mechanisms already in place through a ‘degree outcomes statement’ outlining how institutions utilise sector reference points, engage in calibration of standards and facilitate the work of external examiners. Should these proposals become embedded within quality frameworks and regulation, there would also be more accessible information on the choices institutions make with respect to their academic regulations, governance arrangements and headline data on degree outcomes.
The measures above will have some impact through a focus on the top-down policy decisions implemented within an institutional context. However, if we are to have the most robust possible response to newspaper headlines about grade inflation, it is essential that we consider the local level decisions made on a daily basis concerning assessment design. The level of student achievement in an individual piece of work is judged according to the assessor’s understanding of the interaction between the intended learning outcomes and the assessment criteria, which in turn are generally derived from generic assessment criteria laid down in university regulations. This structure allows each student to be assessed independently, without reference to the achievement of other students: a necessity where cohorts are small and statistical approaches to calibrating the proportion of each grade cannot be employed fairly.
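To make the distinction concrete, the sketch below contrasts criterion-referenced grading with a norm-referenced (cohort-scaled) alternative. It is a minimal illustration only: the grade boundaries, the quota and the cohort marks are all invented for the example.

```python
# Hypothetical illustration: criterion-referenced vs norm-referenced grading.
# All boundaries, quotas and marks are invented for the example.

def criterion_grade(mark: float) -> str:
    """Grade against fixed criteria, independent of how peers perform."""
    if mark >= 70:
        return "First"
    if mark >= 60:
        return "Upper second"
    if mark >= 50:
        return "Lower second"
    if mark >= 40:
        return "Third"
    return "Fail"

def norm_grade(marks: list[float], top_share: float = 0.25) -> list[str]:
    """Curve-style grading: award 'First' to the top `top_share` of the cohort."""
    cutoff_index = max(1, round(len(marks) * top_share))
    cutoff = sorted(marks, reverse=True)[cutoff_index - 1]
    return ["First" if m >= cutoff else "Other" for m in marks]

weak_cohort = [58.0, 52.0, 45.0]  # three students on a small, niche module
print([criterion_grade(m) for m in weak_cohort])  # no Firsts: the criteria are not met
print(norm_grade(weak_cohort))                    # the quota still awards one 'First'
```

With only three students, the quota mechanically awards a ‘First’ to work that does not meet the published criteria, which is why criterion-referenced judgement is the fairer option for small cohorts.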
With a typical undergraduate degree consisting of perhaps forty individual pieces of assessment, issues with the reliability or validity of the chosen assessment methods are exacerbated. If practitioners can demonstrate that the chosen assessment methods are secure, and accurately measure achievement against the criteria, concerns about the overall degree classification begin to melt away.
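The interaction between marking reliability and classification outcomes can be illustrated with a short simulation. This is a hypothetical sketch, not a model of any institution's algorithm: the 70-mark boundary, the marker standard deviation and the forty-assessment profile are all assumed for illustration.

```python
# Illustrative simulation (all numbers assumed): how marker unreliability on
# individual assessments feeds through to the final degree classification.
import random

random.seed(1)

def simulate(true_ability: float, n_assessments: int = 40,
             marker_sd: float = 5.0, trials: int = 10_000) -> float:
    """Proportion of trials in which averaging noisy marks crosses the First boundary."""
    firsts = 0
    for _ in range(trials):
        marks = [random.gauss(true_ability, marker_sd) for _ in range(n_assessments)]
        if sum(marks) / n_assessments >= 70:  # classification boundary (assumed)
            firsts += 1
    return firsts / trials

# A student whose 'true' standard sits just below the boundary...
print(simulate(69.5))  # ...is awarded a First in roughly a quarter of runs
```

Even after averaging forty marks, students near a classification boundary can be tipped either side of it by marker noise alone, which is exactly why demonstrably secure assessment methods matter.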
The UK Professional Standards Framework (UKPSF) forms the basis for teaching qualifications and accreditation schemes operating in many universities. Whilst it clearly recognises the value of continuing professional development in assessment, when the framework is applied within PG Certs and similar programmes the emphasis is often on the range of assessment methods employed and less on construct validity, reliability and other essential assessment principles.
Validity is a central issue in educational measurement and concerns the interpretive judgement of how well a particular assessment (or set of assessments) allows users (e.g. employers) to make secure inferences about the abilities of students. Underpinning this judgement is the accumulation and analysis of evidence related to claims made by qualification developers, for example, that particular scores or grades reflect the quality of performance on assessment tasks. Assessment decisions made by practitioners that influence reliability, fairness, comparability or assessment standards will have implications for overall construct validity and thus qualification quality.
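As a concrete (and entirely hypothetical) example of one strand of such evidence, the sketch below checks the agreement between two independent markers on the same set of scripts; the marks are invented, and in practice this analysis would sit alongside many others in a validity argument.

```python
# Hypothetical strand of validity evidence: agreement between two independent
# markers double-marking the same eight scripts (marks invented for illustration).
first_marker  = [64, 72, 58, 81, 67, 55, 70, 62]
second_marker = [61, 74, 60, 78, 69, 52, 73, 65]

n = len(first_marker)
mean_a = sum(first_marker) / n
mean_b = sum(second_marker) / n
cov  = sum((a - mean_a) * (b - mean_b)
           for a, b in zip(first_marker, second_marker)) / n
sd_a = (sum((a - mean_a) ** 2 for a in first_marker) / n) ** 0.5
sd_b = (sum((b - mean_b) ** 2 for b in second_marker) / n) ** 0.5

# Pearson correlation: high agreement supports the claim that grades
# reflect the quality of the work rather than the identity of the marker.
print(f"Inter-marker correlation: {cov / (sd_a * sd_b):.2f}")
```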
Knowledge and understanding of the principles of assessment are an important entry point to improving local level assessment practice in higher education related to item writing, mark scheme (or rubric) design, moderation and standard setting. However, to enact more far-reaching and sustainable change in assessment within higher education, practitioners have to be empowered to innovate and shift away from longstanding traditions regarding assessment at department, faculty and institutional levels. Research on assessment practitioner ‘identities’ suggests that professional learning in assessment needs to foreground the development of critical approaches for new insights, and subsequent actions, to become possible. There is therefore a need for higher education institutions to create the space for practitioners to interrogate their established traditions. Fortunately, institutions continue to invest in development programmes for academic staff, creating many opportunities to interrogate and debate these matters.
The welcome emphasis on improving teaching and learning within higher education creates the conditions required to take academic practice in assessment to the next level of development.
The UKPSF encourages practitioners to engage with professional learning in assessment, but more could be done to achieve widespread adoption of adaptive practices. We suggest that professional learning in assessment should, first of all, be principle-led, but with specific emphasis on affording opportunities to link assessment theory to contextually-bound considerations from which new insights can be achieved. This deeply reflective approach creates two possibilities. First, practitioners develop their criticality in relation to assessment, including towards emerging innovations in higher education, for example the introduction of online marking.
Second, the quality of the evidence accumulated related to validity claims can be improved. Evidence of this robustness will undoubtedly support any narrative deployed to assure the public of the enduring value of a university qualification and, in turn, protect the achievements of the students we educate.
Adam Child – Head of Academic Quality, Bournemouth University
Adam has been Head of Academic Quality at Bournemouth University since July 2017, having previously worked at Lancaster University, firstly as Assistant Registrar and later as Senior Policy and Strategy Officer within the Vice-Chancellor’s Office. His first roles in higher education were at the Higher Education Academy (now part of Advance HE), where he developed interests in the use of student surveys and the impact of enhancement activities on quality and the student experience. Adam Child is writing in a personal capacity.
Dr Simon Child – Head of Assessment Training, Cambridge Assessment
Simon is Head of Assessment Training at the Cambridge Assessment Network. Previously, he was a Senior Research Officer in the Assessment Research and Development Division of Cambridge Assessment. He has conducted research in the field of qualifications reform and development since 2012. His other research interests include quality of marking processes, curriculum development, formative assessment and Higher Education. His background is in developmental psychology. In 2011, he received his PhD from the University of Manchester for research focused on the development of symbolic cognition in pre-school children.
Develop your understanding of key assessment concepts with the Cambridge Assessment Network. We run a range of online, face-to-face and bespoke training courses and qualifications to help you develop your expertise in assessment.