Cambridge Digital Assessment Programme at the AEA-Europe conference 2024

by Sarah Hughes, 30 October 2024

This November, research from the Cambridge Digital Assessment Programme will be presented at the 25th AEA-Europe conference in Cyprus. In this blog, Research and Thought Leadership Lead Sarah Hughes summarises the research her team will present at this global event, all of which focusses on digital assessment. 

At Cambridge our approach to assessment has always been evidence-based, and digital examinations are no different – informed by rigorous research and the latest technological developments, and developed in partnership with our teachers and learners. The Association for Educational Assessment – Europe (AEA-Europe) is a great community of assessment researchers and professionals, and Cambridge has always played an active role in its annual conference.

This year’s conference theme is ‘Technology, Artificial Intelligence, and Process Data for Assessment in the 21st Century’. The Cambridge Digital Assessment Programme team are taking four papers to the conference: 

Enhancing the user experience of Digital Exams: A User-Centric Approach 

Mohammad Abbas Abadi, UX Design Lead, is taking a poster which outlines how Cambridge employs a user-centred design methodology to improve digital exams and how, by focusing on understanding learners' needs, ensuring usability, and prioritising accessibility, we aim to provide a seamless transition from traditional paper-based assessments to digital formats.

The work is being used in Cambridge to guide projects based on the users’ needs, enhancing the quality of digital education by creating accessible and user-friendly digital assessments. For example, insights from user interviews have led to simplifying navigation in our exam platforms, making it easier for students to focus on content rather than technology. A user-centred approach ensures that our digital exams meet the diverse needs of learners and educators, in line with our intention to use digital tools to make our assessments accessible. 

Learners and teachers value our approach of engaging them from the earliest stages of our product development. One teacher said: “Seeing our input directly impact the design and functionality of the digital exams was incredibly rewarding.”

Authentically assessing computational thinking

Abdullah Ali Khan, Digital Assessment Lead, will be sharing an example of how the user-centred product development described by Mohammad is being used alongside assessment research to design a high-stakes digital examination for computer science, with a focus on computational thinking and scenario-based programming. While there are plenty of examples of digital formative assessment in programming and computer science, an equivalent in the high-stakes realm is uncommon. The poster will outline the many dimensions to consider when designing such an examination, from operational feasibility to construct validity.

Abdullah describes how stimulating it has been to start with a blank canvas, which allows us to reimagine high-stakes assessment untethered from traditional paradigms whilst also drawing on the expertise Cambridge brings from a long history of traditional paper-based exams: “Employing a user-centred approach has been inspiring and allowed us to learn from IGCSE Computer Science teachers during interviews about how they bring the current paper-based syllabus to life. For instance, a teacher in Chile described how he has created his own board game for logic gates to engage students by gamifying the topic.” 

Using assessment and response times data to evaluate our Digital Mocks Service

The conference theme describes ‘process data’ as encompassing response times, answer changes, keystrokes, and other behavioural indicators. This data can provide unprecedented opportunities to gain deep insights into students’ thought processes, learning strategies, and test-taking strategies. The capture and use of process data from digital assessments has the potential to help quality-assure tests, understand test-takers’ behaviours, engagement and motivation, and improve the quality and reliability of our assessments. 

Researchers Carmen Vidal Rodeiro and Tim Gill used learners’ marks, along with data on how long learners take to answer questions, to evaluate our Digital Mocks Service. The research helped us understand how the questions and the assessment worked when delivered digitally, and assured us that learners are able to show what they know and can do in a digital test in a similar way to how they would in a traditional paper-based exam. In particular, their analyses showed that: 

  • the questions in the digital mock ranged from difficult to fairly easy in a similar way to how they do in live paper examinations, and their relative difficulty did not change when moving from paper to digital.
  • learners spent similar amounts of time answering questions at the end of the test as questions at the beginning, showing that they did not have to rush at the end and were not disengaged.

We don’t routinely get data about how long learners take to answer each question in paper exams, so by going digital we are adding to the types of information we can collect about our learners, for example, to understand how engaged and motivated they are.
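To give a flavour of the kind of analysis involved – this is a minimal sketch using illustrative data, not the exact data fields or methods Carmen and Tim used – item facility and per-question timing can be summarised from a table of item-level records with hypothetical columns such as question, max_mark, mark and time_seconds:

```python
import pandas as pd

# Hypothetical item-level response data from a digital mock:
# one row per learner per question (column names are illustrative only).
responses = pd.DataFrame({
    "question":     [1, 1, 2, 2, 3, 3],
    "max_mark":     [2, 2, 4, 4, 6, 6],
    "mark":         [2, 1, 3, 4, 2, 5],
    "time_seconds": [45, 60, 180, 150, 300, 240],
})

# Facility (mean mark as a proportion of the maximum) is a simple indicator
# of question difficulty; comparing it with the equivalent paper statistic
# shows whether relative difficulty changed on screen.
facility = (
    responses.assign(prop=responses["mark"] / responses["max_mark"])
    .groupby("question")["prop"]
    .mean()
)

# Mean time spent per question, in question order, helps show whether
# learners rushed or disengaged towards the end of the test.
mean_time = responses.groupby("question")["time_seconds"].mean()

print(facility)
print(mean_time)
```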

Evaluation of an AI Auto-marker

In this session Sanjay Mistry (Head of Digital Research) and Jesse Dvorchak (Deputy Director of Digital Products and Services Innovation) describe the ability of a third-party AI-powered auto-marker to mark questions with a range of response lengths. They will present data on the auto-marker's accuracy and on how it compared with the equivalent human marking.

This paper provides the basis for using the power of Generative AI to transform marking. The intention is not to remove the human from the loop entirely, but to use human expertise to selectively review the output from the AI auto-marker, adding an extra layer of confidence in the results – AI-assisted marking rather than pure AI auto-marking. 

There were some interesting findings showing that straightforward, shorter text-based responses were generally marked more accurately than longer responses or those with a more technical element. This provides a good basis for further research on the topic. 
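As an illustration only – a sketch under assumed data, not the measures reported in the paper – marking accuracy of this kind is often summarised as the proportion of responses where the AI mark matches the human mark exactly, or falls within one mark of it:

```python
# Illustrative comparison of AI-assigned and human-assigned marks.
# The marks and the agreement thresholds are assumptions for this sketch.
human_marks = [2, 3, 1, 4, 0, 2, 3]
ai_marks    = [2, 3, 2, 4, 0, 1, 3]

n = len(human_marks)
exact = sum(h == a for h, a in zip(human_marks, ai_marks)) / n
within_one = sum(abs(h - a) <= 1 for h, a in zip(human_marks, ai_marks)) / n

print(f"Exact agreement:      {exact:.0%}")
print(f"Agreement within one: {within_one:.0%}")
```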

Two-way information sharing at the conference

As well as sharing our work, attending the conference will bring lots of value: our research will receive feedback from other experts in the field, and we will learn from colleagues working in the area and make connections. As Mohammad says, “it’s a great place to engage with others in the assessment field, sharing our insights, and gathering feedback to use to further enhance our digital exam solutions.” Abdullah is looking forward to seeing and learning from other organisations and teams engaging in the same sort of multi-disciplinary work as us.

This year’s focus on process data and AI is really relevant to the Cambridge Digital Assessment Programme. Carmen, Tim and I will be co-presenting research on the evaluation of the Digital Mocks Service.

We are also keen to hear about using process data to measure things that aren’t currently measured in our paper exams, including the processes that learners use to answer their exam questions, not just the actual answer they give. Our team is also excited to see other presentations on how AI is being used in the assessment world, at a school and educational institution level. 

Find out more about the latest Cambridge Digital Assessment Research here.

