Handwriting versus typing exam scripts: Evidence from the literature

by Santi Lestari, 18 October 2024
Image of a student writing on paper next to a laptop

The move from paper-based exams to a digital format has been discussed for many years, yet many high-stakes end-of-schooling exams still remain on paper. In England, for example, using a word processor (with the spelling and grammar checkers disabled) is currently only permitted if it is the learner’s “normal way of working” within the school[i].

A report from the UK regulator of school exams[ii] identified insufficient and unequal digital provision in schools across the country as one key barrier to full adoption of a digital mode for high-stakes school exams. Many other countries face this barrier too. One way to overcome it is through gradual adoption, running paper and digital exams alongside each other. However, this approach creates the risk of mode effects: differences between paper and digital exams that are intended to be equivalent, which make it less likely that scores from the two modes are directly comparable. One such difference is the way learners answer questions that involve extended writing: whether they are required to handwrite their answers or to type them on a computer.

We searched the literature to see what was known about the similarities and differences between handwriting and typing answers. We included a mixture of 47 academic articles, dissertations, conference papers and institutional reports published between 1990 and 2021. The literature included evidence about four areas:

  1. comparability of scores
  2. comparability of marking
  3. comparability of text characteristics
  4. comparability of composing processes.

Scores

Research findings have varied because the contexts, designs and assessments in the research also varied. However, studies - especially the recent ones[iii] - tend to find that it makes little difference to scores whether students handwrite or type their answers. A few studies found that English language ability and computer familiarity could influence the results. Learners with weaker English language ability tended to do better on handwritten essays, while those with better English performed about the same whether handwriting or typing[iv]. Learners more familiar with word-processing software on a computer tended to do equally well whether typing or handwriting their answers, while learners with less experience tended to do better when handwriting, and to write more[v].

Marking

Research showed that markers tended to award handwritten essays higher scores than the word-processed versions of the same essays[vi], although some studies found no meaningful differences in the scores given. Markers tended to perceive word-processed essays to be shorter than the handwritten versions. Because word-processed essays are easier to read, errors in spelling and punctuation were more obvious to markers[vii], whereas in handwritten essays these errors could be less detectable. One important caveat is that many studies in the area of marking comparability are about 20 years old, so we need to be cautious in interpreting the findings.

Text characteristics

We learned that word-processed answers tended, on average, to be longer than handwritten answers. The length of word-processed essays also varied more than the length of handwritten essays[viii]. This could be because of learners’ different levels of computer familiarity and typing skill[ix]. Other differences, such as the complexity of language and the frequency of errors, have also been detected, but they are usually very minor and may not be reflected in scores.

How learners compose their answers

The processes learners go through when answering an essay question include planning, generating ideas, producing text, and reviewing and editing text. Research showed that learners generally engaged in similar processes whether handwriting or typing their answers, but a few minor differences were observed[x]. For example, one study[xi] found that learners with low and moderate levels of computer familiarity said that they planned better when handwriting their answer on paper. One learner offered an interesting explanation: when handwriting their answer, they tended to plan more carefully before writing to avoid making corrections which would affect the tidiness of their answer, whereas when typing on the computer they could review and edit their answer more flexibly, so they felt less need to plan carefully before starting.

What does this mean for our digital assessments?

  • When a group of learners varies greatly in their level of digital literacy, it is advisable to offer the option of handwriting or typing. That is why we are providing digital exams as an option. We also acknowledge that it might not always be appropriate to offer both a paper and digital version of an exam. For example, we are developing some exams to be digital only - this allows us to assess skills which would otherwise be difficult to assess on paper, such as programming skills.
  • Learners should have the opportunity to familiarise themselves with the test platform and its functionality before taking the exam. From our extensive user research we found that familiarisation with a digital platform is paramount. Therefore, for our digital exams we offer learners access to platform tutorials, digital practice tests and a Digital Mock Service to build their familiarity and confidence with the digital test-taking experience.
  • When marking typed and handwritten scripts, markers should be made aware of the risk of being biased by presentation differences. Our markers will continue to use the same marking platform for both paper and digital exams to ensure continuity of the marking process. They will also receive training in marking digital exams alongside paper-based assessments.

Find out more about our Cambridge Digital Exams and our digital assessment research.

References

[i] Joint Council for Qualifications CIC. (2023). Adjustment for candidates with disabilities and learning difficulties: Access arrangements and reasonable adjustments. p. 58. https://www.jcq.org.uk/wp-content/uploads/2023/09/AA_regs_Revision_One_Sep23_FINAL.pdf

[ii] Coombe, G., Lester, A., & Moores, L. (2020). Online and on-screen assessment in high-stakes, sessional qualifications: A review of the barriers to greater adoption and how these might be overcome. (Ofqual/20/6723/1). Ofqual. https://assets.publishing.service.gov.uk/media/5fd361b7e90e0766326f7f6e/Barriers_to_online_111220.pdf

[iii] For example: Charman, M. (2014). Linguistic analysis of extended examination answers: Differences between on-screen and paper-based, high- and low-scoring answers. British Journal of Educational Technology, 45(5), 834-843. https://doi.org/10.1111/bjet.12100; Barkaoui, K., & Knouzi, I. (2018). The effects of writing mode and computer ability on L2 test-takers' essay characteristics and scores. Assessing Writing, 36, 19-31. https://doi.org/10.1016/j.asw.2018.02.005

[iv] Wolfe, E. W., & Manalo, J. R. (2004). Composition medium comparability in a direct writing assessment of non-native English speakers. Language Learning & Technology, 8(1), 53-65. https://www.lltjournal.org/item/10125-25229/

[v] Wolfe, E. W., Bolton, S., Feltovich, B. & Bangert, A. W. (1995, April 18-22). The influence of computers on student performance on a direct writing assessment [Paper presentation]. The Annual Meeting of the American Educational Research Association, San Francisco, CA, United States. https://files.eric.ed.gov/fulltext/ED383741.pdf

[vi] For example: Russell, M., & Tao, W. (2004). Effects of handwriting and computer-print on composition scores: A follow-up to Powers, Fowles, Farnum, & Ramsey. Practical Assessment, Research, and Evaluation, 9(1). https://doi.org/10.7275/9g7k-yr32

[vii] For example: Shaw, S. D. (2003). Legibility and the rating of second language writing: the effect on examiners when assessing handwritten and word-processed scripts. Research Notes, 11, 7-11. https://www.cambridgeenglish.org/Images/23125-research-notes-11.pdf

[viii] For example: Endres, H. (2012). A comparability study of computer-based and paper-based Writing tests. Research Notes, 49, 26-33. https://www.cambridgeenglish.org/Images/23166-research-notes-49.pdf

[ix] Barkaoui, K., & Knouzi, I. (2018). The effects of writing mode and computer ability on L2 test-takers' essay characteristics and scores. Assessing Writing, 36, 19-31. https://doi.org/10.1016/j.asw.2018.02.005

[x] For example: Chan, S., Bax, S., & Weir, C. (2018). Researching the comparability of paper-based and computer-based delivery in a high-stakes writing test. Assessing Writing, 36, 32-48. https://doi.org/10.1016/j.asw.2018.03.008

[xi] Jin, Y., & Yan, M. (2017). Computer literacy and the construct validity of a high-stakes computer-based writing assessment. Language Assessment Quarterly, 14(2), 101-119. https://doi.org/10.1080/15434303.2016.1261293

