Technology-Enhanced Language Assessment

[Image: Students using virtual reality goggles]

Dr Hye-Won Lee of Cambridge Assessment English explains how the organisation is implementing new technologies such as computer-based testing and artificial intelligence while always keeping the learner central to the process.

The past two decades have seen major changes in the way people communicate. In this digital age, we can deliver messages instantaneously, at any time and from anywhere, as long as there is internet or mobile network coverage. Writing today is mostly done and shared on screens rather than in print, and is usually supported by smart spelling and grammar tools and online dictionaries.

Video-conferencing technology has removed the physical barrier of face-to-face communication over a distance, allowing us to reach out conveniently to people from all over the world. We can even interact with electronic devices supported by artificial intelligence (AI) technology to perform various everyday tasks such as creating reminders and turning on lights at home.

Information and communications technology has also changed language learning and teaching. A traditional classroom is no longer the only place where formal learning takes place. Distance learning that allows students to attend classes and access learning materials remotely has made education more affordable and flexible.

Paper-based textbooks are being replaced by more engaging digital textbooks containing hyperlinks, interactive presentations, and videos. Additionally, a digital learning environment enables students to set their own pace of study and teachers to track students’ progress more efficiently.

These technological advances, in education as well as in daily life, have had a major impact on the assessment of English language proficiency. Cambridge English strives to adapt to these changes in our current and future test design without compromising our philosophy of communicative language assessment with the learner at the centre.

Our endeavours to integrate new technology have had several positive outcomes, outlined below:

  1. Adaptive testing based on a candidate’s level of ability.
  2. Quick reporting of results enhanced by AI-enabled marking.
  3. Various modes of testing available to stakeholders.
  4. Instantaneous feedback for enhancing learning and teaching.
  5. Innovative assessment for the future.

1. Questions adaptive to learner ability

Computer-adaptive testing (CAT) tailors test questions to each candidate’s language ability, as in the Linguaskill Reading and Listening components, for instance. In a paper-based (PB) linear test, candidates from the same testing cycle usually receive the same set of questions, possibly in exactly the same order. Although such a test is targeted at the majority of candidates, some questions will inevitably be too difficult or too easy for certain learners.

In contrast, CAT, currently available in listening and reading tests, selects questions according to a candidate’s performance during the test. The computer estimates the candidate’s level of ability from their response pattern using an iterative algorithm, and ultimately administers only a select subset of the questions from a large test item bank to measure the target ability. All the questions in the item bank have been carefully calibrated prior to test delivery, which enables the computer to adjust the difficulty of the questions as the test unfolds.
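To make this concrete, here is a minimal sketch of how item selection and ability estimation might interleave under a simple one-parameter (Rasch) model. The grid-search estimator, the ability range and the fixed test length are assumptions made for the illustration; this is not a description of the actual Linguaskill algorithm.

```python
import math
import random

def rasch_prob(ability, difficulty):
    """Probability of a correct response under a one-parameter Rasch model."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def estimate_ability(responses, lo=-4.0, hi=4.0, steps=80):
    """Crude maximum-likelihood ability estimate via a grid search."""
    best_theta, best_ll = 0.0, float("-inf")
    for i in range(steps + 1):
        theta = lo + (hi - lo) * i / steps
        ll = sum(math.log(rasch_prob(theta, d) if correct
                          else 1.0 - rasch_prob(theta, d))
                 for d, correct in responses)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

def run_adaptive_test(item_bank, answer_fn, test_length=20):
    """Repeatedly give the unused item closest in difficulty to the current
    ability estimate, record the response, and re-estimate ability."""
    responses, remaining, theta = [], list(item_bank), 0.0
    for _ in range(test_length):
        item = min(remaining, key=lambda d: abs(d - theta))
        remaining.remove(item)
        responses.append((item, answer_fn(item)))
        theta = estimate_ability(responses)
    return theta

# Simulate a candidate of true ability 1.2 against a calibrated bank of
# 200 item difficulties; the final estimate should land close to 1.2.
random.seed(0)
bank = [random.uniform(-3.0, 3.0) for _ in range(200)]
candidate = lambda d: random.random() < rasch_prob(1.2, d)
print(round(run_adaptive_test(bank, candidate), 2))
```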

These individualised tests may give candidates a more positive test experience by reducing their anxiety or fatigue during test-taking. Because an adaptive test consists mostly of items targeted at language skills associated with a specific level of ability, it is commonly shorter than a PB linear test and can offer results immediately at the end of the test. More information about how Cambridge English CAT is constructed and administered can be found in issue 59 of the Cambridge Assessment English publication Research Notes.1


2. Quick turnaround of test results

Computer-based (CB) tests in general can increase flexibility and efficiency in test administration and scoring. On-demand testing is possible because tests can take place at a time and place that best suit the stakeholders, allowing decision-makers to receive the results in a timely manner.

Automated marking of writing and speaking skills is also made possible by recent advances in AI. Machine-learning algorithms can be trained to perform human-like evaluation of learner speech and writing. The Writing module of Linguaskill, for example, is marked by an auto-marker working alongside a group of human examiners. When the auto-marker reports that it is unable to assign a score accurately, it escalates the script to a human examiner. The examiner’s mark is then added to the auto-marker’s training data to further improve its scoring capacity (for more detail, see Insights: Keeping artificial intelligence human).
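As a rough illustration of this escalation workflow, the sketch below models an auto-marker that returns a score together with a confidence value and falls back to a human examiner when confidence is low, banking the examiner’s mark for future retraining. The HybridMarker class, the 0.8 confidence cut-off and the callable interfaces are hypothetical assumptions, not Linguaskill’s implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class HybridMarker:
    """Hypothetical hybrid marking loop: score automatically when confident,
    otherwise escalate to a human examiner and keep their mark so the
    auto-marker can be retrained on it later."""
    auto_marker: Callable[[str], Tuple[float, float]]  # script -> (score, confidence)
    examiner: Callable[[str], float]                   # script -> score
    confidence_threshold: float = 0.8                  # assumed cut-off
    new_training_data: List[Tuple[str, float]] = field(default_factory=list)

    def mark(self, script: str) -> float:
        score, confidence = self.auto_marker(script)
        if confidence >= self.confidence_threshold:
            return score
        # The auto-marker is not confident enough to assign a score:
        # escalate to a human examiner and bank their mark for retraining.
        examiner_mark = self.examiner(script)
        self.new_training_data.append((script, examiner_mark))
        return examiner_mark

# Toy usage with stand-in scorers: a very short script yields low
# confidence, so the human examiner's mark (6.0) is returned instead.
marker = HybridMarker(
    auto_marker=lambda s: (5.5, 0.95 if len(s) > 50 else 0.4),
    examiner=lambda s: 6.0,
)
print(marker.mark("a very short script"))
```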

Compared to labour-intensive human marking, this ‘hybrid’ approach combines the strengths of both, making marking more reliable and time-efficient. Owing to speedy auto-marking, Linguaskill candidates typically receive their Writing test results within half a day of completing the test.


3. Flexibility to meet stakeholder preferences

Despite the advantages of CB testing, concerns may arise about the impact of candidates’ computer literacy, such as their familiarity with reading on a screen and their proficiency in typing on a keyboard, on their test performance. Further concerns arise around the impact of technology on the test construct itself, i.e. on the skills the test aims to elicit. From the early stages of test development, Cambridge English carefully considers and researches such issues related to test validity.

A growing body of research on Cambridge English tests has indicated that the Listening, Reading and Writing sections produce comparable test scores in the traditional PB mode and the new CB mode.2 Based on this research, tests for these language skills are offered in CB mode as an alternative to the PB equivalents. For example, the Listening, Reading and Writing components of IELTS are currently available in both PB and CB versions.3 Candidates thus enjoy the freedom to select the test mode which best reflects their primary means of communication.

Cambridge English is also exploring the option of delivering direct oral proficiency interviews remotely using video-conferencing technology. This innovative test mode preserves the interactional nature of speaking and also makes face-to-face speaking tests more accessible to learners in geographically remote or politically unstable areas. Recent studies on the IELTS Speaking test4 demonstrated score equivalence between the standard and video-conferencing modes of delivery, suggesting great promise for the new test mode.


4. Automated feedback for personalised learning

With the support of fast-developing technology, learners can receive instant, detailed automated feedback on their performance. In contrast to many previous forms of assessment, which focus only on the outcome of an instructional unit, such feedback can promote individualised learning more effectively. Ideally, the individualised feedback provided to learners can also be used by teachers to inform their teaching.

These views are embodied in the Learning Oriented Assessment (LOA)5 philosophy: a systematic approach to linking assessment to learning that underlies the design of Cambridge English tests and learning materials.

A CB diagnostic language test – of which Cambridge English has so far developed and trialled two prototypes – is a good example of how learning-oriented assessment is realised in the context of classroom assessment. Instead of giving test scores, the test provides each test-taker with instant diagnostic feedback on their strengths and weaknesses. Teachers also receive group-level feedback that details students’ areas for improvement and links to relevant online teacher resources based on the Cambridge English Curriculum. In this case, automated feedback equips teachers with knowledge of their students and enables learners to take control of their learning, thus creating an intimate connection between assessment and learning.

Write & Improve is another example of LOA in practice: learners can practise and improve their writing skills with the help of AI-powered automated feedback. Learners can submit their work to this free online tool repeatedly and revise it in response to real-time, machine-generated word- and sentence-level feedback on their working drafts.6

Engaged in this scaffolded, customised evaluation cycle, learners can concentrate on areas that need more attention, polish their writing, and learn from this iterative process (see the Insights article linked to above for more details).

In addition, course-level instruction can (and should) be systematically aligned with learning-oriented assessment, as achieved in Empower,7 a successful outcome of a collaborative project between Cambridge English and Cambridge University Press.

Empower is a series of general English course books that include online unit progress tests on the target lexis, grammar and functional language, as well as automated speaking tests on pronunciation and fluency.8 An individual learner takes the unit tests on an online learning management system and, depending on their achievement in each section of the test, is immediately and automatically assigned personalised online activities for further practice.
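As a minimal sketch of how such rule-based routing might work, the snippet below assigns a follow-up activity for every test section scored below a threshold. The section names and the 70% mastery threshold are invented for the illustration and do not describe Empower’s actual logic.

```python
from typing import Dict, List

# Hypothetical mapping from test sections to follow-up practice activities.
PRACTICE_ACTIVITIES = {
    "vocabulary": "extra vocabulary practice set",
    "grammar": "extra grammar practice set",
    "functional_language": "extra functional-language practice set",
}

def assign_practice(section_scores: Dict[str, float],
                    mastery_threshold: float = 0.7) -> List[str]:
    """Assign a follow-up activity for every section below the threshold."""
    return [PRACTICE_ACTIVITIES[section]
            for section, score in section_scores.items()
            if score < mastery_threshold]

# A learner strong on vocabulary but weaker on grammar is routed to
# grammar practice only.
print(assign_practice({"vocabulary": 0.90, "grammar": 0.55,
                       "functional_language": 0.80}))
# -> ['extra grammar practice set']
```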

Impact studies on Empower have shown that learners demonstrated improved performance on their second attempt at the same unit test after additional practice, and reported that the seamlessly integrated learning and assessment cycle helped them to learn and to understand their own strengths and weaknesses better.9


5. Innovation for next generation assessment

Digital assessment and learning are expected to take ever more innovative forms, some of which we are actively exploring:

  • Quiz your English is a gamified, multiplayer mobile application for practising vocabulary and grammar skills.
  • Game-based assessment is seen as a fun and effective way to immerse learners in the cycle of learning and assessment.
  • Virtual reality technology is being trialled as a medium for simulating real-life tasks and eliciting more authentic learner performance.

We believe that these future assessments will pave the way for a learning ecosystem where technology supports meaningful experiences for learners.


To conclude...

Technology is integral to our daily lives nowadays and will continue to change the way we use and learn languages. Cambridge English endeavours to improve stakeholders’ experience by integrating state-of-the-art technologies into its test design, keeping the test construct or the language skills being assessed up-to-date, and uniting assessment with learning.

At this threshold of transformation in language testing, we foresee that future assessment will move away from the one-time test model towards a truly personalised and engaging learner experience that draws on the power of both technology and humans.

Language assessment is indeed evolving into a new phase, but it should always be remembered that learners are at the centre of every aspect of the process, and that we make these changes to better help people learn English and prove their skills to the world.

Dr Hye-Won Lee
Senior Research Manager, Cambridge Assessment English


To find out more about current Cambridge English projects on innovative assessment and learning, visit the Cambridge BETA website.



1Walczak (2015)
2Blackhurst (2005); Chan, Bax, and Weir (2017); Green and Maycock (2004); Weir, O'Sullivan, Yan, and Bax (2007)
3As of March 2019, CB IELTS is offered in 45 countries.
4Berry, Nakatsuhara, Inoue, & Galaczi (2018); Nakatsuhara, Inoue, Berry, & Galaczi (2016; 2017a; 2017b)
5Jones and Saville (2016)
6Besides the detailed feedback, learners also receive scores on the proficiency level and topic relevance of their writing.
7Salamoura & Unsworth (2016)
8Mid and end-of-course competency tests of reading, writing, listening and speaking are also included.
9Cambridge University Press & Cambridge Assessment English (2017)

References

Berry, V., Nakatsuhara, F., Inoue, C., & Galaczi, E. (2018). Exploring the use of video-conferencing technology to deliver the IELTS Speaking Test: Phase 3 technical trial. IELTS Partnership Research Papers, 2018/1. IELTS Partners: British Council/Cambridge Assessment English/IDP: IELTS Australia. Retrieved from https://www.ielts.org/-/media/research-reports/ielts-research-partner-paper-4.ashx

Blackhurst, A. (2005). Listening, Reading and Writing on computer-based and paper-based versions of IELTS. Research Notes, 21, 14–17.

Cambridge University Press, & Cambridge Assessment English. (2017). Cambridge English Empower: Impact studies (Unpublished report). Cambridge: Cambridge University Press/Cambridge Assessment English.

Chan, S., Bax, S., & Weir, C. (2017). Researching participants taking IELTS Academic Writing Task 2 (AWT2) in paper mode and in computer mode in terms of score equivalence, cognitive validity and other factors. IELTS Research Reports Online Series, 2017/4. British Council/Cambridge English Language Assessment/IDP: IELTS Australia. Retrieved from https://www.ielts.org/-/media/research-reports/ielts_online_rr_2017-4.ashx

Green, T., & Maycock, L. (2004). Computer-based IELTS and paper-based versions of IELTS. Research Notes, 18, 3–6.

Jones, N., & Saville, N. (2016). Learning Oriented Assessment: A systemic approach. Studies in Language Testing volume 45. Cambridge: UCLES/Cambridge University Press.

Nakatsuhara, F., Inoue, C., Berry, V., & Galaczi, E. (2016). Exploring performance across two delivery modes for the same L2 speaking test: Face-to-face and video-conferencing delivery. A preliminary comparison of test-taker and examiner behaviour. IELTS Partnership Research Papers, 1. IELTS Partners: British Council/Cambridge English Language Assessment/IDP: IELTS Australia. Retrieved from https://www.ielts.org/-/media/research-reports/ielts-partnership-research-paper-1.ashx

Nakatsuhara, F., Inoue, C., Berry, V., & Galaczi, E. (2017a). Exploring the use of video-conferencing technology in the assessment of spoken language: A mixed-methods study. Language Assessment Quarterly, 14(1), 1–18.

Nakatsuhara, F., Inoue, C., Berry, V., & Galaczi, E. (2017b). Exploring performance across two delivery modes for the IELTS Speaking Test: Face-to-face and video-conferencing delivery (Phase 2). IELTS Partnership Research Papers, 3. IELTS Partners: British Council/Cambridge English Language Assessment/IDP: IELTS Australia. Retrieved from https://www.ielts.org/-/media/research-reports/ielts-research-partner-paper-3.ashx

Salamoura, A., & Unsworth, S. (2016). Learning Oriented Assessment: Putting learning, teaching and assessment together. Retrieved from https://www.cambridge.org/files/7614/6002/5030/ELT_33261_Empow_kit_03_16-PRINT.pdf

Walczak, A. (2015). Computer-adaptive testing. Research Notes, 59, 35–39.

Weir, C., O'Sullivan, B., Yan, J., & Bax, S. (2007). Does the computer make a difference? Reaction of candidates to a computer-based versus a traditional handwritten form of the IELTS Writing component: Effects and impact. IELTS Research Reports, 7(6), 1–37. Retrieved from https://pdfs.semanticscholar.org/3707/438f25a94f73d97d4d834f01a90e618eea4c.pdf


