Date: Dates to be confirmed
Venue: Online
Type: Workshop series
Fee: £265 (Members: £238.50)
Register your interest
This workshop series is accredited for continuing professional development (4.5 CPD hours), with certification on successful completion.
This interactive series of three workshops introduces the basic concepts of Computer Adaptive Tests (CATs) and their use in different assessment contexts. Taken consecutively over three weeks, each session builds on the learning from the last, with a chance to implement and reflect on the content during the weeks in between.
The workshops have been designed for those with little or no previous experience of CATs. If you are new to adaptive testing, or have limited knowledge of key elements such as Item Banking, Item Response Theory or the practical implementation of CATs, this series will give you an introduction to everything you need to consider when planning your use of Computer Adaptive Tests.
Workshop dates
Week 1 | TBC | 12:30 - 14:00 (UK time) | Introduction to the theory of CAT
Week 2 | TBC | 12:30 - 14:00 (UK time) | Theoretical CAT to practical CAT - part one
Week 3 | TBC | 12:30 - 14:00 (UK time) | Theoretical CAT to practical CAT - part two
Course outline
This series of three workshops provides an overview of what computer adaptive testing is and the forms it can take. It aims to give you a basis from which to evaluate whether, and how, a CAT might fit testing in your own context.
The sessions will give an overview of different test formats, such as item-level, testlet and multi-stage tests, as well as the different ways a test can ‘adapt’ a user’s journey through the available content (for example by item difficulty, decision making or diagnostic categories).
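As an informal illustration of the second of these ideas, the short Python sketch below shows one way a two-stage test might adapt a candidate's journey: everyone takes a common routing module, and the score on it decides whether the second stage is an easier or harder module. The module names, item IDs and cut scores are invented for the example and are not taken from the course materials.

```python
# Hypothetical two-stage multi-stage test (MST): item IDs, module names and
# cut scores below are invented for the example.
ROUTING_MODULE = ["R1", "R2", "R3", "R4", "R5"]
STAGE_TWO_MODULES = {
    "easy":   ["E1", "E2", "E3", "E4", "E5"],
    "medium": ["M1", "M2", "M3", "M4", "M5"],
    "hard":   ["H1", "H2", "H3", "H4", "H5"],
}


def route(routing_score: int) -> str:
    """Decision rule: choose the stage-two module from the routing-module score (0-5)."""
    if routing_score <= 2:
        return "easy"
    if routing_score <= 4:
        return "medium"
    return "hard"


def assemble_journey(routing_score: int) -> list[str]:
    """The candidate's full journey: the common routing module, then the routed module."""
    return ROUTING_MODULE + STAGE_TWO_MODULES[route(routing_score)]


print(assemble_journey(routing_score=5))  # routing items followed by the hard module
```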
By the end of the series you will have an understanding of the approaches to implementing an adaptive test, including the key decision-making stages, and considerations for ongoing maintenance and management of the test materials.
Throughout the sessions, examples will demonstrate these ideas and how they can be applied to your own assessment context.
- Week 1 – Learn about the fundamentals of Computer Adaptive Tests and the different types of adaptive test. Understand the advantages CATs may offer, and which types of CAT best fit different testing contexts. You will also learn about the basic psychometric elements that underpin CATs, such as Item Banking and Item Response Theory, and the elements that manage test operation, such as starting points, selection algorithms and termination criteria (a minimal sketch of this test-operation loop follows this list).
- Week 2 – Learn how to approach the process of adopting a CAT, working through a high-level staged framework for managing the considerations and decisions that need to be made, and a way of structuring the preparation work. You will work through a case study introducing a test that adapts at individual item level and focuses on reporting trait ability. You will also consider questions around feasibility, item bank development, calibration, simulation, test publishing and test maintenance.
- Week 3 – Explore a range of example CATs to consider how and why adaptive testing has been used in each situation, and the advantages and limitations of CATs in relation to test purpose. You will also have a chance to raise and discuss questions about your own testing context and the possible application of CATs.
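As an informal companion to the Week 1 topics above, the Python sketch below strings together the operational elements named in that session (a starting point, an item-selection rule, an ability update and a termination criterion) into a minimal item-level CAT loop under an assumed 2PL Item Response Theory model. The randomly generated item bank, the parameter values and the Newton-Raphson update are illustrative assumptions, not material from the workshops.

```python
import math
import random

random.seed(0)

# Hypothetical calibrated item bank: (discrimination a, difficulty b) per item.
ITEM_BANK = [(random.uniform(0.8, 2.0), random.uniform(-2.5, 2.5)) for _ in range(200)]


def prob_correct(theta, a, b):
    """2PL item response function: probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))


def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta (used to pick the next item)."""
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)


def run_cat(true_theta, max_items=30, se_target=0.3):
    theta = 0.0                              # starting point: assume average ability
    administered, responses = [], []
    for _ in range(max_items):
        # Selection algorithm: the most informative unused item at the current estimate.
        remaining = [i for i in range(len(ITEM_BANK)) if i not in administered]
        nxt = max(remaining, key=lambda i: item_information(theta, *ITEM_BANK[i]))
        administered.append(nxt)

        # Simulated candidate response, generated from the (normally unknown) true ability.
        a, b = ITEM_BANK[nxt]
        responses.append(random.random() < prob_correct(true_theta, a, b))

        # Ability update: a few Newton-Raphson steps on the log-likelihood,
        # with theta kept in a sensible range to avoid runaway estimates.
        for _ in range(10):
            d1 = d2 = 0.0
            for i, correct in zip(administered, responses):
                ai, bi = ITEM_BANK[i]
                p = prob_correct(theta, ai, bi)
                d1 += ai * ((1.0 if correct else 0.0) - p)
                d2 -= ai * ai * p * (1.0 - p)
            theta = max(-4.0, min(4.0, theta - d1 / d2))

        # Termination criterion: stop once the standard error of theta is small enough.
        info = sum(item_information(theta, *ITEM_BANK[i]) for i in administered)
        if 1.0 / math.sqrt(info) < se_target:
            break
    return theta, len(administered)


estimate, n_items = run_cat(true_theta=1.2)
print(f"Estimated ability {estimate:.2f} after {n_items} items")
```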
In addition to the workshops, you will take away resources to support you in applying the learning to your own context.
Key learning outcomes
By the end of the three sessions you will have:
- Gained an understanding of the key concepts and considerations around the use of CATs, as well as the basic psychometric elements that underpin CATs
- Developed confidence in applying key considerations when adopting a CAT, having worked through a case study
- Built knowledge about a range of real-life applications of CATs, to help you evaluate their effectiveness and whether the use of a CAT may fit testing in your own context
Course trainer
Christopher Hubbard has been working at Cambridge Assessment since 2001, and in that time has been involved in a wide range of assessment contexts and projects across a number of international settings. Chris's special areas of focus include performance testing, assessment scale development, statistical analysis and test creation, benchmark testing and educational planning, and Computer Adaptive Testing (CAT).