Martin Johnson

I have worked as a researcher in the organisation since 2002. Prior to this, I was a primary teacher. I am an Honorary Associate Professor at UCL Institute of Education, a Fellow of the International Society for Design and Development in Education, an Executive Member of the British Association for International and Comparative Education, and a member of the Editorial Board of the Journal of Vocational Education & Training. Much of my work focuses on the interaction between assessment, learning, and curriculum issues. My projects range across academic and vocationally related contexts and investigate assessment issues in diverse sectors (e.g., primary through to post-compulsory education). My general research interest is in how to better understand assessment as enacted practice, and this has involved using assorted qualitative research methods to gather the perspectives of those involved with, or affected by, assessment.

Publications

2024

What is non-formal learning (and how do we know it when we see it)? A pilot study report
Johnson, M., & Majewska, D. (2024). What is non-formal learning (and how do we know it when we see it)? A pilot study report. Discover Education, 3, 148. https://doi.org/10.1007/s44217-024-00255-y

Uncovering the landscape of cross-national UK education research: An exploratory review.

Majewska, D., & Johnson, M. (2024). Uncovering the landscape of cross-national UK education research: An exploratory review. Educational Research, 1-23. https://doi.org/10.1080/00131881.2024.2334751

Development challenges in challenging contexts: A 3-stage curriculum framework design approach for Education in Emergencies.

Johnson, M., Fitzsimons, S., & Coleman, V. (2024, March). Development challenges in challenging contexts: A 3-stage curriculum framework design approach for Education in Emergencies [Paper presentation]. Comparative and International Education Society, Miami, Florida, USA. https://link.springer.com/article/10.1007/s11125-022-09601-0

Seeing what they think: Using MAXQDA to explore how concept maps work.

Johnson, M. (2024, February). Seeing what they think: Using MAXQDA to explore how concept maps work [Poster presentation]. MAXQDA International Conference, Berlin, Germany. https://www.maxqda.com/wp/wp-content/uploads/sites/2/Martin-Johnson_Seeing-what-they-think.pdf

2023

Formal, informal and non-formal learning: Key differences and implications for research

Johnson, M., & Majewska, D. (2023, September 12–14). Formal, informal and non-formal learning: Key differences and implications for research [Paper presentation]. Annual Conference of the British Educational Research Association, Aston University, UK.

Teaching in uncertain times: Exploring links between the pandemic, assessment workload, and teacher wellbeing in England
Johnson, M., & Coleman, V. (2023). Teaching in uncertain times: Exploring links between the pandemic, assessment workload, and teacher wellbeing in England. Research in Education.

Teachers’ research diaries – reflection and reconnection in times of social isolation

Johnson, M., & Coleman, V. (2023). Teachers’ research diaries – reflection and reconnection in times of social isolation. International Journal of Research & Method in Education.
Who controls what and how? A comparison of regulation and autonomy in the UK nations’ education systems

Kreijkes, P., & Johnson, M. (2023). Who controls what and how? A comparison of regulation and autonomy in the UK nations’ education systems. Research Matters: A Cambridge University Press & Assessment publication, 35, 60-79.

In this paper we explore the concept of the middle tier in education systems, outlining how it is a crucial element that links high-level education policy to the practices that are carried out in schools. Reflecting on the similarities and differences in the profiles of the middle tiers of the four nations of the United Kingdom (UK), we observe how they are part of a complex educational ecosystem. While noting that there are variations in the profiles of the middle tiers we also highlight how they share some common functions that are key to mediating the way that policy links with schools. Using a four nations comparative approach to analyse the middle tier allows us a more nuanced understanding of how education policy works in general, but also how policy works in each particular national context.

Research Matters 35: Spring 2023
  • Foreword Tim Oates
  • Editorial Tom Bramley
  • Creating Cambridge Learner Profiles: A holistic framework for teacher insights from assessments and evaluations Irenka Suto
  • A conceptual approach to validating competence frameworks Simon Child, Stuart Shaw
  • Teachers’ and students’ views of access arrangements in high stakes examinations Carmen Vidal Rodeiro, Sylwia Macinska
  • Who controls what and how? A comparison of regulation and autonomy in the UK nations’ education systems Pia Kreijkes, Martin Johnson
  • Assessment in England at a crossroads: which way should we go? Tony Leech
  • Research News Lisa Bowett
Innovative Research Methodologies: Solicited diaries with Martin Johnson
Johnson, M. (2023). Innovative Research Methodologies: Solicited diaries with Martin Johnson. BERA UK Podcast, BERA.

2022

Exploring the role of Assessment Literacy in times of uncertainty.
Johnson, M., Tsagari, D., Richardson, M., Correia, C., & Child, S. (2022). Symposium: Exploring the role of Assessment Literacy in times of uncertainty. Association of Educational Assessment-Europe Annual Conference, Dún Laoghaire, November.
Formal, non-formal, and informal learning: What are they, and how can we research them?
Johnson, M., and Majewska, D. (2022). Formal, non-formal, and informal learning: What are they, and how can we research them? Cambridge University Press & Assessment Research Report.
What are ‘recovery curricula’ and what do they include? A literature review

Johnson, M. (2022). What are “recovery curricula” and what do they include? A literature review. Research Matters: A Cambridge University Press & Assessment publication, 34, 57–75.

The concept of educational recovery is relevant to many systems, both those that experience some form of sudden disruption as well as those that historically have been prone to disruption. Our involvement in developing a curriculum framework for displaced learners in the Learning Passport project (UNICEF, 2020) made us more aware of the field of Education in Emergencies. An educational emergency is a situation where “man-made or natural disasters destroy, within a short period of time, the usual conditions of life, care and education facilities for children and therefore disrupt, deny, hinder, progress or delay the realisation of the right to education” (Committee on the Rights of the Child, 2008, p. 1). The COVID-19 pandemic has made the concept of emergency and recovery more relevant to even more education systems. The literature review described in this article was carried out to identify what recovery curricula are (e.g., what they seek to achieve, what information they cover, etc.), as well as to consider any evidence for their efficacy. By exploring the recovery curricula literature, we also wanted to consider the extent to which the concept is a singular, generalisable one, or whether it is tied to specific contexts.

Research Matters 34: Autumn 2022
  • Foreword Tim Oates
  • Editorial Tom Bramley
  • Learning loss in the Covid-19 pandemic: teachers’ views on the nature and extent of loss Matthew Carroll, Filio Constantinou
  • Which assessment is harder? Some limits of statistical linking Tom Benton, Joanna Williamson
  • Progress in the first year at school Chris Jellis
  • What are "recovery curricula" and what do they include? A literature review Martin Johnson
  • What's in a name? Are surnames derived from trades and occupations associated with lower GCSE scores? Joanna Williamson, Tom Bramley
  • Research News Lisa Bowett
Teacher workload and wellbeing during the lockdown in England: insights from a teacher diary study.
Johnson, M. & Coleman, V. (2022, September 6-8). Teacher workload and wellbeing during the lockdown in England: insights from a teacher diary study. [Paper presentation]. Annual conference of the British Educational Research Association, University of Liverpool, UK.
Development challenges in challenging contexts: A 3-stage curriculum framework design approach for Education in Emergencies

Johnson, M., Fitzsimons, S. & Coleman, V. (2022). Development challenges in challenging contexts: A 3-stage curriculum framework design approach for Education in Emergencies. Prospects.

2021

Assessment Literacy – How does being an examiner enhance teachers’ understanding of assessment?
Coleman, V. & Johnson, M. (2021, November 3-5). Assessment Literacy – How does being an examiner enhance teachers’ understanding of assessment? [Paper presentation]. Annual conference of the Association for Educational Assessment – Europe (AEA-Europe), Dublin, Republic of Ireland (online).
Teachers in the Pandemic: Practices, Equity, and Wellbeing
Johnson, M., and Coleman, T. (2021). Teachers in the Pandemic: Practices, Equity, and Wellbeing. Cambridge University Press & Assessment Research Report.
Why decolonisation should start with teacher training.
Johnson, M. & Mouthaan, M. (2021) Why decolonisation should start with teacher training. Public Sector Focus, 35 (July/August), 78-79.
Decolonising the curriculum: the importance of teacher training and development.
Johnson, M. & Mouthaan, M. (2021) Decolonising the curriculum: the importance of teacher training and development. Race Matters/Runnymede Trust Blog
Design for learning in uncertain contexts: developing a maths curriculum framework for emergency situations
Johnson, M., Horsman, R. & Macey, D. (2021) Design for learning in uncertain contexts: developing a maths curriculum framework for emergency situations. Educational Designer 4(14).
Early policy response to COVID-19 in education—A comparative case study of the UK countries

Mouthaan, M., Johnson, M., Greatorex, J., Coleman, V., and Fitzsimons, S. (2021). Early policy response to COVID-19 in education—A comparative case study of the UK countries. Research Matters: A Cambridge Assessment publication, 31, 51-67.

Inspired by the work of David Raffe and his co-authors who set out the positive benefits gained from comparing the policies of “the UK home nations” in an article published in 1999, researchers in the Education and Curriculum Team launched a project in early 2020 that we called Curriculum Watch. The aim of this project was to collate a literature and documents database of education and curriculum policies, research and analyses from across the four countries of the United Kingdom (UK).

In this article, we draw on our literature database to make sense of the rapid changes in education policy that occurred in the early stages of the COVID-19 pandemic in the four UK nations of England, Scotland, Wales and Northern Ireland. We analyse some of the key areas of UK policy formation and content (in relation to curriculum, pedagogy and assessment) that we observed during the first six months of the unfolding pandemic. In addition, we reiterate the clear benefits of using comparative research methods in the UK context: our research findings support the idea that closeness of national contexts offers the opportunity for evidence exchange and policy learning in education.

Research Matters 31: Spring 2021
  • Foreword Tim Oates, CBE
  • Editorial Tom Bramley
  • Attitudes to fair assessment in the light of COVID-19 Stuart Shaw, Isabel Nisbet
  • On using generosity to combat unreliability Tom Benton
  • A guide to what happened with Vocational and Technical Qualifications in summer 2020 Sarah Mattey
  • Early policy response to COVID-19 in education—A comparative case study of the UK countries Melissa Mouthaan, Martin Johnson, Jackie Greatorex, Tori Coleman, Sinead Fitzsimons
  • Generation Covid and the impact of lockdown Gill Elliott
  • Disruption to school examinations in our past Gillian Cooke, Gill Elliott
  • Research News Anouk Peigne
How well do we understand wellbeing?
Johnson, M., Coleman, T., Suto, I. & Lauder, K. (2021) How well do we understand wellbeing? Centre for Evaluation and Monitoring Blog
Seeing what they think: using concept maps to explore educators’ Assessment Literacy.
Johnson, M. & Coleman, T. (2021) Seeing what they think: using concept maps to explore educators’ Assessment Literacy. MAXQDA Research Blog

2020

How Collaborative Project Development Theory Can Be Used to Provide Guidance for International Curriculum Partnerships
Fitzsimons, S. and Johnson, M. (2020) How Collaborative Project Development Theory Can Be Used to Provide Guidance for International Curriculum Partnerships. International Dialogues on Education: Past and Present 7(2), 24-39.
Context matters—Adaptation guidance for developing a local curriculum from an international curriculum framework

Fitzsimons, S., Coleman, V., Greatorex, J., Salem, H., and Johnson, M. (2020). Context matters—Adaptation guidance for developing a local curriculum from an international curriculum framework. Research Matters: A Cambridge Assessment publication, 30, 12-18.

The Learning Passport (LP) is a collaborative project between the University of Cambridge, UNICEF and Microsoft, which aims to support the UNICEF goal of providing quality education provision for children and youth whose education has been disrupted by crisis or disaster. A core component of this project is a curriculum framework for Mathematics, Science and Literacy which supports educators working in emergency contexts. This framework provides a broad outline of the essential content progressions that should be incorporated into a curriculum to support quality learning in each subject area, and is intended to act as a blueprint for localised curriculum development across a variety of contexts. To support educators in the development of this localised curriculum an LP Adaptation Guidance document was also created. This document provides guidance on several factors that local curriculum developers should consider before using the LP Curriculum Framework for their own curriculum development process. This article discusses how key areas within the LP Adaptation Guidance have broader relevance beyond education in emergencies, highlighting that the challenges that exist within some of the most deprived educational contexts have applicability in all contexts.

Research Matters 30: Autumn 2020
  • Foreword Tim Oates, CBE
  • Editorial Tom Bramley
  • A New Cambridge Assessment Archive Collection Exploring Cambridge English Exams in Germany and England in JPLO Gillian Cooke
  • Perspectives on curriculum design: comparing the spiral and the network models Jo Ireland, Melissa Mouthaan
  • Context matters—Adaptation guidance for developing a local curriculum from an international curriculum framework Sinead Fitzsimons, Victoria Coleman, Jackie Greatorex, Hiba Salem, Martin Johnson
  • Setting and reviewing questions on-screen: issues and challenges Victoria Crisp, Stuart Shaw
  • A way of using taxonomies to demonstrate that applied qualifications and curricula cover multiple domains of knowledge Irenka Suto, Jackie Greatorex, Sylvia Vitello, Simon Child
  • Research News Anouk Peigne
Out of their heads: using concept maps to elicit teacher-examiners’ assessment knowledge
Johnson, M. and Coleman, V. (2020). Out of their heads: using concept maps to elicit teacher-examiners’ assessment knowledge. International Journal of Research & Method in Education (ahead of print).
The Learning Passport: Curriculum Framework (Maths, Science, Literacy).
Cambridge Assessment. (2020). The Learning Passport: Curriculum Framework (Maths, Science, Literacy). Cambridge, UK: Cambridge Assessment.
The Learning Passport Research and Recommendations Report.
Cambridge University Press & Cambridge Assessment. (2019). The Learning Passport Research and Recommendations Report: Summary of Findings. Cambridge, UK: Cambridge University Press & Cambridge Assessment. 

2019

Getting out of their heads – using concept maps to elicit teachers’ assessment literacy
Johnson, M. & Coleman, V. (2019, November 16-19). Getting out of their heads – using concept maps to elicit teachers’ assessment literacy [Paper presentation]. Annual conference of the Association for Educational Assessment – Europe (AEA-Europe), Lisbon, Portugal.
Research Matters 28: Autumn 2019
  • Foreword Tim Oates, CBE
  • Editorial Tom Bramley
  • Which is better: one experienced marker or many inexperienced markers? Tom Benton
  • "Learning progressions": A historical and theoretical discussion Tom Gallacher, Martin Johnson
  • The impact of A Level subject choice and students' background characteristics on Higher Education participation Carmen Vidal Rodeiro
  • Studying English and Mathematics at Level 2 post-16: issues and challenges Jo Ireland
  • Methods used by teachers to predict final A Level grades for their students Tim Gill
  • Research News David Beauchamp
"Learning Progressions": A historical and theoretical discussion

Gallacher, T. and Johnson, M. (2019). "Learning progressions": A historical and theoretical discussion. Research Matters: A Cambridge Assessment publication, 28, 10-16.

‘Learning Progressions’ are a relatively recent approach that describes the progression that can be expected of learners through their education. A Learning Progressions framework can also be used to support teaching and learning, assessment, and curriculum design. To benefit teaching and learning, a Learning Progressions approach aims to provide detailed instruction on the optimal order for presenting material within a subject. To support assessment, it aims to provide a framework for comparing different learners so that the results of such comparisons are useful to learners. To support curriculum design, it aims to provide a method of refining the material presented to learners. This article outlines the specific theory of learning that underpins the Learning Progressions approach, and then explores a series of simplifications that are inherent to it, in order to evaluate its suitability for supporting the design aims outlined above.

What Is Computer-Based Testing Washback, How Can It Be Evaluated, And How Can This Support Practitioner Research?

Johnson, M. and Shaw, S. D. (2019). What Is Computer-Based Testing Washback, How Can It Be Evaluated, And How Can This Support Practitioner Research? Journal of Further and Higher Education. 43(9), 1255-1270.

Development Challenges in Challenging Contexts: A story of EiE curriculum framework development.
Johnson, M., Coleman, V. & Fitzsimons, S. (2019, September 16-19). Development Challenges in Challenging Contexts: A story of EiE curriculum framework development. [Paper presentation]. Annual conference of the International Society for Design and Development in Education (ISDDE), Pittsburgh, USA.
Learning to think alike: Using Sociocultural Discourse Analysis to explore examiners' standardised professional discourse
Johnson, M. and Mercer, N. (2019). Learning to think alike: Using Sociocultural Discourse Analysis to explore examiners' standardised professional discourse. Presented at the Journal of Vocational Education and Training International Conference, University of Oxford, UK, 28-30 June 2019.
A culture of question writing: Professional examination question writers’ practices
Johnson, M. and Rushton, N. (2019). A culture of question writing: Professional examination question writers’ practices.  Educational Research, 61(2), 197-213.
Using sociocultural discourse analysis to analyse professional discourse
Johnson, M. and Mercer, N. (2019). Using sociocultural discourse analysis to analyse professional discourse. Learning, Culture and Social Interaction, 21, 267-277.
A question of quality: Conceptualisations of quality in the context of educational test questions

Crisp, V., Johnson, M. and Constantinou, F. (2019) A question of quality: Conceptualisations of quality in the context of educational test questions. Research in Education, 105 (1), 18-41.

2018

Articulation Work: How do senior examiners construct feedback to encourage both examiner alignment and examiner development?

Johnson, M. (2018). Articulation Work: How do senior examiners construct feedback to encourage both examiner alignment and examiner development? Research Matters: A Cambridge Assessment publication, 26, 9-14.

This is a study of the marking feedback given to a group of examiners by their Team Leaders (more senior examiners who oversee and monitor the quality of examiner marking in their team). This feedback has an important quality assurance function but also has a developmental dimension, allowing less senior examiners to gain insights into the thinking of more senior examiners. When looked at from this perspective, marking feedback supports a form of examiner professional learning.

This study set out to look at this area of examiner practice in detail. To do this, I captured and analysed a set of feedback interactions involving 30 examiners across three Advanced level General Certificate of Education subjects. For my analysis, I used a mixture of learning theory and sociological theory to explore how the feedback was being used and how it attained its dual goals of examiner monitoring and examiner development.

Learning to think alike: A study of professional examiners' feedback interactions in a UK Qualification Awarding Organisation
Johnson, M. (2018). Learning to think alike: A study of professional examiners' feedback interactions in a UK Qualification Awarding Organisation. Presented at the 9th international conference of the EARLI SIG 14 - Learning and Professional Development, Geneva, Switzerland, 12-14 September 2018.
A review of instruments for assessing complex vocational competence

Greatorex, J., Johnson, M. and Coleman, V. (2017). A review of instruments for assessing complex vocational competence. Research Matters: A Cambridge Assessment publication, 23, 35-42.

The aim of the research was to explore the measurement qualities of checklists and Global Rating Scales (GRS) in the context of assessing complex competence. Firstly, we reviewed the literature about the affordances of human judgement and the mechanical combination of human judgements. Secondly, we reviewed examples of checklists and GRS which are used to assess complex competence in highly regarded professions. These examples served to contextualise and elucidate assessment matters. Thirdly, we compiled research evidence from the outcomes of systematic reviews which compared the advantages and disadvantages of checklists and GRS. Together, the evidence provides a nuanced and firm basis for conclusions. Overall, the literature shows that mechanical combination can outperform the human integration of evidence when assessing complex competence, and that a good use of human judgements is therefore in making decisions about individual traits, which are then mechanically combined. The weight of evidence suggests that GRS generally achieve better reliability and validity than checklists, but that a high-quality checklist is better than a poor-quality GRS. The review is a reminder that including assessors in the process of designing assessment instruments can help to maximise manageability.

2017

A culture of question writing: How do question writers compose examination questions in an examination paper?
Johnson, M. and Rushton, N. (2017). Presented at the 18th annual AEA Europe conference, Prague, 9-11 November 2017.
What is effective feedback in a professional learning context? A study of how examination markers feedback to each other on their marking performance
Johnson, M. (2017). Presented at the annual conference of the British Educational Research Association, University of Sussex, Brighton, UK, 5-7 September 2017.
More like work or more like school? Insights into learning cultures from a study of skatepark users
Johnson, M. and Oates, T. (2017). Presented at the Journal of Vocational Education and Training International Conference, University of Oxford, UK, 7-9 July 2017
Multiple voices in tests: towards a macro theory of test writing
Constantinou, F., Crisp, V. and Johnson, M. (2017).  Multiple voices in tests: towards a macro theory of test writing.  Cambridge Journal of Education, 48(8), 411-426.
How do question writers compose external examination questions? Question writing as a socio-cognitive process
Johnson, M., Constantinou, F. and Crisp, V. (2017). How do question writers compose external examination questions? Question writing as a socio-cognitive process. British Educational Research Journal (BERJ). 43(4), 700-719.

2016

Making Sense of a Learning Space: How Freestyle Scooter-riders Learn in a Skate Park
Johnson, M. and Oates, T. (2016). Informal Learning Review, 140, 17-21.
The challenges of researching digital technology use: examples from an assessment context
Johnson, M. (2016). International Journal of e-Assessment, 1(2), 1-10.
Reading between the lines: exploring methods for analysing professional examiner feedback discourse
Johnson, M. (2017). Reading between the lines: exploring methods for analysing professional examiner feedback discourse. International Journal of Research & Method in Education, 40(5), 456-470.
Feedback effectiveness in professional learning contexts
Johnson, M. (2016). Review of Education, 4(2), 195-229.
How do question writers compose examination questions? Question writing as a socio-cognitive process
Johnson, M., Constantinou, F. and Crisp, V. (2016). Paper presented at the AEA-Europe annual conference, Limassol, Cyprus, 3-5 November 2016
'Question quality': The concept of quality in the context of exam questions
Crisp, V., Johnson, M. and Constantinou, F. (2016). Paper presented at the AEA-Europe annual conference, Limassol, Cyprus, 3-5 November 2016
Writing questions for examination papers: a creative process?
Constantinou, F., Crisp, V. and Johnson, M. (2016). Paper presented at the 8th Biennial Conference of the European Association for Research in Learning and Instruction (EARLI) SIG 1 - Assessment and Evaluation, Munich, Germany, 24-26 August 2016
Researching effective feedback in a professional learning context
Johnson, M. (2016). Paper presented at the 7th Nordic Conference on Cultural and Activity Research, Helsingør, Denmark, 16-18 June 2016
All in good time: Influences on team leaders’ communication choices when giving feedback to examiners

Johnson, M. (2016). All in good time: Influences on team leaders’ communication choices when giving feedback to examiners. Research Matters: A Cambridge Assessment publication, 21, 28-33.

During the standardisation and live marking phases of an examination session it is increasingly common for team leaders and examiners to interact via a digital marking environment. This environment supports a number of quality assurance functions. Team leaders can see examiners’ script and mark submissions in real time, and they can compare examiners’ marks with preordained definitive marks on special monitoring scripts to check marking accuracy. The digital marking system also allows team leaders to give examiners feedback on their marking. This article focuses on feedback practices, using observation, interview and survey data from 22 team leaders and six examiners to explore the rationales and communication choices involved in such practices. The analyses suggest that the objective of giving feedback is to construct messages that allow examiners insights into a team leader’s thinking, and this interaction is central to the development of an examiner’s understanding of how to interpret and apply mark schemes in accordance with their team leader’s views. This article discusses how the concept of synchrony underpins the feedback practices of team leaders and examiners, ensuring that the participants develop a shared focus so that both feedback message intention and reception are aligned.

2015

Finding the common ground: Teachers' and employers' representations of English in an assessment context
Child, S., Johnson, M., Mehta, S. and Charles, A. (2015).  English in Education, 49(2), 150-166.
Reading between the lines: exploring the characteristics of feedback that support examiners’ professional knowledge building
Johnson, M. (2015) Paper presented at the British Educational Research Association (BERA) conference, Belfast, 15-17 September 2015
Articulation work: Insights into examiners' expertise from their remote feedback sessions
Johnson, M. (2015). Communication & Language at work. 1(4), 28-52
Assessment, aim and actuality: insights from teachers in England about the validity of a new language assessment model
Johnson, M., Mehta, S. and Rushton, N. (2015). Pedagogies: An International Journal. 10(2), 128-148.

2014

Assessment for Learning in International Contexts: exploring shared and divergent dimensions in teacher values and practices
Warwick P., Shaw, S. D. and Johnson, M. (2014). Assessment for Learning in International Contexts: exploring shared and divergent dimensions in teacher values and practices. The Curriculum Journal, 25(4), 1-31.
Insights into contextualised learning
Johnson, M (2014). Insights into contextualised learning: how do professional examiners construct shared understanding through feedback? E-Learning and Digital Media, 11(4), 363-378.
A case study of inter-examiner feedback from a UK context
Johnson, M (2014). A case study of inter-examiner feedback from a UK context: Mixing research methods to gain insights into situated learning interactions. Formation et pratiques d’enseignement en questions, 17, 67-88.

2013

'Seeing what they say' - Mapping the characteristics of effective remote feedback

Johnson, M. (2013) Paper presented at the European Conference on Computer Supported Cooperative Work, 21-25 September 2013.

Can you dig it?
Johnson, M. and Lewis, C. (2013). Can you dig it? Developing an approach to validly assessing diverse skills in an archaeological context. Journal of Vocational Education and Training, 65(2), 177-192.
Cambridge Assessment Qualitative Research Methods Reading Group

Johnson, M. (2013). Cambridge Assessment Qualitative Research Methods Reading Group. Research Matters: A Cambridge Assessment publication, 15, 38.

This article is an update on the status of a research-based reading group that was formed with the intention of sharing methods-related expertise within the Cambridge Assessment group. Since 2011, a series of Research Division-based reading groups have been organised. The remit of the group was initially to bring together researchers from across the Cambridge Assessment group to look at a variety of different qualitative research methods. The initiative was considered a useful way of sharing expertise amongst colleagues, as well as an important opportunity to raise awareness of the ways of using qualitative research methods in Cambridge Assessment’s own research.

Assessment for Learning in International Contexts (ALIC): understanding values and practices across diverse contexts

Shaw, S., Johnson, M. and Warwick, P. (2013). Assessment for Learning in International Contexts (ALIC): understanding values and practices across diverse contexts. Research Matters: A Cambridge Assessment publication, 15, 17-28.

The Assessment for Learning in International Contexts (ALIC) project sought to extend knowledge around teachers’ understandings of Assessment for Learning (AfL). Using a modified version of a survey devised by James and Pedder for use with teachers in England, evidence was gathered about the assessment practices that were highly valued by teachers across international contexts (Argentina, India, Indonesia, Saudi Arabia, and Nigeria). The extent of congruence between these values and teachers’ reported classroom practices was then explored. In very broad terms, the items most valued by the teachers in this study demonstrate the considerable value placed upon practices linked positively to formative assessment principles and strategies. Certainly it seems that teachers have a particular concern with learning more about student learning and with promoting the development of pupil agency in assessment and learning.

2012

The Assessment for Learning in International Contexts (ALIC) Research Project
Shaw, S., Johnson, M. and Warwick, P. (2012) Research Intelligence 119, 14-15
Interpreting examiners’ annotations on examination scripts: a sociocultural analysis
Johnson, M. and Shaw, S. (2012) Irish Educational Studies 31(4), 467-485
Feedback as scaffolding: Senior Examiner monitoring processes and their effects on examiner marking
Johnson, M. and Black, B. (2012) Research in Post-Compulsory Education 17(4), 391-407
Essay Marking on Screen: Factsheet 3
Cognitive workload is greater on screen
Essay Marking on Screen: Factsheet 2
Marking behaviours differ on screen
Essay Marking on Screen: Factsheet 1
Essay marking accuracy is reliable across modes
The effects of features of examination questions on the performance of students with dyslexia
Crisp, V., Johnson, M. and Novakovic, N. (2012) British Educational Research Journal, 38(5), 813-839.
What's going on? Analysing visual data to understand context-based decision-making processes
Johnson, M. and Black, B. (2012) International Journal of Research & Method in Education, 35(3), 243-250
Technologically mediated communication: methods for exploring examiners’ real-time feedback interactions
Johnson, M. and Black, B. (2012) EARLI Special Interest Group 17 (Qualitative and Quantitative Approaches to Learning and Instruction) conference on Mixed Methods in Educational Interactions, Saxion University, Deventer, September 2012
Feedback as scaffolding: Senior Examiner monitoring processes and their effects on examiner marking
Johnson, M. and Black, B. (2012) British Educational Research Association Annual Conference, University of Manchester, September 2012
A review of the uses of the Kelly's Repertory Grid method in educational assessment and comparability research studies
Johnson, M. and Nádas, R. (2012) Educational Research and Evaluation, 18(5), 425-440
Extended essay marking on screen: is examiner marking accuracy influenced by marking mode?
Johnson, M., Hopkin, R., Shiell, H. and Bell, J.F. (2012) Educational Research and Evaluation: An International Journal on Theory and Practice, 18, 2, 107-124
Marking extended essays on screen: exploring the link between marking processes and comprehension.
Johnson, M., Hopkin, R. and Shiell, H. (2012) E-Learning and Digital Media

2011

Extended essay marking on screen: does marking mode influence marking outcomes and processes?
Johnson, M., Hopkin, R., Shiell, H. and Bell, J. F. (2011). Paper presented at the British Educational Research Association annual conference, University of London Institute of Education, September 2011.
Extended essay marking on screen: Does marking mode influence marking outcomes and processes?

Shiell, H., Johnson, M., Hopkin, R., Nadas, R. and Bell, J. (2011). Extended essay marking on screen: Does marking mode influence marking outcomes and processes? Research Matters: A Cambridge University Press & Assessment publication, A selection of articles (2011), 14-19. First published in Research Matters, Issue 7, January 2009.

Research into comparisons between how people read texts on paper and computer screen suggests that the medium in which a text is read might influence the way that a reader comprehends that text. This is because some of the reading behaviours that support comprehension building, such as seamless navigation and annotation of text, are not easily replicated on screen.

Additional research also suggests that reading long texts can be more cognitively demanding on screen, and that this extra demand can have a detrimental effect on how readers comprehend longer texts. In the context of examination marking, there might be concerns that such a mode-related effect might lead to essays being marked less accurately when marked on screen compared with when they are marked on paper.

To investigate further the potential links between marking mode and the outcomes and processes of extended essay marking, the current project replicated an earlier study (Johnson and Nádas, 2009), replacing GCSE essays with longer Advanced GCE essays. The current project considered three broad areas of enquiry, exploring mode-related influences on (i) marking outcomes, (ii) manual marking processes and (iii) cognitive marking processes.

'Can you dig it?': developing an approach to validly assessing diverse skills in an archaeological context
Johnson, M. and Lewis, C. (2011). Paper presented at the Journal of Vocational Education and Training International conference, Oxford, July 2011.
Evaluating the CRAS framework: Development and recommendations

Johnson, M. and Mehta, S. (2011). Evaluating the CRAS framework: Development and recommendations. Research Matters: A Cambridge Assessment publication, 12, 27-33.

This article reviews conceptual issues surrounding comparisons of demand through a critical evaluation of the CRAS (Complexity-Resources-Abstractness-Strategy) framework (Pollitt, Hughes, Ahmed, Fisher-Hoch and Bramley, 1998).

The article outlines the origins of the CRAS framework in the scale of cognitive demand (Edwards and Dall’Alba, 1981). The characteristics of the CRAS framework are then outlined, with attention being drawn to the assumptions that underlie these characteristic features. The article culminates in a set of recommendations and guidance that are relevant for potential users of the CRAS framework.

Extended essay marking on screen: Does marking mode influence marking outcomes and processes?

Shiell, H., Johnson, M., Hopkin, R., Nadas, R. and Bell, J. (2011). Extended essay marking on screen: Does marking mode influence marking outcomes and processes? Research Matters: A Cambridge Assessment publication, 11, 2-7

Extended essay marking: Does the transition from paper to screen influence examiners' cognitive workload?
Johnson, M., Hopkin, R. and Shiell, H. (2011) The International Journal of e-assessment, 1, 1, 1-25

2010

Marking essays on screen: an investigation into the reliability of marking extended subjective texts
Johnson, M., Nadas, R. and Bell, J.F. (2010) British Journal of Educational Technology, 41, 5, 814-826
Towards an understanding of the impact of annotations on returned exam scripts

Johnson, M. and Shaw, S. (2010). Towards an understanding of the impact of annotations on returned exam scripts. Research Matters: A Cambridge Assessment publication, 10, 16-21.

There has been little empirical study of practices around scripts returned to centres. Returned scripts often include information from examiners about the performance being assessed. As well as the total score given for the performance, additional information is carried in the annotations left on the script by the marking examiner.

Examiners’ annotations have been the subject of a number of research studies (Crisp and Johnson, 2007; Johnson and Shaw, 2008; Johnson and Nádas, 2009), but as far as we know there has been no research into how this information is used by centres or candidates, or whether it has any influence on future teaching and learning. This study set out to look at how teachers and students interact with examiners’ annotations on scripts.

This study used survey and interview methods to explore:
 
1. How do teachers and centres use annotations?
2. What is the scale of such use?
3. What importance is attached to the annotations?
4. What factors might influence the interpretation of the annotations?

School Based Assessment in International Practice
Johnson, M. and Burdett, N. (2010) Problems in Modern Education, 4, 64-73
Marking essays on screen and on paper
Johnson, M., Nadas, R. and Green, S. (2010) Education Journal, 121, 39-41
Intention, interpretation and implementation: some paradoxes of Assessment for Learning across educational contexts
Johnson, M. and Burdett, N. (2010) Research in Comparative and International Education, 5, 2, 122-130

2009

Marginalised behaviour: digital annotations, spatial encoding and the implications for reading comprehension
Johnson, M. and Nadas, R. (2009) Learning, Media and Technology, 34, 4, 323-336
An exploration of the effect of pre-release examination materials on classroom practice in the UK
Johnson, M. and Crisp, V. (2009) Research in Education, 82, 47-59
An investigation into marker reliability and other qualitative aspects of on-screen essay marking
Johnson, M., Nadas, R. and Shiell, H. (2009) British Educational Research Association (BERA) Annual Conference
An investigation into marker reliability and some qualitative aspects of on-screen marking

Johnson, M. and Nadas, R. (2009). An investigation into marker reliability and some qualitative aspects of on-screen marking. Research Matters: A Cambridge Assessment publication, 8, 2-7.

There is a growing body of research literature considering how the mode of assessment, either computer- or paper-based, might affect candidates’ performances (Paek, 2005). In contrast, comparatively little of this literature shifts the focus of attention to those making assessment judgements, or considers issues of assessor consistency when dealing with extended textual answers in different modes.

This study involved 12 examiners marking 90 GCSE English Literature essays on paper and on screen, and considered six questions:

1. Does mode affect marker reliability?
2. Construct validity – do examiners consider different features of the essays when marking in different modes?
3. Is mental workload greater for marking on screen?
4. Is spatial encoding influenced by mode?
5. Is navigation influenced by mode?
6. Is ‘active reading’ influenced by mode?

2008

A case of positive washback: an exploration of the effect of pre-release examination materials on classroom practice: ECER abstract
Johnson, M. & Crisp, V. (2008) European Conference on Educational Research (ECER), Gothenburg
Annotating to comprehend: a marginalised activity?

Johnson, M. and Shaw, S. (2008). Annotating to comprehend: a marginalised activity? Research Matters: A Cambridge Assessment publication, 6, 19-24.

One of the important premises underlying this article is that the cognitive processes involved in reading can play a significant role in assessment judgements. Although we acknowledge that not all assessments of performance rely on assessors appraising written texts, many tests use written evidence as an indicator of performance. As a result, it is important to consider the role of assessors’ comprehension building when reading candidates’ textual responses, particularly where candidates are offered a greater freedom in determining the form and scope of their responses.

This paper brings together empirical and theoretical literature on linguistics and annotation practices, and suggests that a critical link exists between annotating and reading activities. By making the different functions of annotation explicit, the paper primarily aims to highlight the impact of annotating on assessor comprehension.

'3Rs' of assessment research: Respect, Relationships and Responsibility – what do they have to do with research methods?

Johnson, M. (2008). '3Rs' of assessment research: Respect, Relationships and Responsibility – what do they have to do with research methods? Research Matters: A Cambridge Assessment publication, 6, 2-4.

This article focuses on the merits and challenges of using qualitative research methods, and how these can contribute positively to the study of assessment. Based on a presentation from a methods-related research seminar, this paper appeals for research engagement with those areas where assessment affects the lives of others. This appeal means not only asking the difficult questions but also having the appropriate methodologies to try to answer them. The paper goes on to champion the strengths of mixed methods approaches that allow for triangulation, complementarity (where findings gained through one method offer insights into other findings) and expansion (of the breadth and scope of the research beyond initial findings).

Holistic judgement of a borderline vocationally-related portfolio: a study of some influencing factors

Johnson, M. (2008). Holistic judgement of a borderline vocationally-related portfolio: a study of some influencing factors. Research Matters: A Cambridge Assessment publication, 6, 16-19.

The assessment of a large portfolio of mainly textual evidence demands that an assessor accommodate a great deal of information. This comprehension process is influenced by the linear nature of the reading process, which leads to the gradual construction of a mental representation of the text in the head of the reader (Johnson-Laird, 1983).

Understanding how assessors work with portfolios also requires us to consider how assessors integrate and combine different aspects of an holistic performance into a final judgement. Sanderson (2001) suggests that the social context of the assessor is important to consider since it recognises their participation in a community of practice (Wenger, 1998) and constitutes an ‘outer frame’ for their activity.

This study sought to explore issues of consistent assessor judgement by gathering data about individual assessors’ cognitive activity as well as the socio-contextual features in which their practices were undertaken. It focused on an OCR Nationals unit in Health and Social Care (Level 2). Six assessors were asked to ‘think aloud’ whilst they judged the unit. This commentary was then transcribed into a verbal protocol and analysed with qualitative text analysis software. A modified Kelly’s Repertory Grid (KRG) interview technique was also used to gather data about different assessors’ perceptions of constructs within the same assessment criteria.

Exploring assessor consistency in a Health and Social Care qualification using a sociocultural perspective
Johnson, M. (2008) Journal of Vocational Education and Training, 60, 2, 173-187
Grading in competence-based qualifications – is it desirable and how might it affect validity?
Johnson, M. (2008) Journal of Further and Higher Education, 32, 2, 175-184
School-based assessment in international practice

Johnson, M. and Burdett, N. (2008). School-based assessment in international practice. Research Matters: A Cambridge Assessment publication, 5, 24-29.

The term ‘school-based assessment’ (SBA) can conjure up diverse and not necessarily synonymous meanings which often include forms of ongoing and continual classroom assessment of a formative nature. This article attempts to clarify how, and why, SBA has been successfully introduced in various contexts and the importance of the context in its success or otherwise. It reviews SBA research literature and lists some of the advantages of SBA, some of the reservations about using SBA, and some principles about when and why SBA should be used.

Judging Text Presented on Screen: implications for validity
Johnson, M. and Greatorex, J. (2008) E-Learning, 5, 1, 40-50

2007

The use of annotations in examination marking: opening a window into markers’ minds
Crisp, V. and Johnson, M. (2007) British Educational Research Journal, 33(6), 943–961
Assessors’ holistic judgements about borderline performances: some influencing factors
Johnson, M. and Greatorex, J. (2007) British Educational Research Association (BERA) Annual Conference
The effects of features of GCSE questions on the performance of students with dyslexia
Crisp, V., Johnson, M. and Novakovic, N. (2007) British Educational Research Association (BERA) Annual Conference
Does the anticipation of a merit grade motivate vocational test-takers?
Johnson, M. (2007) Research in Post-Compulsory Education, 12, 2, 159-179
Is passing just enough? Some issues to consider in grading competence-based assessments

Johnson, M. (2007). Is passing just enough? Some issues to consider in grading competence-based assessments. Research Matters: A Cambridge Assessment publication, 3, 27-30.

Competence-based assessment involves judgements about whether candidates are competent or not. For a variety of historical reasons, competence-based assessment has had an ambivalent relationship with grading (i.e., identifying different levels of competence), although it is accepted by some that ‘grading is a reality’ (Thomson, Saunders and Foyster, 2001, p.4). The question of grading in competence-based qualifications is particularly important in the light of recent national and international moves towards developing unified frameworks for linking qualifications. This article uses validity as a basis for discussing some of the issues that surround the grading of competence-based assessments and is structured around 10 key points.

Grading, motivation and vocational assessment
Johnson, M. (2007) The Journal of Vocational Education and Training International Conference, University of Oxford

2006

Vocational review 2002-2006: Issues of validity, reliability and accessibility
Johnson, M. (2006) British Educational Research Association (BERA) Annual Conference
Examiners’ annotations: practice and purpose

Crisp, V. and Johnson, M. (2006). Examiners’ annotations: practice and purpose. Research Matters: A Cambridge Assessment publication, 2, 11-14.

The processes of reading and writing are recognised to be inextricably intertwined. Writing helps readers to manage the cognitive demands of processing a text (e.g., O’Hara, 1996; Benson, 2001). Examiners annotate scripts whilst marking (e.g., underlining, circling, using abbreviations or making comments), and this may reflect the cognitive support for comprehension building that annotations can provide. There is also some existing evidence that annotations might act as a communicative device in relation to accountability, and that annotating might have a positive influence on markers’ perceptions and feelings of efficacy.

This research investigated the use of annotations during marking and the role that annotations might be playing in the marking process. Six mathematics GCSE examiners and six business studies GCSE examiners who had previously been standardised to mark the paper were recruited. Examiners initially marked ten scripts which were then reviewed by their Team Leader. Examiners then marked a further 46 (Business Studies) or 40 (Mathematics) scripts.

The examiners later attended individual meetings with researchers. The session began with each examiner marking a small number of new scripts to re-familiarise themselves with the examination paper and mark scheme. A researcher then observed each examiner as they continued marking a few further scripts. Each examiner was interviewed about their use of annotations.

The findings portray a clear sense that markers in both subjects believed that annotating performed two distinct functions. The first appeared to be justificatory, communicating the reasons for their marking decisions to others. This mirrors the statutory requirements for awarding bodies to establish transparent, accountable procedures which ensure quality, consistency, accuracy and fairness. The second purpose was to support their thinking and marking decisions. In addition to helping markers with administrative aspects of marking (for example, keeping a running tally of marks), there are claims that annotations also support higher order reading comprehension processes.

A review of vocational research in the UK 2002-2006: measurement and accessibility issues.
Johnson, M. (2006) International Journal of Training Research, 4, 2, 48-71
On-line Mathematics Assessment: The Impact of Mode on Performance and Question Answering Strategies
Johnson, M. and Green, S. (2006) The Journal of Technology, Learning and Assessment, 4, 5

2005

Judging learners’ work on screen. How valid and fair are assessment judgements?
Johnson, M. and Greatorex, J. (2005) British Educational Research Association (BERA) Annual Conference
The use of annotations in examination marking: opening a window into markers' minds
Crisp, V. and Johnson, M. (2005) British Educational Research Association (BERA) Annual Conference
Concepts of Difficulty: A Child's Eye View
Johnson, M. (2005) In: M. Pandis, A. Ward and S. R. Mathews (Eds.), Reading, Writing, Thinking. 199-207. Newark, DE: International Reading Association

2004

On-line assessment: the impact of mode on students’ strategies, perceptions and behaviours
Johnson, M. and Green, S. (2004) British Educational Research Association (BERA) Annual Conference
On-line assessment: the impact of mode on student performance
Johnson, M. and Green, S. (2004) British Educational Research Association (BERA) Annual Conference

2003

Concepts of difficulty - a child's eye view
Johnson, M. (2003) British Educational Research Association (BERA) Annual Conference
Changes in Key Stage Two Writing from 1995 to 2002
Green, S., Johnson, M., O’Donovan, N. and Sutton, P. (2003) United Kingdom Reading Association Conference

2002

What makes a good writing stimulus?
Johnson, M. (2002) British Educational Research Association (BERA) Annual Conference
Comparability Study of Pupils' Writing from Different Key Stages
Green, S., Pollitt, A., Johnson, M. and Sutton, P. (2002) British Educational Research Association (BERA) Annual Conference

Research Matters

Research Matters is our free biannual publication which allows us to share our assessment research, in a range of fields, with the wider assessment community.