Research Matters 24

  • Research Matters 24 - Foreword

    Oates, T. (2017). Foreword. Research Matters: A Cambridge Assessment publication, 24, 1.

  • Research Matters 24 - Editorial

    Bramley, T. (2017). Editorial. Research Matters: A Cambridge Assessment publication, 24, 1.

  • Undergraduate Mathematics students’ views of their pre-university mathematical preparation

    Darlington, E. and Bowyer, J. (2017). Undergraduate Mathematics students’ views of their pre-university mathematical preparation. Research Matters: A Cambridge Assessment publication, 24, 2-11.

    In response to planned reforms to A levels, undergraduate mathematicians who had taken AS or A level Further Mathematics were surveyed. A total of 928 mathematics undergraduates at 42 British universities responded to an online questionnaire regarding their experiences of A level Mathematics and Further Mathematics, and their mathematical preparedness for undergraduate study. Students’ responses suggest that Further Mathematics is a worthwhile qualification for undergraduate Mathematics applicants to take, in terms of the mathematical background it provides.

    Participants described Further Pure Mathematics most favourably of all of the optional strands of study within Further Mathematics, with Statistics and Mechanics receiving positive feedback and Decision Mathematics receiving negative feedback. Participants who were not required to have taken Further Mathematics in order to be accepted onto their university course were generally more enthusiastic about their experience of it, and about its usefulness, than those who were required to have taken it. This suggests that it would be beneficial for prospective undergraduate mathematicians to study A level Further Mathematics, regardless of whether or not the universities they apply to require it for entry.

  • Question selection and volatility in schools’ Mathematics GCSE results

    Crawford, C. (2017). Question selection and volatility in schools’ Mathematics GCSE results. Research Matters: A Cambridge Assessment publication, 24, 11-16.

    This research estimated the extent to which volatility in schools’ scores may be attributable to changes in the selection of questions in exam papers. This question was addressed by comparing candidates’ performance on two halves of the same assessment. Once student grades were calculated for each half-test, these were aggregated within each school to form school-level outcomes for each half-test (e.g., percentage of students with a grade C or above). Comparing the variation in schools’ outcomes for their students’ performance on two parts of a single test should give us some idea of the amount of variation in actual year-to-year results that could be due to changes in test questions. This process was applied to a Mathematics GCSE and the results suggest that the exact choice of questions in an exam has only a small impact on school-level results.
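    The half-test comparison described above can be sketched in a few lines. This is a minimal illustration only, assuming simulated candidate data, an arbitrary odd/even split of the questions, and a hypothetical grade-C cut-off; it is not the article's actual data or grading procedure.

    ```python
    import random
    import statistics

    random.seed(1)

    # Hypothetical data: 50 schools, each with 30 candidates sitting a
    # 20-question paper; per-question marks (0-5) are simulated here.
    def simulate_school():
        return [[random.randint(0, 5) for _ in range(20)] for _ in range(30)]

    schools = [simulate_school() for _ in range(50)]

    GRADE_C_CUTOFF = 25  # illustrative threshold for a 10-question half-test

    def pct_grade_c(school, questions):
        """Percentage of a school's candidates at or above the cut-off,
        scoring only the given subset of questions."""
        marks = [sum(cand[q] for q in questions) for cand in school]
        return 100 * sum(m >= GRADE_C_CUTOFF for m in marks) / len(marks)

    odd_half = range(0, 20, 2)   # one 'version' of the paper
    even_half = range(1, 20, 2)  # the other 'version'

    # For each school, compare the outcome on the two half-tests; the spread
    # of these differences indicates how much school-level results could move
    # purely because of the choice of questions.
    diffs = [pct_grade_c(s, odd_half) - pct_grade_c(s, even_half) for s in schools]
    print(round(statistics.stdev(diffs), 2))
    ```

    In the article's terms, a small spread of these between-half differences relative to real year-to-year changes would suggest that question selection explains little of the observed volatility.
    
    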

  • Utilising technology in the assessment of collaboration: A critique of PISA’s collaborative problem-solving tasks

    Shaw, S. and Child, S. (2017). Utilising technology in the assessment of collaboration: A critique of PISA’s collaborative problem-solving tasks. Research Matters: A Cambridge Assessment publication, 24, 17-22.

    This article presents the outcomes of an exercise which we conducted to map the assessment approach of PISA 2015 to pertinent facets of the collaborative process, and recent theoretical developments related to engenderment of collaboration within assessment tasks. PISA’s assessment of collaborative problem-solving was mapped onto six facets of collaboration identified in a recent review of the literature (Child & Shaw, 2016) and five elements of task design that were identified in the previous review as contributing to the optimal engenderment of collaborative activity.

    The mapping approach afforded the opportunity to investigate in detail the advantages and disadvantages of PISA’s approach to the use of technology in their assessment of collaboration. The present article’s critique of PISA could lead to future work that analyses the elements of the process of collaboration that have been targeted effectively, and areas for future improvement. This will be of interest to awarding organisations and others that are looking to develop qualifications in this important twenty-first century skill.

  • Partial absences in GCSE and AS/A level examinations

    Vidal Rodeiro, C. L. (2017). Partial absences in GCSE and AS/A level examinations. Research Matters: A Cambridge Assessment publication, 24, 23-30.

    There are certain situations in which a candidate does not have a mark for a component/unit in a GCSE or AS/A level examination: for example, if they were ill on the day of the exam, if their paper was lost, or if their controlled assessment was invalidated as a result of individual or centre malpractice. Subject to certain rules, the awarding body can calculate an estimated mark for the component/unit with the missing mark to enable the candidate to certificate, rather than having to wait for the next assessment opportunity.

    This article explores the use of statistical methods for handling missing data, specifically regression imputation, to estimate the mark for a missing unit/component in GCSE and AS/A level qualifications. The marks (and grades) obtained in this way are compared with the marks (and grades) obtained applying two different methods currently used by some of the awarding boards in England: the z-score method and the percentile (cum% position) method.
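    The z-score method mentioned above can be illustrated with a small sketch. This assumes the standard form of the approach: the candidate's standing on their completed components, expressed as a mean z-score, is transferred onto the mark distribution of the missing component. All marks and component names below are hypothetical, and the awarding bodies' actual rules include details not shown here.

    ```python
    import statistics

    def zscore_estimate(candidate_marks, cohort_marks, missing_cohort):
        """Estimate a missing component mark from a candidate's z-scores
        on the components they did complete.

        candidate_marks: {component: mark} for components the candidate sat
        cohort_marks:    {component: [marks]} cohort distributions, same keys
        missing_cohort:  [marks] cohort distribution for the missing component
        """
        zs = []
        for comp, mark in candidate_marks.items():
            mu = statistics.mean(cohort_marks[comp])
            sigma = statistics.stdev(cohort_marks[comp])
            zs.append((mark - mu) / sigma)
        z = statistics.mean(zs)  # candidate's average relative standing
        # Map that standing onto the missing component's distribution.
        return statistics.mean(missing_cohort) + z * statistics.stdev(missing_cohort)

    # Hypothetical example: a candidate who scored well above the cohort
    # mean on both completed units.
    estimate = zscore_estimate(
        candidate_marks={"Unit 1": 68, "Unit 2": 72},
        cohort_marks={"Unit 1": [50, 55, 60, 65, 70],
                      "Unit 2": [48, 58, 62, 66, 74]},
        missing_cohort=[45, 52, 60, 63, 70],
    )
    print(round(estimate, 1))
    ```

    Regression imputation, the statistical alternative examined in the article, would instead fit a model predicting the missing component's mark from the observed components across the whole cohort.
    
    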

  • On the reliability of applying educational taxonomies

    Coleman, V. (2017). On the reliability of applying educational taxonomies. Research Matters: A Cambridge Assessment publication, 24, 30-37.

    Educational taxonomies are classification schemes that organise thinking skills according to their level of complexity, providing a unifying framework and common terminology. They can be used to analyse and design educational materials, analyse students’ levels of thinking, and analyse and ensure alignment between learning objectives and corresponding assessment materials. Numerous educational taxonomies have been created, and this article reviews studies that have examined their reliability; Bloom’s taxonomy in particular was frequently used in these studies.

    It was found that there were very few studies specifically examining the reliability of educational taxonomies. Furthermore, where reliability was measured, this was primarily inter-rater reliability with very few studies discussing intra-rater reliability. Many of the studies reviewed provided only limited information about how reliability was calculated and the type of reliability measure used varied greatly between studies.

    Finally, this article also highlights factors that influence reliability and that therefore offer potential avenues for improving reliability when using educational taxonomies, including training and practice, the use of expert raters, and the number of categories in a taxonomy. Overall it was not possible to draw conclusions about the reliability of specific educational taxonomies and it seems that the field would benefit from further targeted studies about their reliability.
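    As an illustration of the kind of inter-rater reliability measure discussed above, a common choice (not necessarily the one used in the studies reviewed) is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The taxonomy levels and classifications below are hypothetical.

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters assigning the same items to categories."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        # Observed proportion of items where the raters agree.
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement, from each rater's marginal category frequencies.
        counts_a = Counter(rater_a)
        counts_b = Counter(rater_b)
        expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
        return (observed - expected) / (1 - expected)

    # Hypothetical example: two raters classifying ten exam questions into
    # three Bloom's-style levels.
    a = ["remember", "remember", "apply", "apply", "analyse",
         "remember", "apply", "analyse", "analyse", "remember"]
    b = ["remember", "apply", "apply", "apply", "analyse",
         "remember", "remember", "analyse", "apply", "remember"]
    print(round(cohens_kappa(a, b), 2))
    ```

    As the article notes, reported values of such coefficients are hard to compare across studies when the number of categories and the calculation method differ.
    
    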

  • How much do I need to write to get top marks?

    Benton, T. (2017). How much do I need to write to get top marks? Research Matters: A Cambridge Assessment publication, 24, 37-40.

    This article looks at the relationship between how much candidates write and the grade they are awarded in an English Literature GCSE examination. Although such analyses are common within computer-based testing, far less has been written about this relationship for traditional exams taken with a pen and paper. This article briefly describes how we estimated word counts based on images of exam scripts, validates the method against a short answer question from a Biology examination, and then uses the method to examine how the length of candidates’ English Literature essays in an exam relates to the grade they were awarded. It shows that candidates awarded a grade A* wrote around 700 words on average in a 45-minute exam - an average rate of 15 words per minute across the period. In contrast, grade E candidates produced around 450 words - an average rate of 10 words per minute. Whilst it cannot be emphasised strongly enough that performance in GCSEs is judged by what students write and not how much, the results of this research may help students facing examinations have a reasonable idea of the kind of length that is generally expected.

  • Research News

    Beauchamp, D., Barden, K., Elliott, G., and Cooke, G. (2017). Research News. Research Matters: A Cambridge Assessment publication, 24, 42-44.

    A summary of recent conferences and seminars, statistics reports, Data Bytes and research articles published since the last issue of Research Matters.

Data Bytes

A regular series of graphics from our research team, highlighting the latest research findings and trends in education and assessment.