Robinson, C.L., Hernandez-Martinez, P. and Broughton, S.J., 2012. Mathematics lecturers’ practice and perception of computer-aided assessment. In: P. Iannone and A. Simpson, eds. Mapping University Mathematics Assessment Practices. Norwich: University of East Anglia. Ch. 21, pp. 105-118.
A case study of the issues arising as lecturers use computer-aided assessment in mathematics modules
- CETL-MSOR Conference 2012
- 12-13 July 2012
- University of Sheffield, Sheffield
Recent research has developed a wealth of knowledge on the development of computer-aided assessment (CAA) systems (e.g. Greenhow et al. 2003, Pidcock et al. 2004, Sangwin 2007) and CAA has become a useful tool for lecturers to assess and give immediate feedback to large cohorts of students.
This case study examines the use of a CAA system using the QuestionMark Perception software at a UK university with a strong engineering tradition. This system has been used with around 1000 mathematics and engineering first-year undergraduates each year since the start of its development in 2002.
Though it would appear that the system is firmly established as an efficient teaching and learning resource, it has not been without problems. For example, large group sizes have precluded invigilating the online tests. Also, the questions were developed to test students’ ability to carry out common mathematical procedures, so their conceptual knowledge is not tested to the same extent.
Lecturers have responded to issues such as these and, today, a diverse range of practices aimed at mitigating these issues has emerged. Questionnaires completed by nine lecturers at the target institution highlighted stark differences in approach: from non-invigilated online tests taken by students at the time and location of their choosing, to replacement paper tests that are invigilated and incorporate questions designed to assess conceptual understanding.
Six lecturers took part in follow-up interviews that sought to address the following questions:
- What issues do lecturers identify with the use of CAA?
- How are lecturers responding to these issues?
- What impact will this have on their future use of CAA?
The interviews suggest that these lecturers are experiencing further issues while using CAA. For example, the paper tests delay the return of feedback and require lecturer time to mark. As for the non-invigilated test, the lecturers have no means to monitor the work that students do and have gathered anecdotes of students using mathematical software, groups of friends or prepared solutions in order to gain high marks in the tests.
Further changes have been made in order to mitigate the effect of these issues too. Some lecturers have attempted to develop new CAA questions that seek students’ conceptual understanding; and some lecturers have reduced the allocation of module marks given to CAA to minimise the advantages gained by abusing the system.
This case study aims to serve as a starting point from which to discuss good practice with computer-aided assessment. Moreover it will outline future plans at this university for better addressing some of the issues concerned.
As I lie awake to the sound of mating foxes and postgraduate students (please be reassured that, so far as I can discern, these are purely intra-species relations), I look back at what has happened since my last blog update.
The conference season has taken me to Brighton, Sheffield and Salamanca: to the BSRLM Day Conference, CETL-MSOR Conference and SEFI MWG Seminar, respectively. At each I presented my findings on lecturers’ use of computer-aided assessment.
Since then, I have been continuing with data collection: first interviewing students and, second, interviewing lecturers who do not use CAA. In this time I have also finished a first draft of a thesis chapter, which feels like passing a milestone, despite the shortcomings I see in it.
I managed to squeeze in a holiday to Rome, which was very nice. I particularly liked the Vittorio Emanuele II monument for its ostentatiousness and sheer ridiculousness. It reached a dazzling 37°C while we were there: a feat so far unmatched by the drizzling United Kingdom. We, of course, remain hopeful of a late summer surge!
Broughton, S.J., Robinson, C.L. and Hernandez-Martinez, P., 2012. Lecturers’ beliefs and practices on the use of computer-aided assessment to enhance learning. In: Proceedings of the 16th SEFI MWG Seminar. University of Salamanca, Salamanca, Spain, 28-30 June 2012.
Lecturers’ beliefs and practices on the use of computer-aided assessment to enhance learning
- 16th SEFI MWG Seminar
- 28-30 June 2012
- Universidad de Salamanca, Salamanca, Spain
This project examines the effectiveness of the use of computer-aided assessment (CAA) to enhance the learning of mathematics in one of the largest cohorts of mathematics and engineering undergraduates in the country. The CAA system in use at this university in the United Kingdom provides lecturers with a means to test hundreds of students efficiently. A question bank of thousands of items, with feedback for each item, is available, and dedicated staff prepare and upload tests.
However, despite easing the workload and giving students the opportunity to obtain immediate feedback on their work, many lecturers experience conflict between computer-aided assessment practice and their own assessment beliefs. For example, lecturers may wish to assess conceptual understanding and mathematical competencies, but there are some concerns that CAA questions may assess only procedural understanding. Some lecturers would like to develop new questions but worry that the restrictions imposed by the system narrow the scope of what students can be tested on. Lecturers may wish to encourage collaboration and discussion between students for practice (formative) tests, but what if the students collaborate for real (summative) tests? Should summative tests always be invigilated? Many students can obtain 100% in summative tests, but lecturers are unclear about the level of understanding that students have or what learning has taken place.
In this study, we examine lecturers’ beliefs and practices on the use of computer-aided assessment to enhance learning. Our methodological approach was to administer questionnaires to nine lecturers of mathematics-based first-year modules, followed by interviews with the seven lecturers who volunteered to be interviewed. A number of different practices emerged, and lecturers explained their choices. The interviews explored in more detail the options that lecturers selected in the questionnaire. They were designed to elicit the collaborations and conflicts within this learning community that arise through CAA and to establish lecturers’ views of the effectiveness of CAA at assessing students. Such conflicts give scope for lecturers to change the way they use CAA (Engeström and Sannino 2010). The lecturers described what they believe CAA measures in these students and how compatible these measures are with their ideals.
We present the lecturers’ perceptions of CAA from these questionnaires and interviews. These findings are a progression towards our overarching aims to identify best practices in the delivery of CAA to mathematics and engineering students and to evaluate the effectiveness of CAA at assessing and advancing students, from the perspectives of both the lecturer and the student.
Lecturers’ adaptations to CAA practice
- BSRLM Day Conference
- 9 June 2012
- University of Sussex, Brighton
Computer-aided assessment (CAA) has been used for ten years at a university with one of the largest engineering and mathematics student cohorts in the country. This efficient and time-saving tool for assessing students on the mathematical content in these courses allows lecturers to monitor and record the progress of hundreds of students by selecting from a bank of thousands of questions.
Although this would appear to provide a straightforward means of testing large numbers of students, lecturers have developed diverse practices when using CAA with students. For example, some lecturers invigilate students while they do the online test, while others have replaced the online test with a paper equivalent.
Such changes may be explained by the notion of contradictions proposed by Engeström (2000). For example, some lecturers believe that the questions in the online test are too prescriptive and do not test what is desired. To resolve this conflict, these lecturers may replace the online test with a paper test in order to achieve their assessment goals.
This session will examine the findings from questionnaires and interviews conducted with lecturers of first year mathematics modules at this university. By these methods, lecturers explain how and why they use CAA, the issues they have encountered and how they have adapted their practice to counter them. With activity theory providing a framework with which past changes can be explained, possible future changes in practice will also be discussed.
The inaugural Science Matters conference took place yesterday at Loughborough University. Its aim was to bring together the six departments in the recently formed “School of Science” in one place to talk about research.
We had some excellent keynote speakers, two careers workshops, a keenly contested poster competition and a vibrant group of participants in a discussion group at the end of the day.
Having been a member of the organising committee, I am in some ways quite relieved it is over! However, I was able to enjoy the day, and I am quite proud of the small group of dedicated individuals from different departments who made it pass off with so few problems!
I entered the poster competition, but unfortunately did not win any prizes. A few people (mainly judges) asked a little more about my project, which was helpful, but it’s a shame more didn’t.
I did, however, enjoy the talk given by Professor Sir Michael Brady. In response to a question about the borders between departments and disciplines, he spoke of barriers that exist only in the mind. It made me think more about where my research might lead.
Aside from meeting some familiar faces, I met some new ones: including some prominent researchers that are working with activity theory every day.
Today I was notified that my paper on using focus groups to investigate formative feedback in CAA has been published on the Taylor & Francis website. This is my first peer-reviewed publication.
Naturally, this happened on the day I gave up using Twitter and Facebook for Lent!