
One area where AI is likely to bring efficiencies to education is marking and feedback. Proponents of AI and automation in marking and feedback suggest that AI could speed up the process, ensure consistency, and potentially offer rich, varied, personalised feedback. However, students have raised concerns that greater use of AI in marking and feedback could limit their interaction with a human tutor and depersonalise their learning experience.
What aspects should I consider?
While our understanding of the impact of AI on teaching and learning develops, it is sensible to be cautious about the use of AI for marking and feedback. Here are some aspects to consider:
- Contextualise your use of AI within a holistic view of your teaching/assessment. Does using AI to assist you in marking/feedback allow you to invest in time for more meaningful engagement with students about feedback and how to use it?
- Be transparent about how you are using AI and what the ‘added value’ to students is in doing so. This is consistent with responsible AI use more generally. In the context of marking, it means explaining to students how AI will mark work, what rubrics or pre-input information it will work to, how marks will be checked or moderated, and what students should do if they are concerned about their marks.
- If you intend to use AI to support marking, you must use Copilot. UoS supports the use of Microsoft Copilot, which has institutional ‘guardrails’ ensuring that input data is not used to train a model, archived, or shared. If you share student work with an open AI tool, such as Claude or ChatGPT, you are infringing the student’s copyright. Work submitted to Large Language Models (LLMs) can potentially be analysed, stored and used by others.
- You must retain accountability for your marks and feedback, so all AI-generated marking should be checked or moderated. AI gets things wrong, even when you preload it with information to work from. It often produces unusual wording or structures, which can make it evident that you have used AI and diminish your personal ‘voice.’ It also tends to work at a generic, non-detailed level, so without editing or detailed prompting its feedback may not capture your unique voice, the insights gained from your personal experience, or the context of learning.
Bite-sized task
Automated ways of providing marks and feedback are not new. However, AI offers potential to enhance and assist the giving of marks and feedback that can integrate with how we want to work, as educators.
In this task, you will consider some example scenarios of when AI-assisted marking might be used effectively and reflect on their use in practice.
Step 1 – learn
Look at this brief snapshot of findings from ‘Riding the Tiger of AI Feedback,’ a survey of 7,000 students in Australia.
Look at these scenarios of effective practice in AI-assisted marking [internal Sharepoint link].
Step 2 – do
What do these two resources suggest to us about how staff and students might feel about the use of AI to assist in offering marking and feedback?
Think about your own module or programme and your own practices in relation to marking and feedback.
Do you work with marking rubrics and criteria?
How do you usually deliver formative or summative feedback? Are there aspects of the process that could be enhanced using AI? Might this be different for different tasks? How?
How could you experiment safely and responsibly with AI to ensure it is enhancing what you do?
Step 3 – reflect
The idea of using AI for marking and feedback can be contentious.
How do you feel about using AI to help in this educational process?
How do you think your students might feel?
How might you talk to your students about the use of AI to assist in marking/feedback and build trust?
Join the conversation
Post your thoughts on the weekly Teams post to join the conversation.
Further links
Nazaretsky, T., Mejia-Domenzain, P., Swamy, V., Frej, J., & Käser, T. (2024). AI or human? Evaluating student feedback perceptions in higher education. In European Conference on Technology Enhanced Learning (pp. 284–298). Cham: Springer Nature Switzerland.
Barrett, A., & Pack, A. (2023). Not quite eye to A.I.: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. International Journal of Educational Technology in Higher Education, 20, 59.
Contributor biography
Kate Borthwick is Professor of Digital Education in Languages, Cultures and Linguistics, in the Faculty of Arts and Humanities. She is the Lead for AI in Education at the University and chair of the University Digital Education Advisory Group. She is Director of the University open online course programme and is an award-winning lecturer and learning designer.
