
If we ask generative AI a question, it produces a response within seconds. That response is detailed, draws on a large dataset and can be rapidly refined and expanded through our own questioning or instructions. For example, if I ask Copilot to produce a research design for a Master's dissertation in a particular discipline, complete with references, relevant theoretical frameworks and appropriate research methods for a 20,000-word piece, it will do so in a matter of moments.
Yet a research design is usually the culmination of weeks or years of research and thinking. GenAI offers the possibility of jumping to the end of a task and giving you the output without you having to take the steps of learning in between.
This has led some commentators and researchers to characterize the use of AI as ‘cognitive offloading’ or ‘cognitive delegation’: getting AI to think for us. While this has potential benefits for productivity and efficiency, there are serious implications for education and knowledge development in general. How can our students progress if they repeatedly use AI to retrieve foundational knowledge rather than discovering it themselves? How can we retain the ‘challenge’ that developing expertise requires if GenAI so readily supplies information and answers?
What does research say?
As yet, there is no conclusive body of evidence, but research is underway into how use of AI can impact cognitive engagement and development. Some recent findings are:
- Over-reliance on AI can hinder active engagement in learning and lead to superficial understanding or biased perspectives
- Cognitive offloading to AI can diminish complex, critical thinking
- Use of AI can sometimes lead to a misplaced over-confidence in knowledge (thinking you know or understand something when you don’t)
- Use of AI can foster an overly trusting relationship with AI (thinking it ‘knows’ everything better than you)
- Use of AI can reduce the sense of ‘ownership’ in work (and therefore knowledge, pride and responsibility)
(Fan et al., 2025; Gerlich, 2025; Jošt et al., 2024; Kosmyna et al., 2025)
As educators, what can we do?
- Assess where the value of AI for your discipline/module/programme sits in terms of enhancing productivity and efficiency. Are there some tasks that AI is particularly useful for and can enhance learning? Are there other tasks/areas where basic skills and knowledge need to be acquired and demonstrated for progress to take place?
- Learn about GenAI and the impact it is having on the learning you expect to happen in your module/programme. Experiment with AI and talk to your students about how AI is assisting them in learning on your programme.
- Articulate the learning outcomes and intentions of your programme/module clearly. This might include discussion of why AI is not recommended for use at certain times.
- Include elements of critical AI literacy skills in the context of your discipline to illustrate how AI can both help and hinder the development of subject knowledge.
Bite-sized task
In this activity, you will critique an AI-output relevant to your own work or discipline area and consider how you might build AI into your work while retaining challenge and criticality.
Step 1 – learn
Watch this short video (4 mins) that links confidence levels to AI-use:
The paper referred to in the video is: Hao-Ping (Hank) Lee, Advait Sarkar, Lev Tankelevitch, Ian Drosos, Sean Rintel, Richard Banks, and Nicholas Wilson. 2025. The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. In CHI Conference on Human Factors in Computing Systems (CHI ’25), April 26–May 01, 2025, Yokohama, Japan. ACM, New York, NY, USA, 23 pages. https://doi.org/10.1145/3706598.3713778
Step 2 – do
- Think of a task or assignment question that you might set your students or carry out as part of your job. This might be ‘write a report on…’, ‘review an article on…’ or ‘create a research design for…’. Include as much detail as you wish.
- Ask your question to Copilot.
- Critique the response. Is this a good response? Are there mistakes or unnecessary additions? How much further thinking or work would it take to get an excellent response to your question?
- Consider what this might mean for the cognitive development of your students. You are a subject expert, so you can critique the response, benchmark it and assess its usefulness. But what if your students had done the same activity? Would they understand how to contextualise the response? Would they pass an assessment that you have set without developing the independent thinking you aim for? Would they achieve the learning outcomes that you intend?
Step 3 – reflect
Reflect on how this might affect your teaching delivery. Perhaps you can see a way to integrate AI into your tasks or assignments that promotes learning through a critical response to AI output.
Or perhaps you see a need to revise task instructions or assessment design, or to reframe learning outcomes, to ensure learning takes place.
What kind of challenges to cognitive development might over-use of AI present for your discipline or programme?
Share your thoughts on this week’s GenAI Essentials post in Teams and join in the discussion. 
Join the conversation
Post your thoughts on the weekly Teams post to join the conversation.
Further links and references
These articles or reports informed this post – read them in more detail here:
Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., Shen, Y., Li, X., & Gašević, D. (2025). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56, 489–530. https://doi.org/10.1111/bjet.13544
Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15, 6. https://doi.org/10.3390/soc15010006
Jošt, G., Taneski, V., & Karakatič, S. (2024). The Impact of Large Language Models on Programming Education and Student Learning Outcomes. Applied Sciences, 14, 4115. https://doi.org/10.3390/app14104115
Kosmyna, N., Hauptmann, E., Ye, T. Y., Situ, J., Liao, X-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. Pre-print, MIT Media Lab. https://arxiv.org/pdf/2506.08872v1
Contributor biography
Kate Borthwick is Professor of Digital Education in Languages, Cultures and Linguistics, in the Faculty of Arts and Humanities. She is the Lead for AI in Education at the University and Chair of the University Digital Education Advisory Group. She is Director of the University open online course programme and is an award-winning lecturer and learning designer.
© 2025. This work is openly licensed via CC BY-NC-SA
