
The ‘jagged frontier’ of GenAI
Studies have found that GenAI has a ‘jagged frontier’ (Dell’Acqua et al., 2023) – it is better at some things than others. But the nature of this jagged frontier (or jagged profile) is changing as developers respond to weaknesses in the technology. The effectiveness of a GenAI response also depends heavily on how the user frames their questions. It is now harder to get GenAI tools to exhibit their ‘classic’ weaknesses of:
- Making stuff up (‘hallucinating’) – though problems definitely persist due to how the tools are trained.
- Making errors in their approximation of ‘reasoning’.
- Losing focus / accuracy in longer tasks.
- Creating images with impossible/strange features – though problems definitely persist.
- Creating bland text with formulaic or unusual phrasing (overuse of ‘delve’ etc.).
Biases inherent in the training data (the outputs of our societies) and a tendency toward ‘digital sycophancy’ are ongoing concerns. Digital sycophancy occurs where ‘models tend to output responses that conform to their user’s views, even if incorrect’ (Kwik, 2025, p. 467).
Uncertain impacts on learners
Some speculate that GenAI capabilities have ‘plateaued’, despite the noises coming from big tech companies, but even in their current form the impact on student capabilities and learning remains unclear. The tendency of learners to ‘cognitively offload’ tasks to AI is a current topic of research and debate.
As limitations of these tools are patched in response to the way we use them, it’s possible that use of GenAI will foster increasingly jagged profiles in our abilities. If GenAI tools can summarise, compare and critically analyse texts to a sufficient standard, will its users start to lose these skills? What new skills might be acquired or required?
It’s clear that most students (92%!) are using GenAI tools in some way (HEPI, 2025). Regardless of the potential affordances of GenAI, cognitive scientists agree that if we want our students to learn things well, there needs to be an element of cognitive struggle involved. And to improve their skills to an advanced level, learners need deliberate practice. How do we think about this in our own teaching and learning?
Bite-sized task
This task considers the need to balance skills development in students against the affordances and drawbacks of GenAI tools.
One possible way of approaching this is to consider a two-stage process for teaching students particular skills in the context of GenAI. Set up classes so that students first:
- Learn the underlying theory and practice of skill X without use of GenAI, then
- Collectively experiment with and critically evaluate the usefulness of GenAI in terms of skill X
How might this process work in your own context?
Step 1 – learn
Think about the skills and capabilities which you want students on your programme/module to develop. Consider overall programme or module learning outcomes, those for particular classes, or preparation for formative (developmental) and summative assessment.
Try to answer these questions in relation to your module and discipline:
- To what extent do some skills seem essential for students to master independently of GenAI?
- What knowledge and skills do students need to plan their approaches to research or problem-solving tasks, and therefore to produce effective prompts for GenAI?
- What do students need to understand in order to evaluate the outputs of GenAI on tasks in your discipline?
- Evaluating and verifying sources
- Checking accuracy of information
- Identifying bias
- Resisting digital sycophancy
Step 2 – do
Apply the answers to the questions above to your planning for a particular class, lecture or assessment.
Make a 2-step plan which first develops an underpinning (human!) skill, then explores the affordances and risks of using GenAI in this area.
Step 3 – reflect
Extend this thinking across a series or sequence of classes, or to module / programme level.
Can you draw a ‘thread’ which combines skills development activities, deliberate practice, and timely engagement with GenAI across a module or programme?
Join the conversation
Post your thoughts on the weekly Teams post to join the conversation.
Further links
Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper (24-013).
Kwik, J. (2025). Digital Yes-Men: How to Deal With Sycophantic Military AI? Global Policy, 16(3), 467-473. https://doi.org/10.1111/1758-5899.70042
Contributor biography
Dr Steve White is a Senior Teaching Fellow (Education Development) in the University of Southampton Business School. He’s worked on various digital learning projects over the years and like everyone else, he’s trying to get his head around what GenAI means for learning and teaching in higher education.
© 2025. This work is openly licensed via CC BY-NC-SA
