Beyond the buzz: how AI can be a coach, not a competitor, in the university classroom – Times Higher Education


Placing teaching tasks along a spectrum between AI and human strengths can help university educators make use of the best of both worlds
The advent of ChatGPT and other generative AI tools ignited debate about the use of artificial intelligence (AI) across the higher education landscape. Concerns about academic integrity, job displacement and the erosion of human connection have dominated discussions. Amid the noise, a crucial question remains: how can educators harness AI to enhance, rather than hinder, the teaching and learning experience?
The AI-teacher teaching tasks spectrum is a framework that delineates the roles of AI and human educators in the classroom. It categorises teaching tasks, ensuring that technology complements rather than replaces human instruction and fosters a collaborative relationship between educators, students and AI tools. Our model was built through interviews with experts and students and refined through a design-based research process.
And it couldn’t be more timely. The Australian Universities Accord calls for the sector to lift participation in tertiary education among the working-age population to 80 per cent by 2050, supported by scalable, inclusive and digitally enabled teaching. The accord, like other global education reform efforts, points towards a digital-first future, but achieving this vision will require more than infrastructure. It will demand pedagogies that combine the strengths of humans and machines.
But what sets our human-machine spectrum apart from other approaches? Many AI-in-education models focus on what technology can do; few ask what it should do in relation to human teaching roles.
The framework places teaching tasks along a spectrum based on their suitability to AI or human execution. Routine and administrative tasks, such as grading or scheduling, fall on the AI end, while complex, empathetic interactions, like mentoring or providing emotional support, remain firmly within the human domain. This delineation empowers educators to allocate responsibilities effectively, leveraging AI’s strengths while preserving the human elements of teaching. 
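As an illustration only, the idea of placing tasks along a continuum could be sketched as a simple mapping from teaching tasks to a position between the AI and human ends. The task names and scores below are hypothetical examples, not part of the published framework:

```python
# Hypothetical sketch: teaching tasks placed on a 0.0 (AI end) to 1.0
# (human end) spectrum. Tasks and positions are illustrative only.
TASK_SPECTRUM = {
    "grading_objective_quizzes": 0.1,   # routine, rule-based -> AI end
    "scheduling":                0.1,
    "summarising_readings":      0.3,
    "motivational_check_ins":    0.5,   # transitional zone
    "facilitating_discussion":   0.8,
    "mentoring":                 0.95,  # empathy and judgement -> human end
}

def suitable_for_ai(task, threshold=0.4):
    """A task is a candidate for AI assistance if it sits near the AI end."""
    return TASK_SPECTRUM[task] <= threshold

print(suitable_for_ai("scheduling"))   # True
print(suitable_for_ai("mentoring"))    # False
```

The point of such a mapping is not the numbers themselves but the conversation they force: each task must be explicitly located before AI is given any role in it.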
The approach helps educators focus their energy where it matters most: on tasks that require empathy, judgement and context. While AI can help with summarising text or producing a draft, these tools still lack the capacity to make judgements, so a continuum is useful for deciding where AI can assist in teaching and student support.
We piloted the framework using a chatbot that acted as a motivational coach. While some of the chatbot’s functions, such as offering motivational prompts and encouraging reflection, may appear to fall at the human end of the AI-teacher teaching task spectrum, these were intentionally designed not to replace teacher-led inspiration or pastoral care. Instead, the chatbot functioned at the transitional zone of the spectrum, acting as a scaffold rather than a substitute. Its prompts were non-evaluative, supportive and grounded in pedagogical models such as guided reflection and Bloom’s taxonomy, offering encouragement without judgement. In this way, the chatbot extended support in low-risk, routine ways, enabling teachers to focus on more complex, contextual and emotionally nuanced tasks that require deep human judgement and presence. 

This design aligns with the spectrum’s principle of leveraging AI for task augmentation rather than automation, affirming the teacher’s central role while exploring how AI can gently nudge students towards self-regulated learning and resilience.
What set this intervention apart was its coaching-style dialogue; it asked students how they were feeling, prompted them to think about their learning strategies and encouraged goal setting and perseverance. Students reported feeling more confident in managing their workloads and expressed appreciation for the chatbot’s non-evaluative tone, which contrasted with traditional assessment-focused interactions. 
The chatbot was designed to sit at the boundary between AI-supported and human-led tasks. For example, while it offered motivational nudges and reflective prompts, it never judged student performance or made evaluative decisions. Instead, it adopted a “coach, not player” stance deliberately formulated to enhance, not replace, the human relationship.
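To make the “coach, not player” stance concrete, one way to enforce it is an explicit system prompt plus a simple filter that replaces evaluative language with a reflective prompt. This is a hypothetical sketch of the idea, not the prompt or code used in our pilot:

```python
# Hypothetical sketch of "coach, not player" guard rails; not the pilot's code.
COACH_SYSTEM_PROMPT = (
    "You are a motivational coach, not a teacher or assessor. "
    "Ask students how they feel, prompt reflection on learning strategies, "
    "and encourage goal setting. Never grade, rank, or judge their work."
)

# Evaluative words the coach should never emit; a draft reply containing one
# is swapped for a neutral reflective prompt instead.
EVALUATIVE_TERMS = {"fail", "grade", "mark", "score", "wrong"}

def keep_non_evaluative(draft_reply: str) -> str:
    words = {w.strip(".,!?").lower() for w in draft_reply.split()}
    if words & EVALUATIVE_TERMS:
        return "What part of this week's work felt hardest, and why?"
    return draft_reply
```

A word-list filter of this kind is crude, but it captures the design principle: the boundary between supportive and evaluative talk is enforced by construction, not left to the model's discretion.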
Far from replacing teachers, the chatbot’s presence appeared to free educators to focus on deeper, high-impact interactions: facilitating critical discussions, designing new curricula, talking with students face to face and mentoring students facing challenges. The bot provided timely, personalised check-ins that many students said helped them feel “seen” and “less alone” in their studies, even though they knew it was a bot. Its consistent presence, non-judgemental tone and focus on their progress created a sense of support that often felt more immediate and accessible than traditional channels. In essence, the tool didn’t just support academic outcomes; it enhanced students’ sense of belonging in a hybrid learning environment. Reception of the tool has been widely positive.
However, the sector is arguably still in its early days. The global sector is moving quickly but not always carefully, with Steven Hill from Walbrook Institute London urging caution about allowing technology to do what only teachers can. Questions about how to keep AI use responsible and ethical, and how to ensure access for all students, continue to hang over the sector.
Institutions can start thinking about how AI can be integrated. Mapping teaching tasks against a framework such as the AI-teacher teaching task spectrum may clarify where AI can meaningfully assist. Avoiding using AI where human capacities such as judgement, ethics or empathy are required is essential, as is framing AI tools clearly for students. In our case, positioning the chatbot as a motivational coach, not a teacher or assessor, helped build trust and uptake.
Our pilot demonstrates that thoughtfully designed AI can strengthen, not supplant, the human heart of higher education. By positioning our chatbot at the transitional zone between the two, we showed how low-risk, coaching-style prompts can lift students’ confidence and sense of belonging while freeing academics for the richer work of mentoring, feedback and curriculum design.
The future of teaching isn’t machine versus human. It’s machines working with humans to support our learners.
Meena Jha is head of the technology and pedagogy cluster CML-NET in the School of Engineering and Technology, and Kwong Nui Sim is an adjunct associate professor, both at Central Queensland University. Michael Cowling is a professor in computing technologies in the STEM College and director of the Hub for Apple Platform Innovation at RMIT University, Australia. Josiah Koh is associate director of online learning and teaching delivery at Western Sydney University.
