
Why human mentorship still matters in the age of AI
Postgraduate supervision is undergoing a profound transformation. Once rooted in face-to-face mentoring and long-established academic rituals, it now finds itself at a crossroads where emerging technologies – particularly artificial intelligence (AI) – are reshaping the ways in which knowledge is guided, produced, and evaluated. This shift is both promising and disconcerting. Institutions are increasingly adopting AI-powered tools that offer instant feedback on writing, suggest relevant literature, and even manage project timelines.
At their best, these technologies reduce administrative burdens and expand access to high-quality academic support. However, beneath the surface lies a more urgent question: what happens to the human essence of supervision when mentorship becomes partially outsourced to machines?
At the core of the traditional supervision model is a deeply relational process. Supervisors are not only content experts but also mentors, moral compasses, and emotional anchors.
They guide students through uncertainty, assist them in discovering their academic voice, and model the discipline-specific norms that define scholarly integrity. These functions are not secondary – they represent the very bedrock upon which doctoral and master's journeys are built.
While AI is rapidly becoming embedded in research training, its integration into supervision carries inherent tensions. On the one hand, automated systems can efficiently detect grammatical issues, assess textual similarity, or parse datasets.
On the other hand, these systems lack the capacity to understand nuance, cultural context, or emotional well-being. A writing assistant may flag a student’s work as problematic, but it cannot discern whether the issue stems from a lack of understanding, linguistic background, or emotional stress. Only a human mentor can provide such insight.
Moreover, the widespread reliance on algorithmic feedback risks diminishing the student’s role as an autonomous creative researcher.
If postgraduate students begin to defer judgement to machines, they may lose the opportunity to grapple with ambiguity – an essential component of scholarly growth. There is also a risk that academic supervision becomes standardised and transactional, rather than dynamic and reflective.
A hybrid path
Supervisors themselves are not immune to these changes. With AI taking over certain routine tasks, some may feel tempted to reduce engagement, relying on automated reports instead of meaningful, dialogical feedback.
This scenario is not hypothetical – it is unfolding in real time across numerous institutions. While efficiency gains are welcome, they must not come at the expense of meaningful intellectual and human connection.
There is, however, a more effective path forward. Rather than perceiving AI as a threat to mentorship, we should consider the concept of hybrid supervision: a thoughtful integration of technological tools with the irreplaceable human capacities of empathy, critical judgement, and ethical guidance. In such models, AI supports the supervisory process but does not replace it.
Imagine a scenario in which a doctoral student utilises AI to receive instant feedback on structure or coherence prior to meeting with their supervisor. The human mentor, informed by this data, engages with the deeper elements – conceptual clarity, argumentation, and ethical considerations.
In this arrangement, AI enhances preparation, yet the core educational experience remains grounded in relational dialogue.
This hybrid model is not only practical – it is inclusive. Students at institutions with fewer resources can access sophisticated AI tools that were once reserved for elite universities. This levels the playing field and expands access to advanced academic support.
However, inclusivity cannot be achieved through access to tools alone. Institutions must also ensure that these technologies are culturally sensitive, linguistically adaptable, and transparent in their functionality. A system that penalises students for using regional idioms or unfamiliar referencing styles can reinforce academic gatekeeping rather than dismantling it.
There are also significant governance issues at stake. Universities must develop policies that clarify how AI is to be employed, who is accountable for its outputs, and how students can challenge automated judgements.
No algorithm should possess the final authority in determining a student’s academic integrity, progress, or potential. Human oversight must remain central to any decision that impacts a learner’s trajectory.
Critical engagement
Training is another critical imperative. Supervisors need to understand not only how AI tools function but also when to rely on them and when to question them.
Students, too, should be educated on how to engage critically with AI-generated suggestions. These conversations must be ongoing and embedded within institutional cultures, rather than relegated to one-time orientation sessions.
We must also ask ourselves what kind of scholars we are trying to cultivate. If postgraduate education is intended to develop thinkers who can navigate complexity, challenge assumptions, and generate original knowledge, then human mentorship is not optional – it is essential.
AI can simulate intelligence, but it cannot instil integrity, empathy, or resilience. These are qualities forged in relationships, not algorithms.
I would therefore argue that postgraduate supervision in the AI era need not become a binary of human versus machine. It can, instead, be a collaborative space where technology enhances learning without displacing the human heart of mentorship.
Nonetheless, the crux of the matter lies in achieving a balance: utilising the strengths of AI – such as speed, consistency, and data analysis – while reaffirming the unique contributions that only humans can provide – namely, guidance, care, and moral imagination.
In conclusion, as universities continue to adapt, the challenge will not be merely technological but also philosophical. Do we perceive education as a process of relational transformation or as a series of automated transactions?
Our response to this question will ultimately shape not only the future of supervision but also that of higher education as a whole.
Professor Bunmi Isaiah Omodan is an NRF-rated researcher and associate professor of Education Management and Leadership at the University of the Free State, South Africa. He has a strong academic record in research, inclusive teaching, transformative leadership, and community engagement. His work explores the intersection of educational leadership, decolonial pedagogy, and critical research methodologies. He has chaired research and postgraduate committees and led initiatives such as the Scholarship of Teaching and Learning (SoTL). He is editor-in-chief of three accredited journals. This article is based on the findings of his recent journal article.
This article is a commentary. Commentary articles are the opinion of the authors only and not their employer and do not necessarily reflect the views of University World News.