
Tips for rethinking student assessment in the GenAI era
Picture this: a final-year business student types a short prompt into ChatGPT and, within minutes, receives a full, polished business plan. Not long ago, creating that plan would have taken weeks of research, brainstorming and revision. Now, thanks to generative AI tools, students can produce impressive-looking assignments in record time. However, here’s the catch: are they truly learning, or are they just submitting what the machine gives them?

Universities have long relied on essays, reports and business plans to measure students’ learning. The logic was simple: if a student could write a good essay or present a good business plan, they must understand the subject. However, with tools such as ChatGPT and Claude, the line between student work and machine-generated content has become blurred. It is no longer sufficient to look at the final product; we need to ask how the student got there.
In some universities, staff members have noticed sudden jumps in grades for long essays and written projects. The suspicion is that many students used AI tools to draft pitch decks and plans. However, catching this is difficult: even advanced AI detectors often miss subtly edited text or produce false positives.
Given the difficulty of distinguishing AI-generated content from students’ own work, it becomes imperative to redefine our assessment strategies. This entails a shift towards evaluating the learning process rather than just the end product.
How we got there
One solution is to shift the focus from what students submit to their working process in the following ways:
• Step-by-step submissions: This involves breaking large assignments into smaller parts: outlines, early drafts, feedback notes and final versions. In this way, students can show how their thinking develops over time.
• Oral checks and quick interviews: After handing in a business plan, students could have a short 10-minute conversation with their lecturer to explain their choices and defend their ideas. AI tools can help students write; however, they cannot answer follow-up questions or demonstrate personal understanding on a student’s behalf.
• Live, in-class projects: Replace take-home tasks with workshops where students create materials on-site with limited access to external tools. This mimics real-life team projects and reduces over-reliance on AI.
• AI reflection journals: Instead of banning AI, encourage students to use it but make them reflect on how they used it. What worked? What did not? Where did the AI make mistakes? This teaches critical thinking and gives credit for thoughtful use, not just for copying.
Turning challenge into opportunity
If wisely used, GenAI tools can boost learning. Imagine a management class in which students test an AI’s supply chain recommendations and then break down why the suggestions do or do not work in the real world. This pushes students to evaluate the information and not just repeat it.
Assessment rubrics also require updates. A simple and clear guide might look like this:
• 40% for solid, logical arguments
• 30% for showing how ideas developed over time
• 20% for thoughtful reflection on how AI helped or hurt the work
• 10% for clear, effective communication
Such rubrics make expectations transparent and encourage students to see AI as a helper rather than a crutch. Beyond updating rubrics, it is equally important to consider the support system for lecturers and tutors in this new educational landscape.
Supporting teachers and students
Lecturers and tutors also require support. Many staff members are still learning about prompt design, AI limitations and ethical concerns. Universities should offer training workshops to help staff guide students confidently through the new landscape. Shared marking sessions can also help teachers align expectations and exchange tips on identifying authentic work.
Universities will also need to address some of the broader barriers to new ways of assessment, such as:
• Time demands: Process-based tasks may seem to add more grading work, but tools like peer reviews and structured self-assessments can ease the load.
• Access gaps: Not every student has a powerful laptop or a stable internet connection. Universities need to ensure fair access to technology or provide alternative paths.
• Policy gaps: National education bodies are still catching up. In the meantime, universities should set clear, fair guidelines for responsible AI use on their campuses.
Generative AI is reshaping the landscape of higher education. But it does not have to erode learning or academic integrity. With smarter assessment design, we can harness the power of AI while preserving the heart of education: helping students think deeply, work honestly and grow into capable professionals.
Dr Abdullah Ijaz is a lecturer in business management at the School of Business, Operations and Strategy at Greenwich Business School, University of Greenwich, UK. Dr Madiha Shafiq is head of Tech Valley Research Centre, a social enterprise dedicated to transforming learning experiences through digital innovation. Nishwa Ibrar is head of strategy and partnerships at Tech Valley Pakistan.
This article is a commentary. Commentary articles are the opinion of the authors and do not necessarily reflect the views of University World News.