
Lund University Publications


Harnessing the Generative AI Wave Towards Fair and Diverse Higher Education Assessments: A Comprehensive Analysis through an Innovative Lens of Students

Karunaratne, Thashmee; Aghaee, Nam and Ferati, Mexhid (2025) pp. 476–484
Abstract
While Generative Artificial Intelligence (GAI), particularly tools powered by Large Language Models (LLMs), offer benefits in teaching and learning, they also raise critical concerns about academic integrity and fairness in examinations due to their potential for generating educational content. This evolving landscape requires higher education institutions to rethink their assessment models, ensuring they remain robust, inclusive, and aligned with the realities of AI-enhanced learning environments. In this backdrop, this study investigates the practical, GAI-resistant assessment frameworks in higher education. It explores how alternative, skill-focused methods such as oral exams (vivas) and AI-integrated tasks can be included in future assessment models. Central to the study is the understanding of how students perceive current assessments and envision future methods that fairly and effectively measure both knowledge and skills. The empirical investigation is based on a case study at a Swedish university. Research methodologies include a survey questionnaire administered to 30 students enrolled in a semi-theoretical course on innovation and technology, and a future workshop (FW) with 22 of them in five groups. The two research instruments corresponded to answering the two research questions, respectively. The survey results revealed students’ clear concerns about the academic integrity challenges posed by essay and report-based take-home assessments, as well as online quizzes. They also expressed apprehension about the potential impact of relying solely on proctored and supervised exams, highlighting the risk of reducing diversity in assessment methods, and thereby raising red flags for the need for a new and innovative approach to assessment methods that is hardly affected by unauthorised assistance from GAI.
Responses to open survey questions reflected their problem-solving mindset and deep thinking about how cheating can be minimised by increased peer collaboration and solving real problems, contextualised to specific and ongoing learning activities in class. The outcomes of the FW provided insights, such as active learning-based assessments, combined with real-world problem-solving or context-specific question-based assessments. These findings are intended to inform course design, policy-making, and broader discussions on educational reform in the digital age.
author: Karunaratne, Thashmee; Aghaee, Nam; Ferati, Mexhid
publishing date: 2025
type: Chapter in Book/Report/Conference proceeding
publication status: published
host publication: Proceedings of the 24th European Conference on e-Learning
pages: 476–484
publisher: Academic Conferences & Publishing International Ltd
ISBN: 978-1-917204-67-5; 978-1-917204-66-8
DOI: 10.34190/ecel.24.1.4259
language: English
LU publication?: yes
id: f0880449-8879-458c-9b6d-c13d1530ef6b
date added to LUP: 2026-02-15 03:20:37
date last changed: 2026-02-17 08:42:56
@inproceedings{f0880449-8879-458c-9b6d-c13d1530ef6b,
  abstract     = {{While Generative Artificial Intelligence (GAI), particularly tools powered by Large Language Models (LLMs), offer benefits in teaching and learning, they also raise critical concerns about academic integrity and fairness in examinations due to their potential for generating educational content. This evolving landscape requires higher education institutions to rethink their assessment models, ensuring they remain robust, inclusive, and aligned with the realities of AI-enhanced learning environments. In this backdrop, this study investigates the practical, GAI-resistant assessment frameworks in higher education. It explores how alternative, skill-focused methods such as oral exams (vivas) and AI-integrated tasks can be included in future assessment models. Central to the study is the understanding of how students perceive current assessments and envision future methods that fairly and effectively measure both knowledge and skills. The empirical investigation is based on a case study at a Swedish university. Research methodologies include a survey questionnaire administered to 30 students enrolled in a semi-theoretical course on innovation and technology, and a future workshop (FW) with 22 of them in five groups. The two research instruments corresponded to answering the two research questions, respectively. The survey results revealed students’ clear concerns about the academic integrity challenges posed by essay and report-based take-home assessments, as well as online quizzes. They also expressed apprehension about the potential impact of relying solely on proctored and supervised exams, highlighting the risk of reducing diversity in assessment methods, and thereby raising red flags for the need for a new and innovative approach to assessment methods that is hardly affected by unauthorised assistance from GAI.
Responses to open survey questions reflected their problem-solving mindset and deep thinking about how cheating can be minimised by increased peer collaboration and solving real problems, contextualised to specific and ongoing learning activities in class. The outcomes of the FW provided insights, such as active learning-based assessments, combined with real-world problem-solving or context-specific question-based assessments. These findings are intended to inform course design, policy-making, and broader discussions on educational reform in the digital age.}},
  author       = {{Karunaratne, Thashmee and Aghaee, Nam and Ferati, Mexhid}},
  booktitle    = {{Proceedings of the 24th European Conference on e-Learning}},
  isbn         = {{978-1-917204-67-5}},
  language     = {{eng}},
  month        = {{10}},
  pages        = {{476--484}},
  publisher    = {{Academic Conferences \& Publishing International Ltd}},
  title        = {{Harnessing the Generative AI Wave Towards Fair and Diverse Higher Education Assessments: A Comprehensive Analysis through an Innovative Lens of Students}},
  url          = {{http://dx.doi.org/10.34190/ecel.24.1.4259}},
  doi          = {{10.34190/ecel.24.1.4259}},
  year         = {{2025}},
}
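The BibTeX entry above can be cited directly from a LaTeX document. A minimal sketch with biblatex, assuming the entry is saved in a file named `refs.bib` (a hypothetical filename) and the document is compiled with the biber backend:

```latex
% Minimal document citing the LUP record above.
% Assumes the @inproceedings entry is stored in refs.bib.
\documentclass{article}
\usepackage[backend=biber]{biblatex}
\addbibresource{refs.bib}

\begin{document}
Students' perspectives on GAI-resistant assessment have been
studied empirically \autocite{f0880449-8879-458c-9b6d-c13d1530ef6b}.
\printbibliography
\end{document}
```

Note that the citation key is the repository's record id; classic BibTeX styles (e.g. `plainnat` with natbib) would also accept the entry, provided the `&` in the publisher field is escaped as `\&`.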