Ban it or embrace it: universities grapple with AI

Ban it, embrace it, ignore it, design around it or go back to traditional assessment practices such as invigilation – these are some of the typical responses that higher education institutions globally have adopted in dealing with ChatGPT and other generative artificial intelligence (AI) technologies.

However, according to Unisa's Prof Piera Biccard of the Department of Curriculum and Instructional Studies, selecting only one of these options for teaching and assessment practices might not be the ideal approach.

"I do not believe that we have to select one of these. We need to create spaces where some of these are relevant. There are spaces where you may want to ignore generative AI. There are spaces in your teaching and assessment where you may want and have to ban, invigilate and embrace. We do not have to choose one of these all the time," Biccard said during a panel discussion on the opening day of Unisa's 2023 Open and Distance Learning (ODeL) Conference.

Biccard said that with the advent of generative AI, universities would have to create new assessment landscapes, some of which would encourage the use of AI, while some would ban it. Examples of the latter were situations where original student thought was required, or students were expected to demonstrate a foundational grasp of the field. 

Using the analogy of the mass use of calculators, Biccard said the advent of the calculator had not eliminated the requirement for children to know how to add and subtract. "We created spaces where children would use a calculator because it was relevant, and then we created spaces where they do not use a calculator because we still need to be able to add and subtract."

Biccard, who was part of the panel discussing opportunities and challenges that ChatGPT presents for assessment, said the role of lecturers had become "exponentially more difficult" because of the decisions they would now have to make about assessment. "You are going to have to choose when to use AI and when not, and give reasons why," she said, noting that dialogue with students about the use and value of AI would be important.


The panellists: Prof Jacqueline Batchelor (University of Johannesburg), Prof Moeketsi Letseka (UNESCO Chair on Open Distance and eLearning), Dr Roze Phillips (GIBS Business School), Prof Mpine Makoe (Unisa's Executive Dean, College of Education), Dr Anitia Lubbe (North-West University), Prof Piera Biccard (Unisa), and Dr Denzil Chetty (Session chair and Head: Academic Development Open Virtual Hub, Unisa)

Lecturers need to go back to basics

Fellow panellist Dr Anitia Lubbe of the North-West University's Research Unit for Self-Directed Learning said she was excited about the possibilities that ChatGPT and other generative AI tools presented. "It will kick us into first gear to think about how we are assessing learning," she said, noting that AI was highlighting how universities needed to instil self-directed, lifelong learning skills in students rather than focus only on content, which would require embracing innovative assessment.

The challenge for lecturers would be to redesign assessments while embracing AI, and to figure out how students were engaging and responding. While students had ChatGPT at their fingertips, some university faculty and staff wanted to ban or ignore it. "In that sense, faculty and lecturers are the weakest link," Lubbe said.

The solution, she believed, was for lecturers to "up our level of assessment literacy" and "go back to basics" by focusing on the purpose of the assessment at hand and choosing strategies to match. "In the 21st century, we need to support students to become critical thinkers, to evaluate whatever they perceive from ChatGPT – is it valid, reliable, trustworthy?" 

The ability to evaluate is even more critical given that the world has become a place where everything can be faked, as Dr Roze Phillips of the Gordon Institute of Business Science noted.

Truth or fake?

"The world is full of inaccurate knowledge because ChatGPT and these large language models hallucinate and confabulate, sometimes so badly that we do not know how to pick it up. We need to teach ourselves and the students that we graduate to be able to do exactly that," she said. "In a world where everything can be faked, what is the truth?"

Phillips noted that she seldom heard conversations about the ethics of education and the importance of moral compasses. "With our technologies come new responsibilities," she said, adding that ethics should be taught in every discipline.

Other vital skills students should learn for their future employability are how to ask better questions – asking questions being a "superpower" that humans could soon lose to AI – and the ability to constantly reinvent themselves, since many future jobs have yet to be created.

Phillips said what worried her most about generative AI is that it is now even easier for humans to avoid learning. She made the chilling comment: "If we are going to work like machines, machines will take our jobs. If we are going to learn like machines, we will never have jobs. But if we are going to behave like machines, we will have no humanity." 

On a more positive note, Phillips said machines are not meant to make humans redundant but relatively abundant. "Our technologies can show us what we can do; they will not help us understand what we should do. That is only a human endeavour, and I hope we make the right decisions and choices."

The last speaker, Prof Jacqueline Batchelor of the University of Johannesburg, spoke about the "hype cycle" that accompanies the introduction of new technologies, saying people should not be distracted by the hype around AI. "We know that this hype cycle follows a pattern; it comes and is replaced by something new. We need to become comfortable with a natural evolutionary cycle."

Batchelor said a United Kingdom study had shown that university students were aware of the pitfalls of using AI and did not want to compromise themselves. Therefore, the best assistance lecturing staff could offer students is to guide them about the appropriate use and at-risk use of AI. "It is our responsibility to guide them."


* By Clairwyn Rapley, Directorate Research Support

Publish date: 2023/08/22
