I TEACH AT OUR TOP UNI – AND AI CHEATING IS OUT OF CONTROL
Robert A*

I’ve been a frontline teaching academic at the University of Melbourne for nearly 15 years. I’ve taught close to 2000 students and marked countless assessments.

While the job can be demanding, teaching has been a rewarding career. But a spectre is haunting our classrooms: the spectre of artificial intelligence.

Back in the day, contract cheating – where a student paid a third party to complete their assignment – was the biggest challenge to academic integrity. Nowadays, contract cheaters are out of work. Students are turning to AI to write their essays, and this has become the new norm even when its use is restricted or prohibited.

What is the value of the university in the age of AI? Ideally, university should be a place where people are taught not what to think but how to think. It should be a place where students wrestle with big ideas, learn how to reason and rigorously test evidence. On graduation, they should be equipped to contribute to and enhance society.

Instead, AI chatbots, not Marxist professors, have taken hold of universities. AI is not an impartial arbiter of knowledge. ChatGPT is more likely to reinforce than to challenge liberal bias; Grok’s Freudian slips reveal a model riddled with anti-Semitism; DeepSeek is a loyal rank-and-file member, toeing the Chinese Communist Party line and avoiding questions about its human rights record. When the machine essay mill is pumping out assignments, AI is the seductive force teaching students what to think.

While we know AI cheating is happening, we don’t know how bad it is and we have no concrete way of finding out. Our first line of defence, AI detection software, has lost the arms race and is no longer a deterrent. Recently, I asked ChatGPT to write an essay based on an upcoming assessment brief and uploaded it to Turnitin, our detection tool. It returned a 0 per cent AI score. This is hardly surprising: we already knew the tool wasn’t working, as students have been gaming the system.

Prosecuting a case of academic misconduct is becoming increasingly difficult. Many cases are dismissed at the first stage because the AI detector returns a low score that doesn’t satisfy the threshold set by management. The logic seems to be that we should go for the worst offenders and deal with the rest another way. Even with this approach, each semester the academic integrity team is investigating a record-breaking number of cases.

To deal with the inundation of AI cheating, the University of Melbourne introduced a new process for “lower-risk” academic integrity issues. Lecturers were given discretionary powers to determine “poor academic practice”. Under this policy, essays that look as if they were written by AI but return a 0 per cent detection score can be subject to grade revision. Problem solved, right? Not even close.

Tutors are our second line of defence. They do most of the classroom teaching, mark assessments and flag suspicious papers. But a recent in-house survey found about half of tutors were “slightly” or “not at all” confident in identifying a paper written by AI. Others were only “marginally confident”. This is hardly their fault. They lack experience, and without proper training or reliable detection tools the university is asking a lot of them.

Lecturers are the final line of defence. No offence to my colleagues, but we are not exactly a technologically literate bunch. Some of us know about AI only because of what we read in the paper or what our kids tell us about it.

We have a big problem on our hands: the “unknown-unknown” dilemma. We have an academic workforce that doesn’t know what it doesn’t know. Our defences are down and AI cheaters are walking through the gates on their way to earning degrees.

Soon we will see new cohorts of doctors, lawyers, engineers, teachers and policymakers graduating. When AI can ace assessments, employers and taxpayers have every right to ask who was actually certified: the student or the machine? AI can do many things, but it should have no place in the final evaluation of students.

A wicked problem surely requires a sensible solution. If only. Federal Education Minister Jason Clare has acknowledged the AI challenge but passed the buck to the sector to figure it out. With approval from the regulator, many Australian universities have pivoted from banning AI to integrating it.

The University of Melbourne is moving towards a model where at least 50 per cent of marks in a subject will have to come from assessments done in a secure way (such as supervised exams). The other 50 per cent will be open season for AI abuse.

All subjects will have to be compliant with this model by 2028.

Australian universities have surrendered to the chatbots and are effectively permitting widespread contract cheating by another name. This seriously risks devaluing a university degree. It jeopardises the reputation of Australian universities, our fourth-largest export industry.

There is real danger that universities soon will become expensive credential factories for chatbots, run by other chatbots.

There are many of us in the sector who object to this trend. Not all students are sold on the hype either; many reject the irresponsible use of AI and don’t want to see the critical skills taught at university cheapened by chatbots. Students are rightly asking: if they wanted AI to think for them, why would they attend university at all? Yet policymakers are out of touch with these stakeholders, the people living through this technological change.

What is to be done? The challenge of AI is not a uniquely Australian problem but it may require a uniquely Australian solution. First, universities should urgently abandon the integrated approach and redesign degrees to be genuinely AI-free. This may mean 100 per cent of marks are based on paper exams, debates, oral defences or tutorial activities.

The essay, the staple of higher education for centuries, will have to return to the classroom or perish. Australian universities can then proudly advertise themselves as AI-free and encourage international and domestic talent to study here.

Second, as AI rips through the high school system, the tertiary sector should implement verifiable admission exams. We must ensure that those entering university have the skills required to undertake tertiary study.

Third, there must be priority investment in staff training and professional development to equip teachers for these pedagogical challenges.

Finally, Clare needs to show some leadership and adopt a national, enforceable standard. Techno-capitalism is leading us away from the ideal of the university as a place for free thinking. If independent scholarly inquiry at our universities falls, society will be the biggest loser.

Robert A* is an academic at the University of Melbourne and has written under a pseudonym.