Identifying AI-Generated Plagiarism

Facing the Challenge of ChatGPT in the Classroom

Ryan Edel | 8/25/2023

If there's any question whether a student wrote a paper, you have to answer that question as reliably as you can.  A false accusation of misconduct will seriously damage your relationship with a student, but failing to catch misconduct will encourage a student to continue cheating in the future.

AI platforms like ChatGPT make it dangerously easy for students to submit work that isn't their own, and no amount of investigation can ever "prove" that a paper has been plagiarized from an AI.  Unlike with traditional plagiarism, there is no extant original text to compare the student work against — as an instructor, you'll have to actively investigate any suspected cases of AI misuse.  Here are some tips to help.

Video: Four Alternative Strategies to AI Detectors

Preventing and Identifying AI Misuse

It's well established that AI detectors have a poor track record.  Unlike Turnitin, which can actually link to direct evidence of plagiarism, an AI detector uses language models to extrapolate whether an AI might have written a given sample. As a Scribbr study found, "no tool can provide complete accuracy; the highest accuracy we found was 84% in a premium tool or 68% in the best free tool."

Talk with Your Students Before, During, and After Assignment Submissions

Do you assign a big paper?  Then it's important to explain the nature of AI misuse, provide guidance to help your students write their own work, and then check with your students to ensure the works they've submitted are original.  This conversational approach isn't just good for preventing and detecting AI misuse, though — you should also use it as part of your pedagogy.  This is how you coach students toward becoming better writers.

Be Wary of "Perfect" Writing

No human can write consistently perfect sentences.  Even professionally published works that go through multiple rounds of editing and proofreading will have an occasional typo.  If you read a student paper that has no surface-level errors, then it's possible you're looking at something generated by AI.  However, this is not a smoking gun: a very conscientious student who ethically uses Grammarly (or a similar tool) can submit a work that has almost zero errors.

Check for Fake Sources

Need a smoking gun?  This is it.  If a student submits a paper with fake or nonexistent sources, then you have absolute proof that some kind of academic misconduct has occurred.  AI platforms like ChatGPT often manufacture nonexistent references that sound legitimate — then again, a desperate student might write original work while also inventing sources to make a paper sound more intelligent.  It may not be plagiarism, but it should still be addressed.

So check that the sources exist and that you have access to them.  If you can't access the source, then ask the student to bring you a copy.  Also, bear in mind that access to the source is just as important as the existence of the source.  If a student quotes a source without ever having read it, that's also a form of misconduct.
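
If you're comfortable with a little scripting, you can speed up the first pass of this check.  Below is a minimal Python sketch, purely my own illustration, that looks up each cited title in Crossref's free public database of registered publications.  The sample title is a placeholder standing in for a student's reference list, and Crossref only covers works with registered DOIs, so anything it can't find still needs to be verified by hand.

    # Minimal sketch: look up cited article titles in Crossref to see
    # whether they correspond to real publications.
    # Assumes the "requests" library is installed; the titles list is a
    # placeholder standing in for a student's reference list.
    import requests

    titles = [
        "A Placeholder Article Title From a Student Bibliography",
    ]

    for title in titles:
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": title, "rows": 1},
            timeout=10,
        )
        items = resp.json().get("message", {}).get("items", [])
        if items:
            match = items[0]
            found_title = (match.get("title") or ["(untitled)"])[0]
            doi = match.get("DOI", "no DOI")
            print(f"Closest match for '{title}':")
            print(f"  {found_title} (DOI: {doi})")
        else:
            print(f"No match for '{title}' -- verify this source by hand.")

A "closest match" whose title looks nothing like the citation is exactly the kind of red flag worth raising with the student, just as you would after an empty library search.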

Quiz Your Students On Their Work

Yes, you can do an in-class assignment where you have every student write a short description of what they've turned in.  But this is something that should be done with care, so I've prepared a whole section below...

Erik Ofgang's "How to Prevent ChatGPT Cheating" provides thoughtful discussion for assignments that will encourage deeper thinking and responsible technology use.

In "Three Steps to Prevent ChatGPT Misuse," Spencer Burrows gives three excellent suggestions for complicating your writing prompts to prevent ChatGPT from delivering answers.

Quizzing Your Students

If you're truly concerned that a student didn't write an assignment, you can quiz your student on the contents of their own writing.  If you do this, here are a few recommendations:

Links for Self-Assessment

If you'd like a more thorough approach to having students evaluate their own writing, try these resources for self-assessment:

The University of New South Wales provides a helpful overview with "Student Self-Assessment," including pros and cons.

You can find a deeper dive with "Student Self-Assessment: The Key to Stronger Student Motivation and Higher Achievement" by James H. McMillan and Jessica Hearn (Educational Horizons, vol. 87, no. 1, pp. 40-49, Fall 2008).

Looking to help students transition through the first-year experience?  Sean Kearney recommends using the Authentic Self and Peer Assessment for Learning (ASPAL) model.

The Role of AI Detectors

Yes, You Should Use AI Detectors... but with care.

Despite the bad rap, AI detectors do offer a valuable data point when investigating cases of suspected AI.  Most AI detectors will give a percentage of "human" versus "AI-generated" content.  The percentage works about as well as a weather forecast for rain — a 15% chance of rain means that rain is possible, but unlikely, whereas an 80% chance of rain means that rain is likely, but not guaranteed.  But if the AI detector is giving you 0% or 100%, then that serves as very strong evidence of either calm skies or an e-mail to the Dean of Students office.

Here's the main concern with AI detectors: since they can't possibly give black-and-white certainty of AI involvement, we're left wondering what percentage of "possible AI content" constitutes reasonable cause for investigation.  As a teacher, you have to decide the degree to which they can help.  Personally, I've found AI detectors useful, but they are not a tool of certainty.

Following gut instinct and then using AI detectors, I caught three high school students using AI between January and July 2023.  For two of them, I flagged the concern because the papers didn't at all match the students' writing styles or speech patterns; AI detectors found a high likelihood of AI, and I then verified the situation by talking with each student directly.  The third student, however, provided a more interesting case.  This individual naturally spoke in a rather elevated style, and the AI-generated text read in a way that sounded very much like the student.  But the content seemed off, and I never saw the student working on the paper in class, so I ran the pages through an AI detector.  It indicated a high likelihood of AI use, and it wasn't wrong.

When confronted, two of the students confessed to using AI.  The third denied it, but he had no personal knowledge of the information in the work he had submitted.

In all three of these cases, the AI detector was simply one tool; I would never recommend using an AI detector alone.  I had another case where the AI detector registered "98% AI content" when the words were pretty clearly produced by the student.  Unless, of course, I'm wrong, and there's likely no way I will ever truly know.

AI Detectors Recommended by Scribbr:

This information from Scribbr is current as of June 2023.  Also, remember that no AI detector will ever be 100% reliable — you cannot rely on digital AI detectors alone when it comes to concerns about academic misconduct.

Scribbr's "Best AI Detector | Free & Premium Tools Compared" by Jack Caulfield.  Yes, it does matter which AI detector you use — the worst on this list are only 38% accurate.

Sapling's AI Detector was identified as the best free detector by Scribbr (68% accuracy).  The paid version offers far more checks.

ZeroGPT's AI Detector was close to Sapling's accuracy (64%), though it did produce a false positive.  It does, however, appear to be extremely user friendly.

Why It Matters: Maintaining Integrity

Because of the three students mentioned above, I began a deeper investigation into whether I had other students using AI to commit plagiarism.  Another instructor had also heard rumors that certain students weren't doing their own work, and this led to tension with those students who were genuinely working hard to graduate early.

This created a serious problem: if students believed that my system allowed some students to succeed by cheating without effort, then cheating would become not only common but encouraged.  Rumors alone are enough to undermine the integrity of a system, even when no actual cheating has previously taken place.  But I had found AI misuse.  It was clumsy and easily spotted, but what if I had cases that were less clumsy?

So I ran AI detector spot checks on papers that had been submitted earlier.  Fortunately, I didn't find any more cases of AI; the detectors indicated that the writing was by human hands.  Instead, I found the more typical forms of plagiarism: three cases of students copying material from other websites without attribution.

Honestly, I was pretty surprised — I had assumed that my high school students would have less incentive to cheat.  But the fact that six students had been cheating on their papers was not a secret to the student body.  Basically, one student had gotten away with it for a couple months, and then she gave "advice" to help others do the same.  Most didn't follow that advice, but one did.

In another case, the plagiarism came from a misunderstanding of the nature of writing — one of my students had essentially been taught that research writing meant looking up sources and "patching together" the information from those sources into a paper — her patch writing involved copying over paragraphs from a variety of sources and arranging them into a coherent whole.  No, I'm not saying that any teacher would specifically teach a student to compose an essay this way, but failing to catch and correct this phenomenon is essentially the same as teaching it.  This student learned from prior teachers that a paper with good paragraphs from good sources will get an A.

Clearly, I had to do something.  Given the student population I was working with, it didn't make sense to "drop the hammer" or anything like that — most of my students had never been exposed to a rigorous writing classroom, and most didn't have prior positive experiences with school.  The cheating made sense, but it had to be stopped — and further plagiarism had to be prevented.  So I had the students redo their papers, I ran the revisions through the same plagiarism checks, and everyone continued on with their lives.

But here's the moral: you can't stop academic dishonesty unless you can accurately and consistently determine when it has happened.  Students must know that there is a credible system in place to weed out cheating.  And this isn't simply to ensure that students learn — I have many students who would never cheat under any circumstances, even if they knew they could get away with it.  But if they saw others getting away with it, they would rightfully feel incensed.  Cheated, you might say.  And that kind of justifiable anger creates a very unhealthy atmosphere.

Recommended Links

There's an industry out there trying to prevent academic dishonesty.  Though you might not need their products, they can offer helpful advice regarding dishonesty:

Turnitin has a thorough essay on the importance of academic integrity by Christine Lee.

It's important to build a productive academic community that reduces the pressure to cheat.  ProctorEdu's "Why Students Cheat" by Anton Skshidlevsky offers several good suggestions for fostering a positive rather than punitive atmosphere.

Wonder why students cheat?  Olivia L. Holden, et al., consider this in their peer-reviewed study "Academic Integrity in Online Assessment: A Research Review."

In Summation

AI platforms create greater opportunity for students to get away with cheating.  Unlike traditional plagiarism, AI-enabled plagiarism is harder to detect and almost impossible to prove with absolute certainty.

Regardless of whether or when we use AI to support learning, we must ensure that students don't use these online tools to replace their learning.  We must foster classrooms where students feel comfortable writing their own words, and where they feel confident that every student is held to the same standards of academic rigor.  So although it's hard, we must ensure that we are identifying and addressing misuse of AI when it occurs.