Teacher-Student Trust Is Eroding Because of AI, But It Doesn't Have To

[Image: Traditional teacher and robot]

Plagiarism Concerns Persist

Over this past semester, we've spoken to hundreds of teachers about artificial intelligence and its impact on education. Unsurprisingly, we've received a lot of questions about plagiarism and cheating: how students can complete assignments using AI, how to stop them from doing so, and, most of all, how to catch them when they do. These conversations inevitably turn to the use of AI detection tools. We then deliver the bad news: tools that claim to detect AI-written work are notoriously ineffective. They are prone to producing both false negative and false positive results, and they can be easily evaded with minimal effort on the student's part.

Some teachers have turned to other methods of detection, like checking version histories in Google Docs or requiring students to write in a document that records their typing. The logic here is that if a student pastes in a large piece of text all at once, or types abnormally fast, the teacher can guess that they are copying work from elsewhere. From a technological standpoint, these methods are far from foolproof, because another tool could easily be created that mimics human typing speeds and patterns.
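To make that limitation concrete, here is a minimal sketch (in Python) of the kind of heuristic these monitoring approaches rely on. The event format, thresholds, and function names are our own illustrative assumptions, not the internals of Google Docs or any real monitoring product:

```python
from dataclasses import dataclass

# A minimal sketch of the paste/typing-speed heuristic described above.
# The Edit format, thresholds, and names are illustrative assumptions,
# not the behavior of any specific tool.

@dataclass
class Edit:
    timestamp: float    # seconds since the document was opened
    chars_added: int    # characters inserted by this edit

PASTE_THRESHOLD = 200   # a single edit this large is treated as a paste
MAX_HUMAN_CPS = 15      # sustained chars/second above this looks automated

def flag_suspicious_edits(edits: list[Edit]) -> list[str]:
    """Flag edits that look like pasting or abnormally fast typing."""
    flags = []
    prev_time = 0.0
    for e in edits:
        elapsed = e.timestamp - prev_time
        if e.chars_added >= PASTE_THRESHOLD:
            flags.append(f"t={e.timestamp:.0f}s: {e.chars_added} chars in one edit (paste?)")
        elif elapsed > 0 and e.chars_added / elapsed > MAX_HUMAN_CPS:
            flags.append(f"t={e.timestamp:.0f}s: {e.chars_added / elapsed:.0f} chars/sec (too fast?)")
        prev_time = e.timestamp
    return flags

# The obvious counter: a script that "replays" AI-generated text a few
# characters at a time, pausing at human-plausible intervals, trips neither check.
```

Anything that feeds text in slowly enough, whether a careful student or a simple replay script, sails under both thresholds, which is precisely the weakness described above.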

But the issue here is much larger than the limitations of current technologies.

The Bigger Problem

These conversations have led us to a couple of startling conclusions. First, AI technologies are eroding trust between teachers and students. Teachers fear that cheating will become more rampant because chatbots have made it easier. Students feel like their teachers assume everyone is cheating and fear their hard work will go unrecognized. Second, this eroding trust is hindering students' individuality, creativity, and academic growth.

Let us explain:

A student who prefers to handwrite their work before typing it up, whether because handwriting stimulates their thinking, they struggle with typing, or they share a device with a sibling or parent, could be accused of cheating for "typing too fast" when they finally type out their essay for submission.

A student who prefers to draft in a separate document, because working in one the teacher can see causes anxiety, could be accused of cheating because they pasted their entire essay in at once.

A student who prefers to work in an app other than the one required for submission will have the legitimacy of their work questioned.

Some of these seem like small concerns. If a student says they prefer to type in Microsoft Word documents, a teacher may think "just do it in Google Docs." But isn't part of school supposed to be about figuring out how you learn and work best? If a teacher insists there's only one correct way to complete an assignment, then only the students who feel most comfortable with that method will thrive, and we may never see what the others are really capable of.

And the issue of waning trust may go even deeper. If students feel like their teachers’ main goal is to catch them doing something wrong, they may feel less motivated to produce their best work.

Think of two possible student reactions:

Student A has never used AI and always puts 100% into their work, but feels like their teachers just assume everyone is cheating. Feeling that their hard work is going unappreciated, they slowly stop working as hard.

Student B treats it like a game. They complete the assignment using AI and then spend hours on TikTok watching videos that teach them exactly how to avoid being detected. In the end, they haven't learned the material, but their detection-evasion skills are much stronger.

What Now?

We already know that a punitive environment hinders learning. Now, the advent of AI technologies like chatbots has created a new conversation around school rules, monitoring students, and appropriate punishments.

We also know that AI detection tools, in addition to being ineffective, are time-consuming to use. And obviously, most teachers would much rather read the words their students chose to put down on paper than watch screen recordings of a paper being typed. But if we're putting the bulk of our energy into policing the way students work, how much energy will we have left to actually grade their assignments?

Not all hope is lost here. We are by no means suggesting that standards of academic integrity should be lowered or that there are no mechanisms available for combating AI misuse. Also, new research has shown that cheating has not increased overall since the release of ChatGPT in late 2022.

Some other good news is that the types of assignments students can't simply complete using AI (those that involve in-class collaboration, presentations, and high-level problem solving) are also the types of assignments that will best prepare them to live and work in a world with AI.

Lastly, mutual trust can be reestablished between teachers and students by implementing honor codes and initiating open dialogue. Students are more likely to follow rules they had a hand in creating, and honor codes have been shown to be effective in reducing instances of cheating. Open dialogue can also help demystify AI for both students and teachers: by acknowledging why students may be tempted to use AI and showing examples of its strengths and weaknesses, teachers can better equip students to make informed decisions about their own AI use.
