
AI tools don’t have to be the enemy of teaching and learning

Now is the time to think through how these new technologies can be improved and used ethically in everyday life, including in teaching and learning.

BY GAVAN P.L. WATSON & SARAH ELAINE EATON | FEB 17 2023

In late 2022, the release of the artificial intelligence (AI) tool ChatGPT was like a work of speculative fiction brought to life. Created by the startup OpenAI, ChatGPT is an example of a large language model (LLM), a type of AI that interprets and generates text based on human prompts. In ChatGPT’s case, users enter prompts in a chat-like interface; the tool “remembers” the context of the conversation and responds to follow-up feedback from the human. In short, using ChatGPT is an iterative, conversational process that generates what appears (to the user) to be unique passages of prose.

From our observations in late 2022, the real “this changes everything” moment for educators was the apparent quality of ChatGPT’s outputs, which give the impression of a distinctive human voice rather than a robot author. While it has been demonstrated that some of the service’s answers can be factually incorrect, ChatGPT’s responses are uniquely generated, even from the same prompt. This has led some observers to suggest it creates “untraceable” work and others to predict the “death of the essay.”

While LLMs may pose a threat to academic integrity, knee-jerk reactions, such as banning the new technology or treating all students who use it as cheaters, are not helpful. Instead, we offer the following position: given that AI tools are becoming widely available, now is the time to think through how these new technologies can be improved and used ethically in everyday life, including in teaching and learning.

Within the classroom context, there could be ways to incorporate AI tools into assessments. Early discussions around incorporating these tools could see students submitting ChatGPT’s output as a “first draft” of a written assignment alongside the final work, together with a short description of how the draft was subsequently revised. Our creativity as educators will remain a skill unmatched by AI, and one we need to continue to draw upon to rethink how we ask students to demonstrate their proficiency with course outcomes.

We have no doubt that educators’ task of rethinking assessments in the context of AI will take time and work. We also need to acknowledge that recent trends, such as the massification of Canadian higher education classrooms, place limits on assessment redesign and will likely shape how universities can respond. This means administrators will need to support educators in trying new forms of assessment, with a focus on aligning them with learning outcomes. Some solutions will fall outside the purview of the individual classroom: few institutional policies currently have language that makes it possible to treat the use of AI as a form of academic misconduct, unless it is classified as contract cheating or plagiarism (and given that the outputs of tools such as ChatGPT fall within current definitions of “original” work, an argument of plagiarism may not hold up on appeal). Above all else, it is important to support students’ learning. And when it comes to academic integrity, it is imperative that we avoid perpetuating antagonistic and adversarial relationships between educators and students.

We also need to be honest with ourselves: students will not be the only users of LLM tools like ChatGPT in higher education. Some educators have already explored how these tools can augment aspects of their classroom work. ChatGPT has shown itself able to generate course readings for syllabi, essay evaluation rubrics and lecture slide decks, and it can even be used to assess student work. In response to the ChatGPT “threat”, any ethical threshold we set for student use also needs to be set for all members of the university community. Efforts to quickly revise institutional policy to allow for the punishment of students who use AI tools may have the unintended effect of preventing important conversations about how we, as educators, should be using such tools ethically for teaching and learning.

There is no way to win an academic integrity arms race. There are, however, many ways to support students’ learning. In the case of artificial intelligence, this includes engaging everyone on campus in thoughtful and supportive conversations about how to use AI ethically for teaching, learning, and assessment. Students need to be meaningfully involved in these conversations, from developing a better understanding of how they use LLMs through to suggesting revisions to classroom practices. Our institutions’ responses to AI will be more thoughtful and practical if they include the student voice.

Gavan Watson is associate vice-president, academic, at Memorial University. Sarah Elaine Eaton is an associate professor in the Werklund School of Education at the University of Calgary.
