In my opinion

What happens when a machine can write as well as an academic?

As AI-facilitated algorithmic writing improves, it poses tricky questions about authorship and what constitutes an “original” paper or assignment.

BY MICHAEL MINDZAK | FEB 17 2020

Recently one morning, I asked my computer a relatively simple question: can artificial intelligence (AI) write?

We’re not too certain on what artificial intelligence will be able to write, but there are some scenarios in which computers could be responsible for a huge number of word documents …
 
The biggest potential scenarios would involve machines analyzing what has already been written and determining what pieces need to be edited to make the content seem fresh. A robot system analyzing words for typos could do something like this on a big project …

The above sentences were composed by a machine in a matter of seconds. The tool used is a freely accessible interface based on the GPT-2 text generator released by OpenAI – a company founded by technology industry leaders, including Elon Musk and Sam Altman. Only a limited version of the model was made available, as the company dubbed it “too dangerous” to release fully into the world.
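For readers curious how such a generator is actually invoked, the short sketch below shows one way to prompt the publicly released GPT-2 model to continue a passage of text, using the open-source Hugging Face transformers library in Python. This is an illustrative assumption on my part: the article does not name the specific web interface used, and this snippet is not that tool, merely a minimal example of the underlying technique.

```python
# Minimal sketch (not the interface used in this article): prompting the
# publicly released GPT-2 model to continue a passage of text, via the
# open-source Hugging Face "transformers" library.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")  # load GPT-2 as a text-generation pipeline
set_seed(42)  # fix the random seed so the sampled continuation is reproducible

prompt = "Can artificial intelligence (AI) write?"
outputs = generator(prompt, max_length=60, num_return_sequences=1)

# The pipeline returns a list of dicts; "generated_text" holds the prompt plus its continuation.
print(outputs[0]["generated_text"])
```

Each run produces a different continuation unless the random seed is fixed, which is part of what makes questions of “originality” so slippery.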

While we can take such grandiose claims with a grain of salt, advanced, or at least steadily advancing, algorithmic writing technologies are not far off. As issues surrounding artificial intelligence continue to be discussed and debated by scientists, futurists and ethicists, higher education also finds itself thrust into the mix of this rapidly evolving field. The implications will likely be far-reaching.

Algorithmic forms of writing are not necessarily new: researchers have actively sought to combine artificial intelligence, machine learning and predictive text for a number of years. While the subtleties and nuances of modelling human language have remained elusive, sustained efforts to develop functional natural-language generation and understanding (NLG and NLU) have produced continuous improvements.

Essentially, what remains pertinent is that machines are improving their writing capabilities all the time. What the implications of this evolution will be for higher education is, of course, more difficult to prognosticate. Assuming that forms of algorithmic writing become more widely available, the first question many educators may ask is how they will know what their students have actually written. If the growth of technologies such as Turnitin.com is any indication, educators are already concerned with the originality of student scholarship and written submissions. Hence, an unfortunate consequence of the growth in AI writing technologies will likely be that they serve to augment what is already a substantial for-profit “essay mill” market.

Essay-writing services are already readily available online, offering to write “original” papers for those who wish to purchase them, and some of these providers already claim to be utilizing AI in their work. Improvements in algorithmic writing will thus likely make these cheating services cheaper and more widely available. Some AI writing tools may one day even be released as open-source software, allowing them to spread further still.

Academic publishing

Concerns surrounding the originality or integrity of written work are not limited to students. Academics are already utilizing AI products and services in the process of publication. For example, in 2005 a group of researchers from MIT built an algorithmic language generator, SCIgen, which produced several nonsensical papers, some of which were subsequently accepted for publication. While their intent was to expose the problems surrounding predatory conferences and journals, their experiment reveals how improved forms of AI might be used in the academic publication process. Such technological developments will conceivably lead to increased “automation of publication,” whereby academics – under pressure to “publish or perish” – may seek to deploy algorithmic writing technologies in order to succeed professionally.

Considering the possibility of algorithmic writing and its implications for scholarship, the next question facing higher education might be how writing can be effectively assessed or evaluated. Interestingly, as AI becomes better at writing, it will simultaneously become better at detecting machine-generated text. However, this approach of “fighting fire with fire” will likely only result in a sort of mutually destructive technological arms race. As a result, schools and teachers may instead move towards more traditional models of assessment and evaluation, such as in-class essays or examinations.

Again, it is not only students and educators who will be impacted here, but also researchers engaging in publication and peer review. For example, scholars may utilize algorithmic writing for their submissions, whereas journals may counter with artificial intelligence screening tools (such tools are already actively used to screen job applicants). Hence, while the implications of such concurrent developments are unclear, they will likely require higher education to reconsider how best to teach, assess and evaluate writing moving forward.

Plagiarism, originality and academic ethics

The aforementioned challenges in scholarship and assessment will also require higher education to reconsider key values surrounding plagiarism, originality and academic ethics. Is writing developed by an algorithm, yet “original” in the sense that it may never have been generated before, considered plagiarism? Moreover, who exactly can claim ownership of this writing, and why?

As a classroom example, if a student submits a paper that was, say, 50-percent generated by an algorithm, with the rest written by the student, is this original and acceptable work? What if the student in question altered the code of an open-source version of the algorithm, so that their own programming produced the written output? Questions such as these pose new dilemmas for the entire field of writing, already witnessed in the sphere of journalism and now moving rapidly into academia. Such ethical concerns will be extremely important, as they may serve to strengthen or else diminish notions of integrity, trust and reciprocity between students and scholars within academic communities.

The evolution of technology and artificial intelligence has presented, and will continue to present, both challenges and opportunities to higher education. Algorithmic writing is just one part of this larger trend. Perhaps the larger concern is the apparent movement towards the increased “automation of education,” where tools and services are offered by technologists as “solutions” for the “problems” of higher education. The underlying logic of many of these tools appears to be rooted in notions of efficiency – and the adoption of these methods serves to speed up rather than slow down academic teaching and learning. While there is likely no silver-bullet solution for the numerous challenges AI will bring to higher education, reconceptualizing what exactly a high-quality postsecondary education means will definitely require some uniquely human thinking.

Michael Mindzak is an assistant professor, department of educational studies, in the faculty of education at Brock University.

COMMENTS

  1. Sarah Elaine Eaton, University of Calgary / February 18, 2020 at 12:56

    Thank you for this thought-provoking and articulate article. As someone who studies applied ethics in educational contexts (i.e. research ethics, publication ethics, and academic integrity), I found this to be a fascinating and insightful read. Have already shared it!

  2. Michael Owen / February 19, 2020 at 16:02

    The question might arise whether, instead of fearing the advance of AI-supported writing, we as teachers/instructors can employ these tools to assist our students in improving their writing skills. Could AI support enable students to understand more fully how to construct a well-written paragraph, paper or opinion piece? A number of researchers have explored how Turnitin, as well as other plagiarism detection tools, could be used in this way: not as tools for detection and punishment but to support the teaching of principles such as appropriate citation.

  3. Rahul Kumar / July 1, 2022 at 11:38

    This piece further strengthens the idea that efforts should be deployed in educating students on the importance of academic honesty and integrity, rather than focusing only on detection.
