ChatGPT has generated a great deal of concern and controversy since OpenAI launched it in November 2022, much of it related to the impact it threatens to have on our classrooms. In particular, there are concerns that ChatGPT will allow, and perhaps encourage, plagiarism, and that this plagiarism will go undetected and lead to a (further) erosion of educational standards.
But what lies behind these worries? I'd like to be careful not to fuel (yet another) techno-panic. Instead, I want to argue that universities should respond to the newest arrival on the AI block in a constructive, rather than a fearful and defensive, manner.
ChatGPT and its discontents
Chat Generative Pre-trained Transformer (ChatGPT) is a chatbot that "provides detailed and thoughtful answers to questions and prompts". The technology can produce a variety of outputs. It can answer questions about quantum physics. It can write poetry. Oh, and not only can it "deliver a compelling essay on any subject", it can do so faster than a human.
Like any other technological innovation, ChatGPT is not perfect. It apparently has limited knowledge of events after 2021, so it won't help you with your hypothetical assignment on Congressman George Santos. Nevertheless, its versatility and sophistication, especially when compared with other AI tools, have made it the subject of intense public scrutiny and even alarm.
For example, last month The Guardian quoted a vice-chancellor expressing concern about "the recent emergence of increasingly sophisticated ChatGPT text generators that produce highly convincing content and are increasingly difficult to detect". The article goes on to report that, with the advent of ChatGPT and similar developments in artificial intelligence, some Australian universities are returning to traditional pen-and-paper exams.
For his part, West describes ChatGPT as "the latest bugbear for mass education" and "a new plagiarism threat" that "university leaders" are scrambling to combat.
This concern is not unfounded. Plagiarism, collusion, and contract cheating are serious problems in higher education. As one 2022 study explains:
Assessment integrity is fundamental to academic integrity and to the integrity of the higher education system as a whole. Students' confidence in their own abilities, and in the value of their qualifications, is undermined when assessments completed by third parties are graded as students' own work.
Threats to academic integrity are not uncommon. A study of 4,098 students at six universities and six post-secondary institutions in Australia reported alarming rates of all of the above behaviours.
This threat to the integrity of higher education is recognised by university staff and by the general public alike. Media have reported on contract cheating and on the use of pre-ChatGPT AI to produce essays. Most educators are aware of instances where students have copied part of an academic paper or a Wikipedia entry and passed it off as their own work, sometimes only weeks after in-class reminders about the importance of citing correctly and acknowledging the work of others.
Added to this is the difficult and often precarious environment in which university teachers work. A 2021 article in The Conversation reports that around 80 per cent of teaching at Australian universities is carried out by casual staff, that is, staff on sessional and short-term contracts with little or no ongoing job security. And all academics, regardless of employment status, work in under-resourced environments where teaching workloads continue to grow.
AI did not cause these problems in the sector, but it hasn't made life easier for university teachers either. Responding to a breach of academic integrity can be time-consuming and emotionally draining for academics and students alike. Some breaches, such as those allegedly enabled by ChatGPT, can go undetected by the software designed to catch them and can be very difficult for a teacher to prove.
Beyond "techno-panic"
I fear that an exclusive or dominant focus on the threats ChatGPT poses to academic integrity may ultimately fuel techno-panic. Techno-panic arises when social mores and public safety are perceived to be threatened by technological advances, be they smartphones, social media, or artificial intelligence.
Techno-panics serve several functions. They provide convenient scapegoats for real and perceived social ills. These scapegoats are easy to identify; they are not human and therefore cannot answer back (though ChatGPT may be an exception here). The sensationalism of techno-panic lends itself to the clickbait era, although this type of panic predates Web 2.0: one example is the "video nasties" campaign of the 1980s.
Techno-panics are also defeatist. They are inherently uninterested in establishing constructive uses of technology, and they invite punitive and often unrealistic measures (such as deleting your social media accounts or banning AI from the classroom). Technological innovation is framed as a determinant, and a negative one, of human endeavour.
In fact, AI is nothing more than a human creation. Its use and misuse reflect and perpetuate social norms, values, belief systems, and prejudices. A recent study argues that addressing the ethical challenges of AI "requires learning from the earliest stages of our interaction with AI, whether we are developers learning about AI for the first time or users just beginning to interact with it".
A constructive way forward
With that in mind, let me outline a few ways universities can respond constructively to the arrival of ChatGPT. Some of these have already been implemented in places. All of them could, of course, also be adapted to contexts outside the ivory tower, such as primary and secondary schools.
- Organise information sessions, run by AI experts (academic researchers, industry professionals), on ChatGPT and similar AI tools. These sessions could be held separately for students and for staff. They should provide a factual, non-sensationalised overview of what these technologies do and of their potential harms and benefits. It is important to consider the benefits: AI is not without its problems, but to suggest it has no upside would be naïve, if not paranoid. These sessions should also offer an open space for students and staff to voice their concerns and to learn something new. Members of both groups will come with very different understandings of ChatGPT, from those with technical expertise to those who have only encountered alarming headlines.
- Develop clear and unambiguous institutional guidelines on whether and how students may use AI in producing assessed work.
- Incorporate AI into the classroom to enhance learning, prepare students for employment, and provide insight into the ethical use of AI. Tama Leaver made this point on his blog in response to the Western Australian Department of Education's decision to ban ChatGPT in public schools. Leaver is referring specifically to young people here, although his observations apply to students of all ages:
Education must equip our children with the essential skills to ethically use, critique, and extend the possibilities and impact of generative AI, not leave them to try it out at home behind closed doors because our education system is so paranoid that it presumes every student simply wants to use it to cheat in one way or another.
- Introduce mandatory ethics training across all curricula, especially in first year. This training could take the form of semester- or term-long courses, or be integrated into existing subjects (for example, introductory computer science, or introductory media and communication). The decision to breach academic integrity by using a chatbot to complete an assignment or write an essay is an inherently ethical one; it turns on what a person considers right and wrong. The same goes for decisions to use technology for good or for ill.
Each of these proposals places demands on universities' already stretched budgets and on the limited time of students and academics. Even the most well-meaning and generous AI researcher does not want to be constantly called upon to give yet another chatbot presentation when there are other urgent demands on their time and attention.
Still, this approach is better than succumbing to techno-panic and admitting defeat.
Jay Daniel Thompson is a Lecturer in Professional Communication in RMIT University's School of Media and Communication. His research examines ways of promoting ethical online communication in an age of digital disinformation and online hostility. He is co-author of Content Production for Digital Media: An Introduction and Fake News in Digital Cultures.