ASU's nonspecific guidelines for artificial intelligence usage in the classroom are a savvy move as institutions across the country grapple with the emergence of ChatGPT. By leaving the terms of AI and ChatGPT usage open, professors can grant leeway in courses that could benefit from the technology or prohibit it in courses where it's a liability.
Since ChatGPT's release in late November 2022, institutions have struggled to balance maintaining academic integrity with exploring the potential of AI.
The current guidelines permit "academic units and/or faculty to determine whether student use of generative AI/ChatGPT in their courses is permitted or prohibited." This flexibility lets AI positively influence different facets of research, learning, teaching and administration.
The University plans to "positively employ" generative AI technologies to "enhance learner outcomes," said ASU spokesperson Veronica Sanchez in an emailed statement.
"I'm a strong advocate of developing innovative ways of using AI to enhance learning and to make learning more accessible to more people — all while being very aware of the potential pitfalls of using AI in education," Maynard said.
Maynard's course, "Basic Prompt Engineering with ChatGPT," was structured and designed by ChatGPT itself. It teaches students how to make the most of ChatGPT as a resource.
"AI has the potential to address some of the biggest challenges in education today," the Office of the University Provost said in a statement. "Nonetheless, as educators and students, we face a new frontier as we navigate a world in which the distinction between content generated by AI and humans is rapidly blurring."
While AI has potential to positively guide and support academics at ASU, ChatGPT and other chatbots must be used responsibly.
Critics cite increased risks of cheating and integrity violations as reasons to bar AI from the classroom. However, academic integrity violations have always been a risk, with or without this technology.
Kena Ray, assistant director at the Instructional Design and Learning Technology program, said students who intend to plagiarize or break other ethical codes will "always find a way to do it." Ray said AI should help the University reach its academic goals rather than be a hindrance or liability.
"The benefits of artificial intelligence – streamlining workflows and improving efficiency – must be balanced with the human elements that are indispensable for scholarly pursuits," Ray said.
There are currently no AI detection tools in place at ASU. Ray said the University is not planning to adopt such tools any time soon because they are unreliable.
As ASU and other universities navigate the AI landscape, granting more flexibility is the best approach.
"We need to learn how to use (AI) responsibly and innovatively to positively scale our impact as an institution, all while understanding and learning how to successfully navigate potential pitfalls," Maynard said.
ASU’s current guidelines offer this understanding. They give faculty and students enough adaptability to explore AI as an educational tool that could enrich the learning experience instead of detracting from it.
Edited by Mia Osmonbekov, Alexis Waiss and Caera Learmonth.
Editor's note: The opinions presented in this column are the author's and do not imply any endorsement from The State Press or its editors.
Claire Le Gallo is a reporter for the Community and Culture desk at The State Press. She is a sophomore majoring in Journalism and Anthropology.