As finals week approaches, it marks the end of yet another semester at ASU, one that continued the trend of students using AI to complete more of their coursework.
The improvement of AI tools and their accessibility has forced professors to adapt, especially when it comes to final exams, to prevent students from using the technology to cheat.
"The number of academic integrity cases that I've had to submit for my course has gone up," said Scott Emett, a professor in the W.P. Carey School of Business. "AI is becoming a big issue with exams. And so for me personally, I've had to change things up a lot. So I've gone back to pencil and paper exams."
Technologies like LockDown Browser were previously used by professors like Emett to limit student cheating, but he said that changes had to be made due to students "finding ways to circumvent LockDown Browser."
Emett uses these in-person exams for his introductory accounting courses, which can be as large as 300 students. He said the size of these classes has made administering paper and pencil exams difficult, as they require seven or eight proctors to help him ensure students are not cheating.
READ MORE: Opinion: Keep AI out of my classrooms
Of course, accounting is far from the only course that is facing challenges related to student AI usage. Anoop Grewal, a professor in the Ira A. Fulton Schools of Engineering, said that AI use is creating discrepancies between student scores on check-in quizzes and actual understanding of course concepts.
"Since we did not do that quiz on Lockdown Browser, about 90% of the class is cheating on that quiz using AI," Grewal said. "But the knowledge is not translating in the class. If I ask the same exact question in the class, only two people would know. Yet 90% is the average in the quiz."
This broken link between higher scores and greater conceptual understanding is cause for concern for Emett as well, who referred to the phenomenon as "grade inflation": students earn higher grades on average through cheating but don't actually gain the knowledge typically associated with those grades.
"When teachers inflate grades, it gives students less of an incentive to actually learn the material, and because students are not learning the material as well as they would with that incentive, they end up doing worse on future exams," Emett said.
In engineering, Grewal said AI does have some beneficial uses in helping students visualize certain aspects of coding, especially in his introductory courses. But Grewal also pointed out that students still needed to develop a fundamental understanding of concepts, which are then built on in later courses.
"You still need to have the ability to quickly check if the AI work is correct or not," Grewal said. "You need to have a rough understanding of, are the steps logical, and some agency over the work which AI is giving to you ... What's needed to get that is super detailed, super awesome understanding of the fundamentals."
Mike Tueller, a professor in the School of International Letters and Cultures, took an approach to AI use similar to Emett's, shifting more coursework from online to in-person. However, rather than adapting, Tueller said, two-thirds of his students dropped the class entirely after the change.
This response has left Tueller in a situation where he is incentivized to structure the course to allow AI usage, just to get enough students in the room to teach.
"Until I change that course to make it more cheatable, I won't be able to teach it again, because we don't have the manpower to teach courses that students aren't going to enroll in in droves," Tueller said.
Amid these variations in professor policies, AI-related course structure and student responses, the University is attempting to provide broader support and guidance to faculty on the issue.
"When it comes to final exams and other ways to assess what a student has learned, our approach is to design assessments that require meaningful engagement," said an ASU spokesperson in a written statement. "We also aim to support faculty with tools that can deter and identify misuse, and, most importantly, engage students in clear, ongoing conversations about when and how AI can be used responsibly."
ASU's Writing Programs, directed by Professor Roger Thompson, have been at the forefront of this experimentation and adaptation regarding the place of AI in writing courses, in particular.
Thompson said he has focused on conversations with faculty about the areas of writing where AI could serve as a supplementary tool, versus those where its use would only detract from student learning.
"My ask of our faculty is that they engage with the technology with their students, and they take what our colleagues call an AI-informed approach to teaching writing," Thompson said. "For some of our faculty, that means use of the tool throughout all the assignments … For others, that means a critical approach. They'll say, 'Okay, we're going to make AI an object of discussion or inquiry.'"
The other side of AI's growth and prominence in higher education is its use by faculty through AI grading tools.
Thompson said that the Writing Programs have been screening and testing new AI technology to be used by faculty, and Grewal has experienced similar trends in the engineering department regarding grading.
"The push is in the grading direction," Grewal said. "They want us to explore grading-related tools or something else. The ... objective seems to be, how can we reduce the number of people involved in the course from the teaching side, and then still get the same productivity and still get the same result?"
However, Grewal noted that so far, the AI grading tools he has experimented with have not been particularly effective with the problems his students are working on.
For foreign language professors like Tueller, AI grading tools can actually do students a disservice because of their inability to assess those who are excelling beyond the standard in the course.
"AI graders, they can do a certain amount of good things," Tueller said. "They cannot detect whether your ideas are valid, and they sure as hell can't detect genius ... We have students who are really, really bright. We're not doing them a service if we don't have humans recognizing that."
The current state of AI use has placed both faculty and students at a crossroads regarding the future of higher education. For Emett, Tueller and Grewal, it is clear that AI is not going anywhere in the coming years, and as such, students will need to understand the technology and even utilize it to varying degrees.
But Emett explained that the unregulated use of AI for cheating in undergraduate courses can stifle the development of the very skills needed to be successful in the AI-driven workforce of the future.
"If students are taking this shortcut of using AI tools to do well on an exam, what they're losing out on is learning those fundamental concepts that we're trying to teach them," Emett said. "If they don't understand and master those fundamental concepts, there is no way they are going to be able to add value in an AI world."
Edited by Henry Smardo, Jack McCarthy and Pippa Fung.
Reach the reporter at sluba@asu.edu and follow @samluba6 on X.
Like The State Press on Facebook and follow @statepress on X.
Sam Luba is a senior reporter with The State Press, focusing on longer-form news stories and breaking news coverage. He is a sophomore studying political science and justice studies, and is a competitor with Sun Devil Mock Trial. He was the editor-in-chief of his high school news magazine. He is in his third semester with The State Press, having previously worked as a part-time political reporter.