As the academic world weighs the pros, cons and ethical implications of AI, ASU — eager to maintain its innovation reputation — has immersed itself in the industry in the form of over 500 projects costing hundreds of millions of dollars.
ASU President Michael Crow’s administration has also attempted to raise excitement for AI on campus, hiring Black Eyed Peas frontman will.i.am to teach a course on the topic, as well as declaring an “AI Day at ASU” on Sept. 15, 2025. Last year, the University granted all students free access to GPT-5, the latest version of ChatGPT. Starting in Fall 2026, that access will be added to student expenses in the form of a mandatory $100 fee for full-time students and a $50 fee for part-time students.
Last year, Crow called AI “the great equalizer” and has been a strong advocate for increasing investment in research in the field. He has also advocated for the incorporation of AI into aspects of student life, such as academic advising.
Crow also has significant personal stakes in AI projects, having served as chairman of In-Q-Tel — a venture capital firm that funds research for government agencies like the CIA, the National Security Agency, the Department of Homeland Security and the FBI — for the past 26 years.
In one case, In-Q-Tel paid $2 million in 2005 to save a little-known software company from imminent failure. Over two decades later, Palantir, led by Peter Thiel, Alex Karp and Stephen Cohen, has evolved into a $300 billion AI-driven surveillance giant with deep connections to both In-Q-Tel and the federal government.
As ASU’s engagement with AI continues to shape the student experience, many are divided on whether the new technology is actually beneficial to the University and its students.
AI in the classroom
“I have been using ChatGPT since my freshman year,” Darsh Chaurasia, a senior studying computer science, said.
Chaurasia is co-president of the AI Society at ASU, an organization boasting over 1,800 members on Sun Devil Central. He said AI helps him with his classes by making long pieces of text more easily digestible and evaluating sources for validity.
“When you’re talking about something, the authenticity of the source is very important,” he said. “AI helps you look at these sources. Obviously human mind review is required, because you need to verify these resources, but if you use it wisely, you can verify and cross-check all these kinds of information.”
“Students are doing a research paper, and I allow them to use AI,” Regents and Foundation Professor of Law Gary Marchant said. “They have to disclose it at the beginning of the paper, and any text that’s directly generated by AI has to be put in quotes and cited just like a book or article.”
Chaurasia said there is enthusiasm surrounding the potential of AI among students in STEM departments. According to Liz Cunningham, a junior studying animation, this is not a sentiment that’s shared by many in the liberal arts and sciences community.
“The general conversation [around AI] is usually pretty negative, because the way generative AI has been presented is it makes art faster than people and cheaper than people,” Cunningham said. “Students have just been not using it. One of my friends’ sisters dropped a class because the teacher was requiring them to use AI.”
At the opposite end of the spectrum from those who refuse to use any AI are students developing an over-reliance on the technology, which Chaurasia acknowledged is a real problem among many of his classmates.
“I personally know so many students at ASU that are heavily relying on AI to even complete their degree,” he said. “I know people who are in their final year at ASU doing a computer science degree and they don’t know how to even write a single line of code because they have been using AI.”
Chaurasia emphasized the importance of balance. “[Students] are getting good at putting in prompts,” he said. “That is a good thing, because this is something that you would require in the near future. But AI gives you the entire output by itself, all you need to do is control-c, control-v; copy and paste … I’m not saying everyone does this, I’m just talking about most people.”
“I recommend they use it for things like editing, brainstorming, research, things like that, rather than necessarily generating me a text,” Marchant said.
Marchant, who is also the faculty director for ASU’s Center for Law, Science and Innovation, said the proliferation of AI in the classroom has had both positive and negative results. While the highest grades on papers in his classes did not increase, the lowest grades rose dramatically. “I don’t get any bad papers anymore,” he said. “Last year, I had one bad paper and that student said they didn’t use AI.”
Marchant also explained how this can be an issue. “It’s actually a problem for me, because we have a grading curve,” he said. “I’m required to give a certain number of low grades and it’s very hard when they’re all fairly well-written.”
Other issues run deeper than inflated grades. “You do see some papers where it’s pretty clear they’re using AI to do almost all the writing, they’re not doing their own critical thinking,” he said. “I’ve had my first example of a student handing in a paper with a hallucinated case that didn’t actually exist, which is a huge problem.”
In both computer science and animation, students said AI guidelines are often unclear. Siddharth Mehta, a graduate student studying computer science and the vice president of AI Society, described a “polarity” in how his professors approach AI, where most are either strictly against it in all forms or willing to accept any level of AI use as long as assignments are turned in.
Cunningham experienced a different problem. “The syllabus has not been updated since 2020, so it doesn’t even say anything about not using AI,” he said of one of his art classes. “I would like to think if somebody turned in a product that was entirely AI, they would get a zero.”
Marchant said faculty at the Sandra Day O’Connor College of Law are each allowed to have their own AI policies. He said that while many professors of introductory classes ban it, those teaching upper-level classes are more lenient.
“Most lawyers today are using AI in practice, and so we want to train our students to be ready to practice in the real world,” he said. “Time is a huge issue in legal practice. To be able to do things more quickly and better will be an important skill.”
AI outside the classroom
The fear of job loss is a persistent concern regarding the development of AI. Last November, Coca-Cola released a one-minute advertisement made from over 70,000 AI prompts.
“The quality put out by artificial intelligence is making mistakes that human artists would generally get fired for,” Cunningham said. “That’s been frustrating to watch, because the pressure is way higher on us, and a machine can do whatever it wants because it’s essentially free for studios and companies.”
Chaurasia and Mehta expressed a different point of view, arguing that while AI will replace much of the labor involved in many jobs, it will also create new labor. “As much as we try to rely on AI, it is true that we cannot put 100% of our faith on it without having an eye on it, without giving our own input,” Mehta said. “I don’t think AI by itself could be as effective as an AI and human working at the same time.”
AI surveillance technology is one of the most controversial aspects of recent AI advancement. While touted by some as a massive development in solving crime, others have criticized its role in tracking personal information and data.
“We’re being watched now by all these different technologies, all of which were around before AI, but are now integrating with AI to make them much more powerful,” Marchant said. “Cities like Chicago have some 45,000 AI cameras around the city that are tracking people and identifying people.”
Perhaps the most well-known AI surveillance corporation is Palantir, a company that uses AI to analyze and interpret data collected by its customers — primarily the United States government — and is now worth over $300 billion. When it risked failure in 2005, the company was saved from collapse in large part due to Crow, who encouraged his fellow executives at In-Q-Tel to bail the company out in the form of investments totaling more than $2 million.
Marchant said ASU’s Faculty Ethics Committee on AI Technology is currently weighing the degree to which the University should monitor its students.
Marchant said legislation and regulation have been slow to keep up with developments in AI surveillance, and that court decisions are often made on a case-by-case basis. “We have no federal comprehensive privacy law in the U.S., unlike most countries, but now we have 19 different state laws,” he said. “That’s creating a lot of confusion, and then states like Arizona don’t even have one of those.”
He said many still rely on the Electronic Communications Privacy Act of 1986 to regulate AI surveillance, despite the law predating even the World Wide Web.
“I would like to see [issues] being resolved by faster-moving types of governance, and I don't think legislation and regulation are up to the task,” he said. “A lot of my work is what’s called ‘soft law’ — finding other ways to do things like industry standards, best practices.”
“Evolve or die”
“I had a conversation with President Michael Crow, and it’s really nice to see that there is a lot of positive input about AI coming in from [him],” Mehta said. “There’s a lot of money being put in from the executives and it’s really nice to see.”
Marchant said that in Fall 2026, ASU’s law school will have eight or nine AI-themed courses. “We have all kinds of seminars, we have conferences, we have workshops, we have all kinds of research projects, so I really think we are full service to our students in terms of AI education,” he said.
Crow has approached the AI revolution with an “evolve or die” attitude, in his own words. Chaurasia and Mehta embrace the technology with a similar attitude.
“When something new comes in, which has a huge impact that affects the job market, some of the traditional roles in the job market get affected, and people who don’t adapt to the change might get left behind,” Chaurasia said. “They might lose their jobs and they might not be able to find the same level of job that they are looking for.”
“Everything is changing, and if you choose not to use AI consistently, everyone who’s using AI — who’s leveraging AI to make their decisions more efficiently — is going to have an edge over you,” Mehta said.
Cunningham acknowledged there is potential for AI to do good, citing its use in detecting cancer cells, but argued this is currently overshadowed by the issues it is creating. “People are literally being replaced by machines now, and it’s very frustrating to try to navigate that while also trying to navigate an already difficult industry,” he said.
Marchant stressed the nuance of the issue while urging those in the legal profession to take a more active approach in ensuring the technology is used ethically.
“The benefits to things like biology and medicine are going to be absolutely enormous, but also it comes at a lot of cost for things like privacy and issues about ownership of your data and who can use that data,” he said. “There’s a lot of longer-term concerns about the impact on not only the legal profession but on democracy, on government. And those kinds of concerns are ones that lawyers can help address.”
Edited by Leah Mesquita, Natalia Jarrett and Abigail Wilt. This story is part of The Culture Issue, which was released on March 25, 2026.
Reach the reporter at evansilverbergrep@gmail.com and follow @evansilverbergwrites on Instagram. Like State Press Magazine on Facebook, follow @statepressmag on X and Instagram and read our releases on Issuu.
Evan Silverberg is a reporter for the State Press Magazine. Specializing in coverage of social justice issues, he has published several deep-dives into hard-hitting topics affecting the ASU community. He is in his third semester with the State Press and has been recognized by the Society of Professional Journalists with a Region 11 Mark of Excellence Award nomination.