
Opinion: The possibility of A.I. going rogue is more than just science fiction

Computer ethics is necessary for the programmers of tomorrow



The thought of artificial intelligence causing destructive harm may seem impossible outside of movies like The Terminator, but it is more plausible than many people think. 

ASU is fulfilling its obligation to teach programming ethics. Currently, ASU's computer ethics class is a graduation requirement for computer science majors, allowing thousands of newly minted computer scientists to enter the industry with knowledge of potential consequences.

The field of A.I. is already incredibly advanced, and there are many cases where A.I. can consistently beat humans at intellectual tasks. As early as 1997, IBM’s Deep Blue defeated the No. 1 chess player in the world, Garry Kasparov, and in 2011, Watson beat the 74-time reigning ‘Jeopardy!’ champion.

A.I. systems like Deep Blue and Watson are hard-coded with the strategies they use, minimizing the risk of harm. However, even this type of autonomous decision-making can lead to catastrophe.

According to Ted Pavlic, an assistant professor of computing, informatics, and decision systems engineering at ASU, unforeseen circumstances can cause A.I. to make harmful decisions. He cited navigation systems that routed drivers toward the blaze during the recent California wildfires.

“We saw with the fires in California, there were certain roads that didn’t have any cars on them because no one should have been driving there, but the navigation algorithms didn’t know why they didn’t have any cars on there, so they were routing people to these areas that looked like great shortcuts,” Pavlic said.

In fact, malfunctions in A.I. decision-making have already resulted in several injuries and deaths. In an extreme case, a factory robot in Michigan that was never intended to leave its section entered another area and killed a service technician. Department of Labor data shows several more cases of such incidents happening in the United States. 

Now, some major research institutions have turned to adaptive A.I., systems programmed to develop strategies based on their previous experiences. OpenAI created a self-taught bot that obliterated many of the top professionals in the video game Dota 2, regarded by some as more complex than chess.

ASU’s own Data Mining and Machine Learning Lab is spending about $3.75 million for currently funded projects to develop A.I. for a variety of innovative applications, including crisis tracking and response.

While this technology is ambitious, it could be dangerous. Some skeptics, such as Tesla CEO and OpenAI co-founder Elon Musk, have warned that this type of technology could start wars.

“I think the ‘Skynet’ view is definitely something that people shouldn’t just laugh at, because it may be so,” Pavlic said.

Pavlic said his undergraduate computer engineering curriculum included ethics in a stand-alone class. 

“(In computer science ethics classes) We learned that every single thing that we build, whether it be a piece of software, or a bicycle, or a car is actually an experiment on the world, where the people using our products are guinea pigs that don’t necessarily give their informed consent,” Pavlic said. “I would worry about autonomous machines less if we had more of that in there.”

It goes without saying that A.I. has tremendous potential for good, but the risks should be taken seriously. As one of the largest educational institutions in the country, ASU is meeting its responsibility to ensure that the next generation of computer scientists knows how to mitigate these risks by working ethically.

Correction: This opinion column has been corrected to list CSE 301: Computer Ethics as a required course for computer science students at ASU. A previous version of this column incorrectly said it was not a required course. 

Reach the columnist at or follow @rossdougla on Twitter.

Editor’s note: The opinions presented in this column are the author’s and do not imply any endorsement from The State Press or its editors.
