Contemporary American life is increasingly defined by our intimate relationship with technology. As humanity develops, so does our technology, growing ever more pervasive.
Amid this whirlwind of technological change, researchers and developers are tasked with tackling the litany of ethical dilemmas that crop up. We look to the technocrats for guidance in our plugged-in world, but there is little agreement on how, or whether, these novel technologies fit into existing ethical frameworks.
Artificial intelligence, perhaps the most ethically volatile of the emerging technologies, is not nearly as new as it seems. Decades of thinking about machine autonomy culminated in 1951, when Christopher Strachey developed a program capable of playing checkers, and in 1956 with Logic Theorist, the first computer program designed to mimic human problem-solving.
Since then, the field has boomed, thrusting society into the age of big data, talking robots and too many ethical concerns to count. Artificial intelligence has meshed with our daily lives, sometimes abruptly, but mostly seamlessly. It has also displayed biases relating to race, gender, sexuality, religion and disability.
Similar ethical concerns plague augmented and virtual reality, two tools used in environments ranging from video games to nonfiction storytelling.
AR, technology that superimposes images onto the viewer's existing perspective, and VR, a computer-generated, three-dimensional environment that immerses the viewer in a new perspective, have been developing for decades. Akin to artificial intelligence, augmented and virtual reality began as visionary ideas that outpaced the technology of the early 20th century. VR's breakthrough came in 1960, when Morton Heilig patented the Telesphere Mask, a wearable virtual reality headset similar to those used with Oculus games today.
A handful of ethical questions persist in the fields of augmented and virtual reality, including how to prioritize user security, the ethics of VR porn and, in journalism, determining what is truthful representation and what should be included or excluded.
Immersive media
Retha Hill, the executive director of the New Media Innovation and Entrepreneurship Lab at the Walter Cronkite School of Journalism and Mass Communication, said these ethical dilemmas are not new but have been amplified by the explosion of technology and the newfound ability of anyone to manipulate these tools.
Hill implores her students to examine how and why they use the tools available to them to tell stories while minimizing harm and delivering truthful messages. The lab focuses on exploring cutting-edge immersive technology and its ability to deliver information to viewers. Hill said that before focusing on ethical tenets specific to the technology used in the lab, she asks students to consider core journalistic principles and apply them to the subjects they are representing.
Contrary to some popular opinion, Hill said journalists working in immersive media adhere to strict principles of truth and fairness. "We always make sure we're true to the scene we're trying to simulate," she said. Hill recounted a student who wanted to create a newsgame focusing on sex trafficking in Miami during the Super Bowl. The locale and bystanders had to be representative of Miami from a first-person gameplay perspective, and Hill and her students had to be wary not to lean on broad stereotypes in their depiction of people. The purpose of the newsgame was to make players aware of the signs of sex trafficking and give them an opportunity to practice reporting suspicious behavior to an authority figure.
Accurate and truthful representation of sources in computer-generated journalism is critical to ensuring misinformation doesn't spread like wildfire, Hill said. A longtime journalist with a continued interest in emerging media technology, Hill explained how artificial intelligence has grown over the years to allow anyone to portray public or private figures as they please.
"We have the capability of using tools like deepfakes and voice modification, voice imprinting, we can do all of that now that we didn't have back then," she said. "If we wanted to have a voice of somebody saying something we'd go hire a voice actor ... but now we can go and sample a person's voice and we can have that person say whatever."
Nonny de la Peña, the founding director of an ASU center for emerging media and narrative who has been dubbed the "Godmother of VR," said she takes the ethical principles she learned as a print journalist and applies them to her work with VR and AR.
For de la Peña, the ethical questions stem from editing choices made while creating an immersive piece, which is "really no different than editing film." What's different in immersive pieces is the viewer's part in the story.
When designing an augmented or virtual reality simulation, what to include, and how to include it, is constantly scrutinized. "In written words, we're quite happy to describe things very graphically," de la Peña said. "But then when we get into making something visually, we're much more careful about it. I’m not saying that’s a bad thing, but it’s the truth."
Jayson Chesler, a former student of Hill's and a journalist working with emerging technology, said ethical decisions vary depending on the technology, the choice of representation and how one seeks to portray the subject.
In April 2020, Chesler and his former McClatchy colleague Theresa Poulson created "A Guide to Immersive Ethics." The guide presents questions and case studies to be considered when capturing and digitally recreating a person or event.
"You really have to factor in, how am I deviating from reality, how can I most accurately get back toward reality, how can I make this 3D representation as real as possible," Chesler said. "And then beyond that, if I'm forced to make choices that are detached from reality ... how do I minimize the harm of that, both in how I'm representing my source and how I'm communicating that inaccuracy to my audience."
Emerging technologies have found a foothold in visual storytelling and are flourishing in other fields as well. Academics and researchers at ASU and institutions across the nation are delving into machine learning, computer vision, robot-human cooperation and more.
Ensuring autonomy
Lixiao Huang, an associate research scientist at CHART, the University's Center for Human, Artificial Intelligence and Robot Teaming, researches collaborative opportunities between humans and robots. The center also looks at ethical and legal issues in emerging technology.
One ethical concern in the field of robot teaming sounds like it was pulled from a dystopian sci-fi blockbuster. Nancy Cooke, a professor and director of CHART, said "with physical robots ... you don't want them to be strong enough to kill you, or to make stupid mistakes."
However, the bigger problem, Cooke explained, lies in AI and machine learning. In medicine and defense, biased results can present major hurdles, and autonomous consumer products that lack transparency about their limitations, namely self-driving vehicles, carry dangerous possibilities.
"Who's to blame when the vehicle kills somebody?" Cooke said. "We've already seen this problem. Is it the manufacturer of the vehicle? Is it the programmer? ... Was it the driver?"
While many researchers preemptively consider the ethical implications of the technology they’re creating, some are "bent on building the technology because you can, not because you should," Cooke said.
"The danger is they build things and then you have to work out the bugs after the fact. Well if those are bugs that are going to kill people, that's a pretty big deal." The question stands, as Cooke sees it, is "how do you have ensured autonomy?"
Huang regularly sees debates over the "hot topic" of AI ethics in self-driving cars. "Elon Musk said that (Tesla's) cars are capable of self-driving, but in reality they are not there yet," Huang explained. "So consumers, they listen to their advertisements and false claims, and they fully trust the automation and put the Autopilot on, which caused the fatal accidents."
While drivers should not rely entirely on the autonomy of Musk's vehicles, Huang said, forgoing the technology altogether would defeat the purpose of its existence: to aid humans behind the wheel, not to drive for them.
Pavan Turaga, the director of ASU's School of Arts, Media and Engineering, echoed Huang’s sentiments about ethics in AI, calling it a "vigorously debated topic at this point in time." Before grasping at solutions to broader, convoluted ethical questions, Turaga said individuals searching for answers should look at bias in AI.
Machine learning models, which rely on examples and datasets to reproduce phenomena, are particularly susceptible to bias because they lack a handful of human elements; in the context of facial recognition software, Turaga sees a litany of potential bias concerns.
This type of software, which is used in settings ranging from law enforcement to housing decisions, requires a list of labeled faces, leaving room for discrimination. Researchers must confront AI with tough questions to ensure this technological prejudice doesn't sustain itself and proliferate.
"Where did you get the database from? Who labeled it? What races are represented? What skin tones are represented? What body types are represented? What did you assume in the data set that is representative of the world?" Turaga asked. "So that's where a lot of the bias of datasets creeps in to begin with. We don't really have a strong, solid way of ensuring that any data sets we acquire to train a system are free of bias."
The field of artificial intelligence has begun to acknowledge that representational bias is prevalent in data, Turaga said, but there remains disagreement on how to tackle it in a scalable way. Turaga raised the prospect of public oversight committees, which would likely increase transparency in numerous facets of artificial intelligence, along with stronger government regulation of corporate entities that weaponize data collection for their own gain.
Despite the simplicity of regulatory and oversight proposals, the road toward cohesive ethical standards is long and winding.
Equality versus equity
Unlike journalists, who have standards for employing emerging technologies, researchers in artificial intelligence and robot teaming have no established framework for approaching ethical dilemmas, Huang said, though she added some regulatory code will emerge eventually.
As it stands now, lawmakers rely on ethics researchers to create case-by-case laws or guidelines for companies engaged in this work. But because artificial intelligence and robot teaming are relatively niche and constantly evolving, it is difficult to craft legislation encompassing all the ethical questions plaguing the field. Proposed ethical paradigms, both utilitarian and egalitarian, present their own problems, Turaga said.
"The greatest good for the greatest number of people can often mean marginalizing the minority because, 'hey, I can't inconvenience the majority,'" they said. "The other ethical system is quite the opposite, which is, 'everybody needs to be treated equally.' That brings in its own issues, which is that everybody can’t be treated equally.
"Equality is flawed. Now people aren't talking about equality so much as equity."
Reach the reporter at ellefsonsam@gmail.com and follow @samtellefson on Twitter.
Sam Ellefson is the Editor of State Press Magazine, leading a team of writers, editors and designers in creating four print issues each semester. Sam is a senior getting dual degrees in journalism and film studies and is pursuing an accelerated master's in mass communication at ASU.