
State Press Play: Tech's ethical quandary

Crafting ethical paradigms for emerging technology proves to be arduous

"State Press Play." Illustration published on Thursday, Feb. 11, 2021.

New technology inevitably presents new challenges. State Press Magazine reporter Sam Ellefson sits down with Pavan Turaga, associate professor and director in ASU's School of Arts, Media and Engineering, to discuss the ethics of artificial intelligence, machine learning and emerging technology at large.


SAM ELLEFSON

As our society continues to grow and develop, so does our technology. Along with that development come new questions and debates pertaining to the ethics and inherent biases of these new technologies. Augmented reality, virtual reality and artificial intelligence are some forms of emerging technology that carry significant ethical baggage, and public skepticism toward them has become increasingly commonplace as they become more embedded in our daily lives.

As the increasing ubiquity of this tech presents new ethical and social concerns, developers and researchers have taken both a proactive and reactive approach. In the field of immersive media journalism, ethical standards for employing emerging technologies largely coincide with long-held journalistic principles. Truth-telling, accuracy and avoiding misrepresentation are key when replicating a scene or person with virtual reality tools. Some reporters in the field even have access to handbooks with guiding questions to be considered when creating their work.

In the fields of machine learning, robot teaming, surveillance technology and more, universally established ethical guidelines are harder to come by. Before even discussing the application of ethical frameworks, researchers and developers have to contend with the presence of severe bias in artificial intelligence.

Pavan Turaga, the director of ASU's School of Arts, Media and Engineering, posits that facial recognition software, which has its roots in datasets, opens up room for a litany of potential bias issues. 

Thank you again for meeting me for this conversation, Pavan. To start, I wanted to ask if you could tell me how datasets present certain biases. Are biases inherently bad? Must they be done away with, or can they act as a necessity of sorts?

PAVAN TURAGA

Thank you for the question, Sam. It's a pleasure to be here and to ruminate over these questions with you. The question of datasets and bias is hotly debated at this point in time. And the acknowledgement across the board is that nearly every dataset has certain biases, and the sources through which those biases creep in are several, ranging from whom you choose to sample the dataset from to where it's coming from.

And nearly every dataset, even at the stage of collecting information, presents opportunities for the introduction of bias, simply in the way a question is framed or in what is considered a valid entry. And there are all sorts of things that are filtered through post-processing methods, which also introduce biases of different kinds.

So can they be done away with? I don't know, I'm not 100% sure that they can ever be done away with. 

You have to go down to the root of asking what the meaning of bias is to really unpack that question. As a very simple example, say we look at pictures of people for a surveillance application, and we are sampling people's identities by capturing a picture of their faces. The source of bias could be: are all races represented in the dataset?

When you ask such a question, you have to be super careful about what you mean by represented. Should they be represented in equal proportions, or should they be represented in the proportion in which they exist in a society, in a community? So, what is true representation? That question is also very closely connected to how democracies work, in a way. I mean, we talk of representation, but there are varying definitions as to what representation means in different countries in the world.

The U.S. kind of democracy is slightly different, right? I mean, representatives for each state are not directly in proportion to the state's population; there is some balancing, at least for smaller states, for example. But that's not the case for other democracies like India, where I come from. So it's interesting and tricky to talk of what fair representation across categories is.

The other challenge comes from enumerating the categories themselves. What are the categories against which you're checking for bias? When you say something like race, as a matter of speaking, you know, we can take a form, let's say one the TSA might use, which says tick off your race and lists a few key categories: Black, Asian, Hispanic, Indigenous, you know, Caucasian, whatnot. But are there other races that are not on the form? Are there mixed races that are not included?

So, how you enumerate the categories in the first place is itself problematic. You can use some categorization to show that maybe there is some bias in the system by looking at the outcomes associated with each of the categories, but that's an indication that bias exists, not necessarily a way to solve for it.

Can you do away with it? That is the hope of people. 

So, there are really two ethical frameworks at play. One of the foundations for designing, let's say, data-driven methods is optimization; the technical term people would use is that you're optimizing for mean squared error in some representation. To give it a slightly loose interpretation, mean squared error roughly corresponds to the greatest benefit for the greatest number of people, which means you can ignore certain minorities and still ensure the greatest benefit for the greatest number of people. So, we know the problems with that way of thinking.

That is at the foundation of a lot of applications of machine learning, which work with sensory data like images, speech and time series. How do you convert raw sensory data into a linguistic, symbolic form that can be interpreted as race or gender or identity?

That translation is largely driven by these measures, which mathematically are just written as mean squared errors of different kinds, where there is certainly a kind of error that can ignore minority classes if there are any.

Then there is the other framework, which is about fundamental rights, which people talk about. However, even when you talk of fundamental rights, the phrasing is written not in terms of low-level data, but in terms of high-level symbolic concepts, like the right to marry anyone of your choice, assuming that they are willing parties, or the right to religious freedom. These are all highly symbolic concepts, which don't have an easy mapping to something as simple as a picture or a speech signal or a time series from your body, from your wearable device, for example.

So, the language of fundamental rights is symbolic, the language of machine learning is mathematical, and mean squared error is the best that they've come up with. And those two are not really compatible. 

So when we ask, does bias exist? Yes. When you optimize for mean squared error, it will certainly bias you in favor of majority things, whatever that majority concept is in the dataset it's optimized for.
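
To make that concrete, here is a minimal Python sketch (an editorial illustration, not part of the interview, using a made-up imbalanced dataset): the single constant prediction that minimizes mean squared error lands almost entirely on the majority group, so the average error looks small even though the error on the minority samples is far worse.

```python
# Editorial sketch: minimizing average (mean squared) error on an imbalanced,
# made-up dataset rewards a prediction that favors the majority group.
import numpy as np

# Hypothetical data: 950 samples from a "majority" group with target 1.0,
# 50 samples from a "minority" group with target 0.0.
targets = np.concatenate([np.ones(950), np.zeros(50)])

# The constant prediction that minimizes mean squared error is simply the mean.
best_constant = targets.mean()                        # 0.95, pulled toward the majority
overall_mse = np.mean((targets - best_constant) ** 2)
minority_mse = np.mean((targets[950:] - best_constant) ** 2)

print(f"constant prediction: {best_constant:.2f}")
print(f"overall MSE:  {overall_mse:.3f}")             # about 0.05, looks fine "on average"
print(f"minority MSE: {minority_mse:.3f}")            # about 0.90, roughly 19 times worse
```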

So people would say that to fix it, you fix the dataset sampling, which is to make it more equal. But again, there are disagreements as to what it means to make things equal, down to the question of which categories need to be considered in this endeavor in the first place.

If we all agree on the categories, let's say the races that we are trying to accommodate, do we know that we are not making it worse for some other unaccounted-for category that we have not yet seen, or a hybrid, fluid category that we don't know how to express? So all these questions are at the forefront of inquiry and there aren't any easy answers, but to the extent of demonstrating that bias exists, there are some approaches at this point.
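
As a rough illustration of that "fix the sampling" idea and its limits, here is a short Python sketch (editorial, with invented category labels): oversampling equalizes only the categories someone chose to enumerate, and a category left off the list simply disappears from the "balanced" dataset.

```python
# Editorial sketch: rebalancing can only equalize the categories that were enumerated.
import random
from collections import Counter

random.seed(0)

records = ["A"] * 900 + ["B"] * 80 + ["C"] * 20   # "C" stands in for a category the form never listed
listed_categories = ["A", "B"]                    # the categories the curators decided to balance

# Oversample each listed category up to the size of the largest one.
target = max(Counter(r for r in records if r in listed_categories).values())
balanced = []
for cat in listed_categories:
    pool = [r for r in records if r == cat]
    balanced += random.choices(pool, k=target)

print(Counter(balanced))   # A and B are now equal; C has vanished from the "balanced" data
```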

It's very noncontroversial at this point in time to say yes, bias exists in datasets, bias exists in a lot of the models that convert data into other forms of data, and we should be very careful about when and how we use them.

SAM ELLEFSON

That actually leads me to my next question. So, public opinion of controversial AI endeavors, like the surveillance technology you talked about and self-driving cars, has become increasingly contentious in the news media. What ethical or bias concerns do these two facets of artificial intelligence hold, and what can be done to mitigate or grapple with them?

PAVAN TURAGA

I mean, surveillance, as we kind of hit upon, can refer to many things. One is, of course, recognizing the identity of a person. There are many other things: recognizing the activity someone is doing, for instance. Surveillance is also about recognizing anomalous activities, abnormal behaviors, parking lot cameras that look for people stealing things from someone's car, for example. Identity recognition is one area of inquiry.

So all of these things have the same underlying fallacy. In the case of anomalous activity recognition, the modeling paradigm starts with the assumption that there are two categories of activities all humans engage in: safe and unsafe. And there is no midway.

And then the data collection problem is, give me examples of safe things and give me examples of unsafe things. So there is nearly no way you can provide an example of all possible safe activities and all possible unsafe activities in a parking lot. But people assume that that's a given and then they'll construct a model that separates the two classes as a basis for enforcing whatever, safety and security.

It's not difficult to break these systems, because human activity by itself is not made of classes and categories. It's just a way of navigating the world. You know, it's continuous in space and time, and there is never an easy way to say here is where an activity begins and there is where it ends, and this is normal, that's abnormal, except if we talk in symbolic terms.
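
Here is a tiny Python sketch of that two-class assumption (an editorial illustration with invented feature values, not a real surveillance model): a classifier trained only on "safe" and "unsafe" examples has no "none of the above" option, so an activity unlike anything in its training data still gets forced into one of the two bins.

```python
# Editorial sketch: a two-class (safe vs. unsafe) model forces every input into a bin.
import numpy as np

# Pretend each activity is summarized by two hand-made features.
safe_examples = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.25]])
unsafe_examples = np.array([[0.9, 0.8], [0.85, 0.95], [0.8, 0.9]])

centroids = {
    "safe": safe_examples.mean(axis=0),
    "unsafe": unsafe_examples.mean(axis=0),
}

def classify(activity):
    """Nearest-centroid decision: there is no 'unknown' or 'neither' option."""
    distances = {label: np.linalg.norm(activity - c) for label, c in centroids.items()}
    return min(distances, key=distances.get)

# An activity resembling neither training class still gets a confident label.
never_seen_before = np.array([0.5, 0.5])
print(classify(never_seen_before))   # prints "safe" or "unsafe", never "unknown"
```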

We cannot really identify and enumerate normal or abnormal activities in the sensory sense, meaning I cannot give you videos of all possible manifestations of normal and abnormal activities, but maybe I can try to give you a symbolic description of what maybe an abnormal activity might be. And the symbols I might use to describe it would be linguistic symbols. Like yeah, breaking a window probably is problematic, but I've specified that in linguistic terms. I have not shown you a video of it.

And if I tried to show you a video of it, the immediate next question is, well, there can be zillions of videos of people breaking into a car, which would look very different from the perspective of the colors and the textures and the lighting and the viewpoint, that it's just impossible for me to give you all possible manifestations of breaking a window that you can work off of.

So ML is at that level, at the level of the video processing stage: can you take a video and recognize the breaking of a window without any error, under all possible lighting conditions, under all possible viewing angles of a camera, under all possible body shapes and sizes of the human doing it? And the answer is no, it can't.

But then the other side, which asks, can't we just describe it in simple terms, like breaking a window, assumes that this low-level problem has been tackled by ML to a great degree of accuracy, and it hasn't.

So the ethical implications show up when people talk about the applicability of these technologies, and self-driving is another example. Isn't it so simple to say don't run into a person, right? I mean, I can just say it out loud. But the challenge for ML is to take that symbolic statement and find a way to ground it in the sensory data it's seeing, whether through lidar or video or whatnot, and define what a person means in a stable, robust way without making an error. That's where the gap is. The sensory data processing is problematic at this time.

SAM ELLEFSON

You touched upon this a little bit earlier as well. I know that ethical frameworks are habitually debated in your field. Can you tell me about the benefits and pitfalls of utilitarian versus egalitarian approaches and other major proposed frameworks, if there are any?

PAVAN TURAGA

I mean, people generally recognize that the utilitarian approach is problematic when it's applied to humans. However, much of the machine learning framework that is popular now really had its roots, nearly a century ago, in a field called biometrics, which is a very different use of the word from how it's used now.

A century ago, biometrics meant measuring the heights and lengths of dinosaur bones, for example, like fossils, and trying to identify which species of animal you dug up from the ground by measuring some things about the bones. Or taking a flower or a leaf and trying to categorize it into one of the known species in the world.

That is what biometrics meant, and much of machine learning owes its roots to that endeavor, where utilitarianism did not seem problematic. If you are getting the species of a dinosaur that died millions of years ago somewhat wrong, OK. No harm done. Right? I mean, if it's a new species of dinosaur that got misclassified, OK, no harm done. Eventually science will find out, and if there are more examples, we'll figure it out.

But it felt extremely pragmatic to minimize what I refer to as the mean squared error and build things off of that. Then fast forward a hundred years: the same techniques have matured. The same techniques are being scaled to bigger and bigger datasets.

And now, suddenly, by biometrics we mean something very different. We mean measurements of human samples and human identity, and everything there is seen to be on a spectrum. So many human attributes are, I mean, we recognize that everything is on a spectrum: gender is on a spectrum, race is on a spectrum, behavior is on a spectrum. What I feel like today is not what I feel like tomorrow.

We are not a fossil that's been dead for a few million years that was dug up, whose attributes have been fixed in time. So we are a living, dynamic being, which doesn't occupy any category in a strict sense, but that machinery is being applied. 

So, what do we do with this? I mean, it seems like a path of least resistance at this time. You can't really invent new mathematics to account for dynamically changing entities, which defy categories, and yet be able to derive actionable intelligence from measurements.

I mean, that seems like a very interesting question to ask, but engineering in general proceeds by applying known methods to new problem spaces, and that's where we are at this time in this field. The understanding of fundamental rights, that is, you know, the framework that theoreticians in ethics want us to think about. That's what the suggestion is.

But the gap is this: If you ask the question from the perspective of fundamental rights, should video surveillance even be built, the answer might be probably not, if you see the potential for harm to privacy and for misuse. But big tech is not interested in that question. Big tech is motivated by profit, and for them, the question is not whether it should be built; it will be built. The question then is, can you provide guardrails around it?

So the question of choosing a framework is an academic question, but the most impactful work would happen if the frameworks were kind of seen as a guiding star, as opposed to an absolute that has to be reached, because there are several ethical frameworks. I mean, there is also the ethics of Immanuel Kant, who talks about duty, and, you know, there are ethics that come from religious beliefs, which have their own different rules, and every culture has its own ethics spelled out from different bases. So there will never be a commonly agreed-upon ethical basis.

We can certainly ask: Can AI, which is developed in a lab in some country, when it's deployed in another country, be informed by the ethics of that local country? We don't know. I mean, that would be the ultimate question, which would then mean AI has to be human-like, ultimately, and blend into the society it's deployed in.

So, the ethical framework question is difficult because different societies have different ethical frameworks. It's very tempting to impose the ethics of the most dominant country in the world on every other country in the world. That will probably not work out in the long run. But to the extent we say that, rather than find the right framework, can we help big tech put the right guardrails in place in contextually meaningful ways, ways that make sense for a very specific application and a very specific community? I think that's a productive way to go forward.

At this time, people are looking for the solution. I don't think there is a grand solution. The solutions will be built in the context of specific deployments, communities and people.

SAM ELLEFSON

You mentioned this a little bit earlier as well, but can equality be actualized in the effort to apply ethical frameworks to new technologies, specifically machine learning? Is that too idealistic? And how does the goal of equality compare and contrast with the idea of equity?

PAVAN TURAGA

So the level-zero definition that we could go off of is that equality would mean AI treats everybody in exactly the same way, no matter who you are, right? Which is actually how it is at this time. I mean, face recognition will process your information in exactly the same way, no matter who you are, but the outcomes it might spit out could be very different for different categories of people.

Equity would mean you have to take into account the context in which the person exists, to provide an answer that is different depending on who you are.

It depends on whether AI has access to what we refer to as socioeconomic data for a specific person. Let's say college admissions, right. Within the realm of college admissions, what are we assuming as data inputs to the AI system? If it's only GPA, and if it's only de-identified, anonymous information, then there is no way any method people use can create an equitable outcome, because it doesn't know who it's being applied to. It will treat all the data the same, and the biases will just creep in.

If you want to create equitable AI, you have to provide these symbolic inputs: OK, this is a person with this context, this is the socioeconomic indicator, this is the racial, you know, label or whatever it is. How we adjust the outcomes to promote equity can then be dealt with, but it cannot be gleaned automatically.

That's where I feel some of the assumptions are: that AI can somehow glean the need for adjustments from the raw data, when it may actually have to be told explicitly what those adjustments should be and what the tags and labels are that trigger those adjustments. In which case I think, yes, it's possible to build equitable AI systems if we go that additional step.
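
A minimal Python sketch of that distinction (editorial illustration; the scores, threshold and adjustment values are all invented): an "equal" rule is context-blind, while an "equitable" rule can only shift its decision if a context tag is explicitly supplied alongside the raw score.

```python
# Editorial sketch: equity requires an explicitly supplied context tag; it cannot be
# gleaned from an anonymous score alone. All values here are invented.
GROUP_ADJUSTMENTS = {
    "under_resourced_school": 0.5,   # hypothetical policy set by people, not learned
    "well_resourced_school": 0.0,
}

def equal_treatment(score: float) -> bool:
    """Equality: the same rule for everyone, context-blind."""
    return score >= 3.5

def equitable_treatment(score: float, context: str) -> bool:
    """Equity: the decision shifts according to an explicitly provided context tag."""
    return score + GROUP_ADJUSTMENTS[context] >= 3.5

print(equal_treatment(3.2))                                  # False for everyone
print(equitable_treatment(3.2, "under_resourced_school"))    # True once context is supplied
```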

However, it raises data ethics questions. This is all sensitive data. This is all supposed to be private data, which is now being used for equity considerations. So, it intersects with data ethics and privacy ethics. But setting those questions aside, it is possible to create equitable AI systems.

SAM ELLEFSON

Lastly, I wanted to ask, this may be a little idealistic as well, but what would machine learning look like without categories like you were mentioning earlier? 

PAVAN TURAGA

It is a question on my mind, and there are a few ideas which have been floating around and they're not always, you know, new or they're not mine. I'm not claiming them to be mine. 

To the extent that, you know, machine learning can be seen as an endeavor to provide functionally interesting responses to a query. Let's say we want to talk about AI as an agent, which is indistinguishable from another human, the Turing Test. Let's say something that passes that test.

It would mean it would look and feel human. I could interact with this AI agent as if they were a human. In that situation, categories don't exist. As humans, we definitely have a way of seeing people as multidimensional beings and not pigeonholing people into categories all the time, and also of maintaining a history of past interactions with people that informs us. And we are contextually aware: we know a lot when we are engaging in this conversation; we talked about what I just did before coming in here and what you did before coming in here.

So there's a lot of context around human interactions. It's not just based off of, hey, in this moment, here's your question and here's an answer that's independent of the context that precedes it. So the context of humanity, very broadly construed, would be a way to move away from understanding things as categories, which means the math is not the same. The math is going to be super high-dimensional statistics and dynamical ways of thinking about phenomena, as opposed to a snapshot way of thinking about phenomena.

And I don't know. I mean, the tech answer would be, oh, if there are more categories, we can create more categories. You know, that is what big tech will say, but there are never going to be enough categories to explain any continuous spectrum, gender identity being one, mixed-race identity being another. A lot of things about how we feel about things can never fully be categorized into bins, nor can they be objectively measured in repeatable ways.

Human emotion is one of those. How many human emotions exist? I mean, people will say, oh, there are six basic emotions and they'll try and categorize it. But there's enough evidence that says we feel more than six emotions and we have more than six ways of manifesting it. And it gets finer and finer from there. Most of these endeavors end up with finer and finer categories to the extent that it becomes either impossible to train these systems, or it just becomes an exercise in nitpicking details, as opposed to seeing the fuller context of what is unfolding in front of you. 

So conversation, interaction, being able to feel like you can be who you are, without feeling that you have to play a game, that if you're interacting with an agent or putting your data into a system, you have to somehow filter it to get a favorable outcome: that would go away if we stopped thinking in categories. I just don't know how, though.

I mean, how do we make people feel comfortable in who they are, in their fullest identity, in their fullest manifestation, in their fullest, truest self? That is the big question. And I don't think the way forward is to break humanity into smaller and smaller categories to explain that.

SAM ELLEFSON

To read more about ethics and new tech, grab a copy of State Press Magazine: The Chrysalis Issue on Oct. 6, or read it online at statepress.com/section/magazine. For The State Press, I'm Sam Ellefson.


Listen to State Press Play on Spotify.

Reach the reporter at stellefs@asu.edu and follow @samtellefson on Twitter. 

Like State Press Magazine on Facebook and follow @statepressmag on Twitter.

Continue supporting student journalism and donate to The State Press today.


Kate Ourada, Podcast Editor

Kate Ourada is in her 5th semester as the editor of the podcast desk and is doing her best to spread her love of audio journalism. She works in radio as a reporter and board operator. Kate has a passion for creative writing, her cat and making niche playlists for her friends.


Sam Ellefson, Magazine Editor-in-Chief

Sam Ellefson is the Editor of State Press Magazine, leading a team of writers, editors and designers in creating four print issues each semester. Sam is a senior getting dual degrees in journalism and film studies and is pursuing an accelerated master's in mass communication at ASU.





