Most of us have seen at least a couple of episodes of "Westworld" by now, and pretty much everyone else has at least heard of it or has a general idea of the show’s plot.
If you’re anything like me, you were hardly surprised by the theme: robots birthed into consciousness, wreaking havoc. That’s because Westworld’s premise isn’t very innovative. It’s well done, and the first season’s time-fractured narrative is some of the best television writing I’ve ever seen, but it’d be hard to argue that anything about the show’s philosophical takeaway is novel. Much of the plot comes from the eponymous 1973 film, written and directed by techno-thriller novelist Michael Crichton (also the author of "Jurassic Park").
Our beloved show taps a theme that has been brewing in the culture for a while. It’s hard to say when we all started dreaming of living robots. Personally, I don’t remember when the notion first entered my head with any force. It could’ve been a conspiratorially minded friend’s rant. Maybe it was a comic book. A podcast. A reading assignment in English class. I’m not sure, but the idea has had on-screen precedent at least since the Maschinenmensch (literally, “machine-human”) of the 1927 German film "Metropolis".
Sometime between the Weimar era and now, our popular vision of the robot future became warped by a swelling fear of the unknown. Returning to HBO’s latest take on the subject, Westworld seems familiar to us because it reflects this fear. As the technology itself has grown more advanced, self-styled AI pundits have been slipping us cues to be afraid. These cues are subtle, almost drowned out by the noise of our very real, very dysfunctional post-9/11 politics. Maybe it would be better to describe them as a vitamin after dinner: we get filled up on our ration of fear for the immediate "now," but if we make it through this mess, we need a good deal of long-acting fear for the distant "later," when sentient robots will try to terminate us. This is the stuff of good health, I guess.
Elon Musk’s Twitter feed may have you believing Skynet arrives tomorrow, but even a cursory dip into the state of the research reveals that the time frame we should be talking about is generational, bordering on will-never-happen. Sentient robots are difficult for us even to hypothesize, given that we don’t yet know what sentience, or consciousness, actually is.
This is where I picked up the conversation with ASU professor and AI researcher Subbarao Kambhampati, who believes the media hype is overblown. I suggested that tech fear-mongers like Musk could be damaging the state of the research. Kambhampati replied that he couldn’t speak to that, but he agreed that Musk’s Twitter antics hurt public opinion of artificial intelligence research, the vast majority of which has more to do with getting computers to do simple tasks, like analyzing pictures or listening to music. The really complex stuff that makes humans self-aware is still in the dream phase. Or, I guess, we can call it “the Hollywood phase.”
“The reason humans have large brains is not so that they can run away from the tigers on the savanna,” Kambhampati said. “But, rather, to deal with each other. … My argument is that robots killing everybody in their way is a much simpler, much less interesting problem than working together with humans.”
Kambhampati, who professionally makes computers do cool things, brings out a point here that escapes some of the anti-AI crowd. Cooperation is probably harder than the instant termination of all life. If robots become conscious beings, somewhat like humans, why assume they’d prefer mass destruction in its most brute, simplistic form? And if they did, what would that say about the status of their consciousness?
This by no means settles the argument. We can return to Westworld for a counterpoint. It is not that Dolores wants to kill simply for the sake of killing. Murder is not her prime imperative. Instead, it is the visceral memories of decades of torture and varied abuse that drive her to merciless destruction. And, really, what better reason could a conscious being have for throwing cooperation completely out the window? In a very human way, Dolores seems, to the sensitive viewer, to be justified. Or at least close to it.
“The ability to understand mental states of others can be used both for cooperation and for manipulation,” Kambhampati said. “Removing the capability (to model others’ mental states) is sort of a hamfisted way of dealing with the problem.”
Kambhampati’s point is actually dealt with in the show, through the character of Bernard. Bernard is accustomed to cooperating with, and (usually) being treated pretty well by, humans, so he continues to cooperate even after most of the other hosts have gone rogue or murderous, and even after he himself has made strides toward full sentience. If you haven’t seen that far into the show yet, disregard this comparison, and sorry for the spoiler.
Suffice it to say, any intelligence that can model another’s mental states (i.e., deduce that one is angry, tired, depressed, distracted, etc.) can either manipulate or cooperate. Which one it would do in practice is another matter. I suppose that would depend on its temperament and its relationship with people, as in the case of Dolores and Bernard, who took totally different paths with their newfound awareness.
Regents Professor Gary Marchant is a researcher studying the legal and ethical problems bound up in artificial intelligence. In a rare combination, Marchant holds both a JD and a PhD in genetics — the former from Harvard, the latter from the University of British Columbia. He’s spent most of his academic career prodigiously churning out research at the intersection of law, science and emerging technologies. Lately, he’s crossed paths with Kambhampati on projects related to the ethics of AI.
Marchant points out that we’ve already seen some pessimistic projections turn out to be false, even with the early, less advanced forms of AI that industry has adopted.
“I think (the media coverage) is overblown in the short term,” Marchant said. “There’s been a lot of stories in the last five or six years about how we’re going to be losing a lot of jobs very quickly, and that has not happened. We’re almost at full employment, even though we’ve had a pretty rapid uptake of autonomous systems in the past five years.”
But Marchant thinks problems are coming soon.
The issue of conscious robots is, as Kambhampati told me several times, best left for the distant future, since we aren’t even close yet. More realistic forms of AI, on the other hand, pose serious social challenges in the years and decades to come. Not least of these is job loss due to automation. Though we haven’t yet seen the problems many were predicting five years ago, Marchant brought up long-haul trucking as a career that may be unavailable to humans in the very near future.
Autonomous vehicle technologies, though they’ve hit a few bumps in the road, are proving promising in many aspects of transportation. Marchant himself has argued they are almost certainly safer than human-driven cars. But the downside to the technology being so effective is that industry, given the chance, will scoop it up with no questions asked and little consideration for long-term social impact. That could leave millions of truck drivers in the U.S. without work or prospects.
The public is anxious about these coming developments. New York businessman Andrew Yang, for example, is running for president in 2020 on a platform revolving around job loss due to automation. That peculiarity landed Yang an in-depth profile in The New York Times in February.
The story ended with a quote by Yang: "We have five to 10 years before truckers lose their jobs, and all hell breaks loose."
With such tangible social ills on the near horizon, it seems quite silly to ruminate on a robot-actualized apocalypse we can't coherently describe yet. Still, to give the anti-AI talking heads their fair shake, I asked Marchant what he thought of Saudi Arabia granting a humanoid robot, Sophia, full citizenship.
“I think the gamut in the long term will resemble more and more people in terms of how they react and respond,” he said. “But they are machines. They will not be living organisms, so I don’t think they will have rights for the next hundred years. … But you never know! We give corporations rights in this country.”
Marchant succinctly captures the complex afflicting many loud anti-tech voices in public today. Our big social problem isn’t the distant potential for evil in a not-yet-existent technology, a Dolores, if you will. The problem is our tendency to seek out fabricated evils that can’t really be solved anyway, so that we can avoid dealing with the ones pressing down on us right now.
When we get realistic about our situation, Westworld starts to look more like a symptom and less like a sci-fi prophecy.
Reach the reporter at parker.shea@asu.edu or follow @laconicshamanic on Twitter.
Like The State Press on Facebook and follow @statepress on Twitter.