
Autonomous weapons and racist computers

The recent tech governance conference at ASU was a three-day package of eclectic diamonds in the rough and an anti-tech conspiracy theorist’s worst nightmare



The Sixth Annual Conference on Governance of Emerging Technologies & Science, held at the Sandra Day O’Connor College of Law from Wednesday to Friday last week, sounds less interesting to the layman than it was. Beyond some shaky public speaking and awkward Q&A sessions, the ideas dealt with at the conference were as varied and futuristic as any episode of "Black Mirror." 

The three days were led by policy and STEM researchers, government officials, lawyers and big tech corporate actors — a mix that, in the abstract, sounds as tranquil as a cocktail of chamomile and ZzzQuil. But the talks were anything but boring. 

The first day kicked off with a keynote speech by Larry Downes, a tech industry analyst and best-selling author of books with such unusually forceful names as "Unleashing the Killer App: Digital Strategies for Market Dominance" and "Big Bang Disruption: Strategy in the Age of Devastating Innovation." 

His talk, entitled “Eight Simple Rules for Regulating My Disruptive Innovation,” introduced the theme that ran through the whole conference: how governments should regulate technologies that could spark fast, unpredictable social and economic change. Under that umbrella fell such nightmare fodder as guns that don’t need people to fire them, blatantly racist algorithms and genetically modified mosquitoes.

Believe it or not, that last item is actually good news, which brings to light an important point: some of the technologies discussed were mostly positive but will require adequate governance to maximize their benefits. 

Not all the lectures were about regulating technologies that, by their nature, would probably tend toward the destruction of humankind. But most of them were, and in our sci-fi-steeped culture, the public seems to worry most about rogue artificial intelligence and murderous toaster ovens, as evidenced by our fascination with shows and films about the dark side of technological advancement. So without further introduction, here are some spooky and interesting highlights from the GETS Conference. 

Larry Downes and the inglorious death of MySpace

Downes’s keynote speech on the first day of the conference revolved around eight rules for regulating new technologies. There were well-designed pastel slides and lots of quotes from famous thinkers through history on the evolution of technology. But what stuck out the most was a quote about MySpace. 

Victor Keegan, writing in The Guardian in 2007, penned a now-infamous headline: “Will MySpace ever lose its monopoly?” At the time, the question seemed a valid one, and concerns over MySpace’s dominance of the social media industry were leading some to cast their thoughts toward new antitrust laws that could keep up with what was then an emerging set of technologies: social media platforms. 

It seems a silly thought to most now, as the realm of social media today is a competition among many different companies that serve many different niche markets. Downes’s talk addressed this phenomenon. One of his rules is that policy-makers shouldn’t jump to make new laws for technologies before the market has had a chance to sort itself out naturally. Just a few years after Keegan’s Guardian article, many different social media platforms came to the fore, and MySpace took a legendary dive into obscurity. 

As a parting word at the end of his talk, Downes offered the following caution in good humor: “If you’re thinking about regulating the tech giants, always remember the question: ‘Will MySpace ever lose its monopoly?’”


Carlos Ignacio Gutierrez and the invisible smart guns

Carlos Ignacio Gutierrez is a doctoral fellow at the Pardee RAND Graduate School. RAND, as in RAND Corporation, the nonprofit think tank that does all sorts of military-relevant research. Gutierrez studies the policy problems of artificial intelligence, and he presented some of his ongoing research at the GETS Conference. 

The thrust of his research is to understand how AI policy has been researched in the past. It’s interesting stuff and definitely worth a read, but he brought up one aspect of AI that not many people realize: no one knows if autonomous weapons exist. 

That is not to say some autonomous weapon may have been built in secret and is hiding in a bunker two miles underneath the Siberian ice. Gutierrez was saying that we already have weapons systems built that may be autonomous, but no one can figure out whether they actually are. 

The question of autonomy seems to revolve around whether a weapons system that picks its own target counts as autonomous. So if someone invented a drone that could be programmed to find a person stealing Milk Duds from a Circle K, selectively target that individual, then fire a dozen missiles right on top of their unsuspecting head, we would have good reason to say such a weapon is autonomous. 

But the problem seems to be that it still needs to receive programming from a human to be able to do all those things. Autonomous? Who knows? Apparently not the U.S. military, at least. 

“It doesn’t matter if (autonomous weapons systems) exist or not, because if terrorists used them, then the U.S. wouldn’t want to be behind,” Gutierrez said.  

Antony Haynes and downloadable racism

The last thing you’d expect a robot to be is racist, but apparently that could be the future (or present) Americans are looking at. 

Antony Haynes, associate dean at Albany Law School, presented some striking data about how certain algorithms, when programmed by a biased individual, can produce blatantly biased outputs. One famous example is Google’s auto-tagging feature, which tagged a picture of two African-American people with the phrase “Gorillas.” 

Now think about that imaginary drone and the Milk Duds thief mentioned above. How confident are you in an autonomous weapons system’s version of an “auto-tagging” feature if Google’s thinks black people are apes?

The problem of biased algorithms is about more than offensive and rude outputs. These kinds of algorithms are already producing real consequences. 

Haynes mentioned the use of risk assessment algorithms that spit out a score gauging a convicted criminal’s likelihood of committing another crime in the future. The problem? These algorithms, despite their creators’ best intentions, are wildly inaccurate. 

Worse, they consistently give black offenders higher scores than the actual data on subsequent offenses supports. In other words, the algorithms predict that black people will re-offend much more frequently than white people, and the data shows they don’t.

Read more: Machine Bias
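
For the curious, here is a minimal sketch, in Python, of the kind of false-positive-rate comparison behind that claim. The records and the “high risk” cutoff below are made up for illustration; the question asked is the real one: in each group, how often were people who never re-offended still labeled high risk?

from collections import defaultdict

# Hypothetical records for illustration only: (group, risk score, re-offended within follow-up period)
records = [
    ("black", 8, False), ("black", 3, False), ("black", 9, True),
    ("white", 2, False), ("white", 7, True), ("white", 4, False),
]

HIGH_RISK = 7  # assumed score cutoff for a "high risk" label

def false_positive_rates(rows):
    """Share of non-re-offenders in each group who were still labeled high risk."""
    flagged = defaultdict(int)  # non-re-offenders scored at or above the cutoff
    total = defaultdict(int)    # all non-re-offenders
    for group, score, reoffended in rows:
        if not reoffended:
            total[group] += 1
            if score >= HIGH_RISK:
                flagged[group] += 1
    return {g: flagged[g] / total[g] for g in total if total[g]}

for group, rate in false_positive_rates(records).items():
    print(f"{group}: {rate:.0%} of people who did not re-offend were labeled high risk")

A persistent gap between those rates, computed on real data rather than toy rows, is the disparity ProPublica reported and the kind of skew Haynes was describing.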

So, racist algorithm? That may be a stretch, as first we’d have to decide whether an unconscious machine has the capacity to be racist. But a tool that could contribute to ongoing racism in society? Haynes says definitely. 

The consequence, according to Haynes, is a tool used in the legal system that judges people not on the basis of their character or actions, but on statistical patterns drawn from other people’s behavior. 

“The software is embedding a value that we think of as opposite to Anglo-American jurisprudence,” Haynes said. 


Reach the reporter at parker.shea@asu.edu or follow @laconicshamanic on Twitter.
