
All our bright and rosy visions of the future are clouded by the specter of machines running amok and unleashing untold chaos.

As we grow increasingly dependent on technology in the conduct of our daily lives, a recurring theme is ensuring the safety of humans who use it. The most reliable way to do this seems to be to curtail the use of complex systems and to restrict their interactions to the minimum level necessary.

Except … we’ve kind of missed the bus on that one. The New York Times reported modern cars may now come packed with “up to 100 million lines of computer code.” In the same article, Bruce Emaus, the chairman of the Society of Automotive Engineers’ embedded software standards committee, said, “The modern car is a computer on wheels, but it’s more like 30 or more computers on wheels.”

Computers big and small are run by software, and as the number of lines of code that go into the creation of that software increases, so too does the chance of failure via unexpected behavior. Few people know this better than Steve Wozniak, the co-founder of Apple Computer, who had to deal with the multiplier effect of malfunctioning technology.

Wozniak, who owns a Toyota Prius, said his car would “go wild” at times due to a glitch with its cruise-control system. Toyota has had an especially torrid time in the past few weeks, issuing at least three separate recalls that have dented the company’s safety record and bottom line. The latest of these recalls, on the Prius (one of the best-selling hybrid cars ever), seems to be linked to one or more electronics systems on board.

Complex electronic and software components are not restricted to cars alone; they may be found in the toasters and microwaves we start our mornings with and can even effectively control a million-pound Jumbo Jet on a complete trans-Atlantic flight. However, these systems are human artifacts too, just like our buildings and bridges and BlackBerrys.

Such systems involve complex interactions among their components, and multiple points of failure that must be identified and addressed. When a number of such systems are thrown together in a critical application, the stakes are even higher. It is highly unlikely that we can ever predict the behavior of a reasonably large system with certainty.

What we can do, though, is build in redundancies to ensure the highest level of safety possible and also to ensure that obscure glitches are not allowed to multiply into an avalanche of problems. But most important of all, we need to come to terms with the realization that no software or system is ever perfect, and consequently plan ahead to allow the same room for error that we grant fellow humans in our daily lives.

Kartik wants to blame his word processor when he misses deadlines. E-mail him more excuses to give to the editor at kartikt@asu.edu





