
Try to imagine the last 11 minutes of Lion Air Flight 610 in October 2018. The plane is a new machine, Boeing’s sleek and intelligent 737 Max 8, fitted with an advanced electronic brain. After takeoff, this cyberpilot senses that something is wrong with the angle of ascent and starts to force the jetliner down.

A tug of war follows between humans and computer, at 450 mph: the pilots trying to pull out of the downward plunge, the automated system wresting control back from them. The bot wins. The jetliner crashes into the Java Sea. All 189 aboard are killed.

And here’s the most agonizing part: The killer was supposed to save lives. It was a smart computer designed to protect a gravity-defiance machine from error. It lacked judgment and intuition, precisely because those human traits can sometimes be fatal in guiding an aerodynamic tube through the sky.

We still don’t know exactly why the pilots of that fatal flight couldn’t disable the smart system and return to manual control. It appears that faulty sensor readings triggered the downward spiral. A report by the Federal Aviation Administration in 2013 found that 60 percent of accidents over a decade were linked to confusion between pilots and automated systems.

But it’s not too much of a reach to see Flight 610 as representative of the hinge in history we’ve arrived at — with the bots, the artificial intelligence and the social media algorithms now shaping the fate of humanity at a startling pace.

Like the correction system in the 737, these inventions are designed to make life easier and safer, or at least more profitable for their owners. And they do, for the most part. The overall idea is to outsource certain human functions, the drudgery and the tasks prone to faulty judgment, while retaining master control. The question is: At what point do we lose control, and the creations take over? How about now?

It was exactly 200 years ago that Mary Shelley published a story of a monster who is still very much with us. Her book “Frankenstein” is about the consequences of man playing God. 

Shelley’s concerns were raised at the peak of the Industrial Revolution, when the Western world was transformed from sleepy agricultural societies into a frenetic age of factories, machines and overcrowded cities. 

Today we are close to creating a human brain inside a computer — an entirely new species. In his book “Sapiens,” Yuval Noah Harari takes us through a mostly upbeat tour of humanity since the cognitive revolution of 70,000 years ago. At the end of the book — our time — he warns about the new being, the cyborg now taking shape in a lab near you.

The CEO of Microsoft, Satya Nadella, hit a similar cautionary note at the company’s recent annual shareholder meeting. Big Tech, he said, should be asking “not what computers can do, but what they should do.”

It’s the “can do” part that should scare you. 

Driverless cars will soon be available for ride-sharing in the United States. If they can reduce the carnage on the roads — more than 70 million people killed and 4 billion injured worldwide since the dawn of the auto age — this will be a good thing. Except that this year a bot-car killed a woman in a crosswalk in Arizona, and others have been slower than humans to react. 

It’s not Luddite to see the be-careful-what-you-wish-for lesson from Mary Shelley’s era to our own, at the cusp of an age of technological totalitarianism. Nor is it Luddite to ask for more screening, more ethical considerations, more projections of what can go wrong, as we surrender judgment, reason and oversight to our soulless creations.

As haunting as those final moments inside the cockpit of Flight 610 were, it’s equally haunting to grasp the full meaning of what happened: The system overrode the humans and killed everyone. Our invention. Our folly.


Timothy Egan, based in the Pacific Northwest, writes a column for the New York Times.
