Features (Archives)

Oct 13, 2018 | Volume 6-6

The Future of The Trolley Problem is Now

BY Gary Andreassen

My older brother (age 12) was chasing me (age 7) around the house. I think it had something to do with me breaking the arm off one of his G.I. Joes. I vehemently deny the accusation to this day.

I dash out the door, straight for the street. I hear a screech of rubber on asphalt, followed by the thud of metal against concrete. I turn around. A car has crashed into our front neighbour’s cement fence wall. By the thinnest of margins, it had missed running me over.

The driver instinctively knew that he had no time to avoid hitting me even if he braked hard. He made the split-second decision to swerve left and then hit the brakes. He had chosen to pull the metaphorical lever à la the trolley problem.

I wonder though, if he had his wife and kids in the car, would he have made the same decision?

 

The modern trolley lever

Fast forward to this age of artificial intelligence and machine learning, and I’m still wondering – high-tech wondering. Would a self-driving autonomous vehicle (SDAV) carrying five passengers have chosen to kill one stupid seven-year-old kid like me to protect its passengers? Isn’t saving five lives better than saving one? As Starship Enterprise Science Officer Spock utters in his ultimately logical last words in Star Trek II: The Wrath of Khan: “The needs of the many outweigh the needs of the few.”

On the flip side, what if the passenger was an old person who’s about to buy the farm anyway? Would the SDAV have chosen to spare me because I still had my whole life ahead of me?

The scenario iterations are endless. But the real question isn’t what the SDAV would do but rather how it would decide what to do. Would its decision be based on moral grounds or on pure logic? What will be the underlying philosophy determining its algorithm? Will theology be summoned in writing its code, or will we leave it to economics?

 

What’s a trolley anyway?

The earliest version of the Trolley Problem was part of a moral questionnaire given to undergraduates of the University of Wisconsin in 1905 by the American philosopher Frank Chapman Sharp (1866-1943). It has had many iterations since then. The modern form of the problem was first formulated in 1967 by the British philosopher Philippa Foot (1920-2010).

Until the advent of SDAVs, the trolley problem had remained in the realm of philosophy, cultural norms, and social morality. Not anymore. There is a Facebook page called “Trolley problem memes” with more than 240,000 followers. The trolley problem has gone mainstream. It is no longer an obscure thought experiment. In fact, it has now come full circle.

The National Science Foundation, through a $556,000 grant, has commissioned a group of . . . wait for it . . . philosophers to write algorithms for SDAVs based on various ethical theories. For example, utilitarian philosophy holds that all lives have equal moral weight, so an algorithm based on this theory would assign the same value to the passengers of the car as to pedestrians.

The objective of the grant is not to take a stand on which moral theory is right. Instead, the algorithms will allow others to make an informed decision, whether that’s car buyers, manufacturers, or the public in general. Car manufacturers will of course default to preserving the lives of the passengers no matter what the philosophy, as the sketch below illustrates. After all, who would want to buy a driverless car that would choose to kill you, the owner, in a trolley-problem situation?
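To make that concrete, here is a minimal sketch of what such an ethical weighting could look like in code. It is purely illustrative – the Outcome class, the cost function, and the weights are my own made-up stand-ins, not anything from the NSF project or any real SDAV:

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        label: str
        passengers_killed: int
        pedestrians_killed: int

    def moral_cost(o: Outcome, passenger_weight: float = 1.0) -> float:
        # passenger_weight = 1.0 encodes the utilitarian premise that all
        # lives carry equal moral weight; a manufacturer's default might
        # set it higher, biasing the car toward protecting its occupants.
        return passenger_weight * o.passengers_killed + o.pedestrians_killed

    def choose(options: list, passenger_weight: float = 1.0) -> Outcome:
        # Pick whichever outcome carries the lowest moral cost.
        return min(options, key=lambda o: moral_cost(o, passenger_weight))

    # The classic dilemma: swerve and sacrifice the one passenger,
    # or stay the course and hit five pedestrians.
    swerve = Outcome("swerve", passengers_killed=1, pedestrians_killed=0)
    stay = Outcome("stay", passengers_killed=0, pedestrians_killed=5)

    print(choose([swerve, stay]).label)                         # "swerve"
    print(choose([swerve, stay], passenger_weight=10.0).label)  # "stay"

With the weight at 1.0, the car swerves and sacrifices its one passenger to spare the five pedestrians. Crank the passenger weight up to 10 and it stays the course – the manufacturer’s default, in a few lines of arithmetic.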

 

The trolley problem at Fort McMurray

The repercussions of SDAVs are far-reaching and, frankly, hard to predict. The most obvious immediate casualties of SDAVs will of course be truck, bus, and taxi drivers, including your pizza delivery and Uber drivers. What most people don’t realize is that a whole host of other industries built around driving as a profession will likewise be affected.

For example, if trucks become SDAVs, what’ll happen to all those truck-stop businesses like restaurants, hotels, motels, and gas stations? If no one is driving a car, what’ll happen to the car insurance industry?

City revenue from parking tickets, speeding fines, and DUIs will dry up overnight.

A few months back, a Canadian oil industry giant announced that it would lay off a net of 400 employees when it rolls out what it calls its autonomous haulage system (AHS): driverless trucks, specifically those apartment-sized trucks used to haul ore in the oil sands.

This of course led to outcries from all sides. On the one hand, there were comments in the vein of “It is a mistake to think that highly experienced and capable operators will cease to play a role in oil sands extraction.” On the other hand, there were those who said that while this move doesn’t indicate that mass job losses are imminent, it’s something regulators need to prepare for. It won’t suddenly mean that all the transport trucks across Canada are going to be automated…but it will happen, and sooner rather than later.

That this kind of innovation should happen first in Canada shouldn’t be a surprise to Canadians. After all, the University of Toronto and the University of Alberta in Edmonton are the AI centres of the world, the former home to Geoffrey Hinton and the latter to Richard Sutton, two of the people most widely considered the fathers of AI and machine learning.

This is not the first time that automation has caused job losses, in the oil sands or around the globe; it has been happening since long before the Luddites came to be known. And it most certainly won’t be the last. Especially since the angle used to justify automation is safety. The ultimate goal in the oil sands is for the work to be done without risking life or limb of any actual human beings, only robots. The only human being left will be the operator of the robots, totally safe and secure in a control centre hundreds of kilometres away, staring at a computer monitor.

How can anyone argue with that? How can anyone argue with losing 400 jobs for the sake of zero accidents or fatalities? How can anyone argue with the moral rightness of pulling a lever to switch a train track and kill one person instead of five? Of course, automation will also result in very significant cost reductions and offset the loss of revenue from the depressingly low global price of oil, but that’s just icing on the cake.

 

The Luddite Fallacy

The story of mankind is the story of increasing productivity while decreasing labour intensiveness. It can be said, in fact, that this is the very definition of the word progress. History has shown the Luddite fallacy to be exactly that, a fallacy, time and time again: new technology does not lead to higher overall unemployment in the economy. New technology does not destroy jobs; it only changes the composition of jobs in the economy.

Automation and artificial intelligence, however, are two very different things. AI is uncharted territory because it has the same abilities that humans possess: learning new tasks and gaining new skills. This is the reason for the dire warnings from the likes of Elon Musk and Stephen Hawking. The old warning against automation, “just because we can doesn’t mean we should,” is severely understated when applied to AI.

With all this unpredictability, one thing remains certain. A very interesting thing will happen when we stop innovating – nothing.
