Programming Morals: Navigating the New Frontier

Self-driving cars aren't new. In the early 2000s, the popular British television programme Top Gear tested a self-parking Lexus. Mercedes and many other car manufacturers have debuted their self-driving systems, and we all know Google has been developing its friendly automated machine for years. With the self-driving car held up as the future of transport for some time now, serious questions about its application and effects have arisen. A recent paper published on arXiv, led by Jean-François Bonnefon of the Toulouse School of Economics, discussed the complications of programming morals into autonomous vehicles. It asked a simple question: when faced with an unavoidable collision, what should the car do?

"The wide adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically reduce the number of traffic accidents. Some accidents, though, will be inevitable, because some situations will require AVs to choose the lesser of two evils. For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall."
- Jean-François Bonnefon, Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?

When faced with this dilemma, a majority of the participants surveyed chose to swerve the car for "the greater good." Unsurprisingly, the decision was tougher when they imagined themselves in the driver's seat. Although we may know what the morally "right" answer is, the question remains whether consumers will buy a car that doesn't always put their lives and safety first.

Although there's much debate over the extent of automation, the current trend is towards a 50/50 split to ease us into giving up control of our vehicles (similar to the autopilot system on aeroplanes). The car takes control in dense urban areas and high-risk environments to minimise the risk of mistakes and accidents. The driver can then switch off autopilot on open roads (because you can't take away the pleasure of driving down a historic route or an extremely straight highway). However, in July, NPR published a podcast discussing the future of Google's self-driving car as an all-or-nothing machine, using the elevator as a case study. Like the automated elevator we all take for granted today, giving the driver control could be detrimental to the efficiency and safety these cars were created for. There is also a plethora of legal issues arising from the conscious decisions programmed into the vehicle.

Evolution is key to the success of the automated vehicle. Currently, cars are designed with the safety and comfort of the driver and passengers in mind. Due to the nature of automation, a vehicle's structure will eventually change. More attention can then be placed on safety precautions and on redesigning the car to take full advantage of this new structural freedom.

Although the Google car won't be commercially available for some time, there are other options for streamlining the driving process. Startups such as Navdy display notifications on your windshield and use voice and motion recognition to operate the software. Navdy has smartly designed its notifications as a hovering display, but when your vision is fixed on a specific spot on the window, does this distract from what's happening around the car? Only time will tell whether these technologies improve safety or add another layer of distraction.

In an ideal world, technology would integrate seamlessly and go beyond the good it was designed for. The harsh reality is that new innovations must compete with, and appeal to, existing habits and preconceptions. They cannot live independently of the chaos they emerge from. With that in mind, those of us who focus on innovation must think about how these breakthroughs will be received beyond their original intent.
