Self-driving cars have the potential to improve our lives in many ways, not least by reducing accidents caused by human driver error. However, new technology always raises new ethical questions, and self-driving cars are no exception.
Self-driving cars raise many moral dilemmas, but for now, let’s focus on just two. The first is how a self-driving car should respond when there are multiple pedestrians on the road and no way of avoiding all of them. Should the car distinguish, for example, between hitting an older person and a younger one, and if so, in whose favour should it rule? And does it matter whether a pedestrian has stepped into the road lawfully?
A second moral dilemma is whether a self-driving car should prioritise the lives of pedestrians or those of its passengers. Should the car be programmed to protect the greatest number of lives, even if that means sacrificing its passengers in favour of pedestrians? Or should it prioritise the safety of its own passengers above all else? Interestingly, while many people say that the car should favour pedestrians, they are also reluctant to ride in such a car themselves. Given this apparent paradox, who should decide whom the car saves? The manufacturers? Industry regulatory bodies? National governments? And how would such decisions be implemented internationally, if they could be at all?
We’d love to hear your thoughts on the self-driving car debate, and on these or any other ethical dilemmas it raises. Would you risk your own life for the chance of sparing somebody else’s? Who should have the power to make these decisions?
Leave your comments below and let us know what you think!