
Self-driving cars have the potential to improve our lives in many ways, not least by reducing incidents caused by human driver error. However, new technology always raises new ethical questions, and self-driving cars are no exception.

 

There are a lot of moral dilemmas surrounding self-driving cars, but for now, let's focus on just two. The first is how a self-driving car should respond in a situation where there are multiple pedestrians on the road and no way of avoiding every one of them. Should the car distinguish, for example, between hitting an older person and a younger one, and if so, in whose favour should it rule? And is it important to consider whether the pedestrian stepped out into the road lawfully?

 

A second moral dilemma revolves around whether a self-driving car should prioritise the lives of pedestrians or passengers. Should the self-driving car be programmed to protect the greatest total number of lives, even if this means sacrificing passengers in favour of pedestrians? Or should the car prioritise the safety of its own passengers above all other factors? Interestingly, while many people say that the car should favour pedestrians, they are also reluctant to be passengers in such a car. Given this apparent paradox, who should make the decisions about whom the car should save? The manufacturers? Industry regulatory bodies? National governments? And how would these decisions be implemented internationally - or could they be?

 

We'd love to hear your thoughts on the self-driving cars debate, and on these or any other ethical dilemmas that it raises. Would you risk your own life to potentially spare somebody else's? And who should have the power to make these decisions?

 

Leave your comments below and let us know what you think!

10 comments

  • blomberg.niclas

    I think you raise very interesting questions! My reaction is that these questions might also have implications for legislation or insurance. For instance, if my self-driving car accidentally runs over a person or hits another vehicle, who is responsible? Who should pay for the medical bills or the repair? And to connect to your questions: if the car hits a pedestrian because, say, it prioritized the passengers, what happens then?

    I also think it is interesting that no matter how far we advance, the same kinds of moral dilemmas always come back to haunt us. For instance, there is the classic dilemma where a train is headed for a group of elderly people or a group of children. You can save either group by rerouting the train towards the other, but which group should you save? This dilemma feels very similar to the one you describe, where the car has to choose one or the other.

    I think legislators will face some new challenges, both practical and ethical, when these cars become the norm. I actually hadn't thought much about the issue before, but thank you for highlighting this interesting topic!

  • Trustlator

    The so-called dilemmas in this debate are hypothetical dilemmas. Take, for example, the situation "where there are multiple pedestrians on the road and no way of avoiding every one of them." The hypothesizing itself is what makes the situation a dilemma. The same goes for the famous dilemma of the train heading towards either one person or several people. In real life such situations are hard to imagine in the first place. And even if they could occur, they would have preceding events leading up to them or causing them, which would inform the moral decision and probably take away the dilemma.

    In short, this kind of thinking is nothing more than quasi-philosophical puzzle games.

  • Paul

    I believe that, self-driving or not, car owners and drivers will always need to be held accountable for any harm or damage caused by their cars. If drivers are to be held accountable just because they are behind the wheel, then car owners should be held accountable for switching on their cars as well. Also, no one should be allowed to put minors in a car and send them onto the roads just because the car can drive itself. Therefore, I think there are no ethical dilemmas in autonomous vehicle technology. Anyone who wants to own or use a self-driving car should take responsibility for the consequences that may follow.

  • Katrina Paterson

    Thanks to everyone who's contributed to the self-driving car debate so far!

    I think the questions surrounding liability, insurance and who is ultimately responsible for any harm that the self-driving car causes are all very interesting. In particular, I think the question of legislation will become more and more important. As we've seen, even if people agree in principle that the car should spare pedestrians, when they are the ones actually choosing a car it seems fair to imagine that they would choose the one that spares the passenger(s). Assuming that the information about whom the car would save is publicly available, it seems that almost everyone looking to buy a car would buy the passenger-sparing kind, and so you would expect manufacturers to respond by mainly producing that kind of car - unless legislation prevented them from doing so. But then, who would ultimately create the legislation, and would it be the same everywhere? If I live in the UK but my self-driving car was manufactured in Japan and then exported, whose government decides how the car should behave?

    I'm also interested in Paul's point that whoever switches on the car is responsible regardless of who is driving it. I think that one argument in favour of sparing pedestrians rather than self-driving car passengers is the idea that the passenger in the car actually chose to get in and turn the ignition on (or command the car to move, or whatever), whereas the pedestrian had no choice in whether or not the car was on the road as they stepped out. But what about pedestrians who blatantly violate road crossing regulations (which are quite strict in some countries)? What if their unlawfully stepping out into the road causes the self-driving car to swerve and kill the passenger?

    I agree that some of these debates seem somewhat fanciful - the kind of examples sometimes given, where five hypothetical pedestrians are crossing the road (an old lady, a homeless man, an executive, a child and a dog, or whatever), do seem a little contrived. However, I think that anyone who drives will at some point have found themselves in a situation, even a relatively minor one, where something unexpected has happened on the road and they've been forced to make a split-second decision. I agree that foresight is very important for both human and autonomous drivers, and that in many cases it's possible to see the potential for an incident before it happens and try to avert it. However, I don't think we can ever rule out freak occurrences. What about children or animals on the road? What about hazards created by the natural environment? It's true that the likelihood of such incidents is very, very small, but when we consider the millions of miles that cars cover every day, it seems that with enough time and enough distance travelled, virtually any incident could arise at some point, and for this reason I think it's important to consider these questions now.

    I'm really interested to hear people's follow-up thoughts, or, indeed, anything else that anyone has to say on this topic. Thanks again to everyone for your ideas!

  • rocypa

    This subject is very interesting. The technology is still in its embryonic stage, and there is a lot of development and testing of autonomous vehicles still ahead. A short time ago there was a fatal accident involving this type of car: the car's sensors did not detect the presence of a truck in front of it because of the enormous gap between the truck's wheels.

    If there is no other solution and death is certain, the cars should be programmed to avoid as much human loss as possible. I don't believe that these systems will reach the point of being able to identify the age difference between pedestrians, but in a war, for example, we should always protect the youngest.
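
    Purely as an illustration of what "avoid as much human loss as possible" could mean in code - a toy sketch with invented structures and numbers, not how any real vehicle is programmed - the choice could be framed as picking the maneuver with the lowest expected harm:

        from dataclasses import dataclass

        @dataclass
        class Maneuver:
            # Hypothetical structure: an evasive option plus the casualty
            # estimate some upstream model assigns to it (invented here).
            name: str
            expected_casualties: float

        def least_harmful(options):
            # Pick the maneuver that minimizes expected human loss.
            return min(options, key=lambda m: m.expected_casualties)

        options = [
            Maneuver("brake hard in lane", 0.8),
            Maneuver("swerve onto the verge", 0.2),
            Maneuver("swerve into the oncoming lane", 1.5),
        ]
        print(least_harmful(options).name)  # -> "swerve onto the verge"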

    I would never really buy this kind of car, even if I had enough money to buy it because an error can cause the loss of lives.

  • Katrina Paterson

    Rocypa - thanks for your comments! It's very interesting to read the reason for the accident involving the self-driving car (the failure of the sensors to detect the truck because of the size of the gap between the truck's wheels). I had heard about this accident, but not the cause. It seems like a classic case of technology overlooking something that would be obvious to the human eye, and it reminds me of another hypothetical question I've read regarding autonomous vehicle technology: how can you teach the technology to distinguish between, say, the road in front of the car and a picture of a road on a pedestrian's T-shirt? As you say, the technology still has a long way to go in terms of development, but perhaps with enough time some of these issues might begin to be ironed out.

    At the same time, though, I think we shouldn't overlook the fact that human driver error does, sadly, also cost many, many lives every year, and a great many of these deaths are at least partly avoidable. So I sometimes wonder if autonomous vehicle technology just transfers the potential to cause harm from the human to the technology (whether or not that is a good thing is, I suppose, a whole other debate). I'm intrigued by your comment that you would not buy such a car because of the potential for error. I do understand what you're saying, but at the same time, anyone who gets behind the wheel of a traditional car also has the potential to cause loss of life. I think we feel more of a sense of 'control' if we're driving the vehicle because we can, to some extent, determine the outcome, but what if our own judgements are flawed? Maybe in some cases the car would decide better than we could.

    The most optimistic projections that I've read about self-driving cars say that with enough time and enough 'training', autonomous vehicle technology will be able to calculate risks, weigh up possible benefits and respond to situations better, faster and more accurately than the human mind can. I really want to believe this is the case, because I think the number of people killed needlessly because of bad (human) driving is shocking. But as you say, there is a lot of progress still to be made in improving the technology. Let's see what the future holds!

    I've really enjoyed reading your comment, Rocypa. Thanks again for leaving your thoughts! 

  • rocypa

    Hi Katrina,

    It is a good question (how can you teach the technology to distinguish between, say, the road in front of the car and a picture of a road on a pedestrian's T-shirt?). There are some features that AI systems exploit to make a decision. In this case, image sensors could verify the continuity of the road: the real road is present continuously, whereas a picture on a T-shirt appears and disappears suddenly. Another important feature is the size of the image: the real road is much bigger than a picture on a T-shirt. But the problem is that AI systems are never 100% sure about their decisions. AI decides statistically. Unforeseen situations may occur, and the system can make the wrong decision. Some of these situations could easily be decided by a human. That is the reason why human translation has higher quality than AI translation. I'm in favor of these systems supporting human decision making, but the final decision has to be made by a human.
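
    To make those two cues concrete - persistence across frames and apparent size - here is a minimal sketch of such a filter. Everything in it (the Detection structure, the thresholds) is invented for illustration; a real perception stack would be far more involved.

        from dataclasses import dataclass

        @dataclass
        class Detection:
            # A hypothetical "road-like region" from some upstream detector;
            # neither the class nor its fields come from a real library.
            area_fraction: float  # fraction of the frame the region covers (0..1)
            frames_seen: int      # consecutive frames the region has persisted

        def looks_like_real_road(det, min_area=0.3, min_frames=30):
            # Toy heuristic combining the two cues above: a real road is
            # large in the frame and persists continuously, while a picture
            # on a T-shirt is small and transient.
            return det.area_fraction >= min_area and det.frames_seen >= min_frames

        # A large, stable region vs. a small, briefly visible one:
        print(looks_like_real_road(Detection(0.55, 120)))  # True
        print(looks_like_real_road(Detection(0.04, 5)))    # False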

  • Katrina Paterson

    Hi Rocypa,

    Very interesting! Thank you for your insights and for letting us know more about the way that AI systems for autonomous vehicles are developed. I see what you mean now about the technology being able to sense factors such as size and continuity. But yes, you're right that there will always be the chance of other unforeseen events. Sometimes I wonder if it will ever be possible to create technology that's capable of responding to every single event that could conceivably happen on the road. Some potential incidents are commonplace, such as cars reversing irresponsibly, people changing lanes at the wrong time or failing to indicate, but what about freak occurrences such as landslides, extreme weather, wild animals running onto the road, earthquakes, sinkholes, lightning strikes, aircraft making emergency landings, and so on? Even if the chances of something like that happening are a million to one, surely it will happen once in every million times, and the car should know how to react? I also wonder how autonomous vehicle technology would cope with misleading driving by human drivers (for example, people forgetting to cancel their indicators, indicating inappropriately and thereby causing confusion, and so on). A good driver should be able to anticipate other road users acting in unpredictable ways, at least some of the time, but would autonomous vehicles be able to demonstrate this same level of critical thinking? Or, conversely, if the technology developed far enough, might they one day be able to predict such occurrences better than humans?

    I like your idea about AI systems supporting human decision making (I remember that another contributor made a similar point regarding translation memory technology here). Do you think there's potential to develop a kind of 'hybrid' car which drives autonomously part of the time, like on wide open roads, but allows humans to take over in more challenging driving situations? And if so, do you think people would need special training to operate such a car, or would anyone with a standard car licence be able to use it?

    Once again, I'm really grateful for your comments, particularly surrounding the technical areas - much appreciated! 

  • rocypa

    Hi Katrina,

    This subject is really very challenging and interesting. I think that the main types of autonomous car sensors are cameras, which capture multiple images of the environment per second, and distance sensors such as rangefinders. With the data captured from the sensors, the AI system makes a decision at each minimum time interval.

    In my opinion, the ideal would be to oblige the driver (a human being) to drive the car assisted by the AI system, which would be responsible for analyzing the features of the scene around the car and alerting the driver to any risk or dangerous situation. The autopilot could be engaged only in rare cases where driving is easy, such as on empty, low-speed streets.

    The subject of my doctoral thesis was the tracking of maritime vehicles on video: vehicles like jet-skis, speedboats in a boat race, and ships on the high seas had to have their positions indicated in each video frame. Situations such as occlusions caused by water foam or other vehicles, fog, and variations in vehicle size were very difficult for the AI system to handle. Now imagine a car on a busy street - it is very difficult to build an automatic steering system that never fails. Or imagine an automatic defense system for a bay having to shoot a hostile ship sailing in the middle of other ships: the risk is high.

    In my opinion, an automatic driving system that generates an alarm when it detects a risky situation is ideal. People would need special training to operate such a car, because it is a completely different kind of car.
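
    As a rough sketch of that "AI alerts, human drives" arrangement - purely illustrative, with made-up sensor interfaces and thresholds rather than any real vehicle API - the control loop might look something like this:

        import time

        RISK_THRESHOLD = 0.7   # above this, warn the human driver (invented value)
        EASY_SPEED_KMH = 30    # autopilot permitted only on slow, empty streets

        def estimate_risk(frame, distances):
            # Toy stand-in: risk rises as the nearest obstacle gets closer.
            # A real system would fuse camera and rangefinder data properly.
            nearest = min(distances) if distances else float("inf")
            return max(0.0, min(1.0, 1.0 - nearest / 50.0))

        def assist_loop(read_camera_frame, read_rangefinder, speed_kmh,
                        street_is_empty, n_steps=1000):
            # The human stays in control; the system analyzes the scene at
            # each minimum time interval and raises an alert on danger.
            for _ in range(n_steps):
                frame = read_camera_frame()
                distances = read_rangefinder()
                if estimate_risk(frame, distances) >= RISK_THRESHOLD:
                    print("ALERT: risky situation detected")
                elif street_is_empty() and speed_kmh() <= EASY_SPEED_KMH:
                    pass  # only in these easy conditions could autopilot engage
                time.sleep(0.05)  # re-evaluate at roughly 20 Hz

        # Example with stub sensors (an obstacle 12 m ahead triggers the alert):
        assist_loop(read_camera_frame=lambda: None,
                    read_rangefinder=lambda: [12.0, 35.0],
                    speed_kmh=lambda: 25,
                    street_is_empty=lambda: True,
                    n_steps=3)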

     

  • Katrina Paterson

    Very interesting!
