Recently, I heard about a popular problem discussed in the context of self-driving cars. It was the “Trolley problem,” which became widely known after Michael Sandel picked it up in his lecture.
Quora: What is Michael Sandel's answer to the Trolley problem?
To be honest, I felt ashamed when I heard this topic, because I had not considered the matter before. In fact, I myself have introduced the situation of this problem in the past. You are a trolley driver. You notice your trolley is going to hit five people stuck on the rail. You cannot stop the trolley, but you can change its course. If you do so, the trolley will hit one other person instead of the five. What should you do?
In other words, is it acceptable to kill one person in order to save five others’ lives? That is the essence of this problem.
My past entry: Psychopath and a question of Michael Sandel
Experiments have revealed that most people choose to sacrifice one person’s life instead of five. This is essentially a utilitarian judgment. In the real situation, however, we would hesitate greatly to turn the steering wheel. And even if it caused the more disastrous result, nobody would blame the driver for not killing one person, I guess.
So, how would a self-driving car handle this situation?
At present, the car’s algorithms are not sophisticated enough to produce a definitive answer. The car cannot predict the exact outcome instantly. Additionally, turning the wheel sharply poses a risk to both the driver and bystanders. The car will simply apply the brakes, even if that is not effective.
In the future, however, an AI’s predictive ability will be far more precise than a human’s. It will be interesting to see whether the conclusion drawn by such superior intelligence is the same as ours.
In the experiment I referred to above, psychopathic people did not hesitate to make the cold-blooded decision. Being free from emotional conflict is an advantage when seeking the most beneficial solution to a complicated situation.
An excellent AI may be psychopathic.