Originally Posted by
holytrousers
Algorithms shouldn't be controlling machines that can kill human beings.
If someone you care about had to die in a car accident, would you prefer an algorithm to take the responsibility, rather than a human being?
That's the moral dilemma we should be concerned with before considering anything else.
That horse has left the proverbial barn.
In most cases, the algorithms can be programmed to make the least bad decision. But first we would have to agree, more or less, on what the least bad decision is, from a societal rather than an individual point of view.