Old 06-03-19, 06:20 AM
  #13  
noimagination
Senior Member
 
Join Date: Oct 2016
Posts: 728
Originally Posted by holytrousers
I think we can agree on two obvious axioms related to this question:

First one is that cars are machines that, besides transporting people, represent a real threat to other road users.

OK, postulated.



Originally Posted by holytrousers
The moment we delegate the responsibility of controlling a killing machine to an obscure algorithm totally lacking the ability to empathize with a human being or understand and predict his behaviour, we are de facto deliberately submitting ourselves to an absolute and anonymous control that's beyond the influence and understanding of normal humans.

I fail to see how this follows from axiom #1. The two sentences seem entirely unrelated. Also, you use many emotionally loaded words, which significantly undermines your pose of logical argument.



Originally Posted by holytrousers
The second point is that currently danger on our roads stems mainly from human irresponsibility. The real issue here is how to educate our children and adults to become responsible citizens. Substituting one problem for another is certainly not the simplest way to proceed.

I always feel strongly skeptical when I have to deal with a "safety" argument: it has been misused too often to let our fears hinder our rational thinking on this matter. We all remember how mass surveillance programs and military invasions have been enforced in the name of security concerns.

Let's teach children responsibility instead of filling their heads with useless dates and obsolete formulae. Replacing people with robots has nothing to do with improving safety on our roads; that's just marketing and manipulation of public opinion.

Agreed. If you have some practical process for eliminating human irresponsibility, or even for significantly reducing it (and, what is more, for getting such a process implemented), I'm sure everyone would love to learn about it.


The point is, humans are fallible. I understand that it frightens people when they perceive that they have given up control, even if that sense of control is illusory.

It is certainly possible that the people designing the programs for self-driving cars could make mistakes that cause fatalities. It is certainly possible that there could be random faults in the software or hardware, occurring after installation and testing, that cause fatalities. It is certainly possible that users could misuse the autonomous drive feature. It is certainly possible that malicious hackers could introduce faults that cause fatalities.

Would the cumulative risk posed by autonomous vehicles be greater than the risk posed by millions of vehicles operated by millions of humans, all fallible in different ways at different times? That remains to be seen.