Old 09-21-22, 10:52 AM
  #14  
genec
Join Date: Sep 2004
Location: West Coast
Posts: 27,079

Bikes: custom built, sannino, beachbike, giant trance x2

Originally Posted by livedarklions
I did answer your question directly. Obviously, if I'm agreeing that a 50% reduction is a good thing, number 1 would be a good thing. Numbers 2-4 are completely pointless questions, and really strawman positions. My point was that it was a stupid question, because I have never seen anyone claim they needed perfection, or a 90% reduction in road deaths, before it could be adopted. Hell, my standard for adoption is actually lower than your #1: if you're asking me personally, I would accept AI control at the point it was as safe as, or better than, human drivers. If you want a precise number, I'm not claiming to be any kind of authority, so I don't know if 40,000 road deaths is the right measure (I suspect it's a hell of a lot more nuanced than that; highway AI requirements are likely completely different from urban street driving, for example). As far as "preferring" it, that's a rather academic question, since even when it's adopted (if ever), there will be many, many driver-controlled vehicles on the road, bicycles among them.

I don't see anyone claiming that human drivers are perfect, or that AI drivers need to be perfect before they can replace human drivers. I have accused you of making that up, and I notice that your response to this accusation is to falsely claim I didn't answer your lame attempt at a loaded question.
Perhaps you need to simply read more...

First, we need to understand the safety dilemma that self-driving cars introduce. Currently, there are around 35,000 deaths from motor vehicle crashes in the United States each year. That's close to 3,000 people every month, or about 100 people per day. Hypothetically, if there were a new technology that could reduce that fatality rate by just 1 percent – just 1 person a day – it could save 350 lives annually. Since the majority of car accidents are attributable to human error, and autonomous vehicles can reduce the error rate to close to zero, we can assume that autonomous vehicles can sharply reduce the overall fatality rate of motor vehicle accidents (and reduce total motor vehicle accidents as well).
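The arithmetic in that excerpt checks out, and is easy to verify. A minimal sketch (the 35,000 annual-deaths figure is the excerpt's, not an official statistic):

```python
# Sanity-check the fatality arithmetic from the readwrite.com excerpt.
annual_deaths = 35_000            # US motor-vehicle deaths per year (excerpt's figure)

per_month = annual_deaths / 12    # ~2,917 -> "close to 3,000 every month"
per_day = annual_deaths / 365     # ~96    -> "about 100 per day"

reduction = 0.01                  # the hypothetical 1% improvement
lives_saved_per_year = annual_deaths * reduction       # 350
lives_saved_per_day = lives_saved_per_year / 365       # just under 1 per day

print(round(per_month), round(per_day), round(lives_saved_per_year))
```

So "1 percent" and "1 person a day" describe the same improvement at this fatality level.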

However, even a single car accident can cause major damage and multiple deaths, and a single incident of major negative publicity casts doubt on the safety and efficacy of self-driving cars in general. If a handful of magnified cases lead the public to believe that self-driving cars are dangerous, we could face delays of autonomous cars for years to come – ultimately resulting in more lives lost.
https://readwrite.com/is-the-public-...car-accidents/

IDTechEx believes that, with current developments, we could see autonomous vehicles matching or exceeding human safety levels as soon as 2024. If growth is sustained, the report suggests that by 2046 self-driving vehicles could meet the total mobility demand of the US – 3 trillion miles per annum. Then, by 2050, autonomous vehicles could theoretically meet the entire transport needs of the world, with less than one accident per year. At that point, IDTechEx expects that in many countries human driving on public roads will be outlawed in order to prevent injury, accidents, and interference with self-driving vehicles that are all communicating amongst themselves.
https://insideevs.com/news/531094/hu...outlawed-2050/

The results show that the respondents believe self-driving vehicles should be four to five times as safe as human-driven vehicles. Current global traffic fatality risk is estimated at 17.4 per 100,000, which is 350 times greater than the frequency accepted by 50 percent of the respondents for self-driving vehicles. This implies that respondents expect these new vehicles to improve safety by two orders of magnitude against the current traffic risk.

Based on the results, the researchers propose the following requirements for self-driving vehicles based on the tolerability of risk in industrial safety (a concept developed in the health and safety field) in which risks are distinguished by three criteria: unacceptable, tolerable and broadly acceptable.

Self-driving vehicles that are less safe than human drivers would fall under the unacceptable risk criterion. The tolerable risk criterion is that self-driving vehicles be four to five times as safe, meaning they should reduce current traffic fatalities by 75-80 percent. The broadly acceptable risk criterion for self-driving vehicles is set two orders of magnitude below current global traffic risk – a hundredfold improvement, or the same order of magnitude of risk experienced in public transportation modes such as rail and commercial aviation.
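The relationship between "N times as safe" and percent reduction in fatalities is simple arithmetic, and it's worth seeing why "four to five times as safe" equals "75-80 percent fewer deaths." A quick sketch (the tier numbers come from the excerpt above; the function name is mine):

```python
# Convert "N times as safe" into the percent reduction in fatalities,
# using the three risk tiers described in the insurancejournal.com excerpt.
def reduction_pct(safety_factor):
    """If a vehicle is `safety_factor` times as safe, its fatality
    rate is 1/safety_factor of the human baseline."""
    return round((1 - 1 / safety_factor) * 100, 1)

print(reduction_pct(4))    # 75.0 -> "four times as safe" = 75% fewer deaths
print(reduction_pct(5))    # 80.0 -> "five times as safe" = 80% fewer deaths
print(reduction_pct(100))  # 99.0 -> the "two orders of magnitude" tier
```

Note how steep the curve gets: going from "4x as safe" to "100x as safe" only moves the reduction from 75% to 99%, which is why the broadly-acceptable tier is such a demanding target.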
https://www.insurancejournal.com/new.../31/490751.htm

As for my THEORY...
Even assuming that technology is flawless, Garg believes proponents still face a “trust gap” with consumers that needs to be overcome before they will fully embrace driverless technology. AIG believes that consumers will have a say in how fast autonomous vehicles penetrate the market.
Flawless, Eh? Or what is "good enough?"

Last edited by genec; 09-21-22 at 10:56 AM.