Bike Forums

Bike Forums (https://www.bikeforums.net/forum.php)
-   Alt Bike Culture (https://www.bikeforums.net/forumdisplay.php?f=225)
-   -   Riderless Bikes (https://www.bikeforums.net/showthread.php?t=1079654)

Tusky 09-08-16 01:16 PM

Riderless Bikes
 
Video depicts new tech.


First there was the driverless car, now there is a riderless bike | This is That with Pat Kelly and Peter Oldring | CBC Radio

ThermionicScott 09-08-16 02:07 PM

:lol:

Philphine 09-09-16 08:39 AM

I'm waiting for the riderless skateboard. It's gotta be able to do the gnarliest park tricks and make me look good though.

fietsbob 09-09-16 12:13 PM

David Gordon Wilson, MIT professor and author of Bicycling Science, demonstrated the stability of a bike by mounting the fork backwards (so lots of trail) and letting it run down a hill with nobody on it, and it went quite straight.
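
Why does turning the fork around add so much trail? The offset ends up pointing behind the steering axis instead of ahead of it. A rough Python sketch using the usual steering-geometry formula and typical road-bike numbers (illustrative only, not figures from Wilson's demo):

import math

def trail_m(wheel_radius_m, head_angle_deg, fork_offset_m):
    # trail = (R*cos(HA) - offset) / sin(HA), head angle measured up from horizontal
    ha = math.radians(head_angle_deg)
    return (wheel_radius_m * math.cos(ha) - fork_offset_m) / math.sin(ha)

R = 0.34        # roughly a 700c wheel radius, in metres
HEAD_ANGLE = 73.0
OFFSET = 0.045  # typical fork offset (rake), in metres

print(f"normal fork:   {trail_m(R, HEAD_ANGLE, OFFSET) * 1000:.0f} mm of trail")
print(f"reversed fork: {trail_m(R, HEAD_ANGLE, -OFFSET) * 1000:.0f} mm of trail")  # offset now points backward

More trail means the front wheel self-centres more strongly, which is why the bike tracked straight with nobody aboard.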

Rollfast 09-11-16 11:57 PM


Originally Posted by Philphine (Post 19043829)
I'm waiting for the riderless skateboard. It's gotta be able to do the gnarliest park tricks and make me look good though.


It'll get better than you are though and make you look bad.

Cheddarpecker 09-15-16 03:52 PM

This couldn't be more pointless.

Also, the complete elimination of human error opens the door for fatal software glitches. Remember those Toyotas that would throttle up on their own? Run you right off into your own death without warning.

I'll stop piloting my own vehicles when I'm dead. Hopefully not killed by someone's self driving automobile.

dabigboy 09-15-16 10:28 PM

This is great for the bikes. Think about it, while I'm in the grocery store, my bike can be out on the trails having some fun and working out its bearings. I'll just summon it with my phone about the time I pass the canned veggie aisle, and by the time I finish up (at the UNMANNED check-out, no less), my bike will be crashing into the bike rack at the front of the store. FINALLY, free-range biking!

It will take some time for the general public to get used to riderless bikes whizzing by all the time, for the 911 calls to stop, and for little kids to forget everything they saw in Bedknobs and Broomsticks. But it will surely be worth the transition.

This will benefit the poorest members of society. The panhandlers downtown who are cruising around on bikes all the time can send their bikes out for a riderless ride and spend more time themselves asking for money. Of course, humans are never really satisfied... about the time everyone has one of these, we'll all be wanting runnerless shoes.

Matt

Philphine 09-16-16 08:31 AM

I wonder if in the future you'll see threads on driverless car forums complaining about riderless bicycles.

KonAaron Snake 09-17-16 07:33 AM


Originally Posted by Philphine (Post 19059618)
I wonder if in the future you'll see threads on driverless car forums complaining about riderless bicycles.

In the future you will be hunted by driverless cars and riderless bicycles.

Philphine 09-17-16 08:25 AM

Since I also ride a motorcycle, I completely believe that, especially the cars.


Stepping back into serious (because this has gotten fah fah too silly): I wonder how the car sensors would pick up a smaller object near them, like a bike or motorcycle, especially since the motorcycle can run at the same speeds and would be occupying a lane on its own. Say, if a car was in the left lane and a cycle was in the right lane, riding on the right side of its lane.

Rollfast 09-18-16 12:37 AM

Isn't the entire point of a bicycle to get on and use your own power as efficiently as possible in getting someplace?


Why does this sound as bad as the toilet paper commercials that said you could 'go commando' after usage?

dabigboy 09-18-16 11:38 PM

On a serious note, I firmly believe that driverless cars will *never* be truly workable in the current real-world environment until/unless artificial intelligence makes some big breakthroughs. Philphine brings up a great point about the driverless car figuring out if a motorcycle or other smaller object is occupying a lane. Without really awesome (and not-yet-attained) AI, any attempt to handle these sorts of situations individually is just a limited-scenario hack.

Matt

ThermionicScott 09-19-16 04:23 PM


Originally Posted by dabigboy (Post 19065654)
On a serious note, I firmly believe that driverless cars will *never* be truly workable in the current real-world environment until/unless artificial intelligence makes some big breakthroughs. Philphine brings up a great point about the driverless car figuring out if a motorcycle or other smaller object is occupying a lane. Without really awesome (and not-yet-attained) AI, any attempt to handle these sorts of situations individually is just a limited-scenario hack.

Matt

It's not a great point, it's a stupid point. The cameras and software on driverless cars are much better able to see and keep track of the traffic around them than we humans are.


Imnotchinese 10-19-16 07:01 PM


Originally Posted by dabigboy (Post 19065654)
On a serious note, I firmly believe that driverless cars will *never* be truly workable in the current real-world environment until/unless artificial intelligence makes some big breakthroughs. Philphine brings up a great point about the driverless car figuring out if a motorcycle or other smaller object is occupying a lane. Without really awesome (and not-yet-attained) AI, any attempt to handle these sorts of situations individually is just a limited-scenario hack.

Matt

Last year, AI made a huge leap forward by passing the "three wise men" test, in which three robots were programmed with the same AI but two of them were muted. Each one was then asked which of them could speak. Two of them said nothing, and one said "I do not have enough information to answer that question" and then corrected itself by saying "wait, actually I believe I have a voice." This is a huge step forward, as it marks the first time a robot has officially acknowledged its own existence.

Imnotchinese 10-19-16 07:05 PM


Originally Posted by ThermionicScott (Post 19067468)
It's not a great point, it's a stupid point. The cameras and software on driverless cars are much better able to see and keep track of the traffic around them than we humans are.

https://www.youtube.com/watch?v=tiwVMrTLUWg

Thank you. I think people are too afraid of new things and the unknown. Statistically speaking, automated systems, whether fully driverless cars or just features like emergency braking and lane-change warnings, are hundreds of times safer than human drivers.

Rollfast 10-21-16 10:51 PM

What is the point of a riderless bike though? Will there be a Riderless Bike Forums in the future?

It's a parlor trick!

dabigboy 10-30-16 12:43 AM

Come on ThermionicScott, tell me how you really feel about my opinion. :)


Originally Posted by Imnotchinese (Post 19134995)
Last year, AI made a huge leap forward by passing the "three wise men" test, in which three robots were programmed with the same AI but two of them were muted. Each one was then asked which of them could speak. Two of them said nothing, and one said "I do not have enough information to answer that question" and then corrected itself by saying "wait, actually I believe I have a voice." This is a huge step forward, as it marks the first time a robot has officially acknowledged its own existence.

That's certainly a very interesting response, but from a programmer's perspective, its significance is hard to quantify without knowing HOW it arrived at that answer.

Hope springs eternal, but I can't imagine an AI ever eclipsing the reasoning and decision-making ability of a human. The thing about humans is that we have something an AI can never have: imagination. It follows that we therefore also have insight. Because of this, no matter what situation arises, no matter how extreme the circumstance, no matter how unplanned the event, a human always has the capacity to evaluate the situation and develop some sort of rational response. It may not always be the best response, and it may not even be the correct response, but there will be a plan of action that takes the immediate factors of the situation into account.

I'm sure anyone who's ever spent any time with a computer or other electronic gadget (particularly if you've had to support, fix, or develop software) can relate to the concept of "babysitting" the software. You may have something that works very well, but every now and then there is that unforeseen scenario in which the software loses its "mind" and does something completely unpredicted or silly. Yet, a human might look at that scenario and know immediately what to do (or, at the very least, know that what the software is doing is NOT a good response).

The aviation industry has been into this field for a long time. Airplanes with the right equipment can practically fly themselves. But we really can't trust the systems entirely. This point is driven into pilots throughout their training, and is reinforced through embarrassing incidents and tragic accidents. There are numerous instances in which an autopilot or flight control system will "throw its hands up" and simply quit flying the airplane (Air France 447, for instance), at which point the pilot (who has hopefully stayed in the loop, but often has not) must assume control. There are also other instances in which the computer does something strange and dangerous, something a human pilot would never do (though in most such cases, the computer will just give up and disengage, thanks to fail-safes built into the software).

Now, compare this to cars: quick, appropriate responses are FAR less important in an airplane, where a pilot (or computer) has the luxury of wandering around the sky a few thousand feet here or there, heading the wrong direction for a bit, not properly controlling airspeed, etc. That's a lot different from the world of driving, where we are regularly mere inches away from another car coming the opposite direction at a differential speed of 130mph+.

That's not to say I don't see driverless cars making a big impact in the future. If our road system is modified a bit and made more uniform, if the car is only expected to go on relatively improved roads, if GPS is always available and/or some sort of computer-friendly road marker system is devised, if extremely twisty or hilly roads can be properly dealt with by the computer, if specific types of inclement weather are avoided, and IF the driverless cars are kept in good repair (and we know some won't be), then they may be reasonably workable.

And, who knows, maybe there will be a paradigm shift in AI and we'll see something that completely bypasses the current AI pitfalls, and can truly challenge the human mind's ability to reason. I'm happy to be proven wrong. :)

Matt

ThermionicScott 10-30-16 01:13 PM


Originally Posted by dabigboy (Post 19156876)
Come on ThermionicScott, tell me how you really feel about my opinion. :)

Just to be clear, I wasn't calling you stupid. ;)


Originally Posted by dabigboy (Post 19156876)
Hope springs eternal, but I can't imagine an AI ever eclipsing the reasoning and decision-making ability of a human. The thing about humans is that we have something an AI can never have: imagination. It follows that we therefore also have insight. Because of this, no matter what situation arises, no matter how extreme the circumstance, no matter how unplanned the event, a human always has the capacity to evaluate the situation and develop some sort of rational response. It may not always be the best response, and it may not even be the correct response, but there will be a plan of action that takes the immediate factors of the situation into account.
...but you have a lot more faith than I in the reasoning, decision-making ability (and sobriety, and attention span, etc.) of the average human driver. IMO, a dumb "unimaginative" computer that just turned on the hazard lights and slowed the car to a stop a tiny fraction of 1% of the time when something unexpected happened would be vastly preferable to the situation we have today. And there is no persuading me that human drivers will ever get better at their job than they are right now. I say that not as a misanthrope, but as a realist.
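
The fallback described there is almost trivial to express in code; a rough sketch, where the confidence threshold and the gentle ramp-down rate are invented purely for illustration:

def plan_when_unsure(scene_confidence, current_speed_mph, min_confidence=0.95):
    # The "hazards on, slow to a stop" fallback described above, called about once a second.
    # The 0.95 threshold and 3 mph-per-second ramp-down are made-up numbers.
    if scene_confidence >= min_confidence:
        return {"hazards": False, "target_speed_mph": current_speed_mph}
    return {"hazards": True, "target_speed_mph": max(0.0, current_speed_mph - 3.0)}

print(plan_when_unsure(0.99, 60))  # normal driving: carry on
print(plan_when_unsure(0.40, 60))  # scene not understood: hazards on, start shedding speed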

TheLibrarian 10-30-16 02:17 PM

As another article said about Hotz and Tesla's automated driving: making it 99% accurate is easy; making it 99.9999% accurate is much harder, and these systems need to be that accurate. Also, cars just stopping even 1% of the time would clog up every highway. There are easily 200 cars going by at any time, and if 2 of them are always stopped, that's a problem. But you're just pulling these numbers out of the air; I don't think you mean 1% is OK. Further, cars stopping in the middle of the road is not safe. If the error is on a person we can blame, that's one thing, but if the error is on the product, that's endless litigation.
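
Putting those two accuracy figures side by side, and treating "accuracy" very loosely as the share of cars on a stretch that aren't stalled at any given moment (a big simplification, but it shows the scale of the gap):

cars_in_view = 200                    # cars passing a stretch of highway at any moment

for reliability in (0.99, 0.999999):  # "99% accurate" vs "99.9999% accurate"
    expected_stalled = cars_in_view * (1 - reliability)
    print(f"{reliability:.4%} reliable -> about {expected_stalled:.4f} cars stopped on this stretch")

At 99% that is the 2 stalled cars out of 200 mentioned above; at 99.9999% it is roughly one stalled car per 5,000 such stretches.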

As for riderless bikes, I'm not sure what the new thing is, but it certainly seems unnecessary and would have all the same problems as the cars. As for the statistics, driverless cars have not been out in any significant usage to determine whether they are safer or not. Controlled studies with engineers babysitting them aren't representative.

ThermionicScott 10-30-16 03:16 PM


Originally Posted by TheLibrarian (Post 19157767)
As another article said about Hotz and Tesla's automated driving: making it 99% accurate is easy; making it 99.9999% accurate is much harder, and these systems need to be that accurate. Also, cars just stopping even 1% of the time would clog up every highway. There are easily 200 cars going by at any time, and if 2 of them are always stopped, that's a problem. But you're just pulling these numbers out of the air; I don't think you mean 1% is OK. Further, cars stopping in the middle of the road is not safe. If the error is on a person we can blame, that's one thing, but if the error is on the product, that's endless litigation.

As for riderless bikes, I'm not sure what the new thing is, but it certainly seems unnecessary and would have all the same problems as the cars. As for the statistics, driverless cars have not been out in any significant usage to determine whether they are safer or not. Controlled studies with engineers babysitting them aren't representative.

If you're referring to my post, read it more carefully. I never said 1% would be acceptable.

Furthermore, the video in the OP was clearly a joke/satire, but that didn't stop it from going over a bunch of heads.

Imnotchinese 10-30-16 03:30 PM


Originally Posted by dabigboy (Post 19156876)
Come on ThermionicScott, tell me how you really feel about my opinion. :)



That's certainly a very interesting response, but from a programmer's perspective, its significance is hard to quantify without knowing HOW it arrived at that answer.

Hope springs eternal, but I can't imagine an AI ever eclipsing the reasoning and decision-making ability of a human. The thing about humans is that we have something an AI can never have: imagination. It follows that we therefore also have insight. Because of this, no matter what situation arises, no matter how extreme the circumstance, no matter how unplanned the event, a human always has the capacity to evaluate the situation and develop some sort of rational response. It may not always be the best response, and it may not even be the correct response, but there will be a plan of action that takes the immediate factors of the situation into account.

I'm sure anyone who's ever spent any time with a computer or other electronic gadget (particularly if you've had to support, fix, or develop software) can relate to the concept of "babysitting" the software. You may have something that works very well, but every now and then there is that unforeseen scenario in which the software loses its "mind" and does something completely unpredicted or silly. Yet, a human might look at that scenario and know immediately what to do (or, at the very least, know that what the software is doing is NOT a good response).

The aviation industry has been into this field for a long time. Airplanes with the right equipment can practically fly themselves. But we really can't trust the systems entirely. This point is driven into pilots throughout their training, and is reinforced through embarrassing incidents and tragic accidents. There are numerous instances in which an autopilot or flight control system will "throw its hands up" and simply quit flying the airplane (Air France 447, for instance), at which point the pilot (who has hopefully stayed in the loop, but often has not) must assume control. There are also other instances in which the computer does something strange and dangerous, something a human pilot would never do (though in most such cases, the computer will just give up and disengage, thanks to fail-safes built into the software).

Now, compare this to cars: quick, appropriate responses are FAR less important in an airplane, where a pilot (or computer) has the luxury of wandering around the sky a few thousand feet here or there, heading the wrong direction for a bit, not properly controlling airspeed, etc. That's a lot different from the world of driving, where we are regularly mere inches away from another car coming the opposite direction at a differential speed of 130mph+.

That's not to say I don't see driverless cars making a big impact in the future. If our road system is modified a bit and made more uniform, if the car is only expected to go on relatively improved roads, if GPS is always available and/or some sort of computer-friendly road marker system is devised, if extremely twisty or hilly roads can be properly dealt with by the computer, if specific types of inclement weather are avoided, and IF the driverless cars are kept in good repair (and we know some won't be), then they may be reasonably workable.

And, who knows, maybe there will be a paradigm shift in AI and we'll see something that completely bypasses the current AI pitfalls, and can truly challenge the human mind's ability to reason. I'm happy to be proven wrong. :)

Matt

I'm not sure I could prove you wrong, as you seem to have a bit more experience in the field than I do. (Full disclosure: I do not work with computers in any way, I am simply passionately curious.) I would just like to add that computers can make the decisions they do at a substantially faster speed than humans, and I hope to see advances made that would allow for greater AI presence on the road. I'm not sure I would want fully autonomous cars, because even if they're much safer than humans in the future, it would be hard to trust something else with my and others' lives. But I would like peripheral systems implemented in more cars in the future, such as emergency braking and lane watch like we see in some cars currently. I feel the most likely way that safety could be drastically improved would be if all cars were able to communicate with each other (for lack of a better word, I simply mean all cars maybe broadcasting their speed, direction, amount of traction, etc. to other cars), but again, I'm not sure I'd want that information broadcast publicly.
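
For what it's worth, the broadcast idea doesn't have to be fancy; a hypothetical payload could be a small record each car sends out a few times a second. Every name and field below is made up for illustration (real vehicle-to-vehicle efforts define their own message sets), but it shows how little data is involved:

import json, time
from dataclasses import dataclass, asdict

@dataclass
class CarStateBroadcast:
    # Illustrative fields only, not any real V2V standard.
    vehicle_id: str           # a rotating anonymous ID rather than a plate number
    timestamp: float          # seconds since epoch
    speed_mph: float
    heading_deg: float        # 0 = north, clockwise
    braking: bool
    traction_estimate: float  # 0.0 (sheet ice) .. 1.0 (dry pavement)

def encode(state: CarStateBroadcast) -> bytes:
    # A real system would sign, rate-limit, and anonymise this before broadcasting it.
    return json.dumps(asdict(state)).encode()

print(encode(CarStateBroadcast("anon-7f3a", time.time(), 42.0, 270.0, False, 0.9)))

The privacy worry in the post above is exactly why the ID would need to rotate rather than identify the car.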

Rollfast 10-30-16 08:25 PM

I have no experience in the field as I stay on the road.

dabigboy 10-30-16 08:53 PM

I'm amused that a few folks are still thinking the original video was serious. :)

ThermionicScott, I wouldn't say I have a huge amount of faith in the ability of humans to make good decisions on the road, I just have faith in the tendency of them to make reasonably predictable decisions (vs completely wild and counter-intuitive decisions). Plus, while humans may not always make the best decisions, by taking on the responsibility for the drive into your own hands, your level of risk is more closely tied to your own personal decisions. And I think that is always a good thing. I like known risk: my own skills, condition, and decision-making ability. I don't like unknown risk: the minds of the programmers who designed the car's AI to foresee or somehow allow for every possible circumstance that could ever arise, which of course is impossible.

I'm not really disagreeing with you or trying to invalidate your point of view, I'm just trying to clarify my approach to the whole idea. We differ in where we want to allocate risk.

Imnotchinese, you're hitting on a very interesting paradox of computers vs the human mind. There are certain procedural, logical operations that computers are far superior at. For instance, if an AI-controlled car has information on all nearby vehicles, a sufficiently powerful computer can, in a matter of milliseconds, simulate a number of potential outcomes depending on different courses of action, and choose the one that would seem to have the lowest risk based on pre-programmed conditions and the laws of physics. Furthermore, if the computer is able to accurately measure things like distance, relative speed, direction, etc, it can do very accurate calculations for things like required stopping distance and likely path of travel. But the human mind can do other things in near-instant time, things that continue to baffle our understanding (such as highly abstract associations, inference, etc). Simple example: two people who know each other very well are talking. One says something rather vague and poorly worded, like "it's going to be worth it", when such a statement has no bearing on the current conversation. But the other person will, in their mind, instantly go to something they know the first person is working at, or a goal the two friends were recently trying to accomplish, and make a pretty accurate guess that that's the thing the first person was talking about. Or it may be as simple as a comment that goes back to a conversation the two had some minutes ago. Oftentimes, the mind is somehow able to pick up that comment and associate it almost instantly with one of perhaps many recent conversations.
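
The stopping-distance half of that comparison really is mechanical arithmetic, which is exactly where the computer shines. A minimal sketch, with the reaction times and the roughly 0.7 g deceleration picked just for illustration:

def stopping_distance_ft(speed_mph, reaction_time_s, decel_fts2=22.5):
    # Reaction distance plus braking distance, assuming constant deceleration (~0.7 g).
    v = speed_mph * 5280 / 3600           # mph -> feet per second
    reaction = v * reaction_time_s        # ground covered before the brakes come on
    braking = v * v / (2 * decel_fts2)    # v^2 / (2a) for constant deceleration
    return reaction + braking

for mph in (30, 65):
    human = stopping_distance_ft(mph, reaction_time_s=1.5)    # ballpark human reaction
    machine = stopping_distance_ft(mph, reaction_time_s=0.2)  # sensing-plus-decision latency
    print(f"{mph} mph: ~{human:.0f} ft for the human, ~{machine:.0f} ft for the machine")

The abstract-association side of the comparison has no tidy formula like this, which is the whole point.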

These truly disparate strengths of humans vs computers are why, while I'm very skeptical of an AI-controlled car, I think a computer-aided car (as we are already seeing) has huge potential. Imagine having graphical vectors of other traffic's movement projected onto your windshield, or markers showing where you'll likely stop if you apply maximum braking!

Ah, this is interesting stuff. :) I've been interested in AI and software failure modes for some time. My career is in computers, and I've been programming for fun since 2002 or so. Being "passionately curious" is a good thing, I think it's the most important component of learning.

Matt

ThermionicScott 10-31-16 11:40 AM


Originally Posted by dabigboy (Post 19158628)
ThermionicScott, I wouldn't say I have a huge amount of faith in the ability of humans to make good decisions on the road, I just have faith in the tendency of them to make reasonably predictable decisions (vs completely wild and counter-intuitive decisions). Plus, while humans may not always make the best decisions, by taking on the responsibility for the drive into your own hands, your level of risk is more closely tied to your own personal decisions. And I think that is always a good thing. I like known risk: my own skills, condition, and decision-making ability. I don't like unknown risk: the minds of the programmers who designed the car's AI to foresee or somehow allow for every possible circumstance that could ever arise, which of course is impossible.

I'm not really disagreeing with you or trying to invalidate your point of view, I'm just trying to clarify my approach to the whole idea. We differ in where we want to allocate risk.

Imnotchinese, you're hitting on a very interesting paradox of computers vs the human mind. There are certain procedural, logical operations that computers are far superior at. For instance, if an AI-controlled car has information on all nearby vehicles, a sufficiently powerful computer can, in a matter of milliseconds, simulate a number of potential outcomes depending on different courses of action, and choose the one that would seem to have the lowest risk based on pre-programmed conditions and the laws of physics. Furthermore, if the computer is able to accurately measure things like distance, relative speed, direction, etc, it can do very accurate calculations for things like required stopping distance and likely path of travel. But the human mind can do other things in near-instant time, things that continue to baffle our understanding (such as highly abstract associations, inference, etc). Simple example: two people who know each other very well are talking. One says something rather vague and poorly worded, like "it's going to be worth it", when such a statement has no bearing on the current conversation. But the other person will, in their mind, instantly go to something they know the first person is working at, or a goal the two friends were recently trying to accomplish, and make a pretty accurate guess that that's the thing the first person was talking about. Or it may be as simple as a comment that goes back to a conversation the two had some minutes ago. Oftentimes, the mind is somehow able to pick up that comment and associate it almost instantly with one of perhaps many recent conversations.

These truly disparate strengths of humans vs computers are why, while I'm very skeptical of an AI-controlled car, I think a computer-aided car (as we are already seeing) has huge potential. Imagine having graphical vectors of other traffic's movement projected onto your windshield, or markers showing where you'll likely stop if you apply maximum braking!

Ah, this is interesting stuff. :) I've been interested in AI and software failure modes for some time. My career is in computers, and I've been programming for fun since 2002 or so. Being "passionately curious" is a good thing, I think it's the most important component of learning.

Matt

To the bolded point, indeed. And it's always good to look at the potential failure modes of any system. :thumb:

If you were severely nearsighted and wanted to do away with your glasses or contacts, would you rather have LASIK or radial keratotomy? :)

dabigboy 10-31-16 08:35 PM

Hmmm if I were nearsighted and wanted to do away with my glasses, I would probably rather avoid the risk of surgery altogether and go with the risk of walking into a post from not wearing them. :)

Matt

