Old 10-20-21, 10:21 AM
  #21  
Iride01 
Originally Posted by RChung
Sigh. People say this all the time too, but those people have never actually tried to use their power meters for anything where the relationship between power and speed varies. Are you familiar with the DCRAnalyzer? That's a simple tool to compare two different power meter data streams, and it doesn't measure accuracy; it *only* measures consistency, which is what you're saying is the only thing that matters. If you've ever looked at a comparison on YouTube (for example, Shane Miller's comparisons), you've seen that *even in those cases* where the power meters are close to consistent, they're *never* a constant multiple of each other. Even under this easiest of all standards, the kind of deviation you're talking about (one being a constant percentage multiple of the other) doesn't exist. So that's a straw man argument: you've set up an easy-to-knock-down hypothesis and then succeeded in knocking it down. But it was never real; it was always a straw man.

In the real world, when you look at real data from real power meters, they differ by varying amounts at different times and under different conditions. So what's important for these other uses (not training, which is arguably the least demanding use for a power meter) is knowing when they're off, by how much, and how much that affects the results.

[Graphic: two-panel scatter plot comparing PM1 and PM2 power readings; a red line of slope 1 is drawn in each panel.]
To clarify the graphic: I did this a few years ago, but one of the power meters was single-sided while the other measured total both-leg power; I don't remember which was which. The average difference between PM1 and PM2 was smallish, like 3 or 4%, so the dots in the left panel have a slope of 1.03 or 1.04. The red line in both panels has a slope of 1. People think a 3% difference is small because they think the comparison looks like the left panel.
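To make the quoted distinction concrete, here is a minimal sketch in Python of that kind of consistency check, assuming two time-aligned per-second power streams. The synthetic data and the power-dependent error model are assumptions for illustration only, not RChung's actual data:

```python
import numpy as np

# Hypothetical, synthetic example: two time-aligned per-second power
# streams from different meters. pm1 is treated as the reference.
rng = np.random.default_rng(0)
pm1 = rng.uniform(120, 320, size=3600)  # watts, one hour of riding

# If pm2 were a *constant* multiple of pm1, a single slope would explain
# everything. Real meters drift with cadence, torque, temperature, etc.,
# so here we add an assumed power-dependent error to mimic that behavior.
pm2 = pm1 * (1.03 + 0.0002 * (pm1 - 220)) + rng.normal(0, 3, size=3600)

# Least-squares slope through the origin: the single number people quote
# as "PM2 reads X% high".
slope = np.sum(pm1 * pm2) / np.sum(pm1 * pm1)
print(f"overall slope: {slope:.3f}")

# The real test of consistency: does the ratio stay constant across the
# power range? Bin by pm1 and look at the mean ratio per bin.
bins = np.digitize(pm1, [150, 200, 250, 300])
for b in np.unique(bins):
    mask = bins == b
    print(f"bin {b}: mean pm2/pm1 = {np.mean(pm2[mask] / pm1[mask]):.3f}")
```

If the per-bin ratios drift like this, the overall slope is real, but the "constant multiple" picture (the left panel) isn't; that is the distinction the quoted post is drawing.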
Your plot of what the data actually look like is exactly what I expected.

And still no one has shown me why data from a PM that consistently reports values 5% lower or higher wouldn't be just as useful as data from a PM that is considered accurate every time. Aren't you still looking at the data relative to all the data previously collected with the same PM?
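For what it's worth, here is the arithmetic behind that question, as a minimal sketch that assumes the error really is a fixed percentage (all numbers hypothetical):

```python
# Hypothetical illustration: if a meter really did read a constant 5% low,
# every number it produces is just 0.95x the "true" value, so anything
# defined relative to its own history (FTP, zones, interval comparisons)
# is unchanged.
true_ftp = 300.0
true_interval = 285.0

scale = 0.95  # meter reads 5% low, every single time
measured_ftp = true_ftp * scale
measured_interval = true_interval * scale

# Relative intensity is identical either way: the scale factor cancels.
print(true_interval / true_ftp)          # 0.95
print(measured_interval / measured_ftp)  # 0.95
```

The cancellation in this sketch is exactly the assumption in dispute: RChung's point above is that real-world error is not a constant scale factor.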