Repeatability and the San Diego Wind Tunnel
A few years back, I wrote this sucker:
http://www.biketechreview.com/aerodynamics/uncertainty.htm
whoa, that's probably the third time I've used partial differentiation in a setting outside of academia!!! :-)
To be honest, my experience as a consumer product development engineer in the bike biz (2+ years) and the golf biz (coming up on 10 years now... holy cow!) is much more caveman-based than doing partial differentiation might suggest.
Granted, the tools/toys I get to play with in the golf biz these days are a couple steps above the beer cans and bits of string I got to use in the labs I had at my disposal while in the bike biz... ;-) ...but, still, I think that even with the fancy tools I get to use these days, I find myself relying on my caveman instincts when it comes to judging "goodness" of data that so many of the high tech gadgets can spit out.
Y'know, folks will have to show me how well the methodology and instrumentation they use can repeat a given measurement/setup condition within a day and across days before their numbers get my attention. For example, with a pedaling rider in the tunnel, I have seen things (axial force) repeat to within less than 10 grams...but I've also seen things not repeat so well. Over the years, and more than a thousand runs with pedaling riders, I've grown to know how much I can trust what the tunnel is telling me...and that knowledge drives the way I choose to test.
The same kind of familiarity with repeatability is helpful when placing "equipment only" wind tunnel test numbers into context.
Not sure where I'm going with all of this, other than to say I don't think folks think about experimental uncertainty enough - especially when it comes to doing field tests with a power meter. "Subjective validation" of these data might kick in if you wind up getting the answer you were more or less looking for.
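For the curious, here's a minimal sketch of how that partial-differentiation style uncertainty analysis plays out for a power-meter field test. The drag-only model and every number below are made up for illustration (real field testing also has rolling resistance, drivetrain losses, wind, etc.), so treat this as a toy, not my actual method:

```python
import math

# Toy model: all rider power goes to aero drag,
#   P = 0.5 * rho * CdA * v^3   =>   CdA = 2*P / (rho * v^3)
# Rolling resistance and drivetrain losses are ignored on purpose.
def cda_estimate(power_w, rho_kg_m3, speed_m_s):
    return 2.0 * power_w / (rho_kg_m3 * speed_m_s ** 3)

# First-order uncertainty propagation via partial derivatives:
#   (sigma_CdA/CdA)^2 = (sigma_P/P)^2 + (sigma_rho/rho)^2 + (3*sigma_v/v)^2
# Note the factor of 3 on speed: a small speed error hurts you three times over.
def cda_rel_uncertainty(p, sp, rho, srho, v, sv):
    return math.sqrt((sp / p) ** 2 + (srho / rho) ** 2 + (3.0 * sv / v) ** 2)

# Made-up example numbers (NOT real test data):
p, sp = 250.0, 5.0      # watts, power meter uncertainty
rho, srho = 1.20, 0.01  # kg/m^3, air density uncertainty
v, sv = 11.0, 0.1       # m/s, speed uncertainty

cda = cda_estimate(p, rho, v)
rel = cda_rel_uncertainty(p, sp, rho, srho, v, sv)
print(f"CdA ~= {cda:.3f} m^2, relative uncertainty ~= {100 * rel:.1f}%")
```

Even with optimistic instrument uncertainties like these, you end up with a few percent of slop on CdA, which is why "subjective validation" is so easy to fall into when differences between setups are small.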
Anyway, speaking of repeatability, today I was checking out some additional repeat data I have on a Specialized trispoke (the same wheel analyzed in the uncertainty article linked above). I've tested this exact wheel/tire combination in a couple of tunnels (Texas A&M and lswt.com). I've tested the trispoke in the San Diego wind tunnel eight times since 2005 (yeah, that would be over a four year time period) at a beta=0 flow condition.
What was the standard deviation of the multiple runs over that four year period for the exact same wheel/tire setup? 3.8 grams of axial force at 30 mph. That seems pretty good to me. What do you think? What can the other tunnels do over that same four year time period in terms of "equipment only" repeatability?
So, yeah, that std deviation tells me about how well I can trust the data coming out of the facility here in San Diego over time. My caveman instincts are comfortable with these data out of San Diego! :-)
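For anyone who wants to see how that kind of repeatability number falls out of a set of repeat runs, here's the arithmetic. The readings below are hypothetical stand-ins, not the actual tunnel data:

```python
import statistics

# Hypothetical axial-force readings (grams at 30 mph) from eight
# repeat runs of the same wheel/tire setup. Illustrative numbers
# only -- not the actual San Diego tunnel data.
runs_g = [512.0, 508.5, 514.2, 510.1, 509.3, 515.0, 511.8, 507.9]

mean_g = statistics.mean(runs_g)
stdev_g = statistics.stdev(runs_g)  # sample std dev (n-1 denominator)

print(f"mean = {mean_g:.1f} g, std dev = {stdev_g:.1f} g over {len(runs_g)} runs")
```

The sample standard deviation (the n-1 flavor) is the right one to quote for a small number of repeat runs like this; it's what tells you how far any single run is likely to wander from the long-term average.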
Labels: aerodynamics, lswt.com, trust, Wind Tunnel