Michael LoBue writes: As the study results comparing the impact of the start of the recession on standalone and AMC-managed organizations gain attention, a general criticism has emerged from executives of standalone organizations: the results are not valid because the study samples were not randomly selected. This post responds to that criticism, pointing out how the criticism itself is both short-sighted and (intentionally?) misleading.
Here’s the punch line: true, the samples were not randomly drawn, but it’s just as likely that the stellar results produced by the AMC model versus the standalone model would be even greater (rather than smaller, as the critics imply) if the study were repeated on randomly drawn groups.
My first response to this criticism is that I pointed out this limitation myself in the study results. I’m pleased that none of the study’s detractors have tried to claim that I was hiding this important point in an attempt to deceive readers. I won’t go into the reasons why the samples were not randomly selected; that is covered in the study’s report.
Secondly, the criticism itself is only half the story about validity. To ignore the other half is an attempt to cloak the study in fear, uncertainty and doubt (FUD). But I certainly understand why the criticism is left as is: the other half of the explanation does not fit easily into a sound bite, and it’s not in the best interest of those criticizing the study to explain what statistical validity actually means. I attempt a clear and concise explanation here.
Statistics is a wonderful mathematical tool that allows us to draw sample data from a larger population, measure that data, and then profile the larger population based on the measurements of the sample. For this to be valid, the sample needs to be randomly drawn. To have drawn a random sample for the study I conducted, I would have needed access to all standalone and AMC-managed organizations with up to $5M in annual revenue that filed tax returns in 2006, 2007 and 2008. That data set simply does not exist, and constructing it was cost-prohibitive at the time.
Random sampling assures us that any error in our samples will also be random. Error here is defined simply as deviation from the reality of the entire population. What’s important is that random errors tend to cancel each other out, leaving a statistically meaningful result with a known margin of error, such as plus or minus 5 percentage points at a 95% confidence level. Literally, this means that if the sampling were repeated 100 times, we could expect about 95 of those 100 samples to produce results within 5 percentage points of the true value for the entire population. Obviously, the narrower the margin of error, the closer the sample results are to the reality of the entire population we’re trying to understand.
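This cancellation property is easy to see in a short simulation. The sketch below uses invented numbers, not data from the study: we make up a population, draw many random samples, and count how often the sample mean lands within the margin of error of the true population mean.

```python
import math
import random

random.seed(42)

# Invented population: 10,000 organizations and some performance metric
# (e.g. percent revenue change). Nothing here comes from the actual study.
population = [random.gauss(-5.0, 10.0) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# 95% margin of error for a sample of size n, using the population sd of 10.
n = 800
margin = 1.96 * 10.0 / math.sqrt(n)

# Repeat the sampling many times: random errors cancel, so roughly 95%
# of the sample means land within the margin of the true population mean.
trials, hits = 1000, 0
for _ in range(trials):
    sample_mean = sum(random.sample(population, n)) / n
    if abs(sample_mean - true_mean) <= margin:
        hits += 1

print(f"true mean: {true_mean:.2f}")
print(f"sample means within ±{margin:.2f} of the truth: {hits}/{trials}")
```

Run it and the hit count comes out near 950 of 1000, which is the "95 times out of 100" idea from the paragraph above in executable form.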
Stay with me… almost finished!
Because my sampling technique was not completely random, there’s a possibility that the samples I measured contained some non-random error driving the results I found. Here’s the limitation of the criticism itself: if the samples I measured do contain non-random error, we don’t know which direction that error might push the results relative to reality!
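A toy simulation makes this direction problem concrete. Again, every number here is invented for illustration; the point is only that the same kind of non-random selection can either shrink or inflate an observed gap, depending on which way the selection leans.

```python
import random

random.seed(7)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical population: AMC-managed organizations average a smaller
# revenue decline than standalone ones (illustrative numbers only).
amc = [random.gauss(-2.0, 8.0) for _ in range(5_000)]
standalone = [random.gauss(-8.0, 8.0) for _ in range(5_000)]
true_gap = mean(amc) - mean(standalone)

# Non-random sample A: only the best-performing standalone orgs end up in
# the sample (e.g. only survivors respond). This SHRINKS the observed gap.
best_standalone = sorted(standalone)[-1_000:]
gap_a = mean(random.sample(amc, 1_000)) - mean(best_standalone)

# Non-random sample B: only the worst-performing standalone orgs end up in
# the sample. The same kind of selection bias now INFLATES the gap.
worst_standalone = sorted(standalone)[:1_000]
gap_b = mean(random.sample(amc, 1_000)) - mean(worst_standalone)

print(f"true gap: {true_gap:.1f} points")
print(f"gap with a favorable standalone sample: {gap_a:.1f}")
print(f"gap with an unfavorable standalone sample: {gap_b:.1f}")
```

Both biased samples are "wrong" in exactly the sense the critics raise, yet one understates the true gap and the other overstates it. Knowing that a sample was non-random tells us nothing, by itself, about which of these cases we are in.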
The point here is simple: those criticizing the results want us to believe that because the samples were not randomly drawn the results are not valid, and that randomly drawn samples would show no difference between the two management models. That conclusion is misleading: it is just as likely that randomly drawn samples would show an even greater difference between the two groups as a smaller one.
It’s hard to deny that the direction and general magnitude of these results are what we would expect given the basic nature of the two management models under study. Still, I am not completely satisfied with these results myself, and I am eagerly awaiting ASAE’s 990 database, which will provide access to more data and the opportunity either to collect very large data sets for analysis or to conduct random sampling for testing.
Until then, perhaps we should construct models to measure the ROI on the nearly 50% premium standalone organizations pay to manage their organizations versus the AMC option? I realize these results could be threatening to standalone staff, but is ignoring uncomfortable truths responsible? Not every organization is well suited to the AMC model, but shouldn’t we attempt to better understand the key organizational characteristics that determine which model is best for a particular organization?