After David Champion, head of Consumer Reports’ auto testing, presented this year’s reliability results, I asked two simple questions:

1. What month were most surveys returned (i.e., how old are the data)?
2. What problem rates do the dots represent? Or, to keep it as simple as possible, what was the average problem rate for a 2008 car?

Unfortunately, Mr. Champion did not know the answer to either question. He could only respond that the surveys went out in “the springtime,” and that the dots are relative. As if the actual problem rates they represent were of no consequence. In fact, both things matter. The truth about CR, as we’ve noted here before:

1. The data are already about five months old, and will be 17 months old before they are updated again.
2. The differences between the dots for a 2008 model amount to about one problem for every thirty cars.

But since even the head of CR’s auto research doesn’t know these facts, it should come as no surprise that their millions of subscribers haven’t a clue. And then things got ugly…
Afterwards, CR’s head of publicity came up to speak with me, and was very combative. He insisted that I retract a recent statement that CR’s data were 17 months old. My response: weren’t they 17 months old when I wrote that? Well, yes, but not anymore. Won’t these new data also be 17 months old before they’re updated? Well, yes, but… He then claimed that my turnaround would be no better if I had 1.4 million responses. You know, because it’s better to be large than to be up-to-date. Didn’t the Detroit auto companies wear out this argument?
Finally he brought up my comment on David Holzman’s piece about the future car conference CR hosted recently. My comment: “CR avoids corruption itself, but corrupts other media as much as possible.” Or something to that effect. My response: didn’t we just enjoy a nice meal, preceded by an open bar? Weren’t all of the questions other than mine easy? (Actually, mine should have been easy to answer, at least.) His only response: there were tough questions. But there weren’t. And I wouldn’t be surprised if we’re not invited back next year.
[CR paid for MK’s parking, lunch and the two ginger ales from the open bar. Fair disclosure: TTAC has a contract with MK’s TrueDelta for pricing and specification data]
Good for you, Michael.
How dare you question the word. I don’t understand why they don’t disclose things like this with the published results. Statistics are pretty much meaningless unless you have knowledge of the data supporting them.
As Homer says, everybody knows that 87% of facts are just made up.
Oh, catfight! Bring out the popcorns!
Just laugh it off. Nowadays CR is only read by the olds who want to be sure to pick a reliable ride for their caregiver. And who can blame them? On the other hand, who could say this is in any way relevant to people who actually like cars?
Repair rates are relevant to me, as is the largest possible sample size. That Consumer Reports either doesn’t know or doesn’t want to divulge what their little dots represent in real numbers, rather than in relative terms, makes their presentation of the data completely worthless.
I have to confess I have found Consumer Reports useful and use it as one of my sources of info for purchase decisions on a variety of items. I am not old; I am 42. However, I do find their data wanting. With a science undergrad degree, statistical significance is important to me, and I have no idea if the difference between this model and that model is significant. I also find their predicted reliability a bit bogus.
Glad to see that CR is under at least some pressure for more transparent results. Personally, I feel that reader surveys should be totally replaced with random surveys sent out by simply getting information on vehicle ownership from the DMVs of various states.
I feel that sending surveys to readers, who are already patting themselves on the back for their superior consumer savvy because they read CR in the first place, is not the best way to get unbiased results.
gamper: it’s so expensive to conduct a random sample survey that only the OEMs could afford the results.
If you want reliability info, especially if you want it free or nearly free, you’ll have to do without the random sample.
J.D. Power has a random sample, yet CR’s results and my own generally agree with them.
The difference: TrueDelta has results far ahead of the others, and we release actual problem rates:
http://www.truedelta.com/latest_results.php
Disparaging a competitor rarely improves credibility.
I feel 7-9% better after reading this. I would also say that I am somewhat more relaxed.
So let’s see if I’ve got this straight: CR’s reliability ratings are based on responses to a survey that was mailed out to vehicle owners who subscribe to CR. If that’s true, then who the hell are these people who buy the crappy American cars that keep getting the black “much worse than average” ratings every year? Do these morons NOT read the magazine they subscribe to?!
So basically let’s say they’ve got like 100 responses each from Corolla and Civic owners reporting “much better than average” reliability, then there’s probably like ONE dipshit respondent that went out and bought a Cavalier even though the magazine said it was crap for the five previous model years. I can see how that might not be seen as a very valid survey scientifically speaking.
Michael, you and TrueDelta are to Consumers Reports what TTAC is to Motor Trend, Car and Driver, et al.
Keep it up!
“So basically let’s say they’ve got like 100 responses each from Corolla and Civic owners reporting ‘much better than average’ reliability, then there’s probably like ONE dipshit respondent that went out and bought a Cavalier even though the magazine said it was crap for the five previous model years. I can see how that might not be seen as a very valid survey scientifically speaking.”
If CR doesn’t have enough data, they report that they don’t have enough data and don’t provide a report. That tells me that CR takes sample size into account.
I always find it disappointing that there exists a database of the reliability of every car from new to 3-5 years old, and that we, the public, will never have access to it.
The manufacturer warranty repair database. Why should anyone bother with a survey when the data are already captured down to the last car? No survey or statistical analysis would even be needed, for the data set contains every car sold in the USA.
But of course the possibility of any manufacturer sharing the warranty work database with the public is about as likely as Chrysler making a competitive small car.
I have always bought–and greatly enjoyed–cars that CR hates: Saabs, Audis, Volvos, Porsches, even a Sport Neon for our leadfoot daughter. Owned one Subaru that about convinced me to never never ever buy another Japanese car.
CR is as relevant to me as AARP would be as a guide to great singles bars.
If you could drive a garbage disposal, Consumer Reports would ROCK.
Disparaging a competitor that does not accept advertising rarely improves credibility…
Michael, I fill out your surveys monthly, and I also fill out Consumer Reports’ survey annually. Thanks for raising the bar and pushing Consumer Reports to explain their ratings with something more meaningful than red and black circles.
While I agree that your results and those from Consumer Reports tend to track closely, I haven’t found the same from J.D. Power. Just now I looked up the Toyota Matrix, which Consumer Reports shows as much better than average, and TrueDelta shows a low number of service trips per year, but J.D. Power gives 3 out of 5 dots, which I think equates to about average for overall quality. Similarly ridiculous, J.D. Power shows the overall quality of the Chevy, Ford, and Honda brands as the same, at about average, and I find that really hard to believe as well. Do you really think there is any validity to the J.D. Power ratings?
@cicero
I beg to differ
CR is just as lame on appliances as they are on cars, at least in my 40 yrs of consumer experience.
CR’s data has pretty much validated my experiences with vehicles. I wish they gave more details but I fill out my survey each year and religiously report my data. I trust CR more than any other source out there. Until they prove themselves wrong, I will still use them as my main automotive reference.
I find CR very useful, but I think they should define those dots more precisely. For the rest of my opinion, see Pch101 about ten above.
Gardiner Westbound and ravenchris:
If ANYONE ELSE in that room full of journalists had asked the simple questions I asked, then I would not have. If after I asked the questions ANYONE ELSE had followed up on them, I would not have posted this article.
But no one else asked the questions–or any other questions concerning methodology or the nature of CR’s results–and not a single article based on today’s event has raised these topics or critiqued CR’s results in any way. Instead, CR’s statements are reprinted basically word for word.
Why? Because for some reason the entire automotive media establishment thinks that CR is above critique. How does asking them “when was your data collected” and “what is the average problem rate for 2008 cars” become “disparaging?” Because they’re a non-profit? Because they don’t accept advertising?
Neither excuses the way they do things. CR evaluates products based on the products, not based on whether the company that produces them is in some way a “good company.” GM, Ford, and Chrysler have for years given millions of dollars to charity. Should we then reprint the press releases accompanying their cars, without critique?
Of course not. And CR ought to be evaluated the same way they evaluate others, based on their product.
But no one else does this. So it seems I must.
srclontz:
All three of us ask different questions, so we are bound to sometimes have different answers. It’s also important to note the time period covered.
TrueDelta measures the number of successful repair trips per year based on a monthly survey, and states the average number of months of data and odometer reading. So sometimes our results cover about the first 90 days (what J.D. Power calls Initial Quality) and sometimes they cover later, longer periods. Because we measure successful repair trips, the problem must not only be fixable but fixed to count. So we can be quite sure there was a problem.
CR measures “problems you considered serious” using a yearly survey. Both the question and the long period covered by the survey lead to lower reported problem rates, but the relative ratings still track with our numerical scores. This even though a problem need not be fixable to count as a problem.
JD Power goes even further in this direction. They have people report design annoyances as well as problems, covering either the first 90 days or the third year of ownership (which many people misread as the first three years). This is the score you see. With the IQS, this score is separated into design quality (annoyances) and mechanical quality (actual problems, but not necessarily repaired).
If you look at these subscores for the 2007 Vibe (I randomly picked a year), you’ll see that while the overall score is three circles, the design score is two circles and the mechanical score is four circles. Five is hard to get (it’s the top ten percent), so four could be very good. Or not, but that’s dots for you: this one runs all the way from the 60th to the 90th percentile.
The mechanical score is the one you’d expect to agree with CR and TrueDelta, and in this case it does.
Does the typical consumer know to look at the mechanical score to infer problem rates? Do they understand what ranges the dots represent? Of course not. Which has been the basis for a critique of JD Power here in the past.
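To make concrete how much a single dot can hide, here’s a minimal sketch of percentile-bucketed scoring in Python. Only the two cutoffs mentioned above come from the discussion (five dots for the top ten percent, four dots for the 60th through 90th percentile); the lower cutoffs are placeholders I invented for illustration.

```python
# A sketch of percentile-bucketed "dot" scoring. Only the two cutoffs
# named above are from the discussion (5 dots = top ten percent,
# 4 dots = 60th through 90th percentile); the lower cutoffs are
# placeholders invented for illustration.

def dots_from_percentile(pct: float) -> int:
    """Map a model's percentile rank (0-100, higher = better) to 1-5 dots."""
    if pct >= 90:
        return 5  # top ten percent
    if pct >= 60:
        return 4  # a 30-point-wide band collapses into one symbol
    if pct >= 30:
        return 3  # placeholder cutoff
    if pct >= 10:
        return 2  # placeholder cutoff
    return 1

# Two hypothetical models at the 61st and 89th percentile look identical:
print(dots_from_percentile(61), dots_from_percentile(89))  # -> 4 4
```

Two cars nearly thirty percentile points apart get the exact same symbol, which is the whole problem with dots.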
We should remember that CR is advertising-free and “non-profit” except for the people who run the place and work for it, all of whom make money varying from adequate to substantial. Which is how “non-profits” work.
How many tens of millions was that guy who ran United Fund making, though admittedly some of it was skimmed?
@Michael:
You mean to say that a room full of journalists didn’t ask the tough but fair minded questions to help inform their readers?? Well this is just a shock.
Nice job! I would have paid to see Michael Karesh in action, that’s for sure.
As I have oft repeated on this site, every time CR gets attacked by its would-be competitor (guess who), I repeat that, as an independent garage owner, I find the black dot/red dot system accurate, based on our day-to-day experiences.
If I don’t leave a copy of the CR auto survey on my waiting room table, I get asked for it – often.
Even my own technicians consult CR when they’re discussing some car that’s back for the third time, with another and different problem. Usually such a car is black dotted to death.
Maybe CR can be found wanting sometimes, but it has served a valuable purpose for many years. Long before TTAC even existed.
philbailey,
So they’re not only large, but they’ve been around a long time? That defense didn’t work for Detroit, either.
You have piqued my curiosity, though. How does a black dot (or any colored dot) help your technicians in any way?
And I repeat: how does asking the simple, basic questions I asked constitute an attack? The very fact that something so mild is considered an “attack” should be thought-provoking.
To find this sort of language in any other case, you probably have to revisit the relationship between the old Soviet regime and what passed for a Russian press.
You asked about last year’s steel output? Off to the gulag for seeking to overthrow the government.
Sajeev,
It actually wasn’t worth paying for. At least not the public bit. I went out of my way to stick to two, simple, non-loaded questions. Had to express a little gratitude for the lunch.
I actually expected him to be able to answer the questions. My intent in asking them was entirely that such questions should be asked, and their answers attended to. I was nonplussed when he simply did not have the answers.
That’s when you know they expect none of the questions to concern how they conduct their research and present their results: when they aren’t even prepared to answer the simplest of them.
“If you could drive a garbage disposal, Consumer Reports would ROCK.”
As someone else commented, that isn’t true either. They suck at rating appliances. Their vacuum ratings, especially the “emissions” rating, are crap. I did a great deal of research because my whole family has bad seasonal allergies.
We ended up with a Miele (expensive, but worth it). CR rated it no better than other vacuums even though Miele is one of the few companies with a fully sealed unit. In other words, the only place air is exhausted is through the HEPA filter. Other vacuums do the same, but are not fully sealed. Air leaks in various places, meaning that unfiltered air loaded with dirt is being ejected back into the room.
The point is that if they can’t handle testing vacuums, then I don’t put much stock in anything else they do.
Stephan Wilkinson : “We should remember that CR is advertising-free and “non-profit” except for the people who run the place and work for it, all of whom make money varying from adequate to substantial. Which is how “non-profits” work. How many tens of millions was that guy who ran United Fund making, though admittedly some of it was skimmed?”
Certainly there are bad apples among nonprofits just like in any other organizational form, but I think it unfair to tar CR with such a broad brush. For example, it would be interesting to see a comparison of salaries for typical for-profit media outlets and CR. My bet: CR has lower salaries across the board simply because its funding is much more constrained because of its nonprofit status, refusal to sell advertising, and a long-standing commitment to buy the products it tests rather than rely upon demos.
I’d agree that CR doesn’t tend to focus on the kinds of cars gearheads find most interesting, but I take a careful look at its reliability data before making a purchase. Because I buy used, the age of the data isn’t as important to me as it apparently is to Michael.
I hope that CR will come to see Michael’s critiques as useful feedback to improve the quality of its surveys. CR also deserves arched eyebrows for the way it handled the future car conference. Clearly CR needs to become more transparent and responsive to these kinds of concerns.
All that said, before the advent of the Internet CR was one of the few media outlets that had the balls to robustly and consistently critique the American auto industry. (Even Car and Driver in its heyday was rather weak-kneed in comparison.) Not surprisingly, CR has been unfairly demonized for years by the industry and its flacks in the auto enthusiast press.
Let’s be honest: The press is so rife with conflicts of interest that we need more alternative media outlets that aren’t dependent upon auto industry ad revenue and the press junket lifestyle. CR offers a useful model.
Is it perfect? Nope. CR may even be obsolete in some respects. But that doesn’t change the fact that it holds an important and honorable place in automotive journalism history.
Dr. Lemming:
It might not be obvious why the age of the data is important. In fact, it’s most important if you buy used.
Let’s say you’re looking at a four-year-old car. Do you want to know how it did between the ages of 3 and 4, or how it did from 1.5 to 2.5?
It goes without saying that the problem rate can change quite a bit between age 2.5–before the warranty ends–and age 4.
Whatever the age of a car, a lot can change with another 1.5 years of age and roughly 18,000+ miles.
CR consistently tries to pass off their old data as new data. They’ll refer to “four-year-old cars” when these cars were just a bit beyond two years of age at the time of the survey. And they have the April auto issue, which repackages the fall results as if they were fresh.
Now that I think of it, the data age issue might be one reason they focus on relative ratings. Even if the absolute problem rates have increased during the “lag,” they might be assumed to have all increased about the same. So while absolute problem rates would not be accurate owing to the lag, the relative problem rates might be less affected.
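To illustrate that point with invented numbers, here’s a minimal Python sketch: scale every model’s problem rate by the same assumed aging factor, and the relative comparisons come out unchanged even though the absolute rates are stale.

```python
# A toy illustration, with invented numbers, of why stale data can still
# yield usable *relative* ratings: if every model's problem rate grows by
# roughly the same factor as the fleet ages, rankings and ratios survive
# even though the absolute rates are out of date.

rates_at_survey = {"Model A": 0.20, "Model B": 0.30, "Model C": 0.45}
aging_factor = 1.4  # assumed uniform growth over the 17-month lag

rates_today = {m: r * aging_factor for m, r in rates_at_survey.items()}

for model, rate in rates_at_survey.items():
    rel_then = rate / rates_at_survey["Model B"]
    rel_now = rates_today[model] / rates_today["Model B"]
    print(model, round(rel_then, 2), round(rel_now, 2))  # ratios unchanged
```

Of course, this only holds if the aging factor really is uniform across models, which is itself an untested assumption.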
Michael Karesh: “So they’re not only large, but they’ve been around a long time? That defense didn’t work for Detroit, either.”
Detroit faced a number of worthy competitors and LOST. CR still doesn’t really have any. Even with questions about their data and their lack of transparency, no one else provides data that is useful to average consumers on as large a number of cars and trucks as CR. Until someone else does, it’s CR.
CR dissed the Ford Escort when it was new; when it turned out to be a reliable car, they turned around and recommended it as a used car. It has seemed for years that unless the front said Honda or Toyota, and sometimes Nissan, they called it junk and moved on.
They should stick to laundry detergent and can openers because they are simpler to test than a vehicle and are mostly US made.
The first two years of Focus production were the pits. At one point, it had the world record for recalls, until the Cayenne took over. After that, and again according to our low-level view from the shop floor, the Focus got better. Anything built after 2003 is not too bad. CR got it right, right on the button.
Johnster,
Detroit’s competitors started out small. Fans of Detroit initially wrote off the Japanese with “they’re only good with small cars.”
But what really gets me is when the people who claim to follow CR because “there’s no alternative” also do their best to prevent any alternative from developing. Many people–and I’m not saying you’re among them–seem to think CR should have no competition.
But a lack of competition has been at least as bad for CR as it ever was for Detroit.
And, in the hear, hear department:
“Disparaging a competitor that does not accept advertising rarely improves credibility…”
This is really interesting because I’ve found the qualitative data on the Internet, from prior and current owners, to be far more informative than the ‘dots and stats’.
There are at least a half dozen sites around the web that offer tens of thousands of reviews from folks who have actually owned the car for long periods of time. This is where you’ll find out whether certain components are truly aging well along with the vehicle overall. I actually encourage folks to visit those sites and related enthusiast sites if they want to learn more about the vehicles they purchase from me.
CR in particular has a rather nasty and stupid record of endorsing VWs in the late 1990s, when the qualitative data on every site I know reflected the exact opposite of their findings. Guess which set of data won out in the end? They’re also a bit late when putting vehicles on the ‘avoid’ list, and with generally understanding the actual corners cut by the automakers. Toyota’s decontenting over the last ten years in particular has resulted in a gradual loss of quality and durability vis-à-vis their past efforts. CR didn’t catch up to that fact until 2006.
CR isn’t trying to keep data away. They’re limited in the content they actually get, and unfortunately, the narrow market demographic of their readers limits their ability to look at and offer a more complete picture of the overall quality within the industry.
So why doesn’t TrueDelta just release its own magazine?
Apparently Dalmatians are awful. They’re covered in black dots.
As a reader of (but not subscriber to) Consumer Reports, and a participant on Mr. Karesh’s site, I’m glad that he is asking tough questions. And I appreciate his efforts to develop an alternative to the magazine.
Having said that, I have to agree with Mr. Bailey, as more than a few mechanics I’ve talked to have touted the accuracy of Consumer Reports survey results. So they must be on to something.
I’m also a little baffled at the barbs regarding the magazine’s alleged preference for wheeled appliances… the magazine also tests sports and sporty cars, and rates them on how much fun they are to drive. The days when it thought that a Dodge Dart sedan with a slant six and Torqueflite was all that anyone really needed are long gone.
Usually that charge is leveled at Consumer Reports by domestic fans miffed by their favorite brand’s poor showing on the test track and in the reliability surveys. As if a Pontiac G6 and Chevrolet Impala are more exciting than a Camry…
GS650G: “CR dissed the Ford Escort when it was new; when it turned out to be a reliable car, they turned around and recommended it as a used car.”
Are you referring to the Escort or the Focus? Because BOTH were initially unreliable, but Ford did work to correct the problems.
When we bought my wife’s 2005 Focus, I mentioned the improvement in reliability shown in Consumer Reports for the Focus after 2003. Even the Ford salesman said (in a low voice), “Yes, the first two years of that car had lots of problems; I’d avoid them, too.”
And note that the magazine has just said today that Ford continues to improve its reliability, and is gaining on the best of the Asians in this area.
Obviously, I’m a huge fan of Mr. Karesh’s work. Michael acts with complete integrity and OCD thoroughness.
But more than that, his methodology is transparent and he is open to criticism. He is always open to the possibility, indeed the need, for constant improvement and progress.
Consumer Reports has done– and continues to do– much good in the world. But their decision to shroud their data and “dumb down” the results brings them no honor.
Valid questions, but they should have been asked by somebody else. Disclosure or not, it’s still too much of a conflict of interest.
So advertising is the root of all imperfections?
And refusing to accept it guarantees perfection, or at least means that all imperfections should be ignored?
Ridiculous.
And if one starts with a belief that CR is always right, one is likely to find that they’re always right.
I’ve got one thing I’d like someone to explain to me. Check out the most and least reliable Subarus here:
http://www.consumerreports.org/cro/cars/used-cars/reliability/best-worst-in-car-reliability-1005/how-makes-compare/0407_how-makes-compare.htm
The least reliable Subaru, with a repair rate about 15% worse than average: the Legacy Turbo.
The most reliable Subaru, with a repair rate about 45% better than average: the Outback Turbo.
That’s a large difference, almost equal to the spread from “worse than average” to “much better than average.”
Who ever knew that raising a car’s suspension and adding bodyside cladding could do so much to improve reliability?
See Legacies much more often than Outbacks in the shop? If you haven’t before, I bet you will now.
This is just one unexplained anomaly in their results. There are plenty of others.
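As for the arithmetic behind that Legacy/Outback spread, here’s a quick back-of-the-envelope sketch in Python. The fleet-average rate is normalized to 1.0, since CR publishes no absolute numbers:

```python
# Back-of-the-envelope check on the Legacy/Outback anomaly, with the
# fleet-average repair rate normalized to 1.0 (CR publishes no
# absolute numbers):
average = 1.0
legacy_turbo = average * 1.15   # "about 15% worse than average"
outback_turbo = average * 0.55  # "about 45% better than average"
print(round(legacy_turbo / outback_turbo, 2))  # ~2.09x the repair rate
```

If CR’s dots are taken at face value, the Legacy Turbo needs repairs at over twice the rate of a mechanically near-identical car.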
RF-Final answer?
Good work Michael.
I have, at last count, 169 clients driving Subarus. Only one is unhappy, and she is ROYALLY unhappy for good reason, so far as I can tell from her tales of woe. If I had the same number of clients driving Cavaliers (for instance), the proportions of happy and unhappy would be just about reversed, or something close to it. Subarus mostly get red-dotted, Cavaliers not. And I’m not surprised at the Camry’s latest downgrade by CR. Right now, from our view of life, Honda is the brand of first choice. Ford is improving because of Mazda engineering; I hope they don’t sell the company to someone else. Big mistake. Huge.
I can certainly agree with your last statement. Selling Mazda would be a huge mistake.
BUT it sounds like they just want to sell some of their equity and maintain their current ties.
That said, Mazda-based Fords aren’t the only ones doing well.
Mr. Karesh,
From what I’ve read, Ford is applying Mazda engineering practices and quality control techniques to all of its vehicle lines, not just the ones based on Mazda platforms.
Mr. Bailey,
Is it safe to say that Ford is the best of the domestics? And are GM and Chrysler products improving, too?
Honda, as noted. But even there, the new Civic is suffering from premature rear tire wear. Apparently, the fix is a modification of the rear suspension that is not cheap to accomplish. Consequently, Honda won’t release the relevant TSB.
The NHTSA does not consider this a safety related defect – yet. In other words, no manufacturer is perfect, it’s just that some are more perfect than others.
Geeber:
There’s no doubt that GM vehicles overall are far, far better than they used to be, but they still make major errors such as the intake manifold gasket leaks on all the V6 engines.
As time goes on, other problems still surface, and the list is much longer than it should be. I’m waiting for the first turbo four-cylinder engine from GM to go bang; I think it will happen, since GM hasn’t a clue when it comes to small engine design.
I could get into Chrysler’s poor record, but since they won’t be around much longer, it’s no use beating a dead horse.
Michael Karesh : “So advertising is the root of all imperfections? And refusing to accept it guarantees perfection, or at least means that all imperfections should be ignored? Ridiculous. And if one starts with a belief that CR is always right, one is likely to find that they’re always right.”
I was with you until you stated the above. Forgive me if I missed something, but I don’t see anyone making your stated argument. If I’m correct — and I invite you to prove me wrong — that suggests you’ve created a strawman.
Whether a media outlet relies upon advertising is hardly the sole factor in avoiding conflicts of interest (actual or perceived). However, it can be a major factor, particularly when some auto manufacturers have been known to bully publications that don’t follow script.
I’ve been reading auto buff magazines and CR for almost 40 years now and I’ve seen a fairly consistent pattern of the magazines sidestepping product quality issues that CR has dealt with head on. Many editors and reviewers have come and gone, yet the same general pattern tends to continue. I suspect that CR’s unusual funding structure has played an important role in protecting its journalistic independence.
I wish more media outlets would try that model. Does that mean CR is above criticism? Or that it is the only model worthy of emulation? Not at all.
I hope that you find great success in your important work. My guess is that there’s room in this world for both your approach and that of CR’s, particularly if the latter improves its transparency and responsiveness to feedback.
Michael, can you (or somebody else) speculate as to why CR won’t release raw data, or at least numerical results? It’s not like they’re selling it, or allowing their rating to be used in ads. Have they ever given a reason?
As a former subscriber to CR, I can tell you that I would not post anything negative about my current ride until I got rid of it. It would be kind of stupid to do so: it would devalue my own investment. The truth would come out after the vehicle is owned by someone else.
The fact to keep in mind is how many people actually own a car for a long period of time. That statistic is hard to get, and in a world of instant gratification (leases), very hard to verify.
Dr. Lemming: a couple of commenters offered only “that does not accept advertising” as their reason that I was wrong to critique (“disparage,” in their words) CR.
The logical conclusion is that their refusal to accept advertising places them beyond reproach.
In fact, CR is chock full of ads, for all of the stuff CR sells. And, guess what? No one evaluates CR’s own products, some of which would not pass muster in their own evaluations.
And this is the key to CR’s bias: at least as much as any other organization, CR is biased towards revenue growth. They present information in such a way that people will feel most in need of this information…
…which brings us to Dave Ruddell’s question. The only answer CR will give–shades of government bureaucracy here–is simply that they won’t release more than dots as a matter of “policy.” And no one in the media asks for actual numbers, much less presses for them.
Why not release the numbers? Partly out of habit. But also possibly because, if the numbers were released people might realize how small the differences actually are. And then they’d be less likely to subscribe.
It might also be because they lack faith that the absolute numbers are accurate, while trusting that all inaccuracies wash out in relative comparisons.
Four big reasons the numbers they won’t release might be highly inaccurate (a toy sketch of the memory effect follows the list):
1. The question asks respondents to only report “problems that you considered serious.” This opens the door wide to under-reporting.
2. The data are old; in the summer, the numbers for “four-year-old cars” would actually be those of 2.5-year-old cars.
3. The survey is conducted annually. I know from my own survey that people often forget repairs that occurred more than a few months earlier. I send the email for the survey monthly because I’m not sure even a quarterly survey wouldn’t stretch memories too far. Yearly? Forgedaboutit.
4. Combination of 1 & 3: problems that occurred a year ago are much less likely to seem serious than those that happened recently.
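Here is that toy sketch of the memory effect, a small simulation in Python. Every parameter in it is invented for illustration, not taken from any survey:

```python
# A toy simulation of point 3 (and 4): every parameter here is invented.
# Assume each month a car has a fixed chance of a repair, and the chance
# an owner still recalls a repair decays with each month that passes
# before the survey arrives.
import random

random.seed(1)
MONTHS = 12
REPAIR_PROB = 0.05    # assumed monthly repair probability
RECALL_DECAY = 0.85   # assumed per-month chance a memory survives
CARS = 100_000

monthly_total = 0  # repairs captured by a monthly survey
annual_total = 0   # repairs recalled on a single year-end survey

for _ in range(CARS):
    for month in range(MONTHS):
        if random.random() < REPAIR_PROB:
            monthly_total += 1  # asked within ~30 days, so always counted
            months_ago = MONTHS - 1 - month
            if random.random() < RECALL_DECAY ** months_ago:
                annual_total += 1

print(f"monthly survey: {monthly_total / CARS:.3f} repairs per car per year")
print(f"annual survey:  {annual_total / CARS:.3f} repairs per car per year")
```

With these made-up numbers, the year-end survey captures only about half the repairs the monthly survey does; the exact shortfall depends entirely on the assumed decay rate, which is the point. No one outside CR knows what that rate is.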
Michael Karesh: “And this is the key to CR’s bias: at least as much as any other organization, CR is biased towards revenue growth. They present information in such a way that people will feel most in need of this information…”
A very important point. Non-profits still want to be prosperous! As organizational theorists recognize, organizations seek to grow and prosper. How to be prosperous? Well, generate a revenue “surplus;” that makes new facilities, pay raises, etc., possible. And how can CR generate more revenue? One way is to persuade people that it is important for them to have CR’s guidance when they venture into the dark, scary world of the marketplace. The “colored dots” ranking system essentially dramatizes differences in reliability experience. Remember as a student how disappointed we were if we got 79% on a test? It’d be a mediocre “C,” not the respectable “B” grade from scoring one percentage point higher.
So absolute repair frequencies are important. As would be dollars expended. Moreover, I’d like to know CR’s weighting method. For example, a power window that won’t go down isn’t nearly as serious as a transmission that blows up, and not just in terms of cost.
I have been a CR subscriber for many years. I respect their independence from advertising. That doesn’t make them perfect. No one claims that. I think they do a great job overall on a wide variety of product testing. They have also admitted when they screw up; the car seat testing results botched by a miscommunication with an outside contractor come to mind. They do seem to have integrity in their mission and in their testing methodologies.
Mr. Champion didn’t know what month most of the surveys were returned. Big deal. Maybe the more appropriate question would have been “How old, on average, are the returned survey results?” Even then, we all know that this is pen-and-paper mail stuff here. Yes, there is going to be a delay. I would guess that there are those who return the surveys the next day and those who get around to it several months later. Either way, they probably average out to some number. So he didn’t have the specific month. Did that mean he couldn’t get that info? Probably not. You say that their head of publicity was combative. Maybe we should also hear their side of the story.
Your second question about dot-to-problem rate correspondence is a good one. I would like to see their answer to that one. Perhaps your questions could be communicated via e-mail to them and their answers posted here.
I’m not going to be a blanket apologist for CR here. Politics and “stick to the goddamn toaster testing” comments aside, they do a commendable job in bringing information to me and millions of others who would otherwise be in the dark. Their work on safety testing in multiple areas (auto safety, lead, radon, et al) is enough in my book to have a permanent place on my reference shelf. They are a tireless advocate in these areas. They have a tremendous amount of respect. That same respect would not be there if they had advertising of any sort. And claiming on a website that “our opinions are in no way influenced by those ads floating next to that shit you’re trying to read” doesn’t cut it and everyone knows it.
Yes, you guys are competitors in a limited sense. I really like the TrueDelta concept and website. I’d like to see a lot more problem detail. And maybe some info as to what doesn’t match up to CR and others. I wish it could happen without ads though. For example, having a Warranty Direct ad on your page removes your ability to objectively discuss extended warranties – a very relevant subject in your realm.
I have always laughed when I read some of the forum crap from people dismissing CR auto testing. They do a pretty damn thorough job from what I’ve seen over the years. And they have their own track, buy their own vehicles, etc. Their summaries of most models are usually spot-on. They get hit, at least perceptually, for their bored-stiff engineers’ assessments of things like sports cars. They usually hit the nail on the head though. I’ve always found it odd that the major rags have never given a shit about listing reliability for vehicles, aside from a handful of “long-term tests.” And these are cars that are usually “provided” to them, not purchased as CR’s policy requires. And safety results: how many rags have a damn bit of useful info on safety? Does C&D or Automobile give a mention to how safe this gleaming sled is? Usually not anywhere to be found. Guess they figure it’s a given that we’ll get it elsewhere.
Overall, I’d recommend treading lightly on the CR criticism until you provide us with their official response so that we can evaluate in black and white.
I actually thought that asking which month the surveys were returned–and 75% are online these days–was a less loaded question than “how old are the data,” which would imply that the data are old. Maybe I was wrong in this, but I thought it was a very simple question.
Being in the survey business myself, there are few things I focus more on than how many surveys come in how soon. I can tell you that the current round is tracking almost exactly with the previous round. Maybe when you have 1.4m of them, though, you no longer care how many actually get returned, or when.
I don’t currently evaluate extended warranties. If I did, then having Warranty Direct ads would pose a conflict. Currently I don’t personally evaluate anything on the site itself; the content is user-generated. My personal material is posted here and elsewhere.