At least this time GM’s Car Czar is sticking to PR Supremo Steve Harris’ talking points. Namely, that The General makes some kick-ass cars, so give us an effing break (and/or $25b worth of federal loans). On the occasion of 9/11, Maximum Bob Lutz (or his designated spin driver) uses the FastLane Blog to set up a multiple-choice test that proves one thing: nothing. The eight questions– one for each of GM’s U.S. brands, but not really– posit the kind of biased non-queries that would make a GM-friendly journalist blush (albeit only long enough to make his or her way to the open bar). The first three brain teasers challenge readers to rate three vehicles’ “initial quality”– which, as we’ve discussed here ad nauseam, doesn’t mean jack shit. Number four asks us to believe that, as per the “premier automotive analysis site” (Edmunds), the Chevrolet Aveo is the most economical car in America, taking into “account not only mileage but all costs” (above the Honda Fit and Toyota Prius). Question five DARES to quote Dan Neil, the auto writer whose prescient anti-GM rant “inspired” The General’s petulant PR folk to pull ALL the company’s advertising from the L.A. Times. Questions six, seven and eight trumpet journalistic circle-jerk awards, ignoring sales slumps for the media-blessed vehicles. So, what did we learn? That GM is so busy tooting its own horn it still can’t see that the bridge is out.
First of all, this is my first post, and thank you for this site. I love the anti-bullshit view.
OK, let’s take these one by one, and sorry about the length.
1.) Which mid-size sedan has the highest initial quality?
a. Accord (Honda)
b. Altima (Nissan)
c. Camry (Toyota)
d. Malibu (Chevrolet)
GM, define “initial quality”: three months without problems. I hope all cars can do this. BTW, my ’08 Malibu does have some loose parts, but nothing major mechanically, and I love the car.
2.) Which large sedan has the highest initial quality?
a. Avalon (Toyota)
b. Grand Prix (Pontiac)
c. Sable (Mercury)
Hmm. The Grand Prix is being phased out, so you’re getting rid of your own car, and the Sable is a rebadged Taurus.
3.) Which mid-size pickup has the highest initial quality?
a. Dakota (Dodge)
b. Ranger (Ford)
c. Tacoma (Toyota)
No GM product among the choices, but apparently that doesn’t matter on a GM blog.
4.) Which car is the most economical overall?
a. Aveo (Chevrolet)
b. Fit (Honda)
c. Prius (Toyota)
5.) Which car did the LA Times describe as, “a better car than BMW or Mercedes or Lexus or Infiniti?”
a. A6 (Audi)
b. CTS (Cadillac)
c. RL (Acura)
Could have been all three, or none. The CTS, by most accounts I have read, is a great car. Now make a great car for under $20k.
6.) Which company makes the winner of the 2008 “Green Car of the Year” award?
a. Chevrolet
b. Honda
c. Toyota
Let’s see, that would have to be a newly introduced car, so the 20 mpg hybrid. Now put that baby in the Malibu to get some serious mileage.
7.) Which car was selected by the North American Automotive Press Corps as the “North American Car of the Year” for 2007?
a. Aura (Saturn)
b. Camry (Toyota)
c. Fit (Honda)
No problem here; all three were eligible and the Saturn won the award. But which is the lowest-volume car?
8.) Which car won the same award for 2008?
a. Accord (Honda)
b. Altima coupe (Nissan)
c. Malibu (Chevrolet)
Same as number 7.
GM, some critical thinking, please.
So with federal loans, does this remove GM, Ford and Chrysler from the Deathwatch? I hope they resurrect as world-class carmakers; otherwise it would be a waste of tax dollars – the government could have allocated those funds to infrastructure improvements or maglev transport systems.
As far as the J.D. Power initial quality business goes, the devil could be in the details.
When I worked at the Chicago main post office years ago, the employees’ Harrison Street entrance had a big board showing how the on-time delivery of mail in Chicago was always in the high 90s. 98%, 97%, 99% were very common numbers posted.
But these were internal measures, developed and executed by the bureaucracy itself. The seeded mail was placed in certain mailboxes in the city of Chicago, but supervisors in the post office were given a heads-up on this seeded mail. Postmen would make pickups specifically to get this mail collected, and once it was collected, other supervisors made sure this particular mail was processed very quickly through the system and delivered. Management at the Chicago main post office was passing around monthly “cash incentive awards” to each other for a job well done.
But then in 1992 we got a new Postmaster General by the name of Marvin T. Runyon, who decided that the Post Office needed to get at the truth about its on-time delivery with unbiased measurement. He hired a big accounting firm to perform a nationwide audit of the on-time delivery of mail.
Well, guess what: when these independent bean counters placed their mail in various mailbox drop-off points throughout the city of Chicago, the post office had no heads-up as to where this mail was. No special pick-up routes for this seeded mail, no special processing. The numerical results were very different this time; as I recall, they ranged from the mid-80s down into the 60s.
And the same thing may be going on at GM/Powers. It would not be hard to do. I recall that my first new-car purchase was a Pontiac lemon. When the silly thing was towed back to the dealer, the dealer would simply reuse the work order from the last time it was towed back. They always told me it was “easier to just keep using the same work order,” but I knew what was going on. Thus the same problem was only worked on once, not three times, because when it comes time to arbitrate under the lemon law, the dealer, the Pontiac rep, and the GM rep are all going to say that only the number of work orders counts as attempts to fix their damn products, not the number of times the car went back on a hook.
GM is in the business of making so many lemons that this organization has learned through the years how to mitigate their losses from lemon cars. My experience has taught me that GM is a morally bankrupt organization that does not get another dime out of me, ever.
I recall reading some years back that Powers had dropped their 3-year (or was it 5-year?) measure of quality. Not surprisingly, GM’s Cadillacs were getting consistently clobbered by this measure, so it was better for Powers to eliminate it and help GM. I also read that Powers gives GM a heads-up on the Cadillac customers who are selected for the initial quality measure. A GM big wig then calls the customer on the phone to make sure that GM gets a favorable outcome. In other words, GM runs their shit just like a federal bureaucracy.
Sorry for the long post, but I just had to get this rant out.
allen5h – That’s an interesting story about the Post Office. I have yet to see an incentive/motivational program that doesn’t have gamesmanship in its execution.
However, regarding your JD Power assumption, it sounds like you believe that GM has some inside influence with JD Power and their metrics. Based on your Post Office story, it seems that you value independent and objective analysis. In actuality, JD Power has an extremely objective measurement system for its Initial Quality Study and long-term Vehicle Dependability Study.
JD Power sends surveys to customers who register new vehicle purchases. Their questions remain mostly unchanged year to year, and they attempt to obtain a valid statistical sample size of car-buyers in order to create results that can be compared year to year. JD Power does not request information from OEMs regarding any internal reliability data. They make no attempt to validate their results with the internal findings of the manufacturer.
One thing which is often overlooked is that JD Power doesn’t sell anything to any individual car buyer. Their customer base consists of the businesses that they analyze. If JD Power’s survey results turned out to be garbage of no value to the OEMs, then JD Power would stand to lose a lot of business. JD Power also licenses the use of their findings to those who wish to cite the results in advertisements or other communications. This is why they refuse to “give” their findings to the masses to be analyzed. Instead, they sell their findings to the OEMs to be analyzed.
It’s important to remember that the statistical approach and calculation methodology are defined before the study is conducted. They do not change their methods if the results are “odd.” You would be interested to know that everybody was surprised when Ford and GM recently shot up to the top ranks of Initial Quality. Rest assured that steps were taken to double and triple check the results. Bottom line: Ford and GM make better cars in initial quality. Long term reliability takes years to fix, and the Domestics are improving there as well.
It’s a bit foolish to assume quality will improve overnight. It’s easy for many to dismiss (or just refuse to believe) the objective findings because they are not intuitive. But outright dismissal of objective findings is the same kind of illogical behavior you described at the US Post Office.
If GM wants to tout their awesome rankings – then JD Power wins. If no one (neither the businesses nor the car-buying public) has any interest in the JD Power study – then JD Power loses. And thus JD Power is extremely motivated to continue offering objective findings to the OEMs. JD Power also offers consulting services to help provide insight into improvement potential.
Neither JD Power nor Consumer Reports uses research vehicles that are manufacturer-sponsored. Consumer Reports differs from JD Power in its methods and motivations in other areas. It remains up to the public to understand the business models of the two firms, since CR sells its wares to the consumer public. But keep in mind that neither is “paid off” by the OEMs regarding the results of their studies.
If GM wants to tout their awesome rankings – then JD Power wins. If no one (neither the businesses nor the car-buying public) has any interest in the JD Power study – then JD Power loses.
Therein lies the problem I have with JD Power’s surveys, and why I ignore them. They have every incentive to create numbers that their customers want to hear. Their customers are the carmakers, not the car buyers. JDP has a fundamental conflict of interest if you believe they are there to help you make an intelligent car-buying decision.
JD Power also licenses the use of their findings to those who wish to cite the results in advertisements or other communications
This is why I trust CR more than JDP: CR expressly forbids manufacturers/distributors of tested products from referencing CR ratings in promotional materials. A dealer can keep copies of the auto issue in the showroom (and other than Honda, I can’t think of any makes who’d have the guts to do that), but that’s about it.
CR’s customers are its subscribers; JDP’s are the various organizations that license its material. Big, big difference.
holydonut – thank you for your thoughtful comments.
I think we can agree to disagree about what J. D. Powers is all about.
Any organization that is supposed to make an independent measurement of something is not independent if it gives the organization being measured a heads-up on the small statistical sample before the sampled customers can respond to the survey. Why? Because any tinkering with a small statistical sample yields perverse results, and this should be self-evident. (At least it is to me.) And I do believe that J.D. Power is giving this heads-up to Cadillac, because I recall a Cadillac official being quoted as saying (and I am paraphrasing) that Cadillac talks to these J.D. Power-surveyed Cadillac owners on the telephone because Cadillac wants to make sure that mistakes are not made in the survey. But here is the $100,000 question: is Lexus being offered this same “courtesy” by J.D. Powers?
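To put a number on that “perverse results” claim, here is a minimal simulation with made-up figures: a true problem rate of 30 per 100 vehicles, a survey sample of 300 owners, and a tipped-off manufacturer who manages to intervene with some fraction of the sampled owners before they mail their surveys back. None of this reflects J.D. Power’s actual process; it only shows the arithmetic of contaminating a small sample.

```python
import random

random.seed(42)

TRUE_RATE = 0.30    # hypothetical: 30 problems per 100 vehicles
SAMPLE = 300        # hypothetical per-model survey sample
TRIALS = 2000

def reported_rate(intervention_fraction):
    """Average problem rate the survey reports when a tipped-off
    manufacturer 'resolves' that fraction of sampled owners' problems
    before the surveys are returned."""
    total = 0.0
    for _ in range(TRIALS):
        problems = 0
        for _ in range(SAMPLE):
            has_problem = random.random() < TRUE_RATE
            if has_problem and random.random() < intervention_fraction:
                has_problem = False  # owner sweet-talked; problem never reported
            problems += has_problem
        total += problems / SAMPLE
    return total / TRIALS

for f in (0.0, 0.25, 0.50):
    print(f"intervene with {f:.0%} of sample -> "
          f"~{reported_rate(f) * 100:.0f} problems per 100 reported")
# 0% -> ~30 per 100; 25% -> ~23; 50% -> ~15. The cars never got any
# better; only the sample did.
```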
I can think of another car-related way this perversity plays out.
I recall when I took my Honda to a Honda dealership for an oil change. After that oil change I received a letter from the dealership instructing me to talk to the service manager if I should receive a customer satisfaction survey from Honda, and if I did this then my next oil change would be free. Sure enough, my customer satisfaction survey from Honda was in my mailbox about a week later. In other words, Honda gives their Honda dealers the heads-up when a customer is randomly selected for a customer satisfaction survey.
So what exactly is Honda measuring? Honda is measuring its dealer network’s ability to influence a small random sample of customers, and nothing more. This tinkering with the small random sample yields perverse results, but who cares? Honda can boast of internal measures that say they are doing a great job, and the Honda dealers can say the same thing about their dealership service experience.
In general, anytime somebody is trying to sell me any product or service with “according to our own internal metrics” or “according to the independent such and such yardstick” I do not believe it because I am left with the impression that the marketing man hiding behind the curtain has feverishly worked to gain perverse results with his metrics.
allen5h –
The sample size for each individual make/model is a few hundred per vehicle. I think that far exceeds the “n = 50” rule of thumb. Also, GM does not know the names of anyone who receives the survey unless the survey recipient contacts GM and offers some small blackmail. I think you’ll have to dig up that article where GM asserts contact with the survey respondents. If GM did interfere with the survey before the respondent submitted it, I believe that would be grounds for that vehicle being removed from the analysis.
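For what it’s worth, a back-of-the-envelope check shows why a few hundred responses per model is a workable sample. This sketch assumes simple random sampling and treats “owner reported at least one problem” as a yes/no outcome, which is a simplification of the actual problems-per-hundred metric:

```python
import math

def margin_of_error(rate_per_hundred, n, z=1.96):
    """Approximate 95% confidence half-width, in problems per hundred,
    for a yes/no incident rate estimated from n independent responses."""
    p = rate_per_hundred / 100
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (50, 300):
    moe = margin_of_error(20, n)
    print(f"n = {n:>3}: a measured 20 per 100 is really 20 +/- {moe:.1f}")
# n = 50 gives roughly +/- 11 per 100 (nearly useless for ranking);
# n = 300 tightens that to roughly +/- 4.5 per 100.
```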
I think the fallacy with CR is that OEMs cannot have access to CR’s raw statistical data, and thus OEMs cannot make any changes based on CR’s findings. Simply seeing red or black circles may be what the public wants, but automakers need to know specific areas of concern at a very deep level. JD Power offers that information on an objective and consistent basis. When JD Power identifies a weakness with their survey, they adapt. But for the most part their survey questions do not change year over year, and all vehicles are tested in a discrete time period. CR doesn’t get into the detail JD Power does, and CR’s findings (while friendlier for consumers) are less useful for business decision-making.
yankinwaoz –
JD Power sells the same results to every OEM that wishes to buy them. They cannot tailor the results to a single company in exchange for a bribe. Their incentive is to produce results that their customers believe in.
Imagine a situation where JD Power succumbed to temptation and forged the reliability numbers of the CTS or Lincoln MKS, so that these products were suddenly ranked better than the Japanese vehicles. This greedy action would bring down their entire business model.
JD Power offers services in almost all business arenas. Hundreds of businesses, ranging from banks to construction companies, use JD Power findings in their business development. Would you risk the trust placed in your multi-million-dollar business because GM or Ford offered a few million dollars today?
And even if your deception were not discovered, the findings reported by JD Power would be significantly different from those of other research agencies. And if JD Power were consistently different, their findings would be less credible. It would only be a matter of time before the Japanese automakers stopped perceiving any value in the JD Power study, since the American nameplates would always rank higher and fictional problems would be attributed to the Japanese products.
Unlike the American auto industry, JD Power is not willing to take short-term gains at the expense of the long-term viability of its business.
Contrast this with what TrueDelta aims to accomplish. TrueDelta offers no meaningful feedback to an OEM because the sample size for each automobile in each model year is low. They do not have an objective definition of a design problem versus a mechanical problem. Remember, the initial quality survey measures both “annoying” things about the car and mechanical failures. And there is no way to see year-over-year improvements or changes versus a defined market group.
But if you’re a customer, TrueDelta is more interesting because you see feedback for nameplates with narratives. Instead of black circles you read about the problems people have had as well as an aggregate of the number of problems per hundred.
Unfortunately, since the goals of TrueDelta and JD Power differ, it is incorrect to say that TrueDelta is more or less meaningful overall.
If there is a result that is less trustworthy, I believe Consumer Reports would be that group. Their survey is not sent out randomly; a significant portion of their sample consists of Consumer Reports subscribers. I have yet to see any evidence that Consumer Reports subscribers represent a valid cross-section of new-car buyers. They are also more subjective in their interpretations. The notion that they had to remove the V6 Camry from “automatic recommended” status is evidence of a pre-existing idea of quality. JD Power makes no recommendations and merely presents the numbers as they were found. I believe a random sample of new-car registrations accurately represents a cross-section of new-car sales for a given model year.
holydonut,
GM does not need JDPower information to improve GM’s cars. GM has all the information they need in warranty work orders and spares consumption. This is far better information than they get from JDP, which is, after all, a survey. GM KNOWS EXACTLY how good their cars are.
GM needs JDP to provide a rosy picture of their vehicles to 3rd parties. A rosy picture, not the picture they get from working on the cars.
KixStart: “GM does not need JDPower information to improve GM’s cars. GM has all the information they need in warranty work orders and spares consumption.”
Right. But they need JDPower to get data on other manufacturers’ cars. Without that, they could erroneously conclude “we’ve improved a lot; our cars must be as good as Japan’s now!”
Interesting that GM quotes Dan Neil, considering they pulled their advertising dollars from his newspaper after his critical piece on the G6 a few years ago.
It also seems they are stretching the truth a wee bit.
The Ford Fusion also beat all three Japanese competitors, according to the J.D. Power Initial Quality Survey, which also shows the American brands Mercury, Ford, Cadillac, Chevrolet, Pontiac, Lincoln, and Buick rating above average, while the import brands Acura, Kia, Nissan, BMW, Mazda, VW, Subaru, and Scion (and several others) rate below average.
When I sort the table by Overall Quality, I find that only one American brand is rated above average (four red circles): Mercury. The others are all rated “about average” (three red circles).
http://www.jdpower.com/autos/ratings/quality-ratings-by-brand/sortcolumn-1/ascending/page-#page-anchor
holydonut,
TrueDelta’s sample sizes are already over 100 in a few cases. Sample size is a current limitation, but won’t be a permanent limitation. If we had 1/10 the free exposure the general media gives J.D. Power, it wouldn’t even be a current limitation.
No one has as objective a definition of what counts as a problem as TrueDelta does: if the car went into the shop for a problem, and the dealer did something to make the problem go away, it counts. Mere annoyances that some owners will choose to complain about but others won’t even notice–and for which there is no fix–don’t count.
With J.D. Power and CR, varying consumer perceptions will have much more of an impact, because respondents decide what is worth reporting. Also, the way J.D. Power focuses on scores that combine mechanical and design problems–I don’t believe manufacturers are allowed to use either score alone in advertising–leads to misleading results. People might see BMW’s relatively low scores and conclude that the cars are unreliable, when complaints about iDrive are really largely to blame.
Two totally different things–design and mechanical quality–require two separate results. They should not be combined in a score that people will associate with reliability.
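A toy illustration of the distortion described above, with invented problems-per-hundred (PP100) numbers rather than anything from an actual study:

```python
# Invented PP100 figures for two hypothetical brands.
scores = {
    "Brand A": {"mechanical": 60, "design": 110},  # reliable car, fussy controls
    "Brand B": {"mechanical": 95, "design": 50},   # flakier car, simple controls
}

for brand, s in scores.items():
    combined = s["mechanical"] + s["design"]
    print(f"{brand}: combined = {combined} PP100, "
          f"mechanical only = {s['mechanical']} PP100")

# Combined scores: Brand A = 170, Brand B = 145. Brand A looks worse
# overall even though it is the more reliable car; a reader who hears
# "quality score" and thinks "reliability" draws the wrong conclusion.
```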
The “meaningful feedback to manufacturers” is largely BS. For mechanical problems, manufacturers already have the best data possible–their own warranty data.
But then the J.D. Power scores are more and more a matter of what little noises the survey can entice people to complain about. So I suppose J.D. Power does provide some assistance in identifying which of these are worth eliminating.
But, more than anything else, J.D. Power’s results are like other standardized tests where the results have a significant impact on public perceptions. Those being tested figure out what they have to do to get a good score on the test–in this case by paying the test giver for detailed data and training. No doubt some product improvements result, but the real goal is a better test score.
This is why industry insiders commonly refer to the whole scheme as a “racket.” Create a test that people want a high score on, then charge them big bucks for the information and training needed to earn a high score.
One little trick: avoid options that are likely to garner complaints during the testing period (manufacturers know when the surveyed vehicles will be built). Maybe also bump the number of inspectors and customer satisfaction phone calls during the critical month or two.
One advantage of TrueDelta: our surveys are continuous. This leaves the manufacturers nowhere to hide. Also, we cover more than just the first 90 days and the third year.
I do not believe either J.D. Power or CR is biased towards any particular manufacturer. Both are biased–like any organization–in favor of whatever makes them seem more important. One way they both do this: keeping the focus on relative rather than absolute scores. This keeps consumers from concluding that, in most cases, reliability simply isn’t a factor anymore. If people did conclude this, both organizations would suffer from a major decline in revenues.
Finally, there’s one major sop J.D. Power gives all manufacturers: their “circle dot scores” all include a bonus dot. A “two dot” score actually means the bottom 30%, and should really be just one dot. (Three dots = the 30th to 60th percentile, which isn’t symmetrical about the average; four dots = 60 to 90; five dots = 90 to 100.) This makes all cars–well, except those on top–look better than they actually are.
I suppose J.D. Power might argue that a one-dot score (which no car ever receives) would be for a car in the bottom ten percent, the same way a five-dot score is for a car in the top ten percent. These days, those are the only cars really worth worrying about. But J.D. Power refuses to identify which models are in the bottom ten percent, no doubt because this might upset a client.
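For readers who want the bucketing spelled out, here is the mapping as characterized above; the cutoffs are this reading of the scores, not anything published by J.D. Power:

```python
def dots_from_percentile(pct):
    """Map a model's percentile rank (0 = worst, 100 = best) to a
    circle-dot score, using the asymmetric cutoffs described above."""
    if pct >= 90:
        return 5  # top 10%
    if pct >= 60:
        return 4  # 60th to 90th percentile
    if pct >= 30:
        return 3  # 30th to 60th: not centered on the median
    return 2      # the entire bottom 30% still gets two dots

for pct in (5, 25, 45, 75, 95):
    print(f"percentile {pct:>2} -> {dots_from_percentile(pct)} dots")
```

The “bonus dot” lives in the last branch: a symmetric scheme would mirror the five-dot top 10% with a one-dot bottom 10%, but as described, no model ever lands below two dots.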
TrueDelta’s latest results:
http://www.truedelta.com/latest_results.php
Karesh –
I don’t think it’s worthwhile to engage in a discussion with you regarding the validity of JD Power. There is no benefit to be had in convincing you or any reader that there is a “correct” valuation of a vehicle. But I do want to clarify what JD Power is trying to do. Right, wrong, or indifferent, I think it would help readers to understand the general motive for these surveys rather than just dismissing them because they don’t like them.
First, I want to point out that JD Power aims to record customer satisfaction in its IQS test. It is common to equate satisfaction with reliability, and you did so in your previous post. Satisfaction/quality is a product of both reliability and the positive/negative aspects of the vehicle. And where JD Power excels is in providing a good comparison of one vehicle versus the segment average or the segment’s award-winning vehicle.
To the average consumer, JD Power really shouldn’t mean anything. I often wonder why they even bother publishing their very topical and shallow PR items showing their findings. Those ascending or descending bar graphs do not help a customer, and they tend to over-generalize the brand. I suppose it makes for good sound bites and gets some PR. But really, you cannot appreciate how useful their surveys are until you obtain the full data dump for each of them.
Anyway, 50merc rightfully points out that you have to take things to the next level. I posted this example before, but I guess the response wasn’t read by many people. 50merc is doing the critical thinking that many in the industry fail to do.
Let’s pretend the individual automaker knows they have recorded warranty repairs of uneven tire wear. For the sake of discussion let’s say the individual automaker finds that this condition occurs within 3 months on 10 cars per hundred.
1) Should the automaker dismiss uneven tire wear because they believe the owners are too stupid to get their vehicle aligned?
2) Should the automaker assume that they must strive to design for 0 incidents of uneven tire wear?
3) Should the automaker draft a memo to dealers telling them never to service uneven tire wear because no service work orders would result in zero incidents recorded in the system?
4) Should the automaker assume 10 problems per hundred is fine and then focus on other issues?
JD Power adds value because one of their survey questions specifically addresses tire wear and another addresses the vehicle not tracking in a straight line. Everyone who completes a valid JD Power survey must answer this question.
The automaker can compare the number JD Power finds for uneven tire wear and then compare their individual (or collective) vehicles against the award segment or industry average. What better way to identify your performance than to have an objective way to compare your vehicles against your competitors?
So let’s pretend JD Power survey respondents also come up with 10 instances per hundred for the same car. And let’s say that is twice that of the segment average of all respondents. At this point, the automaker can assume that something is causing the car to exhibit uneven tire wear more frequently than others. Maybe it’s excessive camber by design. Maybe there is a high incident of failing ball joints that is causing uneven tire wear. Maybe the cars are getting damaged during shipping and their thrust angle is off the moment it hits the dealer lot.
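Sticking with those hypothetical numbers, a standard two-proportion z-test (my own addition here, not part of any published JD Power method) shows that 10 per hundred against a segment average of 5 per hundred is unlikely to be sampling noise at survey-sized samples:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for comparing two incident rates, using a pooled
    variance estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 30 tire-wear reports in 300 surveys for this model
# (10 per 100) vs. 150 reports in 3000 surveys across the segment
# (5 per 100).
z = two_proportion_z(30, 300, 150, 3000)
print(f"z = {z:.2f}")  # ~3.6, well beyond the ~1.96 noise threshold
```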
A clearer example is the question that asks the buyer whether they have difficulty using the nav system, and a separate question that asks whether the nav system at any time failed to function. If it turns out that customers have trouble using your nav system, then maybe you need to switch the interface. After all, a clumsy nav interface affects owner satisfaction but has nothing to do with reliability. For the Vehicle Dependability Study, they focus solely on the nav system working. JD Power has shown a correlation between one model year’s IQS (the subsection of reliability issues) and the same model year in the VDS.
Either way, JD Power repeats the survey year after year after year. If the automaker decided to pursue actions to address certain topics – they should expect to see an improved score for the subsequent model year. And that’s how JD Power helps the automaker. At no point in time is JD Power telling Joe Public which cars they recommend.
holydonut:
Are you an employee of J.D. Power? People here generally know that I operate TrueDelta, but your identity is much less clear.
You really wonder why J.D. Power bothers to publicly release the scanty results it does release? You must really believe what you’re posting here, because the answer is obvious to anyone who hasn’t been drinking the Kool-Aid: without public results, J.D. Power can’t maintain “the racket.”
Here’s how it works:
1. Publicly release results.
2. This creates pressure for better scores, because every major news outlet reports which brands did well, and which did not. As you acknowledge, this information shouldn’t be that useful to car buyers–but many base their perceptions on it anyway. “I heard that X brand has the highest quality cars.”
3. The need for better scores justifies paying for the detailed results, training, and so forth.
Without #1, demand would be lower for J.D. Power’s products and services.
As for “satisfaction,” the word isn’t anywhere in the press release discussing the 2008 IQS except in the blurb about the firm at the bottom:
http://www.jdpower.com/corporate/news/releases/pressrelease.aspx?id=2008063
The IQS supposedly measures “quality,” not “satisfaction.” Otherwise it would be the ISS. J.D. Power does have other surveys with “Satisfaction” in the title, but not this one. When people hear “quality,” a notoriously nebulous term, they think “reliability” much more often than they think “satisfaction.” Hence we get many people incorrectly concluding that BMWs are unreliable because they have relatively low IQS scores.
When a study measures something much different than people think it measures, you’ve got a harmful disconnect.
So why does J.D. Power lump design quality and mechanical quality together? To bump the scores. A J.D. Power exec just about admitted this to Automotive News a year or two ago, when he said design quality wasn’t added JUST to bump the scores. So while there were other reasons, this was one of them.
Higher scores = more demand for detailed info and training = more revenue to J.D. Power.
Now, there’s no denying that J.D. Power’s research has pressured auto makers to improve their quality, and that J.D. Power’s information has been of benefit to manufacturers. But there’s also no denying that some serious distortions are built into this system.
This is an excellent thread but I think two points should be added:
1) Resale value. Yes, it doesn’t fit into the 3-month window the carmakers (allegedly) want, but it is a continuous, large-sample-size measure of “quality.” (Quality being a very ill-defined term here.) In an open market where people put their hard-earned cash down, the truth will out.
2) The web. As a consumer, if you want to know about quality, go to websites where owners post (visit as many as you can find). Whether they are “haters” or “fans,” you will quickly find out a car’s weaknesses. Again, it won’t satisfy the 3-month window, but if you see the same complaints over and over again, it is likely a real problem. Problems tend to persist over many model years, so if model year X suffers a problem, odds are that X+1 and X+2 (or X-1 and X-2) do too (and why take the chance?). The web is large enough that manufacturers have a difficult time controlling it.
Karesh –
No, I don’t work for JD Power. In fact, a few months ago I was arguing with my coworkers against all the attention and money spent on digging through JD Power data. I didn’t believe in their system because I viewed it as flawed in the ways you have described.
But I changed my tune when I realized that there is no better tool or service out there that can deliver the benefit of the “racket” they’ve set up. Their methods are hardly perfect, but as it stands they are the only consistent source of objective comparison data for the automakers.
In my earlier example with four alternatives, there is a whole bunch of waste and idiotic decision-making driven by the preconceived notion that JD Power is flawed. So instead of taking advantage of some extremely useful data, people write it off because they “know better,” or simply hold the belief that a flawed system can have no benefit whatsoever.
Obviously my numbers above are made up for the sake of discussion, but let’s continue the point. You would have one group of people saying, “JD Power data is stupid, why should I worry about it?” You would also have a group of people trying to get incidents of tire wear down to zero. And a very small (but most often overruled) group would actually attempt to identify how to use their limited resources to pursue the items they could improve the most.
Let’s say that your 10 problems per hundred was well below the segment average for tire wear, but the feedback for brakes was way above the segment average. Would you rather invest money and time to improve your brakes, or investigate the cause of the tire wear?
You have a whole bunch of these causal relationships mapped out in your discussion of their motivations. You’re rattling off these slippery-slope ideas based on their potential actions, and it really sounds absurd. If an automaker could game the surveys by building cheater cars in the 3-month window (and as you pointed out, every car in that timeframe would need to be a cheater car, since they are gaming the entire sample), then that automaker’s competency in execution would be through the roof. They’d be so good at their jobs that they wouldn’t even have a “quality” problem to begin with.
I have no doubt they want to make money, but if their test had no value because of idiotic behavior and “racketeering,” then services would appear that could provide better information than JD Power. A profit opportunity would exist for someone to give the automakers an objective source of information with which to evaluate short-term and long-term quality/reliability.
A clear disconnect was occurring between customers having problems using something and customers having things that failed to work. What would you do if you ran a survey and your customers (the automakers) felt it necessary to differentiate the two types of incidents? Would you add a completely new survey, or would you simply expand the scope of your existing one?
And to Morea’s point… good luck talking to any two people on this planet who have the same understanding of the word “quality.”
So how does JDPower jump from surveying customers (most of whom know little about cars) about “quality” to being able to help automakers with problems like your (intriguing) example:
So let’s pretend JD Power survey respondents also come up with 10 instances per hundred for the same car. And let’s say that is twice that of the segment average of all respondents. At this point, the automaker can assume that something is causing the car to exhibit uneven tire wear more frequently than others. Maybe it’s excessive camber by design. Maybe there is a high incident of failing ball joints that is causing uneven tire wear. Maybe the cars are getting damaged during shipping and their thrust angle is off the moment it hits the dealer lot.
Will customer answers like “The tires wore out too fast” help an automaker answer your question? Shouldn’t service managers be in a better position to provide a more cogent answer?
I’m just asking, no rhetoric involved but plenty of skepticism since I am a believer that bad data leads to bad results (and that bad data is worse than no data). Asking customers what they think without careful consideration leads to “new Coke”.
holydonut,
For my Ph.D. I spent a year and a half inside General Motors, observing their product development activities. I’ve observed exactly what you’re talking about: people spent far more time coming up with reasons they really didn’t have to make any changes to the product than figuring out how to improve the product.
Within that context there really is no current substitute for J.D. Power. Nor will you ever find me arguing that manufacturers shouldn’t use the data. “The racket” has motivated such people, and provided them with information on which to act, when simple competitive pressure and their own warranty data has not.
Now, a more proactive bunch would not have needed J.D. Power. But of course we’ve got to work with people as they are.
I should note that it’s not just Detroit. Mercedes only re-engineered its U.S.-market braking systems to produce less unsightly dust because they wanted to improve their J.D. Power score. Before that point, it was just “Silly Americans. Don’t they realize that the best brakes make lots of black dust? Just wash it off.”
You ask whether I feel they should have two separate surveys to handle design quality and mechanical quality. No, one survey is fine. But they should keep the scores separate, and not combine them.
In the end there are two problems with J.D. Power, both of which it’s hard for any provider of reliability information to resist:
1. Must choose between serving industry and serving car buyers; they’ve chosen the industry, because that’s where they started and that’s where the money is, and this limits and distorts the assistance they can provide to car buyers. Unfortunately, for “the racket” to work they’ve still got to pretend they’re providing the most useful information to car buyers.
2. CR of course chose car buyers. But both they and J.D. Power suffer from a second problem: making mountains out of molehills. Or at least mountains out of hills. When your product is reliability information, it’s tempting to present this information in a way that maximizes the customer’s demand for reliability information–i.e. make problems seem as large as possible.
Car buyers (and perhaps the manufacturers as well) play right into #2, because they want information in the simplest form possible. Many people, when they must choose between having to think hard about something and being misled, would rather be misled. This justifies reporting results in the form of dots, among other things.
Great thread. I also agree that JD Powers has an inherent conflict of interest from the vehicle consumer’s point of view. However, that is NOT their concern, since the OEMs are interested in looking good. So they try to make everyone happy by creating hundreds of tiny niche segments so that everybody has a winner.
One thing that hasn’t been discussed is a very simple one: warranty costs expressed as a percentage of sales. This value is very important to all automakers. Honda and Toyota keep theirs at about 1.5%. Ford has recently improved theirs to 2%. GM has kept mum about what their percentage is. That tells me everything I need to know.
And I love CR for calling out the same kind of bullshit that isn’t tolerated on this site!