Details, details: Small survey errors can produce misleading research results
Businesses rely on research to gather data and process it into the knowledge needed to identify markets and satisfy customers. When exploring questions about attitudes, beliefs and other intangibles, researchers use Structural Equation Modeling (SEM) to analyze data. A W. P. Carey School of Business marketing professor and her co-authors have discovered that a significant percentage of academic researchers used the wrong measurement approach in their studies, resulting in misleading conclusions. If researchers performing studies for businesses follow the same pattern, companies may be basing critical business decisions on flawed research findings.
Author E.B. White once said, "Humor can be dissected as a frog can, but the thing dies in the process and the innards are discouraging to any but the pure scientific mind." For many, the same could be said of statistics. Still, new research indicates that even those who don't crunch the numbers need to understand the details in certain sophisticated research models. The reason: Research design mistakes are common, and when you're looking at how one business circumstance affects another, even tiny blunders can produce very deceptive findings.
After examining recent research studies, W. P. Carey marketing professor Cheryl Burke Jarvis found survey design errors that in some cases produced results skewed by as much as 555 percent. She notes that with errors that weighty, "there is real potential for managers to come away with flat-out wrong conclusions from their market research."
The search for business intelligence
For those who don't do statistical calculations, the first thing to understand is that statistical science has evolved beyond simple, century-old regression analysis where researchers probe how factor "A" affects factor "B." Since the early 1970s, statisticians have been exploring relationships more deeply by looking at systems of equations that explain how "A" impacts "B," then "B" influences "C," and so on.
Called Structural Equation Modeling (SEM), this newer research technique allows market researchers to examine complicated, interrelated variables that, like dominoes toppling one after another, can form a meaningful pattern. For instance, SEM could help managers see how job satisfaction among service workers affects job performance, employee turnover, customer satisfaction and, ultimately, profits.
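To make the domino picture concrete, here is a minimal sketch of such a chain in equation form; the variable names and coefficients are illustrative, not taken from the research:

```latex
\begin{align*}
\text{Performance} &= \beta_1\,\text{JobSatisfaction} + \epsilon_1 \\
\text{CustomerSat} &= \beta_2\,\text{Performance} + \epsilon_2 \\
\text{Profit} &= \beta_3\,\text{CustomerSat} + \epsilon_3
\end{align*}
```

Estimated together, the chain implies that a one-unit rise in job satisfaction moves profit by the product of the coefficients along the path, which is why a distortion introduced early in the chain ripples all the way through.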
That is the kind of modeling possible with SEM, but it only works if the measurement model is created correctly. Jarvis teamed with Scott B. MacKenzie and Phillip M. Podsakoff, both professors at Indiana University's Kelley School of Business, to see how well researchers have used SEM over the 24-year period from 1977 to 2000.
The trio reviewed market research papers published in the discipline's four leading scholarly journals (Journal of Consumer Research, Journal of Marketing, Journal of Marketing Research, and Marketing Science) to evaluate research that used SEM to explore latent constructs — abstract psychological concepts such as "attitude" or "job satisfaction."
Out of 178 papers examined, the team found that 28 percent used measurement models that were "incorrectly designed," says Jarvis, who adds that if this error rate "holds true in the top four academic marketing journals, there's no reason it shouldn't hold true in practitioners' work." In fact, Jarvis suspects that marketing practitioners are even more likely to make mistakes because, generally, academics — the people teaching others how to perform SEM analyses — are more adept at using this statistical approach.
Fault lines
As Jarvis explains, latent constructs can't be measured directly because they are immaterial. "In physical science, you can measure the weight of an object or put a tape measure on something to determine its length," she says. That's not the case with beliefs and attitudes. Consequently, researchers measure latent constructs indirectly, through observable indicators such as answers to survey questions.
"For example, most researchers today conceptualize job satisfaction as comprising a variety of distinct facets, including satisfaction with one's work, pay, coworkers, supervisor and promotion opportunities," write Jarvis et al. in a paper titled, "The Problem of Measurement Model Misspecification in Behavioral and Organizational Research and Some Recommended Solutions."
Under that definition, job satisfaction fits within what Jarvis and her team call a "formative" model, in which answers to survey questions define the latent construct we call "job satisfaction." If you drew a picture of the model, you might have a circle to represent job satisfaction and smaller squares beside the circle to represent all those variables used to measure job satisfaction: pay, work, etc. Arrows would point from the measurement variables to the latent construct, thereby indicating that those variables drive the ultimate job-satisfaction rating.
Contrast this with a reflective model, which might be used to measure a latent construct such as "brand loyalty." Here, attitudes about the brand would likely drive responses to the questions researchers use to uncover how much brand loyalty exists in the consumer's mind. In this case, the arrows indicating which factor propels the other would point from the latent construct, brand loyalty, to the measurement variables.
Why worry about which way the arrows point? "The direction in which those arrows point has mathematical implications," Jarvis says. Unlike good old-fashioned algebra, where "X equals Y" also means "Y equals X," relationships in SEM run in one direction. Researchers must ask, "Does X cause Y, or does Y cause X?" Jarvis says, explaining that survey authors must get this distinction right because "the direction of the arrows affects how the math behind the model is written."
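In the generic notation of measurement textbooks (a sketch, not the authors' exact formulas), write the latent construct as eta and the survey items as x1 through xn; the two directions then produce two different sets of equations:

```latex
\begin{align*}
% Reflective: the construct drives each item, so every item gets its own equation
x_i &= \lambda_i \eta + \epsilon_i, \qquad i = 1, \dots, n \\
% Formative: the items jointly define the construct in one equation
\eta &= \gamma_1 x_1 + \gamma_2 x_2 + \cdots + \gamma_n x_n + \zeta
\end{align*}
```

Flipping the arrows flips which side of the equation the construct sits on, and with it the assumptions the estimation software makes about measurement error and correlations among the items.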
Often, the direction of those arrows is precisely where researchers go wrong. Jarvis and her colleagues found that nearly 30 percent of the academic measurement models were mislabeled as to whether they were reflective or formative. This seemingly minuscule slip-up has implications that Jarvis says left many who've seen her work "surprised."
Snowball effect
Recall that SEM harnesses a system of equations to deliver its research information. "SEM has two subsystems," Jarvis explains. One is the measurement subsystem, which links the observable measures to the latent construct and can be specified as either formative or reflective. The other, bigger-picture part of a structural equation model is the structural subsystem, where researchers hook together all those latent constructs (the attitudes and beliefs being measured) to see their impact on one another.
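Side by side in conventional matrix form (again illustrative rather than the authors' own equations, with x the vector of survey responses and eta the vector of latent constructs), the two subsystems might be written as:

```latex
\begin{align*}
x &= \Lambda \eta + \epsilon && \text{(measurement subsystem, reflective form shown)} \\
\eta &= B \eta + \zeta && \text{(structural subsystem: constructs predicting constructs)}
\end{align*}
```

Because the estimates in the structural equation depend on how the measurement equation is written, an error in the first feeds directly into the second.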
As it turns out, a wrongly drawn arrow designating whether a measurement subsystem is formative or reflective is a little like a snowball: its impact grows as it tumbles through the terrain of the larger structural subsystem.
"If you make that small error in the measurement subsystem of the model, it biases — erroneously alters — the predictions the model makes about the connections between the latent constructs by as much as 555 percent," Jarvis says, referring to results she and her team found when they ran a series of simulations designed to see how wrong an incorrectly specified model could be.
If the model measuring the impact of employee satisfaction on customer satisfaction is 555 percent off, it would be telling managers that employee satisfaction has 555 percent more impact on customer satisfaction than it actually has. That would be a positive bias.
A negative bias is a simple switcheroo, and it could be off by as much as 90 percent. In other words, a manager could look at a piece of incorrectly designed research and conclude, If I do this, it will increase sales, when the action would actually decrease sales, Jarvis explains. "Misspecification of models could lead to really inappropriate conclusions and bad decisions on the part of corporate managers," she adds.
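For readers who want to see the mechanics, below is a toy Monte Carlo sketch of the attenuation problem. It is emphatically not the authors' simulation design, and the setup is hypothetical: a formative construct is scored correctly as a composite of its items, then "misspecified" by letting a single item stand in for the whole construct, a crude proxy for the reflective assumption.

```python
# Toy illustration only -- not the authors' Monte Carlo design. Real SEM
# estimation is far more involved; this sketch just shows how treating a
# formative construct as if one item "reflected" it can bias a path estimate.
import numpy as np

rng = np.random.default_rng(0)
n, n_items, true_path = 100_000, 4, 0.5

# Formative world: four roughly independent facets jointly DEFINE the construct
items = rng.normal(size=(n, n_items))   # e.g., pay, work, coworkers, supervisor
construct = items.mean(axis=1)          # formative composite score
outcome = true_path * construct + rng.normal(scale=0.5, size=n)

def slope(x, y):
    """OLS slope of y regressed on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

correct = slope(construct, outcome)         # composite scoring recovers ~0.50
misspecified = slope(items[:, 0], outcome)  # one item stands in: ~0.125

print(f"true path            : {true_path:.2f}")
print(f"correct estimate     : {correct:.2f}")
print(f"misspecified estimate: {misspecified:.2f} "
      f"({100 * (misspecified - true_path) / true_path:+.0f}% bias)")
```

In this toy setup the misspecified estimate lands roughly 75 percent below the true path, the same direction of distortion, if not the exact magnitude, that the team documented. Fortunately, Jarvis has some advice to help others avoid model-specification gaffes.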
Picking wisely
Jarvis remembers with dismay many queries from corporate workers looking for guidance on research projects. "I've had people contact me and say, 'Hi, I'm the intern, and my boss is having me design a survey,'" she recalls. "That's not something you give your intern. It is something you give the most experienced person in your group."
Jarvis urges researchers to "think through what you're measuring upfront" and take the survey-writing process very seriously. "Worrying about survey design after the fact is not the way to go," she says.
To help researchers tell the difference between formative and reflective models, Jarvis and her team have compiled a handy list of rules for specifying the measurement piece of a survey. In reflective models, a latent construct such as brand loyalty or satisfaction with President Bush drives the indicators, and those indicators are highly correlated: a respondent who rates one item highly will probably rate the others highly as well.
Dropping an "indicator," or question, from a reflective measurement model won't change results significantly, but it could have a serious impact on a formative model. There, the indicators aren't correlated reflections of a single strong attitude. Instead, they are the building blocks of a belief, the pieces of a puzzle. Miss one, and your picture is incomplete, as the sketch below illustrates.
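A quick, hypothetical simulation (ours, not the paper's) makes the puzzle-piece point visible: reflective items share a common driver, so a composite barely notices a missing item, while each formative item carries unique content that cannot be recovered once dropped.

```python
# Illustrative sketch: what dropping one survey item does to a reflective
# composite versus a formative one. All names and numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Reflective: one latent attitude (say, brand loyalty) drives all four items
loyalty = rng.normal(size=n)
reflective = loyalty[:, None] + 0.5 * rng.normal(size=(n, 4))

# Formative: four distinct facets (pay, work, coworkers, boss) define the score
formative = rng.normal(size=(n, 4))

for name, items in (("reflective", reflective), ("formative", formative)):
    inter_item = np.corrcoef(items, rowvar=False)[np.triu_indices(4, k=1)].mean()
    full = items.mean(axis=1)           # composite built from all four items
    drop1 = items[:, 1:].mean(axis=1)   # composite after dropping the first item
    agreement = np.corrcoef(full, drop1)[0, 1]
    print(f"{name:10s} mean inter-item r = {inter_item:.2f}, "
          f"full vs. dropped r = {agreement:.2f}")
```

The reflective composite remains nearly interchangeable with its shortened version, while the formative composite agrees noticeably less with its own full version: the dropped facet's information is simply gone.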
These and other guidelines for smart model design appear in a paper Jarvis and her colleagues produced for publication in 2003. Its title: "A Critical Review of Construct Indicators and Measurement Model Misspecification in Marketing and Consumer Research."
Of course, you always can hire the research out, and then, the question to ask is very simple. "If you're working with a market research firm that offers to do SEM, ask the researchers if they know the difference between formative and reflective models," Jarvis says. "Ask them if they know how to handle those two types of indicators," she adds. If they don't, maybe it's time to find another firm.