Blog

Are Polls Getting a Bad Rap?

Posted by Becky Quarles on February 2, 2016

Category: Uncategorized

Adolf Hitler once said: “If you tell a big enough lie and tell it frequently enough, it will be believed.” In this hyper-political year, a chorus of critics is loudly proclaiming that you can’t trust polls anymore. And many people seem to believe it.

Can you trust surveys to make accurate predictions?

Let’s examine the evidence. The critics of public polls invariably cite declining response rates as a reason not to trust survey research. Yet, the proof is in the pudding, and the pudding is predictive accuracy, not response rates.

According to the American Association for Public Opinion Research and Nate Silver’s FiveThirtyEight, the average error in candidate preference polls conducted by leading survey organizations was less than 3 percentage points in 2014, which is within the margin of sampling error for a sample size of 1,000. Furthermore, the accuracy of election polls has been virtually unchanged over the past 20 years, even as response rates have declined.
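For readers who want to check that claim: under simple random sampling, the 95 percent margin of error for a proportion is about 1.96 times the square root of p(1−p)/n, which is largest at p = 0.5. Here is a minimal sketch of the arithmetic (illustrative only; real polls also adjust for design effects and weighting):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# For a sample of 1,000, the maximum margin of error is about +/-3.1 points,
# so an average error under 3 points falls within sampling error.
print(round(100 * margin_of_error(1000), 1))  # 3.1
```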

How can polls with low response rates make accurate predictions?

There is a tendency in the media to equate declining response rates with increasing non-response bias and, thus, decreasing accuracy. But that is a simplistic view. Non-response bias is the product of two factors: (1) the non-response rate and (2) the degree to which respondents differ from non-respondents. Because multiplying any number by zero always results in zero, there is no non-response bias if respondents do not differ from non-respondents. Pollsters have developed sophisticated weighting algorithms to compensate for demographic differences in survey response, thereby minimizing non-response bias despite declining response rates.
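To make that two-factor logic concrete, here is a minimal sketch with hypothetical numbers. It shows the bias identity (non-response rate times the respondent/non-respondent difference) and a bare-bones version of the demographic weighting described above, where each group's weight is its population share divided by its sample share:

```python
# Hypothetical numbers, for illustration only.

# (1) Non-response bias = non-response rate x (respondent mean - non-respondent mean).
nonresponse_rate = 0.91      # e.g., only 9 percent of those sampled responded
respondent_mean = 0.52       # candidate support among respondents
nonrespondent_mean = 0.52    # if non-respondents look just like respondents...
bias = nonresponse_rate * (respondent_mean - nonrespondent_mean)
print(bias)                  # ...bias is 0.0, no matter how low the response rate

# (2) Simple post-stratification weights: population share / sample share.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # e.g., census benchmarks
sample_share     = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}  # who actually responded
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'18-34': 2.0, '35-54': 1.0, '55+': 0.7}
```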

Why do we see so much in the media and on the Internet about the failures of political polls?

There are several reasons.

*Pundits and politicians who are not faring well in the polls are often quick to criticize them. Any well-publicized poll that fails to predict an election is likely to be publicly called out while those that succeed are ignored.

*News organizations are not funding as many high-quality polls as they used to. Not only has the cost of conducting a valid and reliable poll increased, but news organizations are also more likely to be strapped for the cash they need to conduct them. As a result, more low-quality polls are being conducted. Nate Silver of FiveThirtyEight has conducted analyses of polling accuracy that clearly show that polls based on sound methodologies predict elections much better than those based on questionable methodologies. For example, some organizations are using “Robopolls,” which use automated scripts, because they are much cheaper than polls using live interviewers. However, they cannot – by law – call mobile phones and, thus, they exclude a very large percentage of the population. The increasing use of Robopolls and other less-than-rigorous methodologies makes it inevitable that some public polls will fail.

*The media and public may have false expectations about polling accuracy. An election may turn on a very small difference that lies well within the margin of sampling error of even the most ambitious polls. Also, it is important to remember that polls are a snapshot of vote intent at a given moment in time, and other factors (e.g., snowstorms, family or peer pressure) can intervene between vote intent and the actual vote. And then, of course, there is the difficult process of predicting voter turnout, upon which many an election has turned.

The Iowa caucuses are a case in point for unrealistic expectations of the polls’ ability to predict results. The caucuses are social events where the voting takes place in public. There is plenty of opportunity for bandwagon effects, especially for Democrats, who assemble at their favorite candidate’s table while the undecided voters stand back until they see how things are going. If the candidate you have chosen is not getting enough votes, you can move to the table of another candidate. Social pressure can also change vote intent at the caucuses. Suppose you are married to a Democratic Party activist who has been campaigning for Hillary Clinton, but you harbor a desire to vote for Bernie Sanders. You would have to go to a separate table from your spouse to vote for Bernie. What do you do – vote your heart and mind, or support your spouse?

What is the profession doing to improve the quality of public polls?

Years ago, Philip Meyer, who virtually invented journalism based on polling, created a list of criteria – such as the margin of sampling error, the sponsor of the study, and the number of interviews – that major newspapers used to provide information to their readers so they could judge the credibility of a poll. Today, polling is much more complicated.

The American Association for Public Opinion Research (AAPOR) has created an election resources page on its website. This page is designed to help journalists, the general public, decision-makers, and – hopefully – purchasers of polling services learn about the contributions polling makes to the election process, the limitations of the polling process, the importance of quality, and how to evaluate it.

This information helps journalists and members of the attentive public evaluate polling information more realistically, and it should provide important ammunition for those who argue for placing polling quality above cost when news organizations decide which public polls to purchase.

Another strategy is for news organizations to place less emphasis on prediction of election results and more emphasis on understanding the real concerns of Americans and their positions on issues. The Gallup Organization, which is staying out of the election “horse race” this year and focusing on shedding light on the dynamics of public opinion, has taken an important step in that direction.


How Trashing the Rating Scales Can Lower the Boredom Quotient and Make Your Branding Studies More Effective

Posted by Becky Quarles on January 12, 2016

Category: New Approaches

Branding studies show clients their brand’s relative position in the market and provide rich detail on the competition. But all that rich detail has a price – questionnaires that are long, repetitive, and boring. And that often leads to respondent disengagement, break-offs, and low-quality data.

Traditional branding studies are based on ratings of multiple brands on the basis of multiple attributes. In the worst-case scenario, respondents are asked to identify the brands they are aware of and, then, asked to rate all or most of the brands they identified on a long series of attributes and other measures. The first go-round isn’t bad, but the boredom quotient quickly rises as respondents realize that they are going to have to go through the same series of rating questions for each of the brands they said they are aware of. And a high boredom quotient leads to lack of engagement and break-offs, which, in turn, compromises the validity of the survey.

To minimize these problems, survey researchers try to get their clients to agree to branding surveys that include fewer brands, but limiting the number of brands also limits your ability to get a comprehensive view of the market. For example, you may miss rapidly emerging competitors. And it won’t completely solve the problem of boredom. Even if you cut out all but three brands, the respondent still has to respond to the same list of attributes three separate times, a repetitive and boring task.

Those of us who conduct branding surveys need an alternative to rating scales because:

*The use of rating scales requires a separate question for each attribute for each brand being rated. That can add up to more questions than most people are willing to answer. Rating eight brands on twenty attributes, for example, means 160 separate rating questions.

*People don’t ordinarily use ratings when they make choices between brands.

An Alternative to Rating Scales

I first conducted a branding study without rating scales for a B2B client I had been working with for several years. A new CEO wanted to see all of the brand image ratings for all of the major brands worldwide. Initially, we were stumped, but – necessity being the mother of invention – a new idea began to unfold.

We started by doing our homework – interviewing sales and marketing people in the client organization and reviewing the results of a study of decision-making in the industry. We found that the corporate buyers we wanted to reach thought in terms of superlatives: a supplier was either the “best” or “an industry leader” in a certain area or it was not. And a supplier had to stand out on at least one important attribute to be invited to prepare a proposal or give a presentation. Thus, our charge was to map the competitive market in terms of the buyer’s competitive radar screen and to identify the attributes with the greatest impact on both getting the invitation to propose/present and winning the sale.

To meet this challenge, we dumped the brand image scales and simply asked which of the many possible brands “stands out” on different attributes. We then asked respondents to identify the brand they would recommend asking to prepare a proposal and, based on what they know “now,” which brand they would recommend purchasing. We call this methodology Brand Choice.

In this, and other applications of Brand Choice, we have found several important advantages.

Brevity. Surveys take less time to administer than traditional branding surveys.

Data Quality. There are typically few break-offs, little missing data, and few inconsistent responses (which would indicate lack of engagement).

Comprehensiveness. Results include a competitive radar screen score for all suppliers, including those that were not mentioned (they get a score of zero).

Predictive Validity. Regression modeling reveals strong relationships between key brand image attributes and desired outcomes like purchase intent (see the sketch following this list).

Relevance. Results directly address the questions our clients want to answer: How can we get invited to propose? What do we have to prove to win the sale?

Actionable Results. Results have been very useful in setting marketing priorities, identifying targets of opportunity, and crafting messages.
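As a rough illustration of the regression modeling mentioned under Predictive Validity above (the data, attribute names, and effect sizes below are hypothetical, not results from an actual Brand Choice study), one can regress a purchase-intent indicator on the binary “stands out” mentions to see which attributes carry the most weight:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: each row is one respondent's view of one brand, with
# binary "stands out" mentions for three illustrative attributes.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3))           # stands out on: quality, price, support
logit = -1.5 + 2.0 * X[:, 0] + 0.5 * X[:, 2]    # quality drives intent in this fake data
y = rng.random(500) < 1 / (1 + np.exp(-logit))  # simulated purchase-intent indicator

model = LogisticRegression().fit(X, y)
for name, coef in zip(["quality", "price", "support"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")               # larger coefficient = stronger link to intent
```

In a real application, the outcome would be the respondent’s purchase recommendation and the predictors the full set of “stands out” attributes; the random data with a built-in quality effect here simply shows the mechanics.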

Many clients resist this complete overhaul to their branding approach because they do not want to lose the tracking information they have built up over the years. When tracking is a major concern, we recommend using the Brand Choice method to get an overview of the entire market and, then, asking a limited number of questions about the client organization using key brand image ratings from prior surveys. This temporary measure can be repeated until the client is ready to start a new Brand Choice tradition.

I would be happy to discuss how Brand Choice (or another approach customized to your industry) might help your organization get more out of your branding studies. Contact me here.

 

A Modest Proposal for Increasing Survey Response

Posted by Becky Quarles on January 4, 2016

Category: Research Quality

We depend on survey research for the information that makes the economy tick and supports democratic processes, but response rates are declining. What can we do about it?

Business and government depend on government surveys for reliable economic statistics on unemployment, health insurance, inflation, and poverty. Businesses also depend on survey research to help them discover new opportunities, understand consumer motivations, configure and price new products and services, and design and target messages to the best prospects. Survey research is also an essential part of our democratic process: social science tells us that people are deeply affected by other people’s opinions, or their perceptions of them. Elisabeth Noelle-Neumann’s Spiral of Silence explains how the Nazis used public ignorance of public opinion to stifle dissent. It is no accident that dictatorships do not allow public polling.

If asked why survey response rates are declining, most researchers will cite the large and growing number of mobile-phone-only households, the availability of voice mail to screen calls, the clutter that clogs so many online in-boxes, and public concerns about privacy. All of these are serious problems that make it more difficult and expensive to conduct quality survey research.

But all of these problems can be at least partially mitigated by three factors: time, money, and a combination of expertise and empathy.

Time. Today, most researchers, particularly market researchers, are under tremendous pressure to conduct studies as quickly as possible and, often, to meet unrealistic deadlines. A few years ago, a client representing one of the largest corporations in the world told CASRO members that he expected online survey turnaround to take hours, not days. Some of us protested, saying that pulling a survey out of the field after just a few hours would not only ensure a low response rate but would also mean that the pool of respondents would most certainly be non-representative. He was not moved by our arguments. By contrast, clients who allow a reasonable amount of time for data collection are rewarded with higher response rates and more valid results. If you doubt this, read Public Opinion Quarterly or attend the annual conference of the American Association for Public Opinion Research. Many of the papers presented prove the point that it is still possible to get respectable response rates if you make the effort and allow enough time.

Money. Most research budgets are not sufficient. Almost everyone working in the field of survey research today has had to make the choice of cutting research quality or blowing the budget. Since blowing the budget is not good for one’s career or financial health, it is usually necessary to cut quality.

It is no coincidence that well-funded studies get above-average response rates since having more money to spend allows one to make a greater effort to reach respondents and convince them to participate. It also allows one to utilize the services of higher-level people to design the survey and manage quality control, which brings us to the final point, a combination of expertise and empathy.

Expertise/Empathy. Respondents often complain about long, boring, and repetitive surveys and questions that are difficult to answer or are too personal. This makes me believe that many don’t participate because they have been subjected to too many inept and/or abusive surveys prepared by people with no background in the art and science of survey research and with little appreciation for the fact that respondents are volunteers.

Today, anyone can conduct an online survey at minimal cost for data collection. The results may be worthless or, worse, misleading, but it can be done. As a result, most people are subjected to a continual stream of email invitations. I decided to click on all the invitations I got for a few weeks and was dismayed by the results. A few examples:

A customer satisfaction survey from a major financial services company that went into exquisite detail about every experience I had ever had with their services and asked me to evaluate each experience on the basis of attributes I had never thought about. I finally succumbed to boredom and fatigue and broke off.

A spate of invitations asking me to evaluate online purchases I had made from a department store. Several invitations arrived before the packages were delivered and way before I could have had time to use the product.

An invitation for a branding study that asked a long list of attributes for every competitor I said I was aware of.  It was very difficult to provide responses because many of the attributes did not seem to apply to the product class and there was no “no opinion” or “not sure” option. Since I had to provide an answer to proceed, I just guessed. Garbage in, garbage out!

In short, were I not a researcher, I surely would have been dissuaded from taking surveys in the future. And I doubt that the survey sponsors got valuable information from their misguided efforts.

Back in the heyday of survey research, when response rates hovered around 80 percent, being asked to participate in a survey was a rare event so most people were eager to do so. Surveys, usually conducted by telephone – or occasionally in-person – were expensive to mount so those funding the research took care to hire the most knowledgeable professionals to develop questionnaires. Then, senior researchers monitored the initial interviews to make sure the questions were understood by respondents and working as intended. In other words, they knew their business (expertise) and worked hard to understand what made their respondents tick (empathy).

Eight Things You Can Do To Improve Survey Research Quality

1. Become an advocate for respecting and trying to understand respondents. If the survey doesn’t work for respondents, it doesn’t work!

2. Understand that designing and conducting reliable and actionable research is not easy. It can’t be assigned to the most inexperienced person in the office. Nor can a template for your branding or customer satisfaction survey be downloaded from your software vendor’s website.

3. Get out of your organizational mindset and focus on the respondent’s frame of reference when you write survey questions. Conduct cognitive interviews and/or qualitative research to help you better understand the way respondents think and talk.

4. Don’t be afraid of change. Understand that habit is not a good reason to continue research that is no longer working, if it ever did.

5. Fight for budgets that are adequate to support quality survey research. That includes writing to your Member of Congress asking him or her to vote for more funding for the census and other government surveys.

6. Hire expert consultants if you don’t have the relevant expertise in your own organization.

7. Keep the survey length to 15 minutes or less.

8. Watch for my next post: Getting More Out of Your Branding Surveys Without Antagonizing Respondents.

 

CITATION

Noelle-Neumann, Elisabeth (1984). The Spiral of Silence: A Theory of Public Opinion – Our Social Skin. Chicago: University of Chicago Press. ISBN 0-226-58932-3.

 

David Almy, CEO of the Marketing Research Association, recently told participants at the Spring Symposium of the Mid-Atlantic Chapter of the MRA that he had searched in vain for articles on the value of market research. Yet I, along with many other market researchers, know that market research has enormous value when conducted and used properly. It can help companies increase demand for their products and become more profitable, improve brand image, and target messages more effectively. And sometimes, it can save lives.

Much of this life-saving research comes from the public sector, which conducts social marketing research, or “research for good.” There are many examples of market research that has guided successful advertising and outreach campaigns in such areas as heart disease and cancer prevention, smoking cessation, and diet and exercise. All of these campaigns have saved lives.

HIV/AIDS

During the mid-1980s, I – along with my colleagues at a San Francisco research firm – saw the results of a study tracking the success of the first-ever HIV/AIDS prevention program – a program based on recommendations from a benchmark study we had completed nine months earlier.

The results were startling: Nearly nine in ten gay and bisexual men in San Francisco had adopted safer sex practices or cut down on the number of sex partners during the nine-month interval between the two studies. At first, the change seemed too good to be true, but the study had been carefully conducted with built-in controls and validity checks, and we had been able to connect positive changes in sexual behavior with recall of specific messages.

But the real confirmation of the power of our work came a few years later, when epidemiologists reported a substantial drop in new HIV infections that could be traced to the time of the campaign. Then we knew that our research had, indeed, helped save lives.

The Key Insight: The marketing research showed that many gay and bisexual men were unhappy with the promiscuous lifestyle that was the norm in San Francisco’s gay community. They yearned for more stable and meaningful relationships but were afraid that this would estrange them from their friends. Thus, the communications campaign sought to convince gay men that there was growing support for safe sex and monogamy.

Ironically, more than ten years later, I found myself conducting focus group research for a major metropolitan church whose pastor had performed a gay ceremony of commitment and whose pledges had dropped as a result. Every focus group moderator knows that they need to mask their emotions no matter what the respondents say. Although I had moderated many groups in the past on every conceivable topic, this was a real test for me, since I had played a major role not only in conducting the survey research in San Francisco but also in formulating the recommendations for gay monogamy (at the time, no one would have been brave enough to suggest marriage). Thus, I was really pleased when one of the strongest opponents took me aside after the groups to thank me for encouraging him to express himself despite objections from more liberal members of the congregation.

Heart Disease

Since the early HIV/AIDS research, I have been involved in many other successful prevention and social marketing campaigns on such topics as heart disease, cancer, early treatment of HIV and other diseases, transit ridership, conservation, and checkups; but one study stands out.

This study was conducted as part of an intervention in Wellsburg, West Virginia, which had the highest rate of heart disease in the nation, under a wellness program sponsored by Bayer and the public relations firm Hill & Knowlton. The baseline survey showed the depth of the problem: poor diets, smoking, and lack of exercise. It suggested programmatic and communication strategies for reaching residents and convincing them to take advantage of the Wellness Program and change their lifestyles. It also identified barriers to healthy changes among different segments of the population and recommended targeted communications to those segments.

This intervention offered residents the opportunity to participate in a Wellness Program. Not only were participation rates high, but statistics and physiological measurements taken among participants showed significant improvement. A tracking survey, conducted after about one year, showed that there had been profound changes in attitudes, behavioral intentions and behavior in the community as a whole. As would be expected, these effects were strongest for participants, but nonparticipants, especially the great majority aware of the Program, also showed improvement.

The Affordable Care Act in California

Most recently, we worked with Larry L. Bye, Senior Fellow at NORC at the University of Chicago, to develop an audience segmentation that was part of the baseline survey used to guide Covered California’s advertising and outreach campaign to enroll residents in health insurance under the Affordable Care Act (ACA).

The Washington Post recently named California the “best state in America” for its smooth rollout of the ACA. More than 2.5 million people signed up for coverage during the first six months of open enrollment, and more than 42 percent of those eligible to sign up did so. Medicaid enrollment jumped almost 16 percent, but “California was one of only six states to grow their private insurance rolls more than their Medicaid numbers, even though they accepted federal dollars to expand Medicaid.”[1]

There is even better news: A new survey by the Commonwealth Fund[2] shows that the percentage of Californians without health insurance was cut in half during Covered California’s open enrollment period (from 22% to 11%). By contrast, the U.S. uninsured rate dropped just 25 percent (from 20% to 15%). Clearly, a number of factors, including Covered California’s relatively bug-free website, contributed to this dramatic drop in the uninsured rate; but it is also clear that its research-based advertising and outreach program played a major role.

All of this experience has confirmed what I first learned years ago in San Francisco: Well-designed, properly conducted and intelligently used market research can, indeed, save lives.

Please contact me if you would like to learn more about marketing research for social good.

[1] Reid Wilson. “The best state in America: California for its smooth rollout of the Affordable Care Act.” The Washington Post, June 26, 2014.

[2] Chad Terhune. “Rate of uninsured Californians is halved under Obamacare, survey finds.” Los Angeles Times, July 10, 2014.


Strategies for Minimizing Report Revisions

Posted by Becky Quarles on July 14, 2014

Category: Research Quality

“Rosebud (the symbol of the great classic movie, Citizen Kane) will go over my typewriter to remind me that quality in movies comes first.”

Steven Spielberg


Client-side researchers dislike revisions because they are a frustrating drag on their time and effectiveness. Supplier-side researchers dislike them even more because extensive revisions can quickly blow the budget, demoralize staff, and play havoc with schedules for other projects.

So, years ago, when I started my company, I decided to see what I could do to keep revisions to a minimum. This blog post is about the things that worked most of the time (remember, I said minimizing, not completely avoiding).

The most important lesson I have learned is that you must start thinking about the report at the very beginning of the project.

Strategy 1. Hold a project initiation meeting with the goal of understanding the true objectives of the research and how the findings will be used. This meeting can be held via teleconference, but it will be more effective if held in person. The key players should be present. On the supplier side, this includes the analyst responsible for the questionnaire and report, as well as the client service executive. Attendees on the client side usually depend on the structure of the organization and how it does business with suppliers, but these meetings are almost always more productive when the ultimate users of the research are included. In addition to understanding the objectives of the research and how it will be used, you should also ask about reporting requirements and ask for an example of a model report that meets them. Also ask if there is a stylebook, and if not, inform the client that you will use your company’s standard style (also put this in your proposal and/or contract).

Strategy 2. Think about how you are going to analyze every question that you include in the questionnaire, and push back if people suggest questions that will yield ambiguous or misleading results. I learned this lesson painfully on my first market research job. I was conducting a survey of a suburban community for a developer who wanted to get public support for a very large new development. A political consultant he had hired gave me a question to add to the questionnaire. The question was not just double-barreled but triple-barreled. That is, he wanted to follow a general approval/disapproval question with a question that asked, “would you approve if…” and listed three very different things intended to make people more approving. I pushed back, but not hard enough, and the question ended up in the questionnaire. Unfortunately, the great majority said they wouldn’t approve, and we were left trying to figure out which of the three things they didn’t like. When I presented the results, the developer blamed me for the “stupid” question, and the political consultant made no move to defend me. Lesson learned in spades.

Strategy 3.  Once the questionnaire is approved, prepare a report outline and analysis plan and send a copy to your main contact in the client organization. Even if the client does not read and/or respond to the outline or plan, you have not only stated your intentions but have also thought through the reporting process. Trust me. Reporting will be much easier.

Strategy 4.  After the data collection is finished, prepare the first chapter or section of the report and ask the client to comment on it. This often reveals unspoken stylistic or formatting preferences. Having to revise one small part of a report is a small price to pay for avoiding revisions to the entire report.

Strategy 5. Check and double-check the report to make sure there are no inaccuracies. There is nothing more embarrassing than having a client point out something that is clearly wrong. With modern analytic software, there is no excuse for wrong numbers, but we are all human and can make inadvertent mistakes, like omitting a “not” or another key word that changes the meaning of a sentence. Spell check can also change the word you meant to type, often to something embarrassing. So I usually check the report myself and then ask another person in my firm to double-check it before sending it to a client.

Strategy 6.  Rewrite as necessary.  Often when we are writing about research findings, we are thinking about the meaning, not the most elegant way to express it. Whether you are writing a traditional report or a presentation, words matter. Review your report to omit unneeded words, use active rather than passive voice where possible, make sentences short and crisp, and be sure to put the news upfront. Read The Elements of Style by Strunk and White for more tips.

Strategy 7.  You may call it a draft but format it like a final. The first draft you send to the client should look polished. It will send a message that you care about your work and have done everything necessary to create an excellent report. Plus it will prevent clients from calling out minor formatting errors for revision.

These seven strategies have worked well for me, and I hope they work for you.