
Customer surveys are a popular research method across industries for a reason. When implemented carefully and strategically, surveys can provide a wealth of knowledge about your existing, former and potential customers.

This crucial step validates your qualitative findings against a statistically significant sample of your audience, ensuring your marketing strategy is built on reliable, predictable data.

Conducting a market survey requires precision, and seeking external expertise for this research can be immensely beneficial, offering deep insights into your customers’ characteristics, behaviors, and purchasing habits.

Quantitative research measures the validity of your hypotheses. Surveys statistically confirm, within a margin of error, the likelihood of a customer’s decision based on their profile. This method confirms or discounts trends and patterns previously identified through your qualitative research, providing a solid foundation for informed decision making.

Surveys allow you to reach a broader audience than interviews and present a comprehensive understanding of various customer perspectives. They also offer anonymity and encourage honest feedback, which is crucial to identify areas for improvement.

Finally, surveys test your assumptions about customer behavior, supplying concrete data to support or refute your beliefs and, thus, shape effective strategies.

Best Practices

To design effective and actionable surveys that yield results, consider the following best practices.

Determine a credible sample size: It’s crucial to define the minimum number of responses required for your survey to have statistical relevance. This ensures that your findings are meaningful and can be confidently used for decision making.

To arrive at the targeted number of responses, you must first determine your minimum acceptable confidence rate and error rate:

  • Survey confidence rate (also called the confidence level): This is a measure of how sure you can be that the results of your survey reflect the views of the entire population you’re studying. A survey with a 95 percent confidence rate means you can be 95 percent sure that the true answer for the full population falls within your survey’s margin of error.
  • Error rate (margin of error): This is the range within which the true answer is likely to fall. For example, if you conduct a survey with a margin of error of 5 percent and 80 percent of respondents indicate on the survey that they like a product, the true percentage of people who like the product could be as low as 75 percent or as high as 85 percent. The margin of error acknowledges that there is a chance that the survey results could be off by a small amount.

For statistically valid research, the best practice is to go no lower than a 90 percent confidence level and no higher than a 5 percent margin of error.

The sample size formula is complex, so I’ve provided you with a sample size calculator in the 12 Battles™ Reader Hub. All you need to know is your total audience size—your total number of customers, lost customers, and prospects over the period from which you intend to pull your survey sample.

Let’s say the period is five years. If you’ve had 5,000 customers and prospects over five years and you want to maintain a 90 percent confidence rate with a 5 percent error rate, you need results from 259 customers and prospects to complete your survey. If you assume a 20 percent survey response rate from your customers and prospects, you’d need to send the survey to 1,295 customers and prospects, though it’s best to give yourself some padding by sending it to at least 50 percent more than you need. In this case, you’d send 1,943 surveys total.
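
If you want to sanity-check the calculator’s output, here is a minimal sketch of the underlying math in Python, using the standard Cochran sample size formula with a finite population correction. It is an illustration of the arithmetic rather than the Reader Hub calculator itself, and because different tools round slightly differently, its result can differ by a response or two from the 259 shown above.

```python
import math

# Z-scores for common confidence levels
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def sample_size(population, confidence=0.90, margin_of_error=0.05):
    """Responses needed for a given population, confidence level, and margin
    of error (Cochran's formula with a finite population correction)."""
    z = Z_SCORES[confidence]
    p = 0.5  # the most conservative assumption about how answers will split
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def surveys_to_send(responses_needed, response_rate=0.20, padding=0.50):
    """Work backward from responses needed to surveys sent, assuming a
    response rate and adding a safety cushion on top."""
    return math.ceil(responses_needed / response_rate * (1 + padding))

needed = sample_size(5_000, confidence=0.90, margin_of_error=0.05)
print(needed)                   # about 257 responses needed
print(surveys_to_send(needed))  # about 1,928 surveys to send with 50 percent padding
```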

Identify the right survey incentives: Offering incentives to survey respondents can boost participation rates and improve the quality of responses. Incentives can be monetary, such as gift cards or discounts; non-monetary, such as charitable contributions, branded gear, and access to exclusive content; or the chance to win a prize. This practice can result in a larger and more diverse respondent pool, leading to more comprehensive and representative data. Be sure to select a survey incentive with universal appeal among your survey population or you could get a biased group of respondents and skew your data.

Consider the survey structure: A well-structured survey is user-friendly and ensures that questions are logically organized. It minimizes respondent confusion and fatigue, making it more likely that respondents will complete it. 

Proper survey structure includes clear instructions, a logical flow of questions, concise wording, and appropriate survey branching (also called skip logic), which changes the next question a respondent sees based on how they answered the current one. By using branching, you can ask follow-up questions that relate directly to a respondent’s earlier answers. This not only reduces survey length, since respondents are prompted to answer only relevant questions, but also makes the survey more engaging and focused on individual preferences or experiences.
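
To make the idea of skip logic concrete, here is a tiny sketch of how branching might work under the hood. The question IDs and branching map are hypothetical, and any survey platform will handle this for you, but conceptually it is just a lookup from the current question and its answer to the next question.

```python
# Hypothetical three-question survey with one branch point.
QUESTIONS = {
    "q1": "Have you purchased from us in the last 12 months? (yes/no)",
    "q2a": "What did you like most about your most recent purchase?",
    "q2b": "What has kept you from purchasing?",
    "q3": "How likely are you to recommend us, on a scale of 0 to 10?",
}

# Skip logic: the next question depends on the current question and its answer.
# None acts as a wildcard meaning "any answer".
BRANCHES = {
    ("q1", "yes"): "q2a",
    ("q1", "no"): "q2b",
    ("q2a", None): "q3",
    ("q2b", None): "q3",
}

def next_question(current_id, answer):
    """Return the ID of the next question this respondent should see."""
    return BRANCHES.get((current_id, answer)) or BRANCHES.get((current_id, None))

print(next_question("q1", "yes"))  # q2a: a customer is asked about their purchase
print(next_question("q1", "no"))   # q2b: a prospect is asked what held them back
```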

Avoid survey bias: Survey bias can lead to skewed results by favoring or disfavoring certain groups or responses. It compromises the survey’s objectivity and undermines the accuracy of the data. The two most common types of survey bias are questionnaire bias and sampling bias:

  • Questionnaire bias: When the wording of questions leads respondents toward a particular answer. Here are the five common types of questionnaire bias:
    • Leading questions: Questions that are phrased in a way that suggests a certain answer. For example, “Don’t you agree that product X is amazing?” implies that product X is indeed amazing and pushes the respondent to agree.
    • Loaded questions: These are questions that contain a controversial or unjustified assumption. For instance, “How problematic do you think the recent pricing changes are?” assumes that the pricing changes are problematic.
    • Double-barreled questions: Questions that ask about two things at once, making it unclear which part the respondent is answering. An example is, “How satisfied are you with our pricing and speed of delivery?” The respondent may be satisfied with your pricing but not with your speed of delivery or vice versa.
    • Absolute questions: Questions that allow for no degree of variation in responses. For example, “Do you always buy online?” This does not account for occasional changes in routine.
    • Order bias: The sequence of questions or answers can affect responses. If a survey asks a respondent to evaluate a list of product benefits immediately after detailing a common product concern in your industry, the respondent might place undue weight on that concern in their evaluation. Order bias can also occur when you don’t randomize the order of multiple-choice responses: the options listed in the first spot or two are likely to be chosen more often than those lower on the list. You can set up your survey to automatically randomize the order of the multiple-choice options from respondent to respondent to prevent this (see the sketch after this list).
  • Sampling bias: Occurs if the survey participants do not accurately represent the larger population. For example, if you only survey your larger or newer customers, you’re biasing the survey results. If you take a random sampling of your customer/prospect population, you won’t have to worry about this one.
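
Most survey platforms can randomize answer order for you, but as a rough sketch of the idea mentioned under order bias above, the snippet below shuffles the response options separately for each respondent. The respondent IDs and options are made up for illustration; seeding the shuffle on the respondent ID simply keeps each person’s order stable if they reload the survey.

```python
import random

OPTIONS = ["Price", "Delivery speed", "Product quality", "Customer service"]

def randomized_options(respondent_id, options=OPTIONS):
    """Shuffle answer options per respondent to reduce order bias.
    Seeding on the respondent ID keeps an individual's order stable
    across page reloads while still varying it between respondents."""
    rng = random.Random(respondent_id)
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

print(randomized_options("respondent-001"))
print(randomized_options("respondent-002"))  # very likely a different order
```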

Avoid lengthy surveys: Lengthy surveys lead to survey fatigue, causing respondents to abandon or rush through the survey. This results in incomplete or inaccurate data. Surveys should be concise and focused on essential questions to maintain respondent engagement and data quality.

On average, it takes seven and a half seconds to answer an online survey question. If you keep the questions simple, you can ask eight of them in one minute. The best practice is to construct a survey that takes respondents no more than 10 minutes to complete. Five is better. I’ve provided you with a survey length estimation calculator in the 12 Battles™ Reader Hub.
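
As a back-of-the-envelope check before you reach for the calculator, here is a minimal sketch of the length estimate, assuming roughly 7.5 seconds per simple question as noted above. The 60-second figure for open-ended questions is an assumption for illustration, not a benchmark from the book, so adjust it for your own survey.

```python
def estimated_minutes(simple_questions, open_ended_questions=0,
                      seconds_per_simple=7.5, seconds_per_open_ended=60):
    """Rough survey length in minutes; the open-ended timing is an assumption."""
    total_seconds = (simple_questions * seconds_per_simple
                     + open_ended_questions * seconds_per_open_ended)
    return total_seconds / 60

# 30 simple questions plus 3 open-ended questions come to roughly 6.8 minutes,
# comfortably under the 10-minute ceiling recommended above.
print(round(estimated_minutes(30, 3), 1))
```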

Include limited open-ended questions: Open-ended questions offer valuable qualitative insights and allow respondents to provide detailed feedback. A survey without open-ended questions may miss essential nuances and context. Incorporating open-ended questions into your survey is important for capturing robust data and understanding the “why” behind quantitative data. However, include too many, and your respondents may get survey fatigue. Use them where you know they’ll count, and generally limit them to a maximum of five per survey.

With these best practices in place, your customer survey findings will allow you to put the full weight of your team behind your marketing strategy, guaranteeing your company will achieve its annual growth goals.

By Lori Turner-Wilson, RedRover CEO/Founder, Internationally Best-Selling Author of The B2B Marketing Revolution™: A Battle Plan for Guaranteed Outcomes

Taking Action

Conducting customer surveys is one of hundreds of best practices found in The B2B Marketing Revolution™: A Battle Plan for Guaranteed Outcomes, the playbook that middle-market B2B CEOs and marketing leaders lean on to scale. Backed by a groundbreaking research study, this book offers time-tested best practices, indispensable KPIs for benchmarking, insights on where your dollars are best spent, and, above all, the proven 12 Battles™ Framework for generating guaranteed marketing outcomes. The B2B Marketing Revolution™ is a battle-hardened approach to becoming an outcomes-first leader who’s ready to shake up the status quo, invest in high-payoff market research and optimization, and — yes — even torch what’s not serving your endgame. Download more than 50 templates, scripts and tools from the book on the Battle Reader Hub.

If you’d like to talk about how to build a marketing engine that delivers predictable results — whether you want to build it yourself or tag in our team to lead the way — we’d be delighted to help you get started.
