The Problem with DIY Survey Projects

This is tough to admit.

We love surveys. We’re all about surveys. We live, eat and breathe surveys.

And we think you should love surveys, too.

But the fact is, survey projects are extremely delicate. They’re easy to mess up. One little mistake can make your entire dataset completely useless. And, unfortunately, it’s more than likely your DIY survey project is going to be a total waste of time.

So there you have it. Industry secret.

But does this mean market research surveys aren’t worth doing?

Nope. In fact, what’s funny about this is most of our clients are about 95% of the way to a good survey by the time they find us.

What does that mean? It means they know what questions they want to ask. It means they know how they want to ask them. It means they know who they want to ask and what they want their results to look like.

So why just 95%? Why not 100%?

Because surveys are about precision. The language in your questions must be absolutely perfect (not a shred of ambiguity) and your targeting must be perfect (not just close).

This is where professionals can help. That last-mile tune-up is what turns a decent survey into a great survey.

At PeopleFish, every client survey is programmed by a survey research analyst. Word for word, our analysts migrate each client's survey (typically a Word document or email text) into a field-ready instrument. As they do, they review every question and answer option to make sure nothing is left to chance: no ambiguous language, no unnecessarily confusing phrases, no "double-barreling," no mismatched question types.

In fact, we’ve codified this practice into a guiding rule: WE DON’T FIELD SURVEYS FOR CLIENTS THAT WE WOULDN’T FIELD FOR OURSELVES.

This means you can rest assured that your PeopleFish survey results are real-world accurate and ready for action. No wondering if you asked the questions properly or if respondents were perhaps confused by how you phrased something.

We apply this same level of analytical rigor to survey fielding strategies. Fielding surveys to consumers is, of course, our primary function. It’s what we do best. That said, we carefully review every client’s fielding strategy to ensure we’re reaching the right people with our surveys.

For example, say you’ve designed a new toy for toddlers and preschoolers. You come to PeopleFish with a request to target your survey to “parents of young children” to gather their opinions about your product concept. It’s they, after all, who will be buying this product, right?

Well, it’s not that simple. Consider that, based on secondary consumer research, mothers are five times more likely than fathers to buy toys for their children. And more than half of a child’s toys are given to him or her as a gift from other family members.

With that in mind, who you really ought to be targeting is anyone who regularly buys toys for children—not necessarily just “parents of young children.”

The bottom line is that targeting gets complicated fast. But we can account for all of this nuance when we field your survey. It's what we pride ourselves on: adding value to our clients' survey projects by doing everything 100%, absolutely, entirely right.

Got a product idea? Test the waters at people.fish. We help startups and innovators survey their target market the right way, the first time. No more wasted time and money with confusing, DIY survey tools.

SurveyMonkey is Great, But Not for Startups

SurveyMonkey is an all-around great tool.

It’s tried and tested. It’s been around since 1999—one of the earliest survey platforms on the internet. More than 25 million people use SurveyMonkey, and the company recently went public, further enhancing the stability and predictability of the tool—especially for users who use SurveyMonkey as an integrated part of a larger workflow.

But all of that said, SurveyMonkey is not a good survey tool for startups.

What SurveyMonkey Does Well

As mentioned above, SurveyMonkey is not only a powerful tool, but a stable and predictable one—ideal for integrating surveys into larger project management workflows and CRM systems. This is what makes SurveyMonkey the best among its competitors, which otherwise are almost identical tools.

SurveyMonkey also offers arguably more question types and data reporting formats than any other broadly accessible tool. In our experience, 99% of survey research needs are covered by the options and formats SurveyMonkey offers. From basic functions like multiple-choice questions and nested dropdowns to more advanced ones like conditional logic and SPSS variable naming, there's little you won't find in SurveyMonkey's suite of tools.

Finally, SurveyMonkey offers an intuitive way to add "team members" to your projects, whether they're inside or outside your organization. That's perfect for collaborating on advanced surveys, or for hiring an outside specialist with whom you'd rather not share your login information.

What SurveyMonkey Does Poorly

It’s simple: Pricing.

SurveyMonkey has decent rates, but not if you're launching just one, two, or three survey projects. The fact is, their basic license (Advantage: $384) won't be sufficient if your survey has any mildly complex logic branching. You'll need a higher license tier (Standard: $444).

That’s prohibitively expensive for a startup with plans to run only a handful of surveys. With some panel providers (like PeopleFish), it’s the cost of more than 200 consumer survey responses!

Additionally, SurveyMonkey isn't easy to use. Yes, it's about as intuitive as a survey tool can be, but getting the survey you have in mind into a usable, functional, programmed instrument takes a lot of time and experience, no matter what tool you're using. It's one of the most frustrating parts of any survey research project.

Here at PeopleFish, almost one-fourth of our clients have tried and failed to program and field their own survey using a self-serve tool like SurveyMonkey. The fact is, it's just not easy to get all the survey logic and branching right, and you'll often discover, only after your survey is 90% programmed, that what you want to accomplish is hidden behind yet another paywall.

Talk about a frustrating waste of time and energy. Worse, the work you've put in before discovering that your logic or branching feature costs another $300 isn't transferable to another platform. Once you've started in SurveyMonkey, you're stuck; if you decide to use a different tool, you'll have to start programming all over again.

What PeopleFish Does Better

With PeopleFish, you don’t need to buy a subscription. You pay only for the responses you keep.

In other words, the $444 you'd spend on a SurveyMonkey subscription can instead be used to pay for, say, 200 survey responses from your target market consumers.

On top of that, you don’t have to program your survey. We do that for you.

It’s easy to just write down what you want your survey to do. It’s harder to actually get your survey programmed exactly that way. So leave the frustration to us. We’ve programmed many thousands of surveys, and can do so quickly and efficiently. There’s no need for you to waste time learning how to use yet another survey tool. Just send us your questions in a Word doc, spreadsheet, or even in an email. We’ll take care of the tedious programming.

Finally, through strategic partnerships with a host of survey tools, PeopleFish is able to implement whatever logic or branching you need. Full stop. There’s nothing we can’t accomplish when it comes to survey programming—just tell us exactly what you want to see, and we’ll have your survey programmed in no time. We even employ JavaScript programmers to accomplish logic and branching that’s not part of any survey tool’s standard features.

The Bottom Line

At PeopleFish, we believe every entrepreneur and marketer deserves the chance to hear from consumers. To “test the waters” and validate their ideas with real-world consumer feedback.

So we make it easy. Your questions, quality responses, no minimum purchase. That’s it. No expensive consultants, frustrating survey platforms, or stacks of confusing raw data. Just ask the right questions – we’ll take care of the rest.

Survey Research is Changing. Here's How.

When I first started PeopleFish in 2016, I called everyone I knew to sell our survey research tools.

Almost no one had work for me. I remember one visit to a friend’s company. I presented my concept, but got just one sentence in response:

“Sorry, we just don’t do surveys.”

Survey market research just wasn’t something people talked about. Sure, lots of businesses were doing market research, but it was typically left to specialists working with expensive, hard-to-understand tools.

Indeed, I was one of those specialists at a big market research firm before quitting to start my own business.

But things have changed.

What’s Changed?

Fast-forward to 2019, and it’s hard to find a business owner who isn’t curious about survey research. Who isn’t trying to survey their customers. Who doesn’t have questions for me about how to survey their target market.

I get unsolicited questions about surveys every day. On LinkedIn, Instagram, Twitter. And they're typically great questions: new challenges or hard-to-solve issues surrounding consumer survey research projects.

What changed over the past three years? I propose these three things:

  1. Companies have gotten better at harvesting customers’ contact info, such that survey research becomes POSSIBLE.
  2. Survey research tech has become more AFFORDABLE, such that even small startups can launch powerful survey research projects.
  3. Survey research tools have improved but also become COMPLICATED, such that a pro is often needed to make sense of the tools and programs out there.

At first glance, that last item doesn’t seem to belong. If it’s more doable and lower-cost than ever before, why has it also become more complicated?

Before I answer, I’ll note that probably 25% of our clients come to us after one failed attempt at a survey project. They bought a SurveyMonkey license and the tool was just too complicated. They surveyed their customer database, but no one took the survey. They designed a product concept survey, but the findings were nonsensical.

That’s my evidence for survey research having become more complicated. But what’s behind this trend?

Simply put, as survey technology has improved, the list of possible ways to survey consumers has grown longer and more complex. Sure, anyone can now design a quick 10-question survey to pitch their product to consumers. But what about all those other tools and survey templates floating around? And if I can use complicated survey logic to dig deeper into consumers' minds, shouldn't I at least try that, too?

These things add up. Surveys get complicated.

This is true of any technology. The better it gets, the more things are possible. The more things are possible, the more complicated things become.

What Does This Mean for Business Owners?

Why does this matter?

For survey nerds like me, it’s just interesting. It helps me strategize. It helps me sell my company’s services. It helps me design better surveys.

But what about for business owners?

For one, you really shouldn’t be waiting any longer to take advantage of the survey technology that exists. Start now.

If you run a business, you have customers. If you have customers, you have their contact info. Use that, along with survey tools, to ask deep, tough questions — to uncover what your customers want, and what actions you can take that might make them happier.

Second, hire an expert. Yes, SurveyMonkey’s low monthly fee is attractive. But what if you want to run just one survey? Or what if you pay the fee, but then can’t figure it out? Or what if the tool can’t meet your business’s unique and specific needs (the most common complaint I get from new clients)?

The fact is, these tools are increasingly built for specialists and enterprise users, and their best tools are hidden behind high paywalls. While knowledge about general “how-to’s” and the value of survey research has become ubiquitous, the nuts and bolts are increasingly complicated.

So the bottom line? Reach out. I’m around. Our team is ready to field your survey to your target market. Here’s my blueprint for designing your first market research survey. And here’s my free course on surveying your target market.

If you’re starting or running a business, survey research isn’t something that can wait.

Some Best Practices for Market Research Surveys

There are a million ways a survey project can go wrong.

We see it happen all too often. An energetic startup founder designs a survey he thinks will change the way he does business—turning his “gut instinct” into validated, data-driven insights about his customers.

But the data comes back, after weeks of work, and it’s anything but helpful. Worst case, it’s 100% useless because of some error in the way the survey was designed.

We hate seeing this happen. So we've put together a list of industry secrets for designing high-quality market research surveys. Some of these may seem like common sense, but some of them are likely to surprise you.

Remember, we at PeopleFish review every survey we field for our clients, and we won’t ever field something that will yield useless data. That said, we can’t deep-dive into the nuances of every question—ultimately, only our clients know exactly what they need to learn from their target market. That’s where these tips come in handy!


Tips for Writing Surveys

Writing a survey is harder than it seems. Here are some tips—some obvious, some not—on writing a survey that will yield useful findings for your team.

  1. Design your surveys in a word processor (Microsoft Word, Google Docs, etc.). It’s too easy to get thrown off by all the bells and whistles in a survey platform and lose track of your primary research questions and goals.
  2. Use short, simple sentences. No exceptions.
  3. No double-barreling. Don’t try to fit two questions into one.
  4. Stick to single-select, multi-select and rank-order question types. Only use other types if absolutely necessary, and don’t require respondents to rank more than seven items.
  5. Use scales wherever possible. For example, less Yes/No and more Always/Sometimes/Never. Use only five- or seven-point scales, and fully label every scale point (don't label just the endpoints with numbers in between).

Tips for Cleaning Data

At PeopleFish, we clean our clients' survey data for them before sending over results. So for those of you working with us, there's no need to worry about fully understanding these steps. But we thought it might be helpful to explain standard data-cleaning methodology; we perform each of these steps for every survey we field. (A minimal code sketch of the core checks follows the list.)

  1. Check for speeders. Use Excel to calculate the sample’s median time spent taking the survey, then flag everyone who finished in less than half the median time. Some exceptions apply, depending on the survey logic and the total length of the survey.
  2. Check for flatliners. These are respondents who selected the same scale item across multiple scales. While this doesn't necessarily mean they should be disqualified, flag anyone who appears to have "flatlined" in this manner.
  3. Check for gibberish & logical inconsistencies. This is self-explanatory—respondents who write nonsense in open-ended boxes, or who contradict themselves across answers need to be flagged.
  4. Add up the flags for each respondent, and discard the worst offenders. Use your judgment to decide what the flag threshold should be.
  5. Track the edits you make while cleaning data. Start a new sheet after flagging for each of the three items listed above, so that someone else could conceivably review what you did to see if they agree with your flag thresholds.
  6. Your sample size shouldn’t change much through your survey. It’s OK to keep some partials, but by-and-large, the number of people who answered the first question in your survey should be close to the number who answered the last.
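
For those who prefer a script to a spreadsheet, here's a minimal sketch of checks 1, 2, and 4 in Python with pandas. The column names (duration_sec, the scale_ prefix) are hypothetical; map them to however your survey platform labels its export.

    import pandas as pd

    # Hypothetical export: one row per respondent, with completion time in
    # seconds and a set of scale-question columns.
    df = pd.read_csv("survey_export.csv")
    scale_cols = [c for c in df.columns if c.startswith("scale_")]

    # 1. Speeders: flag anyone under half the median completion time.
    median_time = df["duration_sec"].median()
    df["flag_speeder"] = df["duration_sec"] < median_time / 2

    # 2. Flatliners: flag anyone who gave the same answer to every scale question.
    df["flag_flatliner"] = df[scale_cols].nunique(axis=1) == 1

    # 3. Gibberish and contradictions are flagged by hand as you review
    # open-ends; store the result in a column like df["flag_gibberish"].

    # 4. Sum the flags and discard the worst offenders. The threshold
    # (here, two or more flags) is a judgment call.
    df["flag_total"] = df[["flag_speeder", "flag_flatliner"]].sum(axis=1)
    clean = df[df["flag_total"] < 2].copy()
    print(f"Kept {len(clean)} of {len(df)} respondents")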

Tips for Reporting Data

Most of our clients report their data to someone: to investors, to their boss, or repackaged as marketing collateral. Here are some tips to help you make maximal use of your survey findings in report form.

  1. Remember that when you segment (cut) your data, the sample sizes for each segment drop. Keep this in mind, and don't segment if it means indefensible sample sizes within segments (see the sketch after this list).
  2. Condense figures and graphs wherever possible. Don’t make clients switch between pages or slides to view obvious comparisons between questions—put like topics and similar questions together.
  3. Don’t impute meaning into respondents’ answers. Just report the facts. Draw conclusions, of course, but that’s different than imputing meaning. Conclusions come with caveats and some degree of confidence. Imputing meaning is asserting more about your respondents’ answers than is actually there.
  4. Write a Key Findings section at the beginning of your report. Most clients won’t read much beyond this anyways, so be sure it contains the most salient findings and directly answers their research questions.
  5. As when writing the survey, use short and simple sentences in your report. No exceptions.
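
On the first point, it's worth checking segment sizes before you cut. Here's a quick sketch in Python with pandas; the column names are hypothetical stand-ins for whatever demographics you collected.

    import pandas as pd

    df = pd.read_csv("clean_survey_data.csv")

    # How many respondents land in each segment? If a cell drops below a
    # defensible size (say, n = 30), report that cut with caveats or not at all.
    print(df["age_group"].value_counts())
    print(pd.crosstab(df["age_group"], df["gender"]))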

As always, we’re open to any questions about these items. Hit us up on Twitter, or inquire here about launching a survey for your business.

How to Clean Your Market Research Survey Data

You’ve fielded a market research survey.

For weeks, you wrote and rewrote your survey questions. You paid for a SurveyMonkey license and spent hours learning how to program your survey. You leveraged dozens of industry connections to get survey answers — a hard-earned set of 300 respondents.

Getting here wasn’t easy.

But unfortunately, you’re not done. Before drawing findings from your survey, you need to clean your data. This is absolutely essential for maintaining the quality of your research. Here are the three most important things to look for when cleaning your survey data.*

Speeders

These are respondents who took your survey too fast. You identify them using the median time respondents spent taking your survey.

The rule of thumb here is to disqualify responses from anyone who completed your survey in less than half the median time. There are some exceptions, like if your survey includes a logic branch that had certain respondents answering just a few questions. But in general, anyone going more than twice as fast as the median respondent likely sped through the survey without giving the questions much thought.

You can identify speeders by downloading your survey data into Excel, then subtracting the “time completed” from the “time started.” Most survey platforms I’ve used record this information. If yours doesn’t, you may have to skip this flag, but be sure to check for the following two.

In general, not more than 10% of your survey sample should be discarded for speeding.
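
If your export includes start and end timestamps, the calculation might look like this in Python with pandas (column names are assumptions; match them to your platform's export):

    import pandas as pd

    df = pd.read_csv("survey_export.csv")

    # Duration = time completed minus time started.
    start = pd.to_datetime(df["time_started"])
    end = pd.to_datetime(df["time_completed"])
    df["duration_sec"] = (end - start).dt.total_seconds()

    # Rule of thumb: flag anyone under half the median completion time.
    median_time = df["duration_sec"].median()
    speeders = df[df["duration_sec"] < median_time / 2]
    print(f"{len(speeders)} of {len(df)} flagged as speeders "
          f"({len(speeders) / len(df):.0%})")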

Flatliners

These are people who gave the same answer to every (or nearly every) question in your survey. For example, say you asked four open-ended questions about price (like a Van Westendorp question set). A flatliner answered the same thing (say, $10) for each of these four questions.

If you notice that more than 10% of your survey respondents are flagging as flatliners, you may want to look more closely at the questions you're including in your scan. It may be that some respondents only appear to be flatlining, when they are, in fact, giving honest answers. This may have to do with the way you've asked your questions (for example, if you placed student, 18–22 years, unmarried, and no kids all as the first answer options to questions asking about employment, age, marital status, and children).

You can only determine this by looking at respondents’ individual answers — but don’t bother with this if less than 10% of your survey sample is flatlining.

Gibberish and Contradictory Answers

These types of responses can be harder to spot. They require you to look, line by line, at answers to open-ended questions in order to identify ones that 1) are gibberish (e.g., dk3i8sw) and/or 2) don't correspond to other answers in that row. For example, if someone says they are single at the beginning of the survey, then mentions their "wife" or "husband" in a later open-ended question, delete that respondent. They are not being honest, and you want data that you can stand on.
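
Manual review is the gold standard here, but a simple heuristic can shortlist suspicious open-ends for you. Here's a sketch; the vowel-ratio rule is our own illustrative heuristic, not an industry standard, so treat its output as candidates for review rather than automatic disqualifications.

    import re

    def looks_like_gibberish(text: str) -> bool:
        # Shortlist an open-end for manual review. Heuristic only.
        t = text.strip().lower()
        if len(t) < 3:
            return False  # too short to judge
        letters = re.sub(r"[^a-z]", "", t)
        if not letters:
            return True  # all digits/symbols
        vowel_ratio = sum(ch in "aeiou" for ch in letters) / len(letters)
        return vowel_ratio < 0.25  # real English prose runs well above this

    print(looks_like_gibberish("dk3i8sw"))         # True: flag for review
    print(looks_like_gibberish("I love the toy"))  # False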


If you designed your survey well, your data cleaning shouldn’t result in discarding more than 15% of your responses. If you’re worried you’re throwing out too many, take a closer look at the ones you’re throwing out. Consider keeping a few that give other indications of being good, honest answers or the ones that flagged on only one of the three criteria listed above.

*PeopleFish analysts clean every one of our clients' survey datasets according to the criteria set forth in this article. Data cleaning is a standard piece of our survey project offering. Nevertheless, it's helpful for researchers to understand how survey data is cleaned, for their own knowledge and, of course, should they want to conduct their own market research survey independently of a market research firm like PeopleFish.

What Does Startup Market Research Actually Look Like?

“Don’t go to market before doing market research.”

Aspiring entrepreneurs hear that all the time. But what does this actually mean? Does every startup do market research? Is market research realistic for a startup with a small budget, little-to-no marketing team, and no product prototypes?

Here’s our Founder’s outline for startup market research projects, based on years of experience running surveys and focus groups for companies large and small. This is the lowest-cost way to validate your business concept before going to market, and wow investors in the process.

Read the full article at Startup Grind. To learn more about how PeopleFish empowers entrepreneurs and innovators with real-world consumer feedback, click here.

Rule #1 When Writing Screener Questions for Your Survey

Screener questions are the gatekeepers of your market research surveys. They sit at the beginning of your survey instrument, and they disqualify anyone you don't want to hear from.

Poorly-designed screener questions will undermine the entire purpose of your survey project. They will let the wrong people into your survey, diminishing the accuracy of your data—that is, the extent to which your findings will reflect the feelings and preferences of your actual target market.

There's more to writing screener questions, though, than just getting the "logic" right. The fact is, dishonest people will always be out there, trying to enter surveys for which they don't qualify in order to be compensated. With that in mind, here's the most important thing to remember when writing your survey's screener questions:

Don’t be obvious.

What does this mean? By way of example, imagine you’re trying to survey parents of children with asthma. The easy and obvious screener question would be something like:

Do you have a child with asthma?

Yes/No

This would be fine if all respondents were honest. But they aren’t. And the fact is, this approach makes the “right answer” obvious. Dishonest respondents who want to be compensated for taking your survey will know what to select (Yes) in order to proceed (because it seems unlikely that a survey would target parents of children who don’t have asthma).

So instead, ask a series of questions that don’t give away the right answers. Here’s an example we used in a recent survey project:

1. You have young children who still live at home.

True/False

2. Do any of your children have asthma, ADHD, or diabetes?

Yes/No

3. Which of the following conditions do any of your children have?

Asthma, Diabetes, ADHD

This approach doesn’t give away the “right” answers, and it has three points at which someone might disqualify. A much more robust approach than one simple question.

An additional quality control measure would be to disqualify anyone who selects all three diseases in question #3. While it’s possible that someone’s child, or children, has all three of these conditions, it’s highly unlikely. Better to just disqualify such respondents from your survey than risk allowing dishonest respondents to enter.
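
In survey-platform terms, the disqualification logic for this series might look like the following sketch (Python for illustration; real platforms express this as skip/disqualify logic):

    def qualifies(kids_at_home: bool, any_condition: bool, conditions: set) -> bool:
        # Q1: must have young children living at home.
        if not kids_at_home:
            return False
        # Q2: must report at least one of the listed conditions.
        if not any_condition:
            return False
        # Q3: must specifically select asthma...
        if "asthma" not in conditions:
            return False
        # ...but selecting all three is implausible; treat as likely fraud.
        if conditions == {"asthma", "diabetes", "adhd"}:
            return False
        return True

    print(qualifies(True, True, {"asthma"}))                      # True
    print(qualifies(True, True, {"asthma", "diabetes", "adhd"}))  # False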


In an ideal world, we wouldn’t have to worry about all this. Respondents would simply be honest. But as surveys become more ubiquitous, dishonest respondents are finding creative ways to enter into surveys for which they don’t qualify. Robust screener question series combat this trend, and if written well, can almost entirely eliminate the risk of fraudulent responses.

How to Design your First Market Research Survey

At the end of the day, entrepreneurs need consumer data. Investors simply won’t trust your gut. Nor should you.

With that in mind, the first big step toward turning your product or service idea into a sound business concept, and ultimately toward wowing investors, is market research. And the bottom line is that your first market research survey should answer three very specific questions.

In this Startup Grind article, our Founder Nick Freiling identifies these three questions, and explains how to design a basic, first-pass market research survey to test & validate your product idea.

To learn more about how PeopleFish empowers entrepreneurs to get feedback on their market research questions, click here.

How to Overcome Sampling Bias in Your Market Research Survey

In the market research world, sampling bias is a systematic error that arises from the way a survey's sample was selected. It occurs when a sample is not random, meaning certain types of respondents are more or less likely to be chosen for the sample.

The result: Survey results that don't reflect the population you purport to represent. Instead, they reflect a skewed sample.

For example, a survey of potential voters in the upcoming presidential election may suffer from sampling bias if the list of people invited to take the survey comes from, say, a conservative think tank's donor list. Such respondents are going to be more likely to favor the Republican candidate than are voters in general, and it's precisely those voters in general that a political pollster probably cares about.

Overcoming sampling bias

Generally speaking, sampling bias cannot be identified or overcome by examining a survey's response data alone. Sampling bias is identified only by comparing a survey's sample to the population of interest.

In other words, you can’t just look at a survey’s results and decide the sample is biased one way or another. You can (and should) compare a survey’s results to other similar surveys to see how respondents’ sentiments might differ, but that’s inexact.

“You can’t just look at a survey’s results and decide the sample is biased one way or another.”

The only way to accurately measure sampling bias is to compare your survey’s sample, on every relevant characteristic imaginable, to the general population your survey aims to understand. This, of course, is impossible, but that doesn’t mean we can’t get close.

For example, pollsters from the hypothetical voter survey mentioned above might include questions about their respondents’ ages, political affiliations, and past voting behavior, then compare those results to other surveys of the voting population to see how well their sample compares.

You can see here, though, that judging sampling bias relies heavily on intuition. What characteristics are relevant to your particular survey? What should you look for when judging whether your survey sample is biased, based on the issues you're trying to understand?

What does this mean for you?

First, don’t trust just any survey about your customers. Regardless of the topic or sample, analysts must consider how the sample selection may be biased, and what differences may exist between those who did and did not complete the survey. Further, these differences must be considered in light of the client’s final research question — it could be, for whatever reason, that differences between those who did and did not complete the survey aren’t meaningful to you and won’t affect your key takeaways.

Second, be vigilant in collecting your customers' contact information. When my team conducts a survey, we try to include as many customers as possible in the survey sample. If we are surveying a coffee shop's customers, for example, about their willingness to pay more for a particular menu item, our results come with big caveats if we are only able to survey customers who have, say, a rewards account at the coffee shop. Those customers are going to differ from customers who do not belong to the rewards program. They may love the coffee more than non-members and be more willing to pay extra for the menu item in question. Or they may feel betrayed, as rewards members, at being asked whether they'd pay extra.

There’s really no way of knowing for sure that the sample isn’t biased unless we survey every single customer, and we get closer to that if we have contact info for as many customers as possible.

Finally, include demographic variables in your customer surveys. This way, you can compare the makeup of your sample (the "average" respondent) to what your intuition tells you is your "average" customer: gender, age, household income, job, family size, and/or other behaviors and characteristics relevant to your product or service. Asking these demographic questions also allows you to cut and segment your results along these variables, perhaps exposing opportunities to up-sell or improve your targeted marketing campaigns.
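
As a concrete illustration, here's a minimal sketch of that comparison in Python with pandas. The benchmark numbers and column names are placeholders; substitute whatever your experience or secondary research tells you about your customer base.

    import pandas as pd

    df = pd.read_csv("clean_survey_data.csv")

    # Placeholder benchmarks: what you believe your "average" customer
    # looks like, from experience or secondary research.
    expected = pd.Series({"female": 0.60, "male": 0.40})

    observed = df["gender"].value_counts(normalize=True)
    comparison = pd.DataFrame({"observed": observed, "expected": expected})
    comparison["gap"] = comparison["observed"] - comparison["expected"]
    print(comparison)  # large gaps hint at sampling bias worth caveating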

One more thing…

All this might sound hopeless. Unless we survey every single customer, we can't be 100% sure our sample isn't biased one way or another.

But as mentioned above, intuition really is key to knowing whether your sample is biased and how that might affect your key findings and inferences. Our best work happens with clients who understand their customers in a way that supplements whatever survey they’re trying to run. If a business owner knows from experience, for example, that his rewards program customers are more loyal and generally willing to pay more for his products, we can account for that when drawing inferences from the survey results. If he knows from experience that previous price increases have not affected his business with non-rewards members, we can account for that when drawing inferences from the survey results.

Surveys are typically quantitative market research, but quantitative data must be interpreted through the lens of experience and subject matter expertise. When it comes to your customers, you probably know more than any single survey can tell you.