4 types of bad verbatim responses: Why they 'Shall Not Pass' at Zappi

Lucy Robbins

Data is the essence of good market research. The quality of that data is the foundation of trust on which you and your business make decisions. But the quality of online data has always been the elephant in the room.

The trade-off for speed, broad reach, and easy access to target groups is often accepting that not all respondents will take your survey seriously. The truth is that people will sometimes do the bare minimum to complete a survey and collect the reward.

We addressed this challenge head on with a custom-made machine learning tool, fondly known internally as “Gandalf.”

Data quality in verbatims

Open-ended questions, or verbatims, make consumer data come to life and give you direct access to the consumer’s voice.

Verbatims add color to your quantitative data and can quickly summarize the respondent's attitude towards your ad or concept. But as humans are unpredictable, and occasionally give some… let’s call them “interesting” answers, verbatims are also a great place to spot compromised data. They are a good indicator of the respondent’s overall quality: a person taking the time to give a relevant answer will, in general, give more representative responses throughout the survey.

Panel providers, naturally, sit at the forefront of screening for appropriate respondents for your survey. They are informed by ESOMAR and ISO guidelines to get you real people that fit your criteria. But real humans might fit all the criteria and still give unhelpful responses. So then what do you do?

Thankfully, gone are the days of combing through all verbatim responses by hand. Automation can give you consistency and higher quality.

At Zappi, “Gandalf” filters out responses while your survey is live in-field, so another respondent can be sourced to replace a poor quality one. Your data quality improves and you don’t waste money on useless responses.

4 main types of bad responses

What types of “bad responses” can occur? We pulled a few examples our data quality algorithm has filtered out of live surveys. What better way to explain verbatims than to use verbatims?

1. Random responding

The first group is the catch-all group. Whether they’re incoherent or just lack any relation to the stimuli, random responses are just that — random.

Some of the responses we screened out of a live survey read:

  • “Tyyyyyyyh”

  • “brara”

  • “Mhm sends hes rkg”

These responses are clearly unintelligible, but that also makes them easy to spot in most cases.
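To make this concrete, here is a minimal sketch (emphatically not Zappi's actual algorithm) of how two cheap heuristics, a low vowel ratio and long runs of a single character, can catch keyboard-mash answers like those above:

```python
import re

def looks_random(text: str) -> bool:
    """Flag a verbatim as likely gibberish using two cheap heuristics:
    a very low vowel ratio (keyboard mashing) or a long run of one character."""
    letters = re.sub(r"[^a-z]", "", text.lower())
    if not letters:
        return True  # no letters at all
    vowel_ratio = sum(c in "aeiou" for c in letters) / len(letters)
    has_char_run = re.search(r"(.)\1{3,}", text.lower()) is not None
    return vowel_ratio < 0.2 or has_char_run

print(looks_random("Tyyyyyyyh"))                 # True: no vowels, long 'y' run
print(looks_random("Mhm sends hes rkg"))         # True: very low vowel ratio
print(looks_random("I liked the upbeat music"))  # False
print(looks_random("brara"))                     # False: simple rules miss it
```

Note that “brara” slips through: it looks word-like to simple rules, which is exactly why rule-based filtering alone is not enough.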

2. Illogical or inconsistent responding

Some of the more “creative” poor responses fall into this bucket. They range from “hi ya ya bye ya babe love y’all bye babe love you babe love you” to the cryptic “the club has a good selection and.”

Others are just as creative but obviously useless, like this random sample of the Lorem Ipsum placeholder text:

“Enim labore ad sit quo dolore dolorem nisi ea cillum ratione rerum laboris laboriosam provident”

3. Repetition of responses (e.g. “Don’t know” for every answer)

Surveys are designed to gather interesting opinions. Of course, being human, some respondents will take the lowest-effort option. That sometimes means repeating the same answer. While “nothing comes to mind” may be valid for one question, used repeatedly it signals disengagement.

By repeating plausible-sounding answers (“ok,” “good brand,” “normal”), these respondents give responses that are logical but not actionable. There is no value in unactionable data, so they have to go.
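A simple illustration of how repetition can be flagged automatically; the 50% threshold and the minimum of three answers are assumptions for the sketch, not real Zappi parameters:

```python
from collections import Counter

def is_disengaged(verbatims: list[str], threshold: float = 0.5) -> bool:
    """Flag a respondent who gives the same (normalized) answer to most
    open-ended questions, e.g. "Don't know" everywhere."""
    normalized = [v.strip().lower() for v in verbatims if v.strip()]
    if len(normalized) < 3:
        return False  # too few answers to judge
    _, top_count = Counter(normalized).most_common(1)[0]
    return top_count / len(normalized) >= threshold

answers = ["Don't know", "don't know", "Fresh and modern look", "Don't know "]
print(is_disengaged(answers))  # True: 3 of 4 answers are identical
```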

4. Speeding (too rapid survey completion)

The keyboard bashers fall into this bucket. Unlike the responses above, which at least contain full words, these verbatims seem to compete for the honor of most random.
Speeders can also be caught behaviorally: assessing how quickly they complete each answer, and the survey as a whole, helps us screen them out.
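Timing checks can be sketched in a few lines; the respondent IDs and the 30%-of-median cutoff below are illustrative assumptions, not a published threshold:

```python
from statistics import median

def flag_speeders(times: dict[str, float], cutoff: float = 0.3) -> set[str]:
    """Flag respondents whose survey completion time (in seconds) is far
    below the median for the same survey."""
    typical = median(times.values())
    return {rid for rid, t in times.items() if t < cutoff * typical}

times = {"r1": 310.0, "r2": 295.0, "r3": 42.0, "r4": 330.0}
print(flag_speeders(times))  # {'r3'}: 42s is well under 30% of the median
```

Comparing against the median of the same survey, rather than a fixed number of seconds, keeps the check fair across long and short questionnaires.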

🎙️ Why sample consistency is everything

For more on data quality, check out our podcast episode on how to tackle the data quality crisis in the insights industry.


How “Gandalf” protects your data

Our Data Science team nicknamed our machine learning tool “Gandalf” while they built it. Like the wizard in that classic “You shall not pass!” scene, we like to think of the algorithm standing guard and protecting your data.

In order to automate data quality checks, some providers set up simplistic rules to identify when responses fit into the categories described above. But simple rules may not catch everyone.

By contrast, “Gandalf” grows more nuanced over time. It uses machine learning to detect poor-quality responses, learning from thousands of manually classified surveys. This differs from rule-based filtering, which cannot learn from the examples it is given. “Gandalf” is always on and always feeding decisions back into the database.

The algorithm not only looks for patterns in verbatims but also detects abnormal keystroke patterns. Low-quality respondents share several common behaviors: speeders, for example, often auto-complete answers very quickly. The final decision to cut a respondent is based on a mix of these text and input behaviors.
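Conceptually, blending text and behavior signals might look like the weighted score below; the weights and cutoff are illustrative assumptions, not Gandalf's real parameters:

```python
def quality_score(text_score: float, behavior_score: float,
                  w_text: float = 0.6, w_behavior: float = 0.4) -> float:
    """Blend a text-based suspicion score with an input-behavior score,
    both assumed to be in [0, 1], where higher means more suspicious."""
    return w_text * text_score + w_behavior * behavior_score

def should_remove(text_score: float, behavior_score: float,
                  cutoff: float = 0.7) -> bool:
    """Cut the respondent only when the combined evidence is strong."""
    return quality_score(text_score, behavior_score) >= cutoff

# A clean verbatim typed at a normal pace stays in;
# gibberish entered in a couple of seconds gets cut.
print(should_remove(0.1, 0.2))    # False
print(should_remove(0.95, 0.9))   # True
```

The point of combining signals is that neither alone is decisive: a fast typist with a thoughtful answer survives, while a slow respondent pasting gibberish does not.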

As a buyer, you should be confident about the insights you get, and confident discussing quality with your provider. It's a conversation we should all be having more often to push this industry to higher standards. Keep asking for better quality. For now, we have you covered at Zappi.

Subscribe to our newsletter

Each month we share the latest thinking from insights leaders and Zappi experts, open roles that might interest you, and maybe even a chart or two for all you data nerds out there.