Tag Archives: online sample

Infographic of the Week: General Contractor & Brand Loyalty

Many of us would like to believe we are fiercely faithful to our brands, from toothpaste to smartphone preferences. Humans are creatures of habit. Yet nothing compares to general contractors when it comes to cemented brand loyalty (no pun intended?). We’ve learned this over many years of studies, both internal and for clients, conducted with our general contractor panel.

However, with the right understanding, general contractors can be influenced to try other products.

Thus, this week we present another infographic on general contractors and their purchasing habits. Be sure to visit our article General Contractors & Brand Loyalty for further research, including another detailed infographic. Understanding this and other data is vital for anyone conducting market research, not only on contractors but also on construction trends and homeowners.

Here it is, and we hope it helps you build better market research structures:

[Infographic: General Contractor Brand Loyalty]


Download this infographic.

Embed Our Infographic On Your Site!

6 Respondent Supervillains That Destroy Your Survey Data

Getting the best possible survey data is a heroic endeavor for anyone performing online research. It makes for a better market and academic world. Sadly, there are respondents who impede this adventure, spawned from the nether regions of River Sample™ and causing bad survey data. By viewing these respondents through the lenses of archetypal villains, a researcher’s heroic task is not only easier but perhaps even entertaining (even the most stoic of us researchers get blurry-eyed glancing at survey data all day).

Get your Data Superhero tights on, because here are the six Survey Supervillains you need to watch for:

1. The Speeder: The name says it all, and this Quicksilver or Flash persona blasts through surveys at rates that defy logic (or more like enforce logic, since they’re obviously not paying attention to questions). The Speeder is easily detected because he or she finishes surveys in less than 30%-50% of the median time. For a more technical way to defeat the Speeder, an article on Research Access recommends:

Check the median time to completion and establish rules that you feel comfortable with – I often flag those taking <1/3 of median time with a “1” (“speeder”), and those taking < 1/4 of the median time with a “2” (“super speeder”).  You might consider removing outliers (at the slow end) before calculating your median.

Yes, there is a Super Speeder too! No superhero saga is complete without reboots and higher-level bosses.
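To make that rule concrete, here is a minimal Python sketch of the flagging logic; the pandas DataFrame input, column name, and 5% slow-end trim are assumptions for the example, not part of the quoted recommendation:

```python
import pandas as pd

def flag_speeders(df, time_col="completion_seconds"):
    """Flag Speeders (<1/3 of median time) and Super Speeders (<1/4)."""
    # Remove outliers at the slow end before calculating the median,
    # per the quoted rule (here: drop the slowest 5% of completions).
    trimmed = df[df[time_col] <= df[time_col].quantile(0.95)]
    median_time = trimmed[time_col].median()

    def label(seconds):
        if seconds < median_time / 4:
            return 2  # "super speeder"
        if seconds < median_time / 3:
            return 1  # "speeder"
        return 0      # acceptable pace

    out = df.copy()
    out["speeder_flag"] = out[time_col].apply(label)
    return out
```

The exact trim quantile and thresholds are rules of thumb; as the quote advises, establish cutoffs you feel comfortable with.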

2. The Flat-Liner: Also known as the Straight-Liner, the Flat-Liner is the grim reaper of market research, the Thanos of survey methodologies. When reading data, this Survey Supervillain’s death-mark manifests as the same answer selected on every line of a grid. The fiend is just not paying attention or has chosen a specific line in order to quickly complete a survey. The Flat-Liner may take the shape of the Speeder to throw you off your superhuman game, but they typically finish at an average rate. One market researcher proposes a solution to defeat (or at least nudge) the dreaded Flat-Liner:

For this reason it is important that any multi-numeric list used to measure straight-lining has questions that one would reasonably expect respondents to have varying opinions on (e.g. asking a respondent to agree with both positive and negative statements about the same product).
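One straightforward way to operationalize that advice is to count distinct answers per respondent across a grid of mixed positive and negative items. This is a hedged sketch, with the grid column names assumed for illustration:

```python
import pandas as pd

def flag_flat_liners(df, grid_cols):
    """Flag respondents who pick the identical option on every grid item."""
    out = df.copy()
    # nunique(axis=1) == 1 means the same option was chosen on every row.
    out["flat_liner"] = out[grid_cols].nunique(axis=1) == 1
    return out

# Example with items phrased in opposite directions, where a flat line
# is logically inconsistent as well as suspicious:
# flagged = flag_flat_liners(responses, ["q1_positive", "q2_negative", "q3_positive"])
```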

3. The Lord of Silence: Leaving survey questions unanswered is doom for survey research, but easy to remove. However, claptrap or sloppy answers are just as dangerous yet harder to detect, and that is the Lord of Silence’s specialty. A tell-tale sign this Survey Supervillain is invading your research is internal discrepancies—such as a respondent claiming to have been born in 1997 but then saying they have been at their current job for 20 years.

A data research blogger further advises: “Scan through open-end responses (and other-specify responses) for gibberish or excessively vague and short answers.”
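Contradictions like the birth-year example above can also be caught with simple cross-field sanity checks. Here is a minimal sketch under assumed field names (the age-16 working-age floor is an illustrative assumption):

```python
from datetime import date

def is_inconsistent(record, survey_year=None):
    """Flag respondents whose birth year and job tenure contradict each other."""
    year = survey_year or date.today().year
    age = year - record["birth_year"]
    # Assume nobody started their current job before roughly age 16.
    return record["years_at_job"] > max(age - 16, 0)

# is_inconsistent({"birth_year": 1997, "years_at_job": 20}, survey_year=2015) -> True
```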

4. Bizarro Grinch: Yes, some Survey Supervillains hurt you by leaving loot. Bizarro Grinch will leave a Christmas-tree pattern in the multiple-choice section of the questionnaire. Often, this nemesis gets cuter and leaves other “artwork” shapes in the survey. Unlike his associate, the Flat-Liner, this Survey Supervillain is harder to expose when viewing a pattern in Excel or another spreadsheet. As one researcher explains:

Christmas-Tree responses are a little more difficult to find, however, if you’ve used numeric reporting values, you should be able to easily spot ascending or descending patterns in Excel and remove these responses.

Take this mission with all of your heart, as this is worse than a pair of Old Navy socks under the tree on December 25th!
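Building on the spotting tip quoted above, here is a small illustrative sketch that flags strictly ascending or descending answer sequences over numeric reporting values (column names assumed):

```python
import pandas as pd

def flag_christmas_trees(df, answer_cols):
    """Flag strictly ascending or descending runs across numeric answers."""
    def is_patterned(row):
        values = row[answer_cols].tolist()
        if len(values) < 3:
            return False  # too few answers to call it a pattern
        diffs = [b - a for a, b in zip(values, values[1:])]
        return all(d > 0 for d in diffs) or all(d < 0 for d in diffs)

    out = df.copy()
    out["christmas_tree"] = out.apply(is_patterned, axis=1)
    return out
```

A true zig-zag “tree” shape (1, 2, 3, 2, 1) would need a slightly fancier pattern test, but this covers the ascending and descending cases the quote mentions.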

5. The Duplicator: Similar responses may appear in a survey, and that could mean this Survey Supervillain is committing a heist in your survey data. He or she is the sibling of the Lord of Silence. Then again, doppelganger information can frequently be a case of the infamous maxim “We have met the enemy and the enemy is us.”

In other words, there is a glitch in the software replicating data, or a respondent is taking the survey a second time. Your Spidey Sense should make it clear that this replica data needs to be destroyed.
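Either cause can be handled with a straightforward de-duplication pass. Here is a minimal pandas sketch, with the respondent ID column and answer columns assumed for the example:

```python
import pandas as pd

def destroy_duplicates(df, id_col="respondent_id", answer_cols=None):
    """Keep only the first completion per respondent and per answer vector."""
    # Same panelist ID appearing twice: a repeat taker.
    out = df.drop_duplicates(subset=[id_col], keep="first")
    # Identical answers across every question from different IDs suggest
    # a software glitch replicating data; treat those as suspect too.
    if answer_cols:
        out = out.drop_duplicates(subset=answer_cols, keep="first")
    return out
```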

6. The Revolutionary: Sadly, the Revolutionary is one of your own citizens revolting because you didn’t keep the peace as the superhero of your survey neighborhood. Essentially, you didn’t write the best possible questionnaire, and thus unlocked a Zombie Apocalypse of respondent fatigue and respondent bias (Hint: both links grant the kryptonite you need to repel these uprisings, or at least provide the best survey questions and survey sampling).

As with any responsible champion of good data, your task will be to win them over again.

If you follow these steps (before they self-destruct in five seconds), you will be a Data Superhero. In reality, panel providers these days offer excellent sample. Yet, as the Research Access piece stated, it’s inevitable that between “1 and 5% of survey data from panel sample is garbage.”

It’s just the way it is. Yet a simple gallant inspection and a few Superman techniques can ensure you rout these percentages rather quickly. After all, it’s widely known that 70% of online surveyors clean their data before analyzing it.

Afterward, it’s always prudent to inform your panel provider of these discrepancies. Then they can make sure to minimize Survey Supervillains or glitches like Duplicators. Forward them the IDs of the respondents you banished from your study Metropolis. They’ll take action on their side to warn and/or remove these panelists from their database (as we do every week at qSample).

Furthermore, many Survey Supervillains are actually computer bots. It’s just part of the internet to encounter them. Panel providers can tweak software to eliminate the Rise of the Machines.

With that in mind, you can continue with your research to become survey savvy and a true Data Superhero. Your data results will look more like The Avengers and less like Watchmen.


New Studies Claim Survey “Trap Questions” are Questionable for Market Research


Two new studies indicate the conventional methodology of “trap questions” for online surveys may not be as effective as originally supposed. In fact, trap questions might have unforeseen results, according to both studies, and that is a notion that tends to unnerve the methodical market research industry.

The findings came from a pair of University of Michigan studies on instructional manipulation checks, or IMCs. Both concluded that answering trap questions may alter the way people respond to subsequent questions in a survey.

For those not entirely familiar with IMCs, they are principally the same as trap questions (or sometimes called “attention checks”). In essence, not all survey respondents will pay sustained attention to questions—or even follow instructions—effectively blazing through a questionnaire. These respondents, therefore, tend to dilute survey data.

Consequently, it’s not uncommon for researchers to place safeguards in the form of unrelated questions or instructions at certain intervals of a survey. The hope is to recalibrate the focus of respondents or cull those who have no interest in providing usable data.

Here is an example from a social scientist:

So, in order to demonstrate that you have read the instructions, please ignore the sports items below. Instead, simply continue reading after the options. Thank you very much.

Which of these activities do you engage in regularly? (Write down all that apply)

1)    Basketball

2)    Soccer

3)    Running

4)    Hockey

5)    Football

6)    Swimming

7)    Tennis

Did you answer the question? Then you failed the test.

Another, perhaps more approachable, example from popular culture would be Monty Python and The Holy Grail, in the scene where the magical Bridgekeeper tests King Arthur and his knights with a series of questions. The right answers test the mettle of the knights, thereby allowing them to pass over the Bridge of Death and get closer to finishing their hallowed quest:

Bridgekeeper: Stop. What… is your name?

Galahad: Sir Galahad of Camelot.

Bridgekeeper: What… is your quest?

Galahad: I seek the Grail.

Bridgekeeper: What… is your favorite color?

Galahad: Blue. No, yel…

Galahad is then thrown over the bridge into the Gorge of Eternal Peril. He was a respondent attempting to blaze through the Bridgekeeper’s questionnaire.

Whether it’s Galahad or respondents sketchily answering about their sports hobbies, the two studies from the University of Michigan point out that the thinking of respondents may be modified following an IMC, or trap question.

In the first study, subjects received a trap question in a math test. Half of the participants completed the trap question before the math test, whereas the other half completed the math test first. Researchers found in this study that completing a trap question first increased subjects’ analytical thinking scores on the math test.

In the second study, subjects also received the trap question, in a reasoning task assessing biased thinking. As with the prior test, half of the participants finished the trap question before the reasoning task, while the other half completed the reasoning task first. The researchers discovered that completing the trap question first decreased biased thinking and produced more correct answers. Hence, completing a trap question made subjects reason more systematically about later questions.

All of this, as the lead researchers pointed out, indicates that many past studies may have been affected by IMCs. It’s suggested that deeper thinking may not always be the best state for a respondent during a survey. Instead, an optimal thinking state is where respondents are reasoning as they normally would in daily life. As more research is conducted on the efficacy of IMCs for survey research, it might be in order to focus more on other traditional safeguards such as mitigating response bias or response fatigue.

It should be noted that neither of the two studies points to any alarming suppositions of past research. In other words, these findings should not unseat market research from its continued quest over bridges of river sample to the Holy Grail of the best possible data.

Market research just has to be persistently vigilant that it and its respondents are in the right thinking to remember their favorite colors.


What is Response Bias in Online Studies? (and how to slay it)


You’ve got it all ready: a vetted panel, a crafted questionnaire, and even an enterprise software platform from a trusted provider. You’re ready for the data that will propel your market research to blissful success. Everything should be fine, right?

Wrong. A lot can go wrong.

One is response bias, or respondent bias. It’s a major issue in any survey methodology.

The other issue is respondent fatigue—addressed fully in our article Empathy for the Devil that is Writing Survey Questionnaires, which also addresses how to remove this other potential adversity for methods of data collection.

But back to the first issue. What exactly is response bias?

In Personality and Individual Differences, Adrian Furnham states:

Response bias is a general term for a wide range of cognitive biases that influence the responses of participants away from an accurate or truthful response. These biases are most prevalent in the types of studies and research that involve participant self-report, such as structured interviews or surveys.

The Encyclopedia of Survey Research Methods further explains:

Response bias is a general term that refers to conditions or factors that take place during the process of responding to surveys, affecting the way responses are provided. Such circumstances lead to a nonrandom deviation of the answers from their true value. Because this deviation takes on average the same direction among respondents, it creates a systematic error of the measure, or bias. The effect is analogous to that of collecting height data with a ruler that consistently adds (or subtracts) an inch to the observed units. The final outcome is an overestimation (or underestimation) of the true population parameter.

In brief, response bias is the reality that participants bring a lot of baggage to surveys (and attempt to hide a lot of this baggage by playing it safe alongside the proverbial pack). Furthermore, respondents have an innate desire to please the studies they’re participating in, and therefore tend to answer questions as the researcher might want instead of answering honestly.

Even briefer: humans are humans, as complex in public as they are in private.

Yet there are research techniques that can decrease response bias. Some of these procedures paradoxically mean being more human and less scientific!

Don’t Lead Your Respondent

A leading question plants the desired answer inside the question itself. In the legal arena, it’s akin to the prosecutor’s loaded question: “Where were you on the night you murdered your wife?”

In a subconscious way, researchers frequently do the same. It’s also referred to as inherent bias. As one researcher wrote:

For example, a satisfaction survey may ask the respondent to indicate whether she is satisfied, dissatisfied, or very dissatisfied. By giving the respondent one response option to express satisfaction and two response options to express dissatisfaction, this survey question is biased toward getting a dissatisfied response.

The answer to this is to ensure that your questions are balanced. In addition, review every questionnaire with colleagues to ensure no personal partiality contaminates the study.

Give Them a Way Out

As mentioned, respondents have a natural desire to assist the studies they’re involved in. They also naturally will put pressure on themselves to offer the best possible answers. This pressure may (naturally AND mathematically) create skewed answers. Alleviating this pressure will ensure more honest responses.

An opt-out choice is an effective way to relieve this pressure. As one authority on surveys explained:

That is why it is imperative that every question has an opt-out choice. This is usually in the form of a “Don’t Know,” “Not Sure” or “Undecided.” Not only will adding the opt-out choice eliminate a lot of inaccurate answers from your study, but it will also provide you with valuable information.

The same source states that a notion called “social desirability” is potentially present in surveys—that is, the resistance of respondents to answering sensitive questions due to an intrinsic fear of being exposed to society. The solution is to stress the anonymity of the study beforehand, as well as to provide reminders throughout the survey (for example, on-screen text before a sensitive section).

Offer Questions in a Dynamic Manner

When respondents are presented with a steady pattern of inquiry, they typically answer based on the previous question or subject arrangement. (This can likewise cause respondent fatigue.)

Researcher Sam McFarland found that: “When you start with a closed question, you may affect how the respondent will answer a subsequent open-ended question on the same topic because the earlier question has primed them to focus on that issue.”

A white paper dealing with response bias in surveys, issued by the University of Jerusalem, offered further solutions:

Other ways of circumventing or revealing response bias could be, for example, presenting items on separate screens when using computerized versions, instead of presenting all items simultaneously on one page. The same can be done with traditional PP questionnaires, though the procedure is much more cumbersome. Moreover, the use of a computerized questionnaire enables simple manipulation of the visual presentation of both items and scales. For instance, the computerized questionnaire can present a small number of items simultaneously. Using such techniques might reduce response bias by hindering participants’ attempts to rely on answers to previous items, or on the visual pattern of their answers that is visible when using PP questionnaires. This predicted reduction in response bias is expected to result in lower measures of internal consistency for the computerized versions of questionnaire.

To wit, keep it exciting, mix it up and scatter the topics in the questionnaire.
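As a minimal illustration of “mixing it up,” here is a hedged Python sketch that randomizes topic order per respondent; the topic blocks and the seeding scheme are assumptions for the example, not a prescription from the quoted white paper:

```python
import random

def shuffle_topics(topic_blocks, respondent_id):
    """Return a per-respondent ordering of questionnaire topic blocks.

    Seeding with the respondent ID keeps one person's order stable across
    sessions while still varying the order between respondents.
    """
    rng = random.Random(respondent_id)
    order = list(topic_blocks)
    rng.shuffle(order)
    return order

# Example:
# shuffle_topics(["brand", "usage", "satisfaction", "demographics"], respondent_id=42)
```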

Most experts on survey procedures, including most of the ones quoted in this piece, agree that these additional techniques can go a long way in mitigating response bias:

–  Use clear and simple language in market research questions.
–  Do not use loaded/lightning-rod terms or words unless necessary (e.g.: environmentalist, terrorist, politician, etc.).
–  Avoid negatives like “not,” or at least highlight them so the respondent understands their context.
–  Be as transparent and communicative as possible with the panel throughout the process.

With all of this in mind, response bias can be alleviated until it is much less of a malady. Just as positively, humans can be humans while research data becomes divine for market research.

The Four Agreements to Asking the Perfect Question, in Business and Beyond (Infographic included)


“There are no right answers to wrong questions.” – Ursula K. Le Guin

It has been said that the only stupid question is the one not asked. That doesn’t mean all questions are equal, though. I’ve seen many clients provide inadequate questionnaires that ultimately compromise their market research studies. Thus, as someone in the “question” business, I’ve gathered what I call “The Four Agreements of Asking the Perfect Question,” based on the inspiring book The Four Agreements, by Don Miguel Ruiz.

After all, asking the right question is always inspiring.

These agreements can be employed for interviews, podcasts, webinars, or really anything in life’s box of chocolates—not just for questionnaire templates and other survey methods.

Agree to be empathic

Knowledge and enthusiasm are not enough when finding answers, in any arena. You truly have to be in another person’s shoes—even if it’s just taking a snapshot of your respondent’s day, background, or even the possibility that he or she doesn’t want to provide answers at the moment. As a Forbes article on leadership explained, research shows that too many business students are primarily being taught personal achievement, and not enough empathy. That may seem natural, except that personal achievement can easily crowd out the “people first” mentality, and that mentality is a significant element in business success.

Quote: “To be or not to be, that is the question.” – Hamlet

Agree to be brief

In marketing analysis, I’ve heard it said that if a question can fit in a tweet, then it’s a good question. Research studies support this, stating that short and sweet questions keep respondents more engaged. Very few can be byzantine in their questioning and get away with it, like Barbara Walters or Jon Stewart. Beyond the entertainment world, concise questions commonly result in concise honesty. The reality, too, is that attention spans are getting shorter—so life is becoming one big elevator pitch…or tweet.

As examples, see the length of some of the most poignant and memorable questions in humanity:

– Adam, is that you?
– Et tu, Brute?
– Aren’t you a little short for a Stormtrooper?
– Do you want to build a snowman?

Quote: “My greatest strength as a consultant is to be ignorant and ask a few questions.” – Peter Drucker

Agree to be childlike

At some point in our lives, around the later toddler age, we stop asking “why?” as much. The world begins to demand answers about who we are instead of being interested in what we want to know. Yet as creative journalist Warren Berger wrote in the Wall Street Journal, the toddler stage is a time of immense learning. Recapturing this explorative and playful persona unlocks key lessons. Furthermore, many great inventions came to be because of continuous questioning; and many entrepreneurs like Steve Jobs were notorious for their “why?” tornadoes.

This doesn’t mean being annoying, though. Socrates was an expert at machine-gun questioning. His secret was that he knew the purpose and direction of the conversation before it even started.

Quote: “Let the boy mature, but do not let the man hold back the boy.” – Philip K. Dick

Agree to be honest

The passive, modern cultural context for honesty is referred to as “transparency.” In market research, informing respondents of why exactly they are in the study and what the data will be utilized for typically results in better statistics. There is a sense of equality and even comradeship. In the rest of the business world, this also applies because it creates that trust factor that is the cornerstone of any long-lasting relationship. As an article in The Guardian elegantly stated:

Transparency and trust combine, in turn, to support sustainable growth. By putting credible social, environmental and ethical data in people’s hands, they can make more informed – and therefore better – decisions.

Quote:  “Judge a man by his questions rather than his answers.” – Voltaire

With the Four Agreements of Questions implemented for market research methods, or beyond, there will likely be improved answers, and therefore richer communication. It’s not necessary to wait until arriving at work or crafting the next questionnaire template; simply turn to the first person you see in the morning and ask:

“How are you?”

Then listen.

How to Write a Survey Questionnaire

Download this infographic.

Embed Our Infographic On Your Site!

Empathy for the Devil that is Writing Survey Questionnaires

A bad dream for those crafting questionnaires for online surveys might go as follows:

Panel members take an online survey critiquing a previous online survey they participated in, one that you wrote. The results are overwhelmingly negative. The panel members hate it and admit they offered slapdash information that will not assist in research.

You don’t have to wake up. It hasn’t happened…yet…but if you want to avoid this reality it’s wise to understand two concepts in order to craft the best possible questionnaire. The first one is:

Respondent fatigue

Also known as survey fatigue, respondent fatigue basically refers to the mental state in which respondents become weary during an online survey, to the point that their answers are rushed and even dishonest—thereby reducing statistical accuracy. Furthermore, an article in Great Brook explains that:

Fatigue means that those with extreme views are more likely to respond, leading to a serious survey bias, known as non-response bias. That is, those who don’t respond likely have different views from those who do. The survey data that winds up in the survey data base don’t properly reflect all customers’ views. The data are biased.

Respondent fatigue can be seen as more of a direct threat than just a negative aspect of online surveys. The New York Times reported that respondent fatigue has caused “declining response rates over the last decade.”

A chief reason for respondent fatigue is the length and wordiness of a survey, and science supports this. One study found that survey fatigue often sets in 20 minutes into a survey: “They found survey respondents exert less effort and spend less time thinking about their answers as respondents get deeper into the survey.”

Here is a breakdown chart from research revealing the risks of longer surveys:

[Chart: Study on online survey respondent fatigue]

As qSample’s own Director of Business Development, Connor Duffey, said: “Respondents will look at the scrolling bar during a survey. If it’s not moving fast enough for them, respondent fatigue becomes a self-fulfilling prophecy.”

Duffey further said that a sound way to get around respondent fatigue—even if a questionnaire has to be extensive—is by simply placing the most relevant questions at the beginning. Never taking respondent incentives for granted is also important, Duffey added.

Length, clarity, and approachability are effective tools to combat respondent fatigue, but ultimately are not enough. The second notion is essential:

Empathy

Putting yourself in a respondent’s shoes goes a long way toward getting into their minds and hearts. Sharing their interests is not enough; it takes understanding that online respondents are real people with busy lives and little time (just like you). They are just as essential as those above you seeking data, and just as much a part of the process.

qSample’s own president Rudly Raphael encapsulated this issue in a recent interview:

Companies need to have more empathy for the research participant. The person(s) who writes the survey instrument should ask themselves if they could sit through that survey for 25-30 minutes. Companies should make surveys fun and engaging, regardless of the topic. They should test their surveys over and over again to identify the fatigue points in the survey. This is usually the area where data integrity is compromised.

There are sensible solutions that qSample, along with other data research companies, proffer to cultivate empathy with respondents:

–  Be communicative from the beginning—as in explaining the number of questions, privacy policies, and purpose of the online survey. Communicating survey results during the survey is also an effective way of holding respondent interest.
–  Use more than one panel for the same survey, or even future ones, to ensure panels stay fresh. Often, companies pass panels around to different departments, and it wears them out.
–  Ask yourself if every question is absolutely necessary—as if you were to personally take the survey in a real-time setting (on your smartphone, waiting for the train, for example)—and get rid of anything else.

In addition, there are practical techniques that can increase empathy and improve the overall architecture of a questionnaire. For example, Duffey states that minimizing matrix questions is a vital tactic for producing efficient questionnaires, as they tend to exhaust respondents and produce sloppy answers. Another expert in the online survey industry called matrices “the laziest type of survey writing.”

An additional practical technique is including more opt-out choices, as explained by one market research expert:

This is usually in the form of a “Don’t Know,” “Not Sure” or “Undecided.” Not only will adding the opt-out choice eliminate a lot of inaccurate answers from your study, but it will also provide you with valuable information. You can learn how many people have not made up their mind or are uneducated on a topic.

Of course, there are other issues that compromise online surveys—such as response acquiescence (the tendency to agree with survey questions regardless of content) and respondent bias (the inability to answer survey questions because of perceived social pressure). These issues can be moderated by various means that online survey providers should be able to recommend; they are commonly addressed by tweaking questions and providing some safeguards in the programming of the questionnaire software platform.

However, empathy is the alpha and omega of the best possible questionnaire, certainly for decreasing respondent fatigue. This should not be surprising, in the end, as empathy is the solution to so many other problems, in both the virtual and real world.


Need Better Online Surveys? Try Washing Your Rental Car


In his article in GreenBook, Is Online Sample Quality A Pure Oxymoron?, Scott Weinberg presents a “state of the union” on the present online survey industry. The news is not good, according to him. Weinberg boasts more than 12 years of experience in the field with some lauded companies, so his alarming declaration carries some weight.

The article is sweeping and often byzantine, with evidence on seemingly all the glaring issues facing online panel providers, from methodology to business philosophy—almost to the point of being apocalyptic. The online survey industry is booming, but according to the author it is in danger of catastrophic irrelevance.

Here are some examples of problems endemic not only to established online panel providers but also to research departments in marketing companies:

– Tainted panels, such as doctor panels with non-doctors included.
– Highly paid salespeople without a rudimentary knowledge of sampling terminology.
– Panels regularly infiltrated by Chinese hackers trying to make a quick buck.
– Offering studies with close to 100 questions, which raises profits but forces panelists into unnecessary marathon surveys.
– Submitting specialized panels filthy with river sample (which in the industry basically means the sludge of online sampling, not vetted in any meaningful way).
– A huge disconnect between the back and front ends of operations, with the client in the middle having nowhere to go but down.

The article states that companies are making money, so heads are turned away while dollars are counted, and the online survey apocalypse gets closer.

This Armageddon can be halted, though. Weinberg both captures the problem and offers the solution with this Zen-like quote (which he actually heard from a colleague):

When is the last time you washed your rental car?

That may not make sense at first, except when one replaces “rental car” with “online panelists.” Then things change. After all, market research companies don’t own their online panelists, but rent them from the public and then rent them to their clients.

Unlike car rental companies, many don’t “wash” them—meaning they are not properly screening, nurturing, or engaging online panels. This would be unacceptable to anyone doing business with Hertz or Enterprise, yet somehow it’s being overlooked to an extent in the online panel industry.

So how do you “wash” online panels? The article offers some suggestions, and these are already being embraced by many in the industry:

Better surveys: Many online panel providers put the onus of surveys on their clients, and leave it at that. Weinberg writes in a (sadly) comical way:

I’ve seen literally hundreds of surveys that have been presented to online panelists. I’ve been a member of numerous panels as well. Half of these surveys are flat out laughable. Filled with errors. Missing a ‘none of the above’ option. Requiring one to evaluate a hotel or a restaurant they’ve never been to.

Online panel providers could act more as consultants. Surveys should be as trim and direct as possible to keep panelists involved. (Weinberg further suggests minimizing matrices, calling them “the laziest type of survey writing.”)

Better incentives: Weinberg wastes no breath explaining the cost-cutting culture of online panel providers. Every company wants to save money, of course, but you get what you pay for, and information is too important in this era to ever be diluted. A panelist without enough of an incentive is a panelist who will potentially lie or rush, and that will be one piece of information that may injure a company in its quest to please its customers/clients.

Furthermore, the effect of depressing prices leaves less room for companies to invest in security software and other quality assurance that can remove the blight of hackers and other unfit respondents.

Better employees: Weinberg doesn’t mean fire those who manage online panels, but invest more in the back end operations; create an environment where all ends of the company/department are in constant communication. He states that the industry has become akin to a high tech deli where the sales team is all about “slinging sample by the pound, and let the overworked and underappreciated sample managers handle the cleanup and backroom topoffs.”

Better mobile: Weinberg specifically states that smartphone surveys are the way of the future that should be embraced right now.  He doesn’t go into detail, but the advantages of mobile surveys are generally known in the industry:

– Easier to administer and reach target audiences.
– Reaches the coveted millennials and businesspeople.
– Utilizes GPS technology that shows actual respondents.
– More versatile and real-time.
– Broader in its use of various media.

If these issues are not addressed (and we and many other companies address and solve them every day), it only means less effective online sampling and less impactful information for market research. It may not be an actual apocalypse, but it’s just plain bad business.

For those seeking quality market research via online sampling, it’s wise from the above to watch for these issues:

– Does the online company/department assist with the survey, and is it present throughout the process?
– Do its prices seem too good to be true, and does it lack proprietary panels? (Weinberg suggests online survey providers use more “invite only” panels, so this might be a relevant question when shopping around.)
– Do the employees work together, with different sectors communicating with the client throughout the process?
– Does the company optionally provide mobile technology for online surveys?

If these questions are answered positively, then a prospective online panel provider has likely “washed their rental cars,” and you should take the wheel for better market research.


How Eye-Tracking Technology can Transform Online Surveys

Eye-tracking technology is a recent tech gift for online marketing and advertising. It allows businesses to evaluate the intimate desires of test subjects—by using a combination of an eye-tracker device (usually a digital video camera) to capture eye motion or point of gaze, and software to record and analyze the digital images from the device.

In essence and from a selling standpoint, eye-tracking software deciphers a potential customer’s preferences in regard to webpage layout, brand placement, or even the product itself. Reading an individual’s eyes may not be as essential (or romantic) as in film when one thinks of gun fights in Westerns, card games with tuxedo-wearing spies, or epic love affairs; but it can make a sizeable impact in market research.

In a way, eye-tracking technology is a form of online survey, albeit in a different language, able to measure the intimate tastes of respondents.  In fact, online surveys and eye-tracking technology could be a marriage made in marketing heaven, as their union truly focuses on a key issue in any manner of research sampling: honesty.

The idea of measuring an individual’s honesty while responding through eye movement has been around for a while (beyond movies), although in recent times it has been criticized by scientific examination. However, leading research indicates that the eyes are indeed the windows to the soul (or at least windows to the mind’s most sincere intentions). A piece in Psychology Today detailed a recent study at the University of Buffalo’s Center for Unified Biometrics and Sensors. The article stated:

In their study of 40 videotaped conversations, an automated system analyzing eye movements correctly identified whether subjects were lying or telling the truth 82.5 percent of the time. That’s a better accuracy rate than expert human interrogators typically achieve in lie-detection judgment experiments. (Experienced interrogators average closer to 65 percent.)

The study utilized an automated system that focused solely on eye movement, and employed a statistical method to model how people moved their eyes in two separate situations: during regular conversation, and while fielding a question designed to stimulate a lie.

Without getting too technical: determining a person’s honesty through eye movement is viable, and technology can now readily track eye movement.

So why not blend eye-tracking technology with online surveys to maximize honest responses?

After all, honesty from online panels is an issue that deeply concerns all companies. Beyond ensuring panels are properly nurtured and engaged beforehand, marketing researchers utilize different methods to negate dishonesty/laziness during online surveys, such as:

– Time tracking
– Pattern reading
– Trick/repetitive questions

These safeguards are for the most part efficient, but adding eye-tracking technology could further improve online surveys by gauging a respondent’s sincerity before and during the survey.

The costs might seem staggering at the moment, from a developer’s standpoint. However, the idea of eye-tracking technology on smartphone screens and other mobile technology was once deemed too pricey. This is no longer the case, with companies already offering eye-tracking technology for home devices at under $100. Samsung, as an example in the marketplace, has incorporated eye-tracking technology for a variety of its mobile products.

Humans are not perfect, and there is no such thing as a perfect technique for mining the most distilled desires of an individual—unless it is James Bond studying the eyes of the villain across the casino table. Yet eye-tracking technology is an intriguing option for better online surveys, as it has already improved both consumer markets and market research, for your eyes only.

 

In Data We Trust: qSample President Discusses 2015 Online Survey Trends

Rudly Raphael is the President of qSample. He is a Harvard University graduate in information sciences and has been working in market research for the past 15 years, specifically in panel management and data analysis.

In this qSample Q&A, Rudly discusses the trends in online surveys for 2015 and beyond, addressing topics like how mobile technology will affect online surveys and how companies can obtain the best possible panels and survey methodology.

Q1: What are your predictions for 2015 when it comes to online surveys, or just surveys in general?

Rudly: Although online survey data collection does have its challenges, penetration for online surveys continues to increase at a surprising rate—making it once again the data collection mode of choice for research practitioners. This trend indicates that online surveys are here to stay and will not be extinct anytime soon, as some in the industry have predicted. I expect online surveys to maintain this pace in 2015, and while mobile is gaining some speed, its challenges are far too great for it to overcome online within the next 5 years. In fact, CATI is regaining its place in the race, making it the second data collection mode of choice in 2014.

Q2: How has mobile technology changed the way online surveys are conducted?

Rudly: I’m not sure mobile technology has changed the way online surveys are conducted, but all the buzz about mobile certainly pushes those of us in the online space to step up our game and address some issues with the online methodology. Perhaps it’s about writing shorter surveys, or implementing better tracking tools in online surveys to identify fraudulent respondents, or simply optimizing online surveys for mobile as well. Only the mobile survey is experiencing similar growth in penetration compared to online. However, while much has been made about mobile, and there are certainly benefits to conducting mobile surveys, it still has a long way to go.

Q3: What are some of the biggest mistakes companies make when conducting surveys?

Rudly: Our industry’s motto should be “In data we trust.” Without the data, there’s no research. One would think a lot of emphasis would be placed on data acquisition. Companies need to have more empathy for the research participant. The person(s) who writes the survey instrument should ask themselves if they could sit through that survey for 25-30 minutes.  Companies should make surveys fun and engaging, regardless of the topic. They should test their surveys over and over again to identify the fatigue points in the survey. This is usually the area where data integrity is compromised.

Q4: What advice/guidelines do you have for companies seeking to conduct online surveys when it comes to gathering the best data possible for their research?

Rudly: First, surveys should not be viewed as a final exam for the research participant. Second, the shorter the better. Although the topic and the audience being surveyed can dictate the length of the survey, keeping surveys at a reasonable length will eliminate respondent fatigue and greatly improve data quality. Third, uniformity and simplicity in design. Survey pages must be free of design distractions and multicolors/pop-ups flashing on the screen. Questions and instructions must be clear and legible to the respondent. Finally, understand your audience. If the survey is targeting millennials, perhaps instructions on how to move to the next page or click this or that button can be kept to a minimum. Whereas if the survey is targeting baby boomers, maybe the question font might need to be a little bigger than normal, or tech acronyms spelled out.

Q5: What is the most interesting factor about sampling to you?

Rudly: Samples from internet panels are typically non-probability samples. However, since our focus area is in developing and managing specialty panels that are smaller than general consumer panels, these niche audiences make it more likely to achieve a probability sample than your average panel. While some clients can sometimes employ a quota to set the proportion of levels or strata within the sample, it’s always intriguing to see the results and how they compare to the population at large. We sometimes use this feedback to make adjustments, recruiting more or less to make our panels more representative.

Q6: What is the most unusual survey you ever had to spearhead?

Rudly: The most unusual had to be an online usability-testing survey, in which we were asking respondents to take and upload a picture of a feminine hygiene product, which was somewhat personal and intrusive.

Q7: There are plenty of companies that offer affordable panels. Do you see a problem with that?

Rudly: If it’s affordable and they have access to that audience, I don’t think there’s a problem. If the cost is below industry standards, then, like everything else, I’d question the quality. After all, I believe in the old saying that you get what you pay for.

Please visit Rudly’s other interview at Survey Analytics, where he delves into the more technical aspects of online surveys.