To generate useful insights, you need to be able to trust your data. We know that the evidence for communities is very positive. However, we also know that there is a wide range of problems with many of the traditional alternatives. This post highlights the problems with third-party providers and offers several solutions.
The quality problems with online panel research
A wide range of exposés have highlighted quality problems with online access panels. For example, Ron Sellers, in his post ‘Still More Dirty Little Secrets of Online Panels’, highlighted problems he found when he fielded a study with five leading access panels. Sellers found that almost half (46%) of the responses needed to be deleted because of substantial issues, such as gibberish in open-ended comments, straight-lining, and completing the survey in an unrealistically short time.
These problems have plagued online access panels for more than a decade. In 2009, Research-Live reported on co-ordinated efforts to deal with quality issues and, in particular, an ARF study covering 17 different panels. An academic study by Carina Cornesse and Annelies Blom (Response Quality in Nonprobability and Probability-based Online Panels) showed that non-probability panels (e.g. online access panels) tend to have more problems than probability panels (where people are recruited randomly to match the population). While not all of the leading panels are non-probability panels, problems occurred in both types. The rate of straight-lining ranged from 10% to 16% in the non-probability panels, and from 3% to 9% in the probability panels.
The key, persistent problems that the panel companies have been battling are:
• People taking studies for which they are ineligible, by faking their responses to earn the incentive
• People satisficing to collect the incentive, e.g. straight-lining, speeding, typing gibberish in response to open-ended questions, and selecting options designed to a) qualify for the study and b) finish as quickly as possible
• People taking the survey more than once, to gain the incentive
• People using bots to enter responses, to gain the incentive
• Underlying biases that result from how the panel was recruited
All of these problems are even more pronounced when dealing with B2B samples.
Over the last 20+ years, the panel companies have been engaged in an arms race with potential cheats and people who respond carelessly. This battle has been made more difficult by the large number of end-clients and agencies who field long and boring surveys.
For any company keen to focus on data quality, one of the biggest obstacles is the behaviour of other companies, those that are not actively trying to help solve the problem. The companies creating and fielding tedious, demotivating surveys are poisoning the well for everybody. Note that one of the sources of bias in an online access panel is the nature of the surveys being run by the panel's other users.
Solving the quality problems
Although examples of the issues with online panels (and alternatives such as river sampling) abound, there are several steps that can and should be taken to improve data quality:
1. Be choosy about which panel company you use
2. Check and clean the data you get from panels
3. Use better surveys
4. Focus on communities, rather than panels
Being choosy about which panel company you use
Earlier this year, Andrew Grenville published a study (Can we Count on You) conducted by Maru, which looked at 28 panels across 14 countries. The study assessed validity (i.e. whether the results were representative of the actual market) and reliability (the survey was fielded twice with each panel, one week apart, to check that it produced consistent results). They found that just over half the panels (16 out of 28) were reasonably valid and reliable.
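To make the two concepts concrete, here is a minimal sketch of how validity and reliability could be checked against a benchmark. The numbers are invented for illustration and this is not the method or data from the Maru study:

```python
# Hypothetical illustration of validity vs. reliability; all values invented.
wave_1 = {"brand_a": 0.34, "brand_b": 0.21, "brand_c": 0.12}     # week-1 estimates
wave_2 = {"brand_a": 0.35, "brand_b": 0.20, "brand_c": 0.13}     # week-2 estimates
benchmark = {"brand_a": 0.33, "brand_b": 0.22, "brand_c": 0.14}  # known market shares

# Reliability: do two waves of the same survey agree with each other?
reliability_gap = max(abs(wave_1[k] - wave_2[k]) for k in wave_1)

# Validity: do the estimates agree with the known market benchmark?
validity_gap = max(abs(wave_1[k] - benchmark[k]) for k in wave_1)

print(f"Max wave-to-wave difference: {reliability_gap:.2f}")  # small => reliable
print(f"Max difference from market: {validity_gap:.2f}")      # small => valid
```

A real assessment would use more sophisticated criteria, but the principle is the same: reliability compares a panel with itself over time, while validity compares it with the known market.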
Don't just buy the cheapest or easiest option. Develop a relationship with panel companies that are winning the quality battle, and work with them to improve things still further. Indeed, quality has very little to do with price; the study by Carina Cornesse and Annelies Blom mentioned above found no association between price and quality, which means you need other measures to assess quality.
Check and clean the data you get from panels
Start by assuming that you will always find some suspect responses, even from the best panels. Include some simple checks in your questionnaire, check the speed of responses, watch out for things like straight-lining, and check the open-ends for gibberish and non sequiturs (bots are likely to produce open-ends that do not relate to the question).
In addition to cleaning your data, maintain a log and record changes, feeding the information back to the panel companies to help them improve. If one of your teams is not logging rejected responses, they are probably not checking properly.
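Many of these checks can be automated before anyone reviews the data by hand. The following is a minimal sketch, assuming responses arrive as a pandas DataFrame; the column names (panelist_id, duration_secs, open_end, the q1 to q5 rating grid) and the thresholds are hypothetical placeholders to be tuned for each survey:

```python
import pandas as pd

GRID_COLS = ["q1", "q2", "q3", "q4", "q5"]  # a rating grid to test for straight-lining
MIN_DURATION_SECS = 120                     # assumed minimum plausible completion time

def flag_suspect_responses(df: pd.DataFrame) -> pd.DataFrame:
    """Add a boolean flag for each common quality problem, then an overall verdict."""
    # Speeding: completed faster than a realistic reading pace allows.
    df["flag_speeding"] = df["duration_secs"] < MIN_DURATION_SECS

    # Straight-lining: identical answers across an entire rating grid.
    df["flag_straightlining"] = df[GRID_COLS].nunique(axis=1) == 1

    # Gibberish open-ends: a deliberately crude heuristic (very short, or no vowels).
    text = df["open_end"].fillna("").str.strip().str.lower()
    df["flag_gibberish"] = (text.str.len() < 3) | ~text.str.contains(r"[aeiou]", regex=True)

    # Duplicates: the same panelist ID appearing more than once.
    df["flag_duplicate"] = df.duplicated(subset="panelist_id", keep="first")

    flag_cols = [c for c in df.columns if c.startswith("flag_")]
    df["suspect"] = df[flag_cols].any(axis=1)
    return df

def log_rejections(df: pd.DataFrame, path: str = "rejection_log.csv") -> None:
    """Record rejected responses so the panel company gets actionable feedback."""
    flag_cols = [c for c in df.columns if c.startswith("flag_")]
    df.loc[df["suspect"], ["panelist_id"] + flag_cols].to_csv(path, index=False)
```

Writing out the flagged rows gives you the rejection log described above, with a per-flag breakdown that the panel company can actually act on.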
Use better surveys
One of the reasons that participants satisfice (i.e. take shortcuts) is that too many surveys are too long, too boring, and too often badly designed. Engaging, well-designed, shorter surveys have been shown to improve data quality; see, for example, the Dimensions of Online Survey Data Quality research published by Jon Puleston.
Focus on communities, rather than panels
One of the reasons that communities came on the scene 20 years ago was the desire to create two-way communication, to co-create the future of brands and services. An essential part of creating a community is knowing who you are talking to, and that knowledge is a natural consequence of building a relationship over time. When you develop a community, you know where your members live, what their families are like, and what they have said in previous discussions.
As communities have become larger, and as methods have improved, we have found that most types of research can be conducted via communities. This is particularly true when communities are used as a hub to connect big data, qual, and CX programmes. Over the last twenty years, the shift in insights has been away from conducting most of the research with the whole market towards conducting most research with customers (hence the rise of both communities and CX).
With the advent of agile qual and the growth of longitudinal research, communities will continue to be central to the insights process of many organisations, and can provide a quality backbone for survey research.
Can communities do everything?
No, communities can’t do everything. There are times when you need to speak to a wider group of people, for example non-customers. A well-run community will draw on additional sources as necessary, using the tools of the community platform linked to customer lists, ad hoc recruitment and online access panels.
However, the well-run community will be choosy about the panels with which it partners. The data will be checked and cleaned, and the surveys fielded with panels will use designs refined through the ongoing collaboration between brands and their community members.
Delivering trust
One of the key issues in society and business at the moment is trust. Trust in institutions, the media, and politicians is in decline. Brands need to defend and grow the trust between themselves and their customers. Communities can help by creating a shared mission to co-create the future, building trust between research participants and companies, between clients and agencies, and in the data we use for decisions.
For more information on HX Communities and how they can help your business, contact us today.