A few months ago, we reviewed a published study about an insulin pump and CGM system from a reputable research institution in the USA. Naturally, we compared the published work with our own data – but our research pointed to a materially different conclusion!
Trying to make sense of the discrepancy, we realized that the dQ&A research had ten times the sample size, a national rather than local sample, on average 80% more experience with the product, satisfaction data across double the number of product attributes, and much more representative participants on many dimensions. If that wasn’t enough, we also had statistical significance, direct comparisons to other pumps, and ten years of quarterly trended data. Subsequent studies, and additional user feedback after more time on the market, have shown that our initial results were correct.
The biggest takeaway for us was that it’s important to be sure you have high quality answers before making important clinical or commercial decisions. There can be significant differences between research projects. Please – don’t do “minimum viable research”!
You can read our top four tips for getting it right below.
1. Know Where Your Sample Comes From
We all know that online consumer panels are increasingly challenged to weed out 'cheaters' and bots – both motivated only by collecting survey incentives. Despite efforts to fix the problem, our experience has been that some consumer panels are difficult to trust for diabetes research.
We’ve studied the quality of diabetes respondents from well-known commercial sample providers. In a recent case we found that half of the sample was invalid: their answers would make no sense to anyone who understands the detail of how diabetes therapies work, and which drugs are approved for which patients. A survey based on this sample would have given seriously incorrect and misleading results.
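Screening of this kind can be sketched as a small set of consistency rules run over every response. The field names and rules below are hypothetical, invented for illustration – real validation depends on the questionnaire and on clinical knowledge of which therapy combinations are plausible:

```python
# A minimal sketch of rule-based response screening. All field names and
# rules are hypothetical; real checks depend on the actual questionnaire.

CONSISTENCY_RULES = [
    # (rule name, predicate returning True when the response looks invalid)
    ("type1_without_insulin",
     lambda r: r.get("diabetes_type") == "type1" and not r.get("takes_insulin")),
    ("pump_without_insulin",
     lambda r: r.get("uses_pump") and not r.get("takes_insulin")),
    ("implausible_speed",
     lambda r: r.get("completion_seconds", 9999) < 60),
]

def flag_response(response):
    """Return the names of the consistency rules this response violates."""
    return [name for name, suspect in CONSISTENCY_RULES if suspect(response)]

def screen_sample(responses):
    """Split a sample into (valid, flagged) respondent lists."""
    valid, flagged = [], []
    for r in responses:
        (flagged if flag_response(r) else valid).append(r)
    return valid, flagged
```

A respondent who claims type 1 diabetes, reports using a pump, yet says they take no insulin would trip two rules at once – exactly the kind of answer that "would make no sense to anyone who understands how diabetes therapies work."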
By contrast, our proprietary patient community is well validated and trustworthy. We spend a lot of time making sure that it stays that way, through regular communication, and all kinds of quality assurance tests. It also helps that we understand diabetes well, and that we store every single answer from our members in a single longitudinal dataset. We have respondents who have been with us for years and many donate their incentives to charity.
It’s definitely worth finding out where your sample comes from. If 80% of your research effort goes into finding and validating your sample, you only have 20% left to get the insights you came for.
2. Context is King
We’ve all seen product satisfaction scores and made assumptions as to whether a product is ‘good’ or ‘bad’ on certain dimensions. But scoring doesn’t mean much on its own – it only becomes valuable when you can benchmark against similar products and see the trends over time.
For example, we’ve found that satisfaction scores trend over the product lifecycle. Scores are typically higher right after a product or therapy comes out, then go down over time. But that doesn’t necessarily mean that someone using an older product is having a horrible experience and is desperate to change.
Similarly, in product concept testing, an isolated number is meaningless without context. If I tell you that 25% of respondents say they will definitely get and use your product, is that good, or bad? You need a benchmark. dQ&A keeps a database of the dozens of products we’ve tested that we use for calibration. By asking the right questions, in the right context, we can see if potential customers will prefer a product to its competitors.
3. Understand Your Respondents and Treat Them with Respect
This sounds obvious. It could be the First Rule of Market Research. But diabetes adds an extra dimension: there's a lot of social stigma surrounding the condition. Managing it is a big burden. And misunderstandings abound.
We’ve learned that if you want people to be open and honest, and really put care and thought into their responses, you have to understand the position they are in and treat them with empathy and respect. Put them on the back foot and you’ll get answers you can’t trust, or no answer at all.
For example, questions need to accurately reflect the daily reality of living with diabetes. If the language doesn’t have the right tone or vocabulary, people will ‘switch off’ because they figure that you don’t understand them. And if your pick list options don’t capture the full range of people’s real-life experiences, what will the results be worth?
No one wants to take a survey with confusing questions and poor survey logic (and let's be honest, we've all seen those). But when that survey is about something that impacts every part of a respondent's daily life, or when they're already dealing with high or low blood sugar, it can cause an early exit or slapdash answers.
At dQ&A we have team members with diabetes or family members with diabetes. We go out of our way to make sure we’re being considerate and respectful of our respondents in every question we write. We are rooting for them and supporting them. Over time, we build trust with our patient community, which pays off in many ways.
4. Your Language is not Their Language
We've seen so many surveys that look perfectly fine to a healthcare provider or member of the diabetes industry, but are guaranteed to bamboozle regular people because of the medical vocabulary or industry jargon they contain: surveys written by people with Master's degrees and taken by people who didn't graduate high school. It's not that the respondents aren't smart – it's just that when you use terminology they're not familiar with, they can't answer your question successfully. At best, you're adding to the survey's cognitive load, which degrades the quality of all your answers. At worst, you'll have people just guessing at the answers to your questions.
For example, most people with diabetes don’t talk about “basal insulin.” It’s “long-acting insulin” – or perhaps just “Lantus” (we know, because we asked). So if you ask people “do you take basal insulin?”, many people who are actually on basal insulin will say “no.” If that was your screener question, good luck with the rest of your survey.
We have dozens of examples of this kind of pitfall. Even something as simple as “what type of diabetes do you have?” will often get you the ‘wrong’ answer. Type 3 diabetes, anyone? So we’ve tested many ways of asking particular questions until we’re convinced that we have it right. We make sure we use simple, easy-to-understand language (which takes more thought and time), and we test the reading level of our surveys so they are as inclusive as possible.
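Reading-level testing can be automated to a first approximation with the standard Flesch-Kincaid grade formula. The sketch below uses a deliberately naive syllable heuristic and invented example sentences; production tools (and our own process) are considerably more careful:

```python
# A rough sketch of a readability check for survey wording, using the
# standard Flesch-Kincaid grade formula with a naive syllable heuristic.
# The example sentences are invented for illustration.
import re

def count_syllables(word):
    """Approximate syllables as groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level of `text` (lower = easier to read)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

jargon = "Do you administer basal insulin via subcutaneous injection?"
plain = "Do you take long-acting insulin?"
```

Even this crude check puts the jargon-laden phrasing several grade levels above the plain one – a quick way to catch questions that will make respondents 'switch off'.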
And once we started translating surveys into multiple languages (seven to date), we faced the same problem in less familiar cultural contexts, so we get feedback from native speakers with diabetes (and do even more testing) to ensure we maintain the same quality level.
dQ&A would like to express its sincere gratitude to all of the community members who participated in its research studies, and who teach us every day how to do our job better.
Sign up for our newsletter to receive the latest data and news from dQ&A.