The short story

The industry is demanding more streamlining and automation… the only way that can happen is via standards – so what are panel providers doing, or proposing to do, in this respect? We would like better visibility of their APIs and the differences between them… and possibly to talk about harmonising some key variables. We also think there should be a standard, automated evaluation of surveys in terms of length and complexity, to better pre-evaluate the cost of sample.

We would like panel providers to explain their position – and their added value – in a (wait for it) panel discussion on Thursday the 9th of November in London – ORT House, London NW1 7NE – as part of the one-day ASC conference.

The very long story

I have always wanted to join an English gentlemen’s club. If I moved to the UK, I was going to be Phileas Fogg: travelling the world after a drunken boast and a wager over a bridge game. Last month (after 22 years in the country), it finally happened; I was asked to join the Association for Survey Computing.

I expected a standard acceptance ceremony: arriving blindfolded in a dark room, greeted by men in togas, a solemn oath with my hand on the preserved 15th-century skull of the organisation’s founder, uttering something in Latin, maybe “Nam melius quaestiones”.

I was not disappointed. It was a Thursday morning Webex call to agree the subject of the November one-day conference. After the usual rambling about the weather (it was a cold September morning with a forecast for rain in the afternoon), roles were assigned. “You’re French,” they said, “you’re good at starting revolutions,” they said, “write a manifesto!”

And in truth, a revolution is needed. In previous years, the only way to have a lucrative MR business (not that I know about that) was to delocalise. The new trend is to automate: you standardise a survey (want an ad test?), select the target (nat. rep., sir?) and you have your dashboard with your data ready just as your PayPal account is being debited. For this to happen, you need an automation platform (Zappi Store or GetWizer, for instance) or a survey platform with an API… and you need a sample provider.
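To make the shape of that flow concrete, here is a minimal sketch in Python of what one automated wave could look like. Every platform, class, endpoint and provider name below is hypothetical – none of it is taken from a real product’s API.

```python
# A minimal sketch of the automated flow described above. Every class,
# URL and provider name here is hypothetical.

from dataclasses import dataclass


@dataclass
class SampleOrder:
    provider: str
    target: str          # e.g. "nat. rep. UK adults"
    completes: int
    entry_url: str       # survey link the provider redirects panellists to


def run_wave(survey_template: str, target: str, completes: int) -> SampleOrder:
    """Launch a standardised survey template and order sample for it.

    In a real platform these steps would hit the survey platform's API and
    the sample provider's API respectively; here they are stand-ins.
    """
    entry_url = f"https://surveys.example.com/{survey_template}?src={{panel_id}}"
    order = SampleOrder(
        provider="AcmePanels",      # hypothetical sample provider
        target=target,
        completes=completes,
        entry_url=entry_url,
    )
    # ...the provider starts sending respondents, data flows into the
    # dashboard, and the PayPal account gets debited...
    return order


if __name__ == "__main__":
    print(run_wave("ad_test_v3", "nat. rep. UK adults", completes=500))
```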

And that’s where it gets complicated.

A short digression into the real world

Let’s imagine you have built the perfect automated survey solution… it works nicely and you get results for every wave in exactly 2 hours 47 minutes. But for a given survey, you want to use a different panel provider to reach a very niche B2B target. You contact that specialist panel provider and explain your needs. They are enthusiastic about the idea, and Adam, your contact there, wants to test your survey first – their panellists are special, you don’t get to burn their community like that. After 48 hours, Adam calls you back with a price; it’s on the expensive side but you agree right away because you want the data now – well, you actually wanted it 45 hours and 13 minutes ago. Now he sends you a list of the URL parameters you need to accept in your survey… what was called SG with panel provider 1 is now called SocialGrade, and GE becomes Gender3b… of course you already know why it’s called Gender3b; they introduced an “other” (and a “prefer not to say”) to the gender question. Your survey scripter says he needs a day (or two) to implement the changes… but he can only start after the weekend because it’s Friday, and the web designer who did the icons for the gender question has already gone snowboarding for the weekend.

Here comes Monday: the designer has damaged his knee and you decide to scrap the icons. The client checks the survey on Monday afternoon (they are based on the East Coast) and they want the gender icons back to verify the sample… so you add (early next morning) a nice routing to exit the survey if they say “other”. No soft launch, we don’t have time for that. Quickly (but not quite quickly enough) you realise you have screened out 99% of respondents – your scripter wrote the routing the wrong way round. You call a very unimpressed Adam to stop sending sample. Your guys finally correct the routing, but unimpressed Adam has gone for the day. You eventually get through to him late morning the next day and he agrees to send more sample.

The data fills your automated portal nicely… you start to relax. You shouldn’t: your client has had a look at the data and has noticed something very weird with the student segment. How is that possible? You’ve changed nothing there… until you decide to call Adam, who reluctantly agrees to take your call. He explains calmly that although the parameter is indeed SocialGrade, the value 23 does not indicate “Students” but “Deep sea divers”… Did you not read the explanatory document he attached to his email on Thursday last week?
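In code terms, the story above is two mismatches at once: the variable names differ from provider to provider, and the answer codes behind them differ too. Here is a minimal sketch of the translation layer an integrator ends up maintaining by hand – every name and code in it is made up for illustration.

```python
# A sketch of the per-provider translation an integrator maintains by hand.
# All variable names and codes are made up for illustration.

# The same concept arrives under a different name from each provider...
VARIABLE_NAMES = {
    "provider_1": {"social_grade": "SG", "gender": "GE"},
    "provider_2": {"social_grade": "SocialGrade", "gender": "Gender3b"},
}

# ...and the same numeric code can mean something entirely different.
SOCIAL_GRADE_LABELS = {
    "provider_1": {23: "Students"},
    "provider_2": {23: "Deep sea divers"},  # the surprise in the story above
}


def incoming_parameter(provider: str, concept: str) -> str:
    """Which URL parameter to read for a harmonised concept, per provider."""
    return VARIABLE_NAMES[provider][concept]


def social_grade_label(provider: str, code: int) -> str:
    """Turn a provider-specific social-grade code into a readable label."""
    return SOCIAL_GRADE_LABELS[provider].get(code, "Unknown")


print(incoming_parameter("provider_2", "social_grade"))   # SocialGrade
print(social_grade_label("provider_1", 23))               # Students
print(social_grade_label("provider_2", 23))               # Deep sea divers
```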

Now you know you are going to have an interesting conversation with both your client and your boss. But you may as well leave it until tomorrow.

And that’s how automation got scrapped in what you must now call your previous job.

The quest

So let’s get back to my personal quest – how can I make automation and surveys better? The answer is simple: by getting panel providers talking to each other.

That’s never going to be easy. Some of them are already panel aggregators and feel they have already done the hard work. Others feel commoditising panels is not in their interest and will drive prices down. Some say it’s simply not possible because their own data is too rich. And all agree that sending sample to a broken or boring survey is the main reason that response rates – along with data quality – are dropping.

And they are right. Data is precious. We need to treat interviewees with respect and that’s not what we do when we send them a 40 minute conjoint survey (and tell them it will last 10). For panel providers to evaluate pricing properly, they need to know how good (and more likely how bad) our survey is.

We need to build metrics on the length of a survey (a lot of data is available there) but also on the boredom index of a survey: number of grids, number of responses per question, number of words per question text, number of questions with similar text, number of mandatory open-ended questions… and prices should vary accordingly.
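As a sketch of what such a metric could look like, here is a toy “boredom index” in Python. The inputs are the counts listed above; the weights are entirely made up – agreeing on them, and on how they feed into pricing, is precisely the standardisation work to be done.

```python
# A toy "boredom index": higher means a duller, heavier survey.
# The weights below are illustrative assumptions, not a proposed standard.

def boredom_index(survey: dict) -> float:
    """Crude score built from the survey-complexity counts discussed above."""
    return (
        3.0 * survey["n_grids"]
        + 0.5 * survey["avg_responses_per_question"]
        + 0.1 * survey["avg_words_per_question"]
        + 2.0 * survey["n_similar_questions"]
        + 4.0 * survey["n_mandatory_open_ends"]
    )


example = {
    "n_grids": 6,
    "avg_responses_per_question": 9,
    "avg_words_per_question": 22,
    "n_similar_questions": 4,
    "n_mandatory_open_ends": 3,
}
print(boredom_index(example))  # a price multiplier could be derived from this
```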

Another option would be to fix the price from the soft-launch data. At the end of the survey, we measure interview interest and set the price of the panel accordingly – with a rebate if the full survey data actually comes in below the early measure.
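One possible reading of that idea, sketched below with made-up numbers: a 0–10 “interview interest” score is asked at the end of the questionnaire, the soft-launch score fixes the cost per interview, and the price is adjusted once the full-fieldwork score is in. The weights, thresholds and the direction of the rebate are assumptions, nothing more.

```python
# A sketch of soft-launch-based pricing. The formula and the rebate rule
# are one illustrative reading of the idea above, not a proposal.

def cost_per_interview(base_cpi: float, soft_launch_interest: float,
                       full_fieldwork_interest: float) -> float:
    """Price fixed from the soft launch; adjusted once full fieldwork is in."""
    # Duller surveys (lower interest) cost more per interview.
    price = base_cpi * (1.0 + (10.0 - soft_launch_interest) / 10.0)
    # Rebate if the full-fieldwork measure comes in below the early one.
    if full_fieldwork_interest < soft_launch_interest:
        price *= 0.90
    return round(price, 2)


print(cost_per_interview(base_cpi=3.50, soft_launch_interest=6.0,
                         full_fieldwork_interest=4.5))
```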

And how do we harmonise panel data? Should we break questions down into categories and sub-categories (demographics, lifestyle, political leaning) and incorporate that in the naming? Can we have the same breakdown across different countries? For which questions? Should the naming convention clearly indicate the number of responses to avoid coding errors?
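As one illustration of where that could lead, here is a hypothetical naming convention – category, sub-category, question, and the number of response options baked into the variable name – together with the validation it makes possible. The scheme itself, and every name in the example, is exactly the kind of thing the panel should debate.

```python
# A hypothetical naming convention: category.sub_category.question_N,
# where N is the number of response options. Illustrative only.

import re

NAME_PATTERN = re.compile(
    r"^(?P<category>[a-z]+)\.(?P<sub>[a-z_]+)\.(?P<question>[a-z_]+)_(?P<n_responses>\d+)$"
)


def parse_variable(name: str) -> dict:
    """Split e.g. 'demographics.identity.gender_3' into its parts."""
    match = NAME_PATTERN.match(name)
    if not match:
        raise ValueError(f"{name!r} does not follow the convention")
    parts = match.groupdict()
    parts["n_responses"] = int(parts["n_responses"])
    return parts


def validate_value(name: str, value: int) -> bool:
    """Reject codes outside the declared range – no more value-23 surprises."""
    return 1 <= value <= parse_variable(name)["n_responses"]


print(parse_variable("demographics.identity.gender_3"))
print(validate_value("demographics.identity.gender_3", 23))  # False
```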

Be our panellist for a day

We’ve so many things to discuss… and we thought it’d be best if we did it in public. You, the panel providers, could tell us what you think… explain what’s special about your company, detail your API or your choice not to have one. And the ASC audience – rather technical but friendly – could tell you what they want and stand witness to your promises. The result could be a standard (national or international), an API router or just an Excel spreadsheet, depending on the uptake… but independently managed – by the MRS, Esomar, ASC or SampleCon.

So please come to ORT House in London on Thursday the 9th of November. By Monday 2 October, tell me who from your company is ready to speak and take part in the panel’s panel discussion, and give me, in a few lines, an outline of how you’d respond to our challenge on harmonising panel data and panel interfaces. We’re looking for original thinking, fresh ideas and practical answers.

Panel Providers of the World, Unite!


2 Comments

  • Steve McGee
    Posted 29th September 2017 at 12:40 PM

    I am the panel manager for the Which? Connect panel and, whilst we don’t ever sell access to our members (it is all internal), I would love to participate.

  • Ian Roberts
    Posted 7th November 2017 at 5:02 PM

    Unfortunately, I cannot attend. However, I do strongly believe there is a benefit to all involved in research in ensuring quality isn’t compromised whilst automation is maximised: getting this balance right will be the difference between a list + quick online survey tool and a panel + MR software package.
