Panel providers, unite – the speech at the ASC

On the 9th of November, the ASC invited some panel providers to attend a discussion on panel harmonisation. The discussion was orchestrated by Tim Macer.

Here is my speech – the written version at least, as I may have ad-libbed a few unscripted things.


Market Research is changing. You have heard it a million times – though not in the way Ray Poynter announced it. There will be more surveys in 10 years than ever. That’s the good news. The bad news is that most of them won’t be run by MR institutes. The goose with the golden eggs is dead – clients now run their own surveys, which means MR companies – just to stay in business – have to be more competitive.

Goose with the golden eggs (before / after)

They started by delocalising to India, Romania or Ukraine. But that was not enough. To save more money, they have started to use automation.

This has its advantages – of course, the surveys become a little more formatted… but Millward Brown did that successfully for years. Once the bugs are eradicated, it’s efficient, fast and, most of all, cheap. And no blockades by disgruntled employees – although that’s more of a French problem.


The problem is that end-clients are following the trend – they can do automation too! They are using Zappi Store and Wizer… and SurveyMonkey and SurveyGizmo and Confirmit (and Askia). And Toluna. And SSI self-serve. And Lucid. And Cint.

I mentioned it at the ASC’s last conference: we have entered a golden age. The age of the API. A golden age for geeks like me, at least: the internet is changing into a gigantic API where information is exchanged through web services. Everything is interconnected and uses the same interfaces.


I do not know if any of you have used IFTTT – If This Then That. It’s an app where you define a condition and an action. If I get near the house, put the lights on. If the temperature gets below 17 at night, put the heating on. If I enter the kitchen in the morning, put the radio on and start the coffee machine. If I have no milk in the fridge, order some. The IoT – the internet of things – is happening through one common interface through web services… and all industries are playing ball because they want their share of that big cake of a connected world.
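To make the pattern concrete, here is a toy JavaScript sketch of that condition/action idea – the rules and stub actions are entirely hypothetical, not the real IFTTT API:

```javascript
// Stub actions standing in for real smart-home calls – all hypothetical.
const switchLights   = (state) => console.log(`lights ${state}`);
const setHeating     = (state) => console.log(`heating ${state}`);
const orderGroceries = (items) => console.log(`ordering ${items.join(", ")}`);

// Each rule pairs a trigger ("if this") with an action ("then that").
const rules = [
  { when: (s) => s.metresFromHome < 50,           run: () => switchLights("on") },
  { when: (s) => s.isNight && s.temperature < 17, run: () => setHeating("on") },
  { when: (s) => s.fridge.milkLitres === 0,       run: () => orderGroceries(["milk"]) },
];

// Fire every rule whose condition holds for the current state of the world.
function evaluate(state) {
  rules.filter((rule) => rule.when(state)).forEach((rule) => rule.run());
}

evaluate({ metresFromHome: 10, isNight: true, temperature: 16, fridge: { milkLitres: 0 } });
// -> lights on, heating on, ordering milk
```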


I know we have panel providers on stage, so they might disagree with me. But panel data is no longer the only oil on planet Research. Customer databases are increasingly used because they can be energised by communities. And there are all sorts of big data available at large – aggregated or not. It could be loyalty card data, web footprints or mobile phone data.


And just like for a good Bordeaux wine, to get quality you need to master the art of the blend. The Merlot, a bit dry and earthy – that will be your panel data. There is cheap Merlot and very good Merlot too. And the Cabernet Sauvignon with its fruity flavours – that will be your behavioural data.

But unlike the IoT industry, Market Research providers have not decided to play ball. There are those who do not facilitate automation because they are afraid of losing control and burning their panel. And there are those who do, but work in isolation.

I do not believe there can be one company that will fill all the needs in panel data. Toluna is positioning itself as a one-stop shop for all MR needs: the software, the panel and the behaviour. SSI is doing something similar, and the merger with Research Now is going to be very interesting (Leonard Murphy’s analysis of that on the GreenBook blog was great, by the way). And it won’t be just scraps left for the others – because the need for data is growing, the need for specialised, quality data will be growing too.


But we need a common language. A common grammar. What is a social grade? How do I define national representativity? And how do I trigger a soft launch? How do I notify that a quota is full?
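To illustrate what such a shared grammar could look like, here is a purely hypothetical sketch of a harmonised “quota full” notification – none of these field names are an existing standard; they only show the kind of shared vocabulary providers could agree on:

```javascript
// Purely hypothetical sketch of a harmonised "quota full" event payload.
const quotaFullEvent = {
  event: "quota.full",
  surveyId: "S-2017-042",
  quota: {
    variable: "SocialGrade", // one agreed name, not "SG" here and "SocialGrade" there
    categories: ["AB"],      // the cell that is now closed
    target: 200,
    achieved: 200,
  },
  timestamp: "2017-11-09T10:15:00Z",
};

// The same grammar could cover the rest of the lifecycle:
// "survey.softlaunch.started", "survey.softlaunch.passed", "sample.requested", …
console.log(JSON.stringify(quotaFullEvent, null, 2));
```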

But there is another side to this discussion. If we let anyone access a survey which is tedious, long and repetitive, with grids, 2 max-diff exercises and one 20-minute trade-off, how do we reward the dedicated weirdos who filled in that nightmare of a survey? How do we warn them that they are in for the long haul? Because we might lose another goose with golden eggs. How can we stop the cull of panellists and the ever-dropping response rates?


I suggest we build metrics: number of questions, number of responses in a question. And then number of words per question, number of similar questions, number of mandatory open-ended questions… and then build a model.

$(Survey) = (Length(Survey) × TotalTediousness(Survey))⁻¹

And then remunerate the panellists (and their providers) accordingly.
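As a minimal sketch of what that model could look like – with entirely made-up weights that would need calibrating against real panellist behaviour:

```javascript
// Made-up weights: each ingredient of tediousness contributes to the score.
function tediousness(survey) {
  return (
    survey.questionCount +
    0.1 * survey.totalResponseOptions +
    0.05 * survey.totalWordsInQuestions +
    2 * survey.similarQuestionCount +
    5 * survey.mandatoryOpenEndCount
  );
}

// $(Survey) = (Length(Survey) × TotalTediousness(Survey))^-1 – the longer and
// more tedious the survey, the lower its score; remuneration for panellists
// (and their providers) would then be adjusted accordingly.
function surveyScore(survey) {
  return 1 / (survey.lengthMinutes * tediousness(survey));
}

console.log(surveyScore({
  lengthMinutes: 20,
  questionCount: 45,
  totalResponseOptions: 300,
  totalWordsInQuestions: 900,
  similarQuestionCount: 12,
  mandatoryOpenEndCount: 3,
}));
```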

While I was preparing this discussion with all of you, most of you mentioned how slow-moving our industry is. It’s not just that: it’s protective, short-sighted and technologically unaware. And that’s everything the ASC is not. It’s at the ASC that triple-S, a format to exchange survey data between competing survey software packages, was created and promoted. It’s two of my competitors, Steve Jenkins and Keith Hughes, who patiently pointed out my errors and taught me how to write a proper triple-S file. Let’s all be a little bit more like them and a little bit less like Apple, who introduce a new plug and a new format with each new version.


That’s my manifesto – a call to arms… please discuss, and let’s move it forward.

Richard Collins becomes Askia’s first Chief Customer Officer

In this board-level role, Richard will manage and develop Askia’s international client base, as well as take overall responsibility for Askia’s UK office.

Richard Collins

Richard (pictured above) has built a unique track record in the Market Research industry, playing key roles in leading companies. Most recently he was Chief Customer Officer for Big Sofa Technologies, and before that he founded the first international office of Decipher Inc. in London (known as Decrypt and acquired by FocusVision). Prior to that, he held senior positions with Confirmit, Pulse Train and SPSS/IBM.

Patrick George-Lassale, Askia CEO, comments: “It’s the perfect fit at the perfect time! We are ready to take our global business development to the next level. Richard has inspiring skills and experience and we share the same values: we simply had to work together.”

Richard adds: “It is an extremely exciting time to be joining Askia. We have some important announcements that we are preparing to share over the coming months that will see the company change significantly: both from an organisational and a technological point of view.”

Stay tuned for further details.

MaxDiff grows!

This article provides an in-depth explanation of AskiaDesign‘s built-in capacity to manage MaxDiff data collection & analysis methodologies. For those of you who, like me, need a short reminder of what MaxDiff is, this is the definition provided by Wikipedia:

The MaxDiff is a long-established academic mathematical theory with very specific assumptions about how people make choices: it assumes that respondents evaluate all possible pairs of items within the displayed set and choose the pair that reflects the maximum difference in preference or importance. It may be thought of as a variation of the method of Paired Comparisons. Consider a set in which a respondent evaluates four items: A, B, C and D. If the respondent says that A is best and D is worst, these two responses inform us on five of six possible implied paired comparisons:

A > B,  A > C,  A > D, B > D, C > D

The only paired comparison that cannot be inferred is B vs. C. In a choice among five items, MaxDiff questioning informs on seven of ten implied paired comparisons.
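In code, the inference is straightforward. A quick JavaScript sketch (our own illustration, not from the article):

```javascript
// Enumerate the paired comparisons implied by one MaxDiff task,
// given the chosen best and worst items.
function impliedPairs(items, best, worst) {
  const pairs = [];
  // The best item beats every other item...
  for (const item of items) if (item !== best) pairs.push([best, item]);
  // ...and every remaining item beats the worst one.
  for (const item of items) if (item !== best && item !== worst) pairs.push([item, worst]);
  return pairs; // each [x, y] reads "x is preferred to y"
}

console.log(impliedPairs(["A", "B", "C", "D"], "A", "D"));
// [["A","B"], ["A","C"], ["A","D"], ["B","D"], ["C","D"]] – B vs C cannot be inferred
```

With five items, the same function returns seven pairs, matching the seven of ten implied comparisons mentioned above.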

MaxDiff table

We have recently added a new ADC to our offering that allows you to easily create MaxDiff tables in AskiaDesign. This article covers the setup process and usage for such comparison tables:

MaxDiff table ADC

This Askia Design Control allows you to easily create the required screen format for MaxDiff surveys. Add the ADC to your resources, drag it on to your Most response block, set any captions you want to appear in the headers of your grid and select the Least question it should be connected to. As with most ADCs, this survey control allows you to customise many parameters, such as:

  • Least Question: when you drag the ADC on to the response block for your ‘Most’ question, this is where you define which ‘Least’ question it relates to.
  • Most Caption: the caption you want to appear in the ‘Most’ column header.
  • Least Caption: the caption you want to appear in the ‘Least’ column header.
  • Centre Caption: the caption you want to appear in the centre column header e.g. this can be information about the loop iteration or screen number.

You can play around with this survey control in the following demos:

Alternatively, you can download (or even contribute to) the MaxDiff ADC from GitHub!

MaxDiff interactive library

When conducting a MaxDiff study, you have a number of different parameters to consider and produce programming instructions for. At Askia, we have used the R software environment to do this for the different parameters and a large range of the options for each. We have created an interactive library in Design which asks you which option you want for each parameter. The result is a greatly simplified process for producing any MaxDiff design with Askia.

The available parameters are:

  • Number of questions: also known as the number of arrangements or number of screens. This is the number of screens the respondent will see during the course of the MaxDiff section.
  • Number of selectable items: this is the number of options to choose between per screen.
  • Number of items: this is the number of attributes or statements you want to include overall in the MaxDiff design.
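As a rough illustration of how these three parameters combine, here is a naive sketch that just draws random arrangements – the real interactive library uses R to produce properly balanced designs (each item shown equally often, pairs co-occurring evenly):

```javascript
// Fisher-Yates shuffle, used to randomise item order.
function shuffle(items) {
  for (let i = items.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [items[i], items[j]] = [items[j], items[i]];
  }
  return items;
}

// numQuestions screens, each showing itemsPerScreen of the numItems items.
function randomDesign(numQuestions, itemsPerScreen, numItems) {
  const design = [];
  for (let screen = 0; screen < numQuestions; screen++) {
    const items = shuffle(Array.from({ length: numItems }, (_, i) => i + 1));
    design.push(items.slice(0, itemsPerScreen));
  }
  return design;
}

console.log(randomDesign(6, 4, 12)); // 6 screens of 4 items, drawn from 12
```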

As of version 5.4.6 of AskiaDesign, you can use our Interactive Library feature to easily create and set up your MaxDiff design with the help of the above parameters:

MaxDiff interactive library

Check out the full article for more in-depth information & resources.

Adaptive MaxDiff

As we have seen above, the key point with standard MaxDiff is that the arrangements on screen are pre-set and do not adapt to the responses given in the interview. In addition, the number of selectable options on screen is a constant.

In adaptive MaxDiff, however, the number of selectable options changes. After each round of screens, the items selected as Least are removed from the next round. The number of items on screen therefore diminishes until you reach the last round, where you are asked to pick between all those you chose as Most.

The advantage of adaptive MaxDiff is that greater discrimination between the most important items is achieved. The disadvantages? Well, it could be argued that, since your initial answers create the upcoming arrangements, you do not have as much opportunity to change your mind about items you rated least important in previous rounds.
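A rough sketch of the elimination idea, under simplifying assumptions (fixed screen size, a stubbed respondent, and ignoring the “Most” picks that seed the real final round):

```javascript
// Each round: split the pool into screens, ask each screen, drop the "Least".
function adaptiveRound(pool, itemsPerScreen, askScreen) {
  const survivors = [];
  for (let i = 0; i < pool.length; i += itemsPerScreen) {
    const screen = pool.slice(i, i + itemsPerScreen);
    if (screen.length < 2) {
      survivors.push(...screen); // nothing to choose between
      continue;
    }
    const { least } = askScreen(screen);
    survivors.push(...screen.filter((item) => item !== least)); // drop the Least
  }
  return survivors;
}

// Stub respondent: picks a random item as "Least" on each screen.
const askScreen = (screen) => ({
  least: screen[Math.floor(Math.random() * screen.length)],
});

let pool = ["A", "B", "C", "D", "E", "F", "G", "H"];
while (pool.length > 4) pool = adaptiveRound(pool, 4, askScreen);
console.log(pool); // the items that survived every round
```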

This article details these differences and provides an example questionnaire to showcase the setup of this methodology with Askia, as well as instructions on using and updating the example file for your own list of items.

New KB article roundup

This article rounds up the best of our recently published articles on our Help Centre; they range from AskiaDesign and AskiaSurf to AskiaWeb.

Redirect out of an Askia survey and back again

Sometimes a respondent needs to leave an Askia survey to take part in an external exercise and then return to the survey to complete it. In such cases, it may be necessary to pass parameters from the Askia survey to the external application or page. This article shows an example of these requirements using AskiaDesign.
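A hedged sketch of the idea (not the article’s exact code): build the outbound link with the parameters the external exercise needs, plus a return URL pointing back at the Askia survey so the respondent can resume where they left off. The parameter names “respondentId” and “returnUrl” are illustrative.

```javascript
// Build the redirect link: external exercise URL + survey parameters + way back.
function buildRedirectUrl(externalBase, respondentId, surveyReturnUrl) {
  const url = new URL(externalBase);
  url.searchParams.set("respondentId", respondentId);
  url.searchParams.set("returnUrl", surveyReturnUrl);
  return url.toString();
}

console.log(buildRedirectUrl(
  "https://example.com/exercise",
  "R12345",
  "https://surveys.example.com/resume?guid=R12345"
));
```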

Check out the full article for more details, access to the example survey and all the attached resources.

Survey router

This article shows how to route a respondent from a main survey to two follow-up surveys out of a possible six depending on their initial selection and remaining SQL quotas. The seven surveys are set up such that the respondent will always be taken back to the correct position in any of their surveys if they close the browser and then click on the original link again.
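The selection logic itself is simple to sketch (an illustration only – the article drives this from Askia routings and live SQL quota counts, and the survey ids below are hypothetical):

```javascript
// Hypothetical mapping of initial selections to candidate follow-up surveys.
const FOLLOW_UPS = {
  cars:   ["fu1", "fu2", "fu3"],
  travel: ["fu4", "fu5", "fu6"],
};

// Given the respondent's initial selection and how much quota remains open,
// send them to the first two follow-ups that still have quota.
function routeRespondent(selection, quotaRemaining) {
  return FOLLOW_UPS[selection]
    .filter((id) => quotaRemaining[id] > 0)
    .slice(0, 2);
}

console.log(routeRespondent("cars", { fu1: 0, fu2: 12, fu3: 7 })); // ["fu2", "fu3"]
```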

The original article contains a link to a demo survey as well as an example questionnaire file to help you set up this methodology.

Quota logic examples in Design

This in-depth article provides a detailed explanation of how to automatically manage quotas during fieldwork, specifically for complex quotas and/or edge cases such as:

  • Sending an over-quota respondent to a short survey
  • Least Filled quotas

Quota logic example

Each case is fully detailed and provides example surveys to help you adapt the chosen method to your needs!

Local Storage

The Web Storage API provides mechanisms by which browsers can store key/value pairs, in a much more intuitive fashion than using cookies. This API provides two mechanisms:

  • Session Storage: maintains a separate storage area for each given origin that’s available for the duration of the page session (as long as the browser is open, including page reloads and restores)
  • Local Storage: does the same thing, but persists even when the browser is closed and reopened.

This article covers the use of localStorage as it is often used in CAPI surveys, where you want to avoid the agent re-entering the same data twice. A typical use case is an agent interviewing passengers on a single bus line. Once the agent has entered the bus line during the first interview, we want to pre-fill this question for new interviews, while leaving the agent the possibility to edit it later.
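A minimal sketch of that pre-fill pattern – the storage key and element are our own illustration, not the article’s exact code:

```javascript
// Key under which the bus line is remembered across interviews.
const KEY = "capi.busLine";

// Call this once the agent has confirmed the bus line in the first interview.
function saveBusLine(line) {
  localStorage.setItem(KEY, line);
}

// Call this when rendering the question in subsequent interviews.
function prefillBusLine(inputElement) {
  const saved = localStorage.getItem(KEY);
  if (saved !== null) inputElement.value = saved; // the agent can still edit it
}

// Unlike sessionStorage, the value survives closing and reopening the browser.
```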

Check out the article for more details and access to the example questionnaires.

Capture browser’s user agent after every survey screen

A user agent is, basically, an application that acts on behalf of a user. In the case of web browsers, it provides the website or web application with information such as the browser, browser version, operating system, and so on.

Askia captures only one instance of the browser’s user agent in the SQL database: anytime you use the “Browser.UserAgent” keyword, it references the user agent captured in the database, which belongs to the last device that entered the survey or answered a question. This keyword does not keep track of which devices/user agents took part in the survey itself. If you want to record which user agent was used to answer which question, you’ll need to use the snippet of JavaScript included in the article to pull the user agent into an open-ended variable after every screen.
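Purely as an illustration of the idea (the article has the actual snippet), a minimal version could look like this – “ua_openended” is a hypothetical id for the input backing your open-ended variable:

```javascript
// On every screen, copy the browser's user agent into a hidden open-ended
// question, so each screen's device is recorded rather than just the last one.
document.addEventListener("DOMContentLoaded", function () {
  var uaField = document.getElementById("ua_openended");
  if (uaField) {
    uaField.value = navigator.userAgent; // recorded per screen, not just once
  }
});
```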

Improve speed of large Surf set-ups

This article sets out the steps needed for using askia Analyse & Surf to improve the (metadata) speed of Surf set-ups with a large number of .qes (wave) files.

We already had some more general tips to improve rendering speed that are useful for standalone datasets. However, these do not suffice for complex AskiaSurf set-ups comprising a large number of waves. The article therefore details the use of AskiaSurf’s Improve Metadata Speed feature.

Askia User Survey 2017 results

The 2017 edition of our popular Askia User Survey was mailed out this past spring.

We received a record number of replies this year and once again, we were humbled by the tremendous amount of constructive feedback.

Whether it was an appreciative nod to one of our teams, a smart suggestion for improvement, or an insightful story about your personal Askia experience: all your responses are incredibly valuable to us and help us bring the best out of our software.

If you took part in the survey, we’d like to say again how thankful we are for your time. We told you in the introductory video that we would read all of the comments – and we did. There were a lot of them. You sometimes made us laugh. We also sometimes blushed with pride, other times with embarrassment.

Jérôme Sopoçko has personally replied to quite a few users already and whilst he’s still working on it, we thought now may be a good time to share the best bits of our analysis with you all, together with some cool infographics.

The highlights

When reading all your comments, we noticed four recurrent requests for which tools are either already in place, or will be very soon. So before we get down to the nitty gritty, let’s take a look at these all-important questions and the answers we have for you:

Request #1: improved quota management

You wrote: 

I want to use dynamic quotas 

Please can you improve quota management (grouping, quota scripting)

Well, we have good news for you: it’s already in place in version 5.4.4, and this is how to do it. We also recommend that you read Jérôme’s blog post, which is full of valuable information on this topic.

Request #2: managing panels with Askia

You wrote:

We need a solution to manage panels

Can you provide integration with a panel management platform

We have a solution and it’s called Platform One: Askia and Platform One have together created a platform for panel and community management. Platform One is fully attuned to the Askia survey software, so you are guaranteed seamless integration and seamless service.

Request #3: creating my own look & feel using custom codes

You wrote:

I find it next to impossible to get web screens to look how I want them to

I’d like a slider to navigate through the questionnaire

The one page design survey from typeform is highly visual and I’m unable to recreate this look/feel on Askia

Here again, we have a solution: the ADPs enable you to define exactly what code is generated, in the manner of a master page (PHP or ASP). The ADPs are available in version 5.4.6.

Request #4: dashboarding

You wrote:

What I really need is data visualization tools and online dashboarding

The solution is our upcoming revolutionary dashboard developed in partnership with E-Tabs. Read the press release here and stay tuned for more details.

These are the topics that stood out from your comments. Now let’s look at the rest of the survey results. But before we do, I have to take my hat off to Seyf, who put together the very comprehensive analysis that follows: thank you!

The details

What did you like?

We asked you what you like about our products and a number of themes emerged.

To begin with, you praise the fact that our software is powerful and offers endless possibilities, yet remains very easy to use: “Right combination of complexity (i.e. high function) and usability”. The general speed of all the products is often mentioned, as is the overall look and design of the products.

You also like that it’s all in one, and that it covers most Market Research needs across all data collection methods. The range and integration of our products get your approval: you can use the same file format from survey design, through multiple collection methods, to data analysis.

Finally, you give top marks to the flexibility and versatility of the software including the extensibility of products such as Design and Vista.

Many of you rave about our Support team, even though technically they’re not a product; in particular, their knowledge of the software and their response speed get the thumbs up.

What do you like most about our products?

[Word cloud: what you like most about our products]

Our personal favourite was:

“Very flexible and can do almost anything (even making coffee if you have a coffee machine connected to the internet)”

> Dear Askia user, we’re keen to see pictures of your favourite software making a cuppa!

What did you not like?

No software is perfect, and so we welcome your constructive feedback about the things you do not like.

Mostly, you find that some elements are not 100% intuitive, making some tasks unnecessarily complicated or time-consuming.

You would like to see more documentation, including guides for beginners – and our French users ask for documentation in their mother-tongue.

Occasional errors and the odd bug were also mentioned, and the fact that sometimes just fixing the issue isn’t enough:

When you ask for help, you often find the issue corrected for you – but you’d really like to understand why something went wrong:

“Generally the fact that when you ask for help, you often find the issue promptly corrected for you, whereas you would like to know how to correct yourself.”

All points duly noted by our team, and we hope that you’ll soon notice significant improvement on them all!

… and what do you dislike most about our products?

[Word cloud: what you dislike most about our products]

What should we work on?

What percentage of our askia design development time should we spend on these possible features?

[Chart: askiadesign development time allocation]

A large number of respondents, without prompting, mentioned that an improvement to screen design is required: it is too difficult to achieve their requirements. There is a huge appetite for a screen editor that is more WYSIWYG (what you see is what you get).

You’d also like us to work on:

  •         Short captions for responses or an easy way of clearing HTML from question elements
  •         More automation for building surveys
  •         Other specify in loops / grid tables

What percentage of our askia voice development time should we spend on these possible features?

[Chart: askiavoice development time allocation]

Unprompted requests for SMS surveys saw a number of mentions, as did speech-to-text technology, agent and interview evaluations, and more variety/flexibility in the reports offered. Finally, clients consistently wanted dev time to be invested in maximising stability, including better diagnostics, tools and measures to prevent the causes of downtime.

What percentage of our askia analyse development time should we spend on these possible features?

[Chart: askiaanalyse development time allocation]

Apart from the above areas, you would very much like to see the speed and stability for large-volume or complicated data sets improve, in both askiaanalyse and askiavista.

IntelliSense in all Analyse script windows*, more find options* and an askiavista 6 admin mode all saw mentions, as did more progress bars in both apps when running lengthy tasks.

* Already implemented in Analyse 5.4.6.0

What percentage of our askia web development time should we spend on these possible features?

[Chart: askiaweb development time allocation]

You would like to see easier jumping within surveys to aid testing. A parallax or single-page view of any survey would be welcomed, as would the ability to upload and reuse media in a web survey. Online focus groups/panels and the usual request of reopening links which were screened out or quota-failed also saw mentions.

What percentage of our askia face development time should we spend on these possible features?

[Chart: askiaface development time allocation]

The most consistent theme was reducing the number of crashes in the app.

Askia face users also mentioned the following features they would like to see development time spent on:

  •         Message notifications to interviewer devices or in app/survey from supervisors / agents
  •         Remote control or automatic options such as force survey update or auto sync interviews when connected to web
  •         Audio / screen recording of interviews
  •         Easier handling of multiple visit / session interviews

What else would you like to see?

Askia users also asked for: web reporting for all supervisor functionality (and perhaps dashboarding capabilities for this), connection to panel management systems, and media (video / picture / audio file) coding.

Rating the software

  •         82% were either very satisfied or satisfied (up 6% on last wave)
  •         15% were neutral (down 5% on last year)
  •         2% were dissatisfied and less than half a percent were ‘Very dissatisfied’

[Chart: overall satisfaction with the software]

Rating the support

In terms of overall satisfaction:

[Chart: overall satisfaction with support]

  •         80% were either very satisfied or satisfied (up 9% on last wave)
  •         8% were neutral (down 17% on last wave)
  •         3% were dissatisfied and less than half a percent were ‘Very dissatisfied’

Respondents were asked to rate their satisfaction with certain attributes of the support team:

[Chart: satisfaction with support team attributes]

When it comes to improving Support, you’d like us to be better at explaining solutions so that you can learn from them – showing how we figured out the answer and what the underlying causes were. Points that were mentioned a few times:

  •         Support should test their solutions before they send or ask the client to try them
  •         They should listen and try to understand the issue before responding
  •         If it’s a fault of the software, explain what measures will be put in place to stop the same problem happening again
  •         More out of hours support
  •         Easier-to-navigate documentation, blogs and articles – and, of course, as much of it in French as in English!

[Word cloud: suggested support improvements]

How does support compare?

When Askia users were asked to compare Askia support to other technical support service providers, we were rated higher in 2015 and higher still in 2017!

[Chart: support quality compared to other providers]

Your feedback is very important to us, so please do not hesitate to send your comments, questions or – why not – suggestions for next year’s survey!

You can take a closer look at the results in Vista by clicking here.

One last word…

If you’ve read our entire analysis: wow, we’re impressed! But perhaps you’ve just picked the information you needed then skipped directly to the conclusion. Either way, we cannot say it enough: we are extremely grateful for the time you spent on our behalf, first sharing your thoughts and comments in the User Survey, then reading our answers and analysis in this post.

It’s you, the Askia Users, who keep us growing and improving day by day and you’re the reason we get up in the morning (together with software, our loved ones, and a few other things).

Please continue to be an active part of the Askia community and we guarantee we will make it worth your time!

Askia exhibiting at Research & Results 2017

In about a month’s time, over 3,000 Market Research professionals will flock to Munich to attend Research & Results, one of the leading international trade shows for the industry.

Askia is a regular exhibitor at Research & Results and this year again, a sizeable Askia contingent will be in attendance, with team members traveling from Germany, France and Belgium for the occasion.

Visitors, who can register online for free, are sure to be kept busy with 175 exhibitors and over 100 workshops to choose from. This year will also see the introduction of a brand new innovation area.

If you’re planning to attend, please make sure you stop by the Askia stand, #138 in Hall 1. We look forward to seeing you there!

Research & Results
25-26 October 2017
MOC Munich, Germany

Panel providers of the world, unite!

The short story

The industry is demanding more streamlining and automation… the only way that can happen is via standards – so what are the panel providers doing, or proposing to do, in this respect? We would like better visibility of their APIs and the differences between them… and possibly to talk about harmonising some key variables. We think there should be an automated, standard evaluation of surveys in terms of length and complexity, to better pre-evaluate the cost of sample.

We would like panel providers to explain their position – and their added value – in a (wait for it) panel discussion on Thursday the 9th of November in London – ORT House, London NW1 7NE – as part of the one-day ASC conference.

The very long story

I have always wanted to join an English gentlemen’s club. If I moved to the UK, I was going to be Phileas Fogg: travelling the world after a drunken boast and a wager over a bridge game. Last month (after 22 years in the country), it finally happened; I was asked to join the Association for Survey Computing.

I expected a standard acceptance ceremony: arriving blindfolded in a dark room, greeted by men in togas, a solemn oath with my hand on the preserved 15th-century skull of the organisation’s founder, uttering something in Latin – maybe “Nam melius quaestiones”.

I was not disappointed. It was a Thursday-morning Webex call to agree the subject of the November one-day conference. After the usual rambling about the weather (it was a cold September morning with a forecast of rain in the afternoon), roles were assigned. “You’re French,” they said, “you’re good at starting revolutions,” they said. “Write a manifesto!”

And in truth, a revolution is needed. In previous years, the only way to have a lucrative MR business (not that I know much about that) was to delocalise. The new trend is to automate: you standardise a survey (want an ad test?), select the target (nat. rep., sir?) and you have your dashboard with your data ready just as your PayPal account is being debited. For this to happen, you need an automation platform (Zappi Store and GetWizer, for instance) or a survey platform with an API… and you need a sample provider.

And that’s where it gets complicated.

A short digression into the real world

Let’s imagine you have built the perfect automated survey solution… it works nicely and you get results for every wave in exactly 2 hours 47 minutes. But for a given survey, you want to use a different panel provider to reach a very niche B2B target. You contact that specialist panel provider and explain your needs. They are enthusiastic about the idea, and Adam, your contact there, wants to test your survey first – their panellists are special, you don’t get to burn their community like that. After 48 hours, Adam calls you back with a price. It’s on the expensive side but you agree right away because you want the data now – well, you actually wanted it 45 hours and 13 minutes ago. Now he sends you a list of the internet parameters you need to accept in your survey… what was called SG with panel provider 1 is now called SocialGrade, and GE becomes Gender3b… of course, you already know why it’s called Gender3b; they introduced an “other” (and a “prefer not to say”) to the gender question. Your survey scripter says he needs a day (or two) to implement the changes… but he can only start after the weekend because it’s Friday and the web designer who did the icons for the gender question has already gone snowboarding for the weekend.

Here comes Monday: the designer has damaged his knee and you decide to scrap the icons. The client checks the survey on Monday afternoon (they are based on the East Coast) and they want the gender icons back to verify the sample… so you add (early next morning) a nice routing to exit the survey if they say “other”. No soft launch – we don’t have time for that. Quickly (but not quite quickly enough) you realise you have screened out 99% of respondents – your scripter wrote the routing the wrong way round. You call a very unimpressed Adam to stop sending sample. Your guys finally correct the routing, but unimpressed Adam has gone for the day. You eventually get through to him late morning the next day and he agrees to send more sample.

The data fills your automated portal nicely… you start to relax. You shouldn’t: your client has had a look at the data and has noticed something very weird with the student segment. How is that possible? You’ve changed nothing there… until you decide to call Adam, who reluctantly agrees to take your call. He explains calmly that although the internet parameter is indeed SocialGrade, the value 23 does not indicate “Students” but “Deep sea divers”… Did you not read the explanatory document he attached to his email on Thursday last week?

Now you know you are going to have an interesting conversation with both your client and your boss. But you may as well leave it until tomorrow.

And that’s how automation got scrapped in what you must now call your previous job.

The quest

So let’s get back to my personal quest – how can I make automation and surveys better? The answer is simple: by getting panel providers talking to each other.

That’s never going to be easy. Some of them are already panel aggregators and feel they have already done the hard work. Others feel commoditising panels is not in their interest and will drive prices down. Some say it’s simply not possible because their own data is too rich. And all agree that sending sample to a broken or boring survey is the one reason that response rates – along with data quality – are dropping.

And they are right. Data is precious. We need to treat interviewees with respect and that’s not what we do when we send them a 40 minute conjoint survey (and tell them it will last 10). For panel providers to evaluate pricing properly, they need to know how good (and more likely how bad) our survey is.

We need to build metrics on the length of a survey (a lot of data is available there) but also on the boredom index of a survey: number of grids, number of responses per question, number of words per question text, number of questions with similar text, number of mandatory open-ended questions… and prices should vary accordingly.

Another option would be for the price to be fixed by the soft-launch data. At the end of the survey, we measure interview interest and fix the price of the panel accordingly – with a rebate if the full survey data actually comes in below the early measure.

And how do we harmonise panel data? Should we break down questions into categories and sub-categories (demographics, lifestyle, political leaning) and incorporate that in the naming? Can we have the same breakdown across different countries? For which questions? Should the naming convention clearly indicate the number of responses, to avoid coding errors?

Be our panelist for a day

We’ve got so many things to discuss… and we thought it’d be best if we did it in public. You, the panel providers, could tell us what you think… explain what’s special about your company, detail your API or your choice not to have one. And the ASC audience – rather technical but friendly – could tell you what they want and stand witness to your promises. The result could be a standard (national or international), an API router or just an Excel spreadsheet, depending on the uptake… but independently managed – by the MRS, ESOMAR, the ASC or SampleCon.

So please come to ORT House in London on Thursday the 9th of November. Tell me who from your company is ready to speak and take part in the panel’s panel discussion and, in a few lines, give me an outline of how you’d respond to our challenge on harmonising panel data and panel interfaces, by Monday 2 October. We’re looking for original thinking, fresh ideas and practical answers.

Panel Providers of the World, Unite!

Askia party at Esomar 2017!

A quite substantial contingent from Askia will be attending the ESOMAR Congress on September 10-13 in Amsterdam. For those of you who plan to attend, we’ll be at booth #19 in the exhibition space, where we hope you’ll swing by to see us.

The other bit of significant news is that, true to tradition, we’ll be hosting a party immediately following ESOMAR’s Welcome Reception. Details are as follows:

When?

Sunday, September 10th from 9:00 PM until late.

Where?

In De Waag, an intimate bar/restaurant about a 7-minute walk from the ESOMAR Congress.

In De Waag

Nieuwmarkt, 4

1012 CR Amsterdam

Dress code?

Something, anything…

Welcome to the machine

The following is a transcript of a talk given by yours truly and Chris Davison from KPMG Nunwood at the ASC’s One Day Conference on the Challenges of Automation in Survey Research, on May 11th, 2017.

Introduction

We have entered the golden era of automation – in other words: making machines do things. At first it was repetitive and simple things – find duplicates in a sample list, copy that survey, substitute the word Coca-Cola with Pepsi and send the results to all the executives of the relevant company – not mixing up the companies being the perilous part you do not want to get wrong.

Automation is for lazy people – and I have always considered laziness to be a quality! Lazy people [programmers] look to avoid doing things they don’t really have to, and when they do finally have to, they look to get it done with the least amount of effort.

Now, if we are to believe the claims, automation can code open-ended responses (yeah right, Tim!), write reports, win at Go (mark my words – that one is never going to happen) and soon Skynet is going to enslave us (it will be OK as long as we don’t choose the red pill… or the blue pill… damn, I have to remember which one).
Automation is not new. Software is automation. But not everybody is a programmer – well, this is less true at the ASC.

For everyone to benefit from the work done by the best programmers in their sector, Application Programming Interfaces (APIs) were invented. An API is a pile of code (usually documented) inside a box, accessible by other software without having to understand its inner workings (or have access to the source code). Here’s a brief timeline:

Timeline

By 1999 it was cool to be a programmer (and not just at the ASC). Every web designer was now a programmer – writing awful code in JavaScript so they could animate their poorly designed website while growing and grooming their facial hair and sipping their Frappuccino soy latte.

XML was no longer the cool kid on the block; JSON (JavaScript Object Notation) was – more compact, more elegant and directly usable in JavaScript. jQuery – a JS library, since replaced by Angular and then React – made it super easy to query any website, and people started calling me a dinosaur because I am a C++ developer.

Automation was always possible before: to make software interact, you needed a database accessible by both parties. Or to start an executable from the command line… or a file drop on an FTP server. All of these are huge security risks. I am not saying that web services are not security risks – the risks are just less understood, so easier to sell to your CEO.

So what was new in the survey world? Well everything.

Panel providers created APIs – first Cint, then Lucid – which led to an explosion of DIY research. Software providers opened up their APIs and some even documented them. I will not give you the list, but these days even SurveyMonkey has an API.
And for me, the revolution came with CRM systems – like Salesforce, Microsoft Dynamics or Zendesk – opening up the Enterprise world. You can interview any customer after any touch point… understand what’s happening and adapt quickly. It’s what one of our competitors calls the experience gap.

Behaviour is now captured outside of surveys. The “what and when” is known. Surveys can concentrate on the “why?” and the “what if?”. Verve is managing insights for Walgreens, the owner of Boots. Thanks to the loyalty cards, they know you bought paracetamol on the Monday, ibuprofen on the Tuesday and nothing on the Wednesday (with an app and iBeacons, they also know where you have walked) – and they can sample you accordingly and interview you to understand your buying pattern.
So the game is about asking the relevant question at the right time. In my opinion, nobody does it better than KPMG Nunwood, and that’s because – automation looks like magic when it’s well done – there is a wizard in the backroom somewhere in Leeds…

Case study: Nunwood (Chris Davison)

So I’d like to tell Nunwood’s automation story: how it started, the obstacles we faced and where we are now, before later coming on to some of the ideas we are implementing to take us further…
How did we start? Well, I wish I could say we had some far-reaching vision, but in all honesty this is what kicked things off for us…

In case of emergency, panic.

Sadly, there wasn’t much long-term vision associated with our first foray into automation; that came later. What got us moving was necessity. We had several UK projects using bits and pieces of automation techniques, but they still required manual intervention at key points to move the process along.

As we started to expand globally, we were faced with the challenge of processes needing to run at any point in the day – including when our UK team were tucked up in bed. Our first attempt at a solution was a successful failure – we met the needs of the project, but the mish-mash of command line, SQL scripts and Askia was very complicated and wasn’t very accessible to the entire team.

If we were going to extend the automated approach to other projects, it was clear we would need different tools, and this is what brought us to LoadIt. While a simple tool to use, it allowed for a great deal of complexity, meaning new starters could get to grips with it quickly but more seasoned DPers could still deal with our most demanding projects. Later, its extensibility would allow us to integrate with our in-house developed systems, such as our Fizz online reporting platform.


Given LoadIt’s existing integration with Askia, and the automation capabilities within Askia, we soon developed the long-term vision that we were missing – the Zero Hours Project.
Given certain conditions – mainly stability in the project’s design and outputs – could we automate all elements of data collection and delivery?

It was an ambitious goal and one that had to have some compromises – there would always be some elements that would need manual intervention, so “zero hours” really meant “very few hours” but that didn’t quite have the same ring to it.

Discussing these kinds of developments with the whole team raised justifiable concerns that this would mean the loss of people’s jobs, but automation does not necessarily mean reducing head count, and it certainly wasn’t our goal.
The tasks that lend themselves well to automation are the ones that don’t change – removing these from the team’s workload would free up time for the things that required the skills for which they were hired (primarily their problem-solving abilities). The skills required for automation were also different from the typical work, meaning it provided an opportunity for people to learn more and broaden their skill sets. We could also expand the remit of the team – in particular, we took on more responsibility for the configuration of our reporting sites. From an operational perspective, it meant we could go some way towards flattening out the peaks and troughs that had developed in our working patterns – driven by the fact that most of our work was tracking studies that came out of and went into field at very similar times.

Framed like this, it was a very positive message for the team and I’m pleased to say everybody got behind the idea.
The main challenge from the wider business was around quality control: if machines were doing all the work, who was checking it?
It was a valid concern, but one that could still be approached with automation at the forefront. All surveys are based on a set of rules, however complex – each question has criteria that must be met before it is asked – so it’s reasonably simple to check that the rules have been met.
We created VB scripts that could test the rules and output a pass or fail to a set of Excel tables. This meant that the same files we were using for automated checks could also be verified manually or passed to our Insight team should they want to double-check things.
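A hedged sketch of that kind of rule check (the real implementation used VB scripts writing to Excel; this is a JavaScript illustration with hypothetical question names):

```javascript
// Check that every answered question's ask-condition was actually met.
function checkRoutingRules(interview, rules) {
  const failures = [];
  for (const [question, shouldBeAsked] of Object.entries(rules)) {
    const wasAnswered = interview[question] !== undefined;
    if (wasAnswered && !shouldBeAsked(interview)) {
      failures.push(question); // answered although its condition was false
    }
  }
  return failures; // an empty array means the interview passes
}

// Example rule: Q2 must only be asked of respondents who said "yes" at Q1.
const rules = { Q2: (iv) => iv.Q1 === "yes" };
console.log(checkRoutingRules({ Q1: "no", Q2: 3 }, rules)); // ["Q2"] – a fail
```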


So, back to the original question – could we run a zero-hours project?
The answer wasn’t simple. Yes, if you accepted the caveats – when things didn’t change and we considered only the standard elements of the project, we could produce the outputs through automated scripts. No, because an unexpected consequence of the changes was a change to the way our Data team worked with our Insight team: with many of the repetitive, standardised tasks removed, we found we had more time to work on ad hoc requests and deeper analysis of the data – meaning we could add more value to projects.

We had seen many of the improvements we had hoped for, as well as some unexpected ones: operational improvements, better working practices, and we’d started to extend our capabilities.

Paradata (Jérôme Sopoçko)

One of the most exciting areas for using automation and APIs is during (or just after) the collection of the survey. Paradata is often just the date and time of the start of the interview, but more generally it’s about storing any information about the way the interview was conducted. You can find out the name of the browser, the operating system and the language used from the HTTP request header.

If the interviewee is using Internet Explorer 5 (or, more generally, any version of Internet Explorer), do not bother asking technical questions. Similarly, if the operating system is Linux, forget asking technical questions – because you won’t understand the answer.

If IE is brave enough to ask to be your default browser...
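On the client side, a minimal sketch of collecting that kind of paradata could look like this (server-side you would read the User-Agent and Accept-Language request headers instead):

```javascript
// Crude paradata derived from the browser itself.
const paradata = {
  userAgent: navigator.userAgent,
  language: navigator.language, // e.g. "fr-FR"
  isInternetExplorer: /MSIE|Trident/.test(navigator.userAgent),
  startedAt: new Date().toISOString(),
};

// e.g. skip the technical section for IE users, as suggested above
if (paradata.isInternetExplorer) {
  console.log("Skip the technical questions…");
}
```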

Beyond the interview, you can find information about the world. If you interview someone who is boarding a Eurostar train, it’s interesting to check the volume of #Eurostar hashtags in the Twitter API: it’s a strong indication of problems on the line.

Now let’s talk about the IP address – the identifier assigned to you by your Internet Service Provider. Of course, your ISP knows who you are and (in the US) has been allowed to sell your browsing history.
There are a number of companies out there who specialise in transforming an IP address into a geographical position: www.freegeoip.net, www.maxmind.com, www.digitalelement.com, …
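A hedged sketch of asking such a service for the caller’s position – the endpoint shape follows the old freegeoip-style “/json/” API, so check the provider’s current documentation before relying on it:

```javascript
// Ask an IP-geolocation service where this request is coming from.
fetch("https://freegeoip.net/json/")
  .then((response) => response.json())
  .then((geo) => {
    console.log(geo.city, geo.latitude, geo.longitude);
  })
  .catch(() => console.log("Could not geolocate this IP"));
```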

But you can get a much better fix on the geo-location by asking the browser for permission. Google, thanks to their Street View vans, was fishing for Wi-Fi network information to improve location accuracy, which is now at scary levels.

As mentioned, to have accurate geo-localisation you need the permission of the user. The idea is not to gather information in a sneaky way: tell the user what you are doing with this information, and explain that you are reducing the number of questions you ask… because there are so many resources available. openweathermap.org will give you the current weather in any location; developer.zoopla.com will find right away the average house price in the vicinity. And then you have the open-data government sites: data.gov.uk has put up 185,000 datasets. Call me old-fashioned, but I still think we are in Europe – the EU Open Data Portal has 10,700 datasets, with a full API to access them. For free.
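For example, a sketch of the permission-first approach – navigator.geolocation prompts the user before giving up the position, which is exactly the point; the weather request follows the shape of the openweathermap.org API, and YOUR_API_KEY is a placeholder:

```javascript
// The browser asks the user for permission before sharing the position.
navigator.geolocation.getCurrentPosition(function (position) {
  const { latitude, longitude } = position.coords;
  fetch(
    "https://api.openweathermap.org/data/2.5/weather" +
      `?lat=${latitude}&lon=${longitude}&appid=YOUR_API_KEY`
  )
    .then((response) => response.json())
    .then((weather) => console.log(weather)); // current conditions where the respondent is
});
```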

So what can we do with this data? Does it help to know that my interviewee is in Cardiff, where 60% of people voted Remain? Linking big data and survey data is one of the greatest challenges of #MRX – and if you are not one of GAFA (Google, Amazon, Facebook, Apple), you are at a real disadvantage.

Let’s use a concept developed for advertising planning: the Average Issue Readership (AIR).
We ask a significant number of people (very significant in the case of the TGI survey) questions like: “How often do you read this newspaper?” – sometimes rephrased as “When did you last read this newspaper?”. There is still a lot of discussion about which is the best way to ask these questions – they are usually called probability questions.

Grid question

So you get a very classic grid question like the one above. Thanks to these lovely people who do research on research, we get the following probabilities of reading, based on the “recent reading” question.

Recent reading - NRS survey

From there we can infer, for each interview, the probability that the respondent has read any given issue of a newspaper… and the great thing is that we can use that information in crosstabs – crossing it by, say, their likelihood of buying Corn Flakes, and planning our advertising campaign accordingly.
In a normal survey, we simply ask people which newspapers they read and cross that by their gender: when we do a cross-tab, for each interview of gender X that has given brand Y, we add 1 to the corresponding cell. With probability questions, we instead add a value between 0 and 1, indicating the probability of the person having seen the brand.

AskiaAnalyse table with randomised data 01

AskiaAnalyse table with randomised data 02

OK, the data looks a bit weird because the counts have decimals, but once you move to percentages, nobody cares anymore. And of course you can still use weighting if your panel data was not balanced.
So what does that mean for you? Although you never asked about the vote in that fateful referendum, you can cross the NPS of your brand with the vote for leaving Europe, which (from what I have seen at the MRS) is increasingly used as a segmentation tool.
And of course it’s not just the election you can cross by: the level of crime, the amount of subsidy, the likelihood of rain…

Beyond paradata, you can also create additional information from the questions you actually ask. We have worked on a project where the conjoint analysis utilities were computed in real time – that meant automating R (like Ian showed earlier) to get the results a few screens further on, show the best concept for a given user and validate it.
Beyond that, the revolution is also around open-ended question analysis: you do not type open-ends anymore.

You will speak to your device: your computer, but also your phone, tablet, Alexa, your fridge… any IoT device – all of which have ways of recognising you. MyForce, our sister company, works on Bison – a revolutionary platform that goes beyond speech-to-text: it identifies people by their voice (who is talking), classifies the tone and talking speed (how we are talking) and the content (what we are talking about).

It’s not just Bison – look at the APIs Microsoft offers (Microsoft Cognitive Services), and R is integrated in SQL Server…

Microsoft APIs

Google and Facebook are also on the bandwagon (the gravy train).

One of our clients – Nuaxia, through our other sister company Platform One – has a panel of 1,000,000 doctors (not all in the NHS). These guys are in a hurry, but they have interesting things to say. So Nuaxia lets pharmaceutical labs survey them, but only ask them 10 questions. And instead of asking them to type, they film them.

Platform One interface

This is the interface used to create the survey – it’s kept simple: it’s for pharmaceutical people. From there, a survey file is created through the API of a well-known software vendor, the PayPal account of the white-coat guys is debited, the doctors are invited and the data pours in.
After that, the video data is sent to a speech-to-text algorithm, the text data is classified with Artificial Intelligence (à la CodeIt, but not CodeIt yet) and all of it is sent to a dashboard.

Text-driven surveys (Chris Davison)

So, we know how the typical survey is structured, and most have not moved on much from the sort that would have been posted to someone in the distant past: linear structures, with the logic dictated by closed questions. Technology gives us an opportunity to flip this paradigm on its head.
Imagine a survey more like this…

What we’re looking to do is use open-ended questions to determine the route the customer takes through the survey, asking things that are relevant to them and providing a much more tailored survey experience.

Removing the structure from our surveys is, for me, an exciting proposition and live text analysis can be used to do just that.

Create a pool of open-ended questions and, as each one is asked, apply live text analysis to determine which would be the most appropriate follow-up – and continue until either there are no more relevant questions or some constraint, such as a time limit or number of questions, has been reached.
From a respondent’s perspective, they should have a greatly improved experience – far less asking them questions that do not seem relevant; the questionnaire is steered by the issues they want to talk about.
From the analysis side, the data quality should be much greater – in theory you’re asking questions of the respondent that are relevant to them and their experience. Consequently, the ability to understand the story behind the data should also improve.

We can also start to tackle some of the issues facing us, such as falling response rates – when an invite says the survey will last 10 minutes, we can guarantee that: once the time limit is reached, stop picking new questions. Or take a different approach: state the number of questions you’re going to ask and don’t ask any more.
You can always ask the participant’s permission to ask more when you reach the limit, but because you’re asking the questions most relevant to them, you have hopefully got the most interesting feedback up front.
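Put together, the selection loop might look something like this sketch – “relevanceScore” stands in for whatever live text-analysis service ranks a candidate question against everything the respondent has said so far:

```javascript
// Pick the next open end by live relevance, stopping at a question or time budget.
function runTextDrivenSurvey(pool, askQuestion, relevanceScore, maxQuestions, maxMillis) {
  const answers = [];
  const deadline = Date.now() + maxMillis;

  while (pool.length > 0 && answers.length < maxQuestions && Date.now() < deadline) {
    // Rank the remaining questions by relevance to the answers so far...
    pool.sort((a, b) => relevanceScore(b, answers) - relevanceScore(a, answers));
    // ...ask the best one, and record what came back.
    const next = pool.shift();
    answers.push({ question: next, text: askQuestion(next) });
  }
  return answers; // stopped by exhaustion, question budget or time budget
}
```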

There are clearly some analysis considerations – only asking people about topics they’ve expressed an opinion on could introduce some bias, but nothing about this approach precludes randomly selecting questions or sections to provide balance. And you know when you’re doing that – you know the context in which the question was asked when it comes to analysis – so you can even tailor the way it’s worded…
“We know you didn’t mention anything about your experience at the checkouts, but we’d like to ask you about it…”

To take this a step further, you can then allow participants to upload photos or videos, run the same real-time analysis and base the survey route on that.

So while this is a specific example, the key principle for me is that we start to utilise the technological landscape available to us to challenge some of the fundamentals of project design. Connecting through the myriad of APIs helps us create a combination of services that moves our industry forward and opens up new horizons.

More engaging surveys with ADX Studio

ADXStudio is an open-source Integrated Development Environment (IDE) for people who want to create Askia Design Controls or Askia Design Pages more easily and quickly. The application supports AskiaScript, JavaScript, HTML, CSS and more.

ADX Studio app icon

We designed this application to provide a dedicated tool for survey authors who want to take their surveys one step further: interactive survey controls (geo maps, touch-friendly drag and drop, …) or custom layouts (mobile-first survey design). It allows you to easily set the parameters for your survey controls or layouts, use script (AskiaScript and JavaScript) to push the boundaries of your assets, and preview them in order to get real-time feedback.

ADXStudio user interface

You can download ADX Studio or even contribute!

ADX Studio was built with Electron and is based on Node.js. Furthermore, we have included CodeMirror (already used in Design and Vista) in order to provide a complete text editor with syntax highlighting and code completion.

If you want to learn more about ADX Studio, we added two articles in our Help Centre: