Bricks, Innovations and other surprises at Printemps des Etudes 2018

A surprise is waiting for you at Printemps des Etudes on April 5 and 6 at Palais Brongniart in Paris.

Come and share your requirements with us at stand C5 and we will tell you all about our latest innovations…

  • a new version for data collection
  • tips for the transition to GDPR
  • IRIS, our new Dashboard design platform developed in partnership with E-Tabs

… and our 50 commemorative bricks!

At Printemps des Études, 50 bricks to be won on the Askia stand
Is Christine standing guard over the bricks or is she trying to steal them?!

GDPR: what we all need to keep in mind

What is the General Data Protection Regulation (GDPR)?

GDPR is a regulation that requires businesses to protect the personal data and privacy of EU citizens for transactions that occur within EU member states.

The EU General Data Protection Regulation (GDPR) replaces the Data Protection Directive 95/46/EC and is designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens data privacy and to reshape the way organizations across the region approach data privacy.

How can I prepare for GDPR?

You will find lots of advice on the web to help you prepare for the General Data Protection Regulation (GDPR), which will apply from 25 May 2018. We have decided to share these 12 steps from the U.K. Information Commissioner’s Office because we think it is the clearest explanatory list to provide an overview.

  1. Awareness

You should make sure that decision makers and key people in your organisation are aware that the law is changing to the GDPR. They need to appreciate the impact this is likely to have.

  2. Information you hold

You should document what personal data you hold, where it came from and who you share it with. You may need to organise an information audit.

  3. Communicating privacy information

You should review your current privacy notices and put a plan in place for making any necessary changes in time for GDPR implementation.

  4. Individuals’ rights

You should check your procedures to ensure they cover all the rights individuals have, including how you would delete personal data or provide data electronically and in a commonly used format.

  5. Subject access requests

You should update your procedures and plan how you will handle requests within the new timescales and provide any additional information.

  6. Lawful basis for processing personal data

You should identify the lawful basis for your processing activity in the GDPR, document it and update your privacy notice to explain it.

  7. Consent

You should review how you seek, record and manage consent and whether you need to make any changes. Refresh existing consents now if they don’t meet the GDPR standard.

  8. Children

You should start thinking now about whether you need to put systems in place to verify individuals’ ages and to obtain parental or guardian consent for any data processing activity.

  9. Data Breaches

You should make sure you have the right procedures in place to detect, report and investigate a personal data breach.

  10. Data Protection by Design and Data Protection Impact Assessments

You should familiarise yourself now with the ICO’s code of practice on Privacy Impact Assessments as well as the latest guidance from the Article 29 Working Party, and work out how and when to implement them in your organisation.

  11. Data Protection Officers

You should designate someone to take responsibility for data protection compliance and assess where this role will sit within your organisation’s structure and governance arrangements. You should consider whether you are required to formally designate a Data Protection Officer.

  12. International

If your organisation operates in more than one EU member state (i.e. you carry out cross-border processing), you should determine your lead data protection supervisory authority. Article 29 Working Party guidelines will help you do this.

How can Askia help me prepare for GDPR?

From a researcher’s standpoint, the first areas to be impacted, though not the only ones, would be data collection and data processing.

How can I collect data if I can’t store individual responses alongside personal data?

A while ago at Askia, we started to implement easy-to-use features that will prove very helpful in reaching the Holy Grail of GDPR compliance.

  • Restricted access to data
  • Anonymization
  • Encryption
  • Privacy
  • Deletion

Restricted access to data

We have improved the restriction features and added default templates. Every individual who is granted access to the CCA gets his/her access rights via an elaborate set of restrictions. Access to personal data is only available if predefined in those restrictions.

Anonymization

During market research data collection, personal information needs to remain accessible. However, once fieldwork has been completed, we are no longer allowed to store any personal data: it needs to be anonymized.

Askia provides an automatic anonymization feature that modifies personally identifiable information (PII) so that it can neither be displayed in any kind of data visualisation nor exported as part of a data set.

The anonymization process is associated with the restriction features above, which ensure that only data administrators with appropriate accreditation have access to respondent data (until it is permanently deleted).

By default, all personal data will remain unreadable except by the main fieldwork administrator. Askia strongly advises that restriction schemes be validated by the client’s Data Protection Officer so that they match GDPR compliance expectations.

Encryption

We strongly advise that you also apply data encryption to all anonymized data. Askia has added anonymization & encryption features across the board, which can be activated on all existing data as soon as you have updated to V5.4.9 of AskiaField. Encryption is available on both survey and list data.

Privacy

If a respondent requests not to be contacted any more, whatever the data collection mode, his/her personal data, such as phone number or email address, must be added to a Do Not Contact list. Before using any contact list, you need to ensure that none of its contacts match your Do Not Contact list. Do Not Contact lists are available for each data collection mode or for a mix of them.

Deletion

And for those respondents who want to be forgotten, you need to be able to demonstrate that you have deleted their personal information. Askia has introduced “Clean-up”, a feature that generates automatic reports for deletion tasks (surveys, lists, statistics) – supporting the right to be forgotten. Askia advises running the tool regularly while keeping track of the surveys that need removal from the platform. Once a survey is identified, the tool erases all data related to it, whether or not it holds personal data.

Sources

https://www.esomar.org/uploads/public/government-affairs/position-papers/EFAMRO-ESOMAR_GDPR-Guidance-Note_Legal-Choice.pdf

https://www.eugdpr.org/

https://www.csoonline.com/article/3202771/data-protection/general-data-protection-regulation-gdpr-requirements-deadlines-and-facts.htm

https://ico.org.uk/media/1624219/preparing-for-the-gdpr-12-steps.pdf

http://ec.europa.eu/justice/smedataprotect/index_en.htm

Why I joined Askia

It’s my first blog post for Askia and I thought a good initial piece would be to summarise the key reasons why I joined the team here. So, in no particular order…

  1. Jérôme – I have known Jérôme Sopoçko for over 20 years and he is an incredible talent. One of the best developers & technical brains in the MR industry, he is universally well-respected & liked and one of the go-to people the industry turns to for thought leadership. So a chance to work alongside him has always appealed. And despite competing against his company over the years, we always remained good friends and I was invited to every Askia party and received the famous Askia New Year card every time.
  2. Other key Askia staff – as well as Jérôme, I knew that there were a number of people at Askia that I really admired and respected. Patrick George-Lassale is Jérôme’s co-founder and Askia’s CEO. A big guy with a huge heart and a massive smile – I have met him at dozens of industry events and conferences over the years and it was always a pleasure to spend time with him and compare notes. Jamey Corriveau is someone I worked with previously at Quantime/SPSS MR and he is another major industry talent. I always really enjoyed working with him and as soon as I heard about him moving to Askia (to run the US operation), I thought that was a very shrewd move on Jérôme & Patrick’s part. Gaëlle Normand was another person that I respected in the industry, from her time in the UK with SSI & uSamp. She moved back to France and Askia snapped her up a few years ago to look after the marketing side of the business. And there was also a cast of characters that I met from time to time who came across really well – people like Christine Caggia in France, Dietmar Dzierzawa in Germany and Matt Long in London. So I knew that the team at Askia was a strong one.
  3. Fit – there seemed to be a perfect fit between what Askia was looking for and what I could bring to Askia. Having worked for three of the main industry technology providers over the last 25 years (SPSS, Confirmit & Decipher), I had a wealth of experience to bring from these very different organisations.
  4. Capability – I knew that Askia had an extremely capable solution and that most of its clients have been working with Askia for many years and were loyal “fans”. I was especially looking forward to working at a technology provider that was strong on analytics. I knew from trying to compete in the past that AskiaVista & Analyse were extremely good products. There was also the extensibility aspect that many clients talked about. They had been able to build their businesses around the Askia technology, rather than simply using a set product. I believe that increasingly this is what high-end MR agencies are going to be looking to do. And then the pricing model at Askia has always been very interesting – no cost per complete for online surveys, which is different to most of the rest of the competition. That is an intriguing concept to work with…
  5. The wider Askia Group – Askia is part of a group of companies that work closely together. Within the group there is Platform One, which is a really smart panel & community platform that covers both qual & quant research methods. Askia is the survey engine within Platform One, so the two products are nicely integrated – I had heard that it’s a key part of the technology that Verve uses and they have built an amazingly successful business with it over the last few years. It was very exciting to have that kind of capability available in the group, as well as to work with Platform One founder Jon Gumbrell – another of the leading technical talents in the MR industry. Also within the group we have MyForce, a long-term sister company of Askia which provides the autodialler & recording integration for Askia’s CATI solution, as well as doing groundbreaking development involving speech recognition, which could potentially make CATI centre verbatim responses a much more valuable asset (project Bison).
  6. Great partners – I knew that Askia worked with some excellent partners that would provide the opportunity to work with great, specialist solutions and also to reconnect with more people that I have really enjoyed working with in the past. Digital Taxonomy’s new AI coding tool CodeIt is integrated with Askia and that gives me a chance to work again with the considerable talents of Tim Brandwood, Pat Molloy & Rudy Bublitz. Askia and E-Tabs have gone one step further (in terms of partnership) with a commercial joint venture and will be launching the IRIS Dashboard design platform in a few months.

So there you have it. If I were to sum it all up, the reasons for me coming to Askia are a combination of the great people I will be working with, the overall fit, and the strength, depth & reach of the technology that the Askia group is able to provide.

A fortunate chain of events – a dry read

At Askia we love to talk about Askia things… and about a year ago, the technical team got together in a room and agreed on our biggest need: the ability to elegantly call a web service from a survey, decipher the result and store it appropriately.

Web-service not included

I have mentioned in previous articles how an API allows you to extend your para-data. With the IP address that you collect (and that we encrypt – GDPR is watching you), you can obtain the general location of the person. With the location, you can get the weather at the time of the interview and the likelihood they voted for a given party in the last elections.

You could always call a web service by adding some JavaScript to your page, but that was not very elegant… and it also made it hard to hide any authentication method.

So we decided to create a new routing where the web service was called from the server and not from the browser – effectively hiding the call from the interviewee. We took inspiration from the Postman interface and quickly put together a new routing.

The interface allows you to run different scripts depending on the success of the call and to manipulate and store the different parts of the response… and we introduced a new keyword: CurrentHttpResponse.

QueryWebService
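To give a flavour, here is a minimal sketch of a success script – note that the Code and Body members we use on CurrentHttpResponse are assumptions for this illustration, not documented API:

 ' Hypothetical sketch – runs after the server-side web service call
 ' (Code and Body are assumed member names)
 If CurrentHttpResponse.Code = 200 Then
    ' keep the raw body of the response so we can decipher and store it
    Dim myBody = CurrentHttpResponse.Body
 Endif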

At that point, we thought that this had been relatively easy and we contemplated a well deserved visit to the local pub for refreshments.

XML and the Argonauts

As we were putting together an example – calling openweathermap.org to get the weather anywhere in the world – we hit our first problem.

The response looked like this:

<?xml version="1.0" encoding="utf-8"?>
<current>
   <city id="6690581" name="Belsize Park">
      <coord lon="-0.18" lat="51.55"></coord>
      <country>GB</country>
      <sun rise="2018-03-06T06:33:58" set="2018-03-06T17:50:36"></sun>
   </city>
   <temperature value="282.33" min="281.15" max="283.15" unit="kelvin"></temperature>
   <humidity value="66" unit="%"></humidity>
   <pressure value="988" unit="hPa"></pressure>
   <wind>
      <speed value="2.1" name="Light breeze"></speed>
      <gusts></gusts>
      <direction value="200" code="SSW" name="South-southwest"></direction>
   </wind>
   <clouds value="40" name="scattered clouds"></clouds>
   <visibility value="10000"></visibility>
   <precipitation mode="no"></precipitation>
   <weather number="521" value="shower rain" icon="09d"></weather>
   <lastupdate value="2018-03-06T13:50:00"></lastupdate>
 </current>

To get the temperature, we would have had to look for the string “temperature value=” and extract the following digits… it was possible but a bit of a dirty hack, we felt. As stated before, at Askia we love to talk but we hate dirty hacks.

So we started talking about having an XML parser. The cool kids in the dev team took a clear stand: we do not need an XML parser and we would be a laughing stock if we implemented one. What we needed was a JSON parser. Even better, we thought: what if AskiaScript could natively support JSON? Note: I can confirm we did build an XML parser anyway – I hope you are not laughing.

JSON native and the dictionary

So we came up with the following syntax:

Dim myAuthorVar = @{
 "name": "Jerome",
 "age": 21,
 "occupation": "laughing stock",
 "busy": true,
 "children": ["Mackenzie", "Austin"],
 "address": {
    "postcode": "SW12",
    "city": "london"
    }
 }
Return myAuthorVar["occupation"]

We were very excited, but that meant we needed a new variable type – it’s sometimes called an object or a map, but also a Dictionary – and the failed librarians and encyclopaedists that we are loved that… so there it was: the Dictionary. It allows you to store a series of named values in one object. You can set its properties with the Set method, like this: myAuthorVar.Set("busy", False). And you can access them like you would with an array, but by specifying a string instead of a number: myAuthorVar["name"].
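Putting those pieces together, a minimal working sketch:

 Dim myAuthorVar = @{ "name": "Jerome", "busy": true }
 myAuthorVar.Set("busy", False)   ' update a named value
 Return myAuthorVar["name"]       ' access by name – returns "Jerome"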

Variant and Arrays of Variant

I mentioned that it would be a good time to go to the pub when somebody asked what type was returned by a dictionary accessor. In other words, what was the type of myAuthorVar["age"]? The answer is “it depends”… and there was no way of knowing beforehand. Right now it was a number, but if a web service had returned “age” as “fifty-ish”, the result would be a string.

So we had to introduce a new type: the Variant.

If you called myAuthorVar.TypeOf(), it would return “variant”… but inside the variant is a dictionary. So we created a method for Variant to know what was inside and we called it InnerTypeOf. myAuthorVar.InnerTypeOf() does return “dictionary”.
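To make the distinction concrete, a small sketch using the author dictionary from above (the inner types in the comments follow from the behaviour just described):

 Dim myAge = myAuthorVar["age"]
 ' myAge.TypeOf() returns "variant"
 ' myAge.InnerTypeOf() returns "number" here…
 ' …but it would return "string" if the web service had sent "age" as "fifty-ish"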

It was also nice to write @[1, 2, 3] or even @[3.14159, "pear", "apple"] – both are arrays of variants, which we decided to call “arrays” for simplicity.

A variant could hold any of what we decided to call the basic types: number, string, date, dictionary and any array of the types above. OK – let’s go to the pub! But then we remembered that JSON supports Null and Booleans… and because we wanted full compatibility, we had to create two new AskiaScript types: Null (which does not do much) and Boolean, which can only take two values: true or false.

Booleans and back compatibility

This was a can of worms – because we used to consider True and False to be numbers. Let’s imagine some script like this:

 Dim myVariable = (Q1_Name = 7)
 ' … some clever coding…
 myVariable = 42
 ' … more clever coding…
 If myVariable = 42 Then
    ' Save the world ...
 Endif

In classic AskiaScript, this would create a variable called myVariable as a number with a value of 1 or 0 and later taking the value 42 allowing the world to be saved.

We did not want to break backward compatibility. I am going to summarize what was hours of discussion. We decided that comparators (like equal or Has) had to return numbers. If they returned Booleans, setting the variable to 42 would now trigger an error because 42 is not a Boolean. And if we permitted an automatic conversion of numbers into Booleans, myVariable would take the value True (and not 42), which would change the way the scripts ran… and the world as we know it would perish.

Wordy woes

Having spoken for so long, we were quite thirsty, as you might guess. But we realised that our language would become very verbose and somewhat inelegant if we had to convert Variants into the type we wanted whenever we wanted to use them.

In the example above, if we wanted to find out the length of our author’s postcode, we would have had to write:

 Dim hisAddress = myAuthorVar["address"]
 Dim hisAddressAsDic = hisAddress.ToDictionary()
 Dim hisPostcode = hisAddressAsDic["postcode"]
 Dim hisPostcodeStr = hisPostcode.ToString()
 Return hisPostcodeStr.Length

This was ridiculous… it would take ages to write any serious code… and we had better things to do than write verbose code (at that stage I was thinking of all the beers I would not be able to drink if I had to type that much to get my own postcode). So we went back to the drawing board and agreed that

myAuthorVar["address"]["postcode"].Length was all we needed.

This elegant code was only possible if Variants supported ALL the properties and methods of ALL the basic types. That was a lot of unit tests to write. So we focused (no blurred vision) and we wrote them.

This meant a serious rewrite and careful management of conflicts: Format is a method for both numbers and dates, and it acts very differently for each. So we put together a set of rules.

I’ll give you a reference

At that point, we had spent a lot of time on this, we were (very) thirsty, but we wanted it to be perfect. And we realised we had a problem – what if we wanted to change the postcode of our author (by code)?

myAuthorVar["address"] returned a Variant holding a dictionary with the address – a copy of the address. So to change the postcode, we would have needed to write:


 Dim hisAddress = myAuthorVar["address"]
 hisAddress.Set("postcode", "EC2A")
 myAuthorVar.Set("address", hisAddress)

This was again way too verbose. So we decided that accessors (the square brackets [ ] used by dictionaries and arrays) would not return a copy of the address but a reference to the address of the author. This meant that we could write:

 myAuthorVar["address"].Set("postcode", "EC2A")

This added a very serious complication to the code (it’s called pointers, as in dangling pointers in C++)… and that’s very difficult to make work. In the above example (as in life), the variable hisAddress can outlive myAuthorVar. We had to write a lot of unit tests to ensure that everything worked and that we did not have memory leaks.

In short, a variable stops being a reference as soon as you assign it something else.
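A minimal sketch of those rules (the postcode values are invented for illustration):

 Dim hisAddress = myAuthorVar["address"]   ' a reference to the author's address, not a copy
 hisAddress.Set("postcode", "EC2A")        ' …so this changes myAuthorVar too
 hisAddress = @{ "postcode": "N1" }        ' re-assignment: hisAddress stops being a reference
 hisAddress.Set("postcode", "W2")          ' …and myAuthorVar is no longer affected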

AskiaScript Anonymous

We had an ongoing problem with the Value property of questions – and we thought it’d be a good idea to address it now, before we went to the pub.

Q1.Value returns a string if Q1 is an open-ended question. And it returns an array of numbers if Q1 is a closed question with multiple responses. It can also be a number or a date…

Now let’s imagine we have a script like this:

 Dim myVariable = Q1
 ' On Mondays at precisely 12 o'clock
 If Now.Day() = 1 and Now.Hour() = 12 Then
    myVariable = Q2
 Endif
 ' What is myVariable.Value here?

AskiaScript is compiled – it wants to know the type of things before the script is run… but in that example, myVariable.Value could be of a different type depending on the day and time it was run.

And what if we had something like Q1.NextVisibleQuestion.Value?

So we decided that as soon as you put a question into a variable, the variable becomes an “anonymous question”. All methods of an anonymous question would work, but the Value property would be a Variant… And we also decided to make sure that CurrentQuestion was an anonymous question. Problem solved! Drinks, anyone?
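In other words (a small sketch – Q1 and Q2 stand for any two questions):

 Dim myVariable = Q1   ' myVariable becomes an "anonymous question"
 myVariable = Q2       ' legal, whatever the type of Q2
 ' myVariable.Value is a Variant – InnerTypeOf() tells you what it currently holds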

But then we had another huge back-compatibility problem. Let’s look at the following code:

 
 Dim myVariable = QNumeric
 Return myVariable + 1

In classic AskiaScript, the system would add an invisible “.Value” after QNumeric (we call that an implicit property). myVariable would be a number and we would return that number incremented by 1.

But with the introduction of anonymous questions, myVariable was now a question. Facing an operator (the +), we would again add the implicit property .Value. But now .Value would be a Variant, and we had no rule to add a Variant to anything else… until now.

So we made sure that we had rules to add any Variant to another Variant – or to any basic type or array of basic types. Not just add, but also subtract, multiply, divide and compare – including all the keywords like Has, HasNone etc. In total, combining 4 operators and a dozen comparators with 6 basic types and 3 types of arrays (number, string and variant), that made a lot of decisions to take (and a lot of discussions) and many, many unit tests.

Before we started this development, we had 1667 unit tests ensuring that all AskiaScript functions behave the same from one version to another.

For this, we had to add 2231 (!) more unit tests. Once they all passed successfully, we added the whole thing to Suite 5.4.8 and we hope you’ll like it.

Enough Quant Tricks, we’ll be in the pub for a swift one – we deserved it.

“Simplicity is complexity well done” by DemoSCOPE

The following is a translation of an article published by Stefan Klug, Head of Production at DemoSCOPE. It explains DemoSCOPE’s innovative approach to programming complex surveys using Askia software.


While many market research projects today are created using DIY software, DemoSCOPE usually solves technically demanding survey tasks for its clients.
Traditional market research programming has always been, and to some extent still is, to transfer a paper questionnaire into a CAWI, CATI or CAPI application – question by question, sequence by sequence and language by language. We have had several opportunities to gain insight into such questionnaire programs and transfer them to our questionnaire software and modern programming technology.

What is the difference?

In modern survey software, programming as outlined above is no longer necessary. The aim is to avoid redundancy, as in database design or process optimization. Survey parts are not repeated: consistent parts are programmed once and called from all methods, which saves both preparation time and data storage.
This frees attention for user-oriented programming, by which we mean not only responsive web design for online surveys, but also intuitive screen displays for CATI interviewers, displays suitable for CAPI surveys, and much more. In various surveys we have been able to show the potential of these solutions. For example, the transition from CATI to CAWI as a data collection method is carried out seamlessly and without any need for insight into the underlying management software. There are no more browser windows popping up, as would be the case if a CATI survey simply called up a website.

For what types of surveys?

With this technique we can also carry out real mixed-mode surveys, in which the same questionnaire can be interrupted at any point and resumed several times, either with the help of interviewers (CATI, CAPI) or as a self-completion interview by the interviewee (CAWI). We call this “mixed-mode multi-node” because it allows more than one node for the method change. This enables, for example, the interaction of different survey methods in complex, chronologically staggered survey sections. Of course, only one questionnaire script, in all languages, is required for the different survey channels. The choice of language, or a change of language, can be made at any point in the interview, and an interview can also be conducted in several languages.

For longitudinal data collection, this means, for example, that the repetition frequency of the survey becomes a questionnaire component. Arbitrary longitudinal sequences can be stored and called up from the questionnaire without having to define the different sequences or the maximum number of repetitions in the program beforehand.
Survey content can be structured modularly, and modules can be called up depending on survey results, or in randomized order.
It is also possible to conduct such complex survey programmes – e.g. in a company, interviewing several persons (or multiple persons in a household) on different or the same topics, sequentially or at the same time – without ever losing track of the progress of the survey or the current status of processing.

Challenge us

The processes described above can be combined in any way. Do you have ideas for implementing complex surveys?
Get in touch and challenge us. We have not yet been able to fully explore the limits of our software solutions (or of our capabilities and solution orientation) and are curious to see how far we can go – or whether the universe of feasibility has an end at all.

Askia’s point of view: “Pure programming magic”

Jérôme Sopoçko, founder of the survey software company Askia, states:

Jérôme Sopoçko, Askia

“I am always surprised how our survey system is used – after all, it was designed to ask a few questions and calculate a couple of percentages. But what DemoSCOPE has done with some surveys is not far from pure programming magic. For example: the system calls a household in CATI and spawns different web surveys for each person living there – exchanging information between processes through a SQL Server database, waiting for all data to be collected before restarting. The system is designed so that you can follow the history of each household from call to call over the years. The final survey is elegant, efficient and stable. Askia survey programming has been turned into a dark art – and some of the best craft masters work at DemoSCOPE.”

Stefan Klug is Head of Production at DemoSCOPE and a member of the extended management board.
DemoSCOPE news 2/2017

Translated into English and reprinted with the friendly permission of DemoSCOPE AG, Adligenswil, Switzerland.

Askia at Insight Show 2018

The Insight Show, one of the industry’s leading events in Europe, returns to London on 7 & 8 March and Askia will of course be there!

This year, in addition to our presence on stand IC24, we will be on the Insight Showcase stage to talk about data visualisation together with our partner and stand neighbour E-Tabs. It’s happening on Day 1, 7th March, at 14:40.

Registration is free for all market research professionals, so there is no excuse to miss this opportunity to discover the latest industry trends, catch up with your peers and stop by the Askia stand to talk about software in general and in particular: automation, APIs, community management and revolutionary dashboarding.

Interested? Then read the full program for more details and get in touch now to schedule an introductory chat or a demo with our team!

FACTS:

Insight Show, 7-8 March 2018, Olympia Central, London


Come and meet us on Stand IC24

Presentation by Jérôme Sopoçko, Head of Development and Benjamin Rietti, CEO, E-Tabs on 7th March 14:40 – 15:00 on the Insight Showcase Stage: “Easy Visualisation of Market Research Data – The Quest for the Holy Grail”

Panel providers, unite – the speech at the ASC

On the 9th of November, the ASC invited some panel providers to attend a discussion on panel harmonisation. The discussion was orchestrated by Tim Macer.

Here is my speech – the written version at least, as I may have ad-libbed a few unscripted things.


Market Research is changing. You have heard it a million times – but not in the way that Ray Poynter announced. There will be more surveys in 10 years than ever. That’s the good news. The bad news is that most of them won’t be run by MR institutes. The goose with the golden eggs is dead – clients now run their own surveys, which means MR companies – just to stay in business – have to be more competitive.

Goose with the golden eggs (before / after)


They started to offshore to India, Romania or Ukraine. But that was not enough. To save more money, they started to use automation.

This has its advantages – of course the surveys are a little more formatted… but Millward Brown did that successfully for years. Once the bugs are eradicated, automation is efficient, fast and, most of all, cheap. And there is no blockade by disgruntled employees – although that’s more of a French problem.


The problem is that end-clients are following the trend – they can do automation too! They are using Zappi Store and Wizer… and SurveyMonkey and SurveyGizmo and Confirmit (and Askia). And Toluna. And SSI self-serve. And Lucid. And Cint.

I mentioned it at the ASC’s last conference: we have entered a golden age. The age of the API. A golden age for geeks like me, at least: the internet is changing into a gigantic API where information is exchanged through web services. Everything is interconnected and uses the same interfaces.


I do not know if any of you have used IFTTT – If This Then That. It’s an app where you define a condition and an action. If I get near the house, put the lights on. If the temperature gets below 17 at night, put the heating on. If I enter the kitchen in the morning, put the radio on and start the coffee machine. If I have no milk in the fridge, order some. The IoT – the Internet of Things – is happening through one common interface, web services… and all industries are playing ball because they want their share of that big cake of a connected world.


I know we have panel providers on stage, so they might disagree with me. But panel data is no longer the only oil on planet Research. Customer databases are increasingly used because they can be energised by communities. And there are all sorts of big data available at large – aggregated or not. It could be loyalty card data, web footprints or mobile phone data.


And just like for a good Bordeaux wine, to get quality you need to master the art of the blend. The Merlot, a bit dry and earthy – that will be your panel data. There is some cheap Merlot and some very good Merlot too. And the Cabernet Sauvignon with its fruity flavours – that will be your behavioural data.

But unlike the IoT industry, Market Research providers have not decided to play ball. There are those who do not facilitate automation because they are afraid of losing control and of burning panel. And there are those who do, but work in isolation.

I do not believe there can be one company that will fill all the needs in panel data. Toluna is positioning itself as a one-stop shop for all MR needs: the software, the panel and the behaviour. SSI is doing something similar, and the merger with Research Now is going to be very interesting. (Leonard Murphy’s analysis of that on the GreenBook blog was great, by the way.) And it won’t be mere scraps left for the others – because the need for data is growing, and the need for specialised quality data will grow too.


But we need a common language. A common grammar. What is a social grade? How do I define national representativity? And how do I trigger a soft launch? How do I notify that a quota is full?

But there is another side to this discussion. If we let anyone access a survey which is tedious, long and repetitive, with grids, 2 max-diff exercises and one 20-minute trade-off, how do we reward the dedicated weirdos who filled in that nightmare of a survey? How do we warn them that they are in for the long haul? Because we might lose another goose with golden eggs. How can we stop the cull of panellists and the continual drop in response rates?


I suggest we build metrics: number of questions, number of responses per question. And then number of words per question, number of similar questions, number of mandatory open-ended questions… and then build a model.

$(Survey) => (Length(Survey) × TotalTediousness(Survey))⁻¹

And then remunerate the panellists (and their providers) accordingly.

While I was preparing this discussion with all of you, most of you mentioned how slow-moving our industry is. It’s not just that: it’s protective, short-sighted and technologically unaware. And that’s everything the ASC is not. It was at the ASC that triple-S, a format for exchanging survey data between competing survey software, was created and promoted. It was two of my competitors, Steve Jenkins and Keith Hughes, who patiently showed me my errors and taught me how to write a proper triple-S file. Let’s all be a little bit more like them and a little bit less like Apple, which introduces a new plug and a new format with each new version.


That’s my manifesto – a call to arms… please discuss, and let’s move it forward.

Richard Collins becomes Askia’s first Chief Customer Officer

In this board-level role, Richard will manage and develop Askia’s international client base, as well as take overall responsibility for Askia UK office.

Richard Collins

Richard (pictured above) has built a unique track record in the market research industry, playing key roles in leading companies. Most recently he was Chief Customer Officer for Big Sofa Technologies and before that he founded the first international office for Decipher Inc. in London (known as Decrypt and acquired by FocusVision). Prior to that, he held senior positions with Confirmit, Pulse Train and SPSS/IBM.

Patrick George-Lassale, Askia CEO, comments: “It’s the perfect fit at the perfect time! We are ready to take our global business development to the next level. Richard has inspiring skills and experience and we share the same values: we simply had to work together.”

Richard adds:  “It is an extremely exciting time to be joining Askia. We have some important announcements that we are preparing to share over the coming months that will see the company change significantly: both from an organisational and a technological point of view.”

Stay tuned for further details.

MaxDiff grows!

This article provides an in-depth explanation of AskiaDesign’s built-in capacity to manage MaxDiff data collection & analysis methodologies. For those of you who, like me, need a short reminder of what MaxDiff is, this is the definition provided by Wikipedia:

The MaxDiff is a long-established academic mathematical theory with very specific assumptions about how people make choices: it assumes that respondents evaluate all possible pairs of items within the displayed set and choose the pair that reflects the maximum difference in preference or importance. It may be thought of as a variation of the method of Paired Comparisons. Consider a set in which a respondent evaluates four items: A, B, C and D. If the respondent says that A is best and D is worst, these two responses inform us on five of six possible implied paired comparisons:

A > B,  A > C,  A > D, B > D, C > D

The only paired comparison that cannot be inferred is B vs. C. In a choice among five items, MaxDiff questioning informs on seven of ten implied paired comparisons.
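In general, when a respondent picks a best and a worst item from a set of k items, those two answers inform 2k − 3 of the k(k − 1)/2 implied paired comparisons: 5 of 6 for k = 4, and 7 of 10 for k = 5, as above.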

MaxDiff table

We have recently added a new ADC to our offering that allows you to easily create MaxDiff tables in AskiaDesign. This article covers the setup process and usage for such comparison tables:

MaxDiff table ADC

 

This Askia Design Control allows you to easily create the required screen format for MaxDiff surveys. Add the ADC to your resources, drag it on to your Most response block, set any captions you want to appear in the headers of your grid and select the Least question it should be connected to. As with most ADCs, this survey control allows you to customise many parameters, such as:

  • Least Question: when you drag the ADC on to the response block for your ‘Most’ question, this is where you define which ‘Least’ question it relates to.
  • Most Caption: the caption you want to appear in the ‘Most’ column header.
  • Least Caption: the caption you want to appear in the ‘Least’ column header.
  • Centre Caption: the caption you want to appear in the centre column header e.g. this can be information about the loop iteration or screen number.

You can play around with this survey control in the following demos:

Alternatively, you can download (or even contribute to) the MaxDiff ADC from GitHub!

MaxDiff interactive library

When conducting a MaxDiff study, you have a number of different parameters to consider and to produce programming instructions for. At Askia, we have used the R software environment to do this for the different parameters and a large range of options for each. We have created an interactive library in Design which asks you which option you want for each parameter. The result is a greatly simplified process for producing any MaxDiff design with Askia.

The available parameters are:

  • Number of questions: also known as the number of arrangements or number of screens. This is the number of screens the respondent will see during the course of the MaxDiff section.
  • Number of selectable items: this is the number of options to choose between per screen.
  • Number of items: this is the number of attributes or statements you want to include overall in the MaxDiff design.

As of version 5.4.6 of AskiaDesign, you can use our Interactive Library feature to easily create and set up your MaxDiff design with the help of the above parameters:

MaxDiff interactive library

 

Check out the full article for more in-depth information & resources.

Adaptive MaxDiff

As we have seen above, the key point with standard MaxDiff is that the arrangements on screen are pre-set and do not adapt to the responses given in the interview. In addition, the number of selectable options on screen is constant.

In adaptive MaxDiff, however, the number of selectable options changes. After each round of screens, the items selected as Least are removed from the next round. The number of items on screen therefore diminishes until you reach the start of the last round, where you are asked to pick between all those you chose as Most.

The advantage of adaptive MaxDiff is that greater discrimination is achieved between the items of importance. The disadvantages? Well, it could be argued that, since your initial answers create the upcoming arrangements, you do not have as much opportunity to change your mind about items you rated least important in previous rounds.

This article details these differences, provides an example questionnaire to showcase the setup of this methodology with Askia, and gives instructions on using and updating the example file for your own list of items.

New KB article roundup

This article aims to provide you with the best of our most recently published articles on our Help Centre, ranging from AskiaDesign and AskiaSurf to AskiaWeb.

Redirect out of an Askia survey and back again

Sometimes a respondent needs to leave an Askia survey to take part in an external exercise and then return to the survey to complete it. In such cases, it may be necessary to pass parameters from the Askia survey to the external application or page. This article shows an example of these requirements using AskiaDesign.

Check out the full article for more details, access to the example survey and download all the attached resources.

Survey router

This article shows how to route a respondent from a main survey to two follow-up surveys out of a possible six depending on their initial selection and remaining SQL quotas. The seven surveys are set up such that the respondent will always be taken back to the correct position in any of their surveys if they close the browser and then click on the original link again.

The original article contains a link to a demo survey as well as an example questionnaire file in order to help you setup this methodology.

Quota logic examples in Design

This in-depth article provides a detailed explanation of how to automatically manage quotas during fieldwork, specifically for complex quotas and/or edge cases such as:

  • Sending an over-quota respondent to a short survey
  • Least Filled quotas

Quota logic example

Each case is fully detailed and provides example surveys to help you adapt the chosen method to your needs!

Local Storage

The Web Storage API provides mechanisms by which browsers can store key/value pairs, in a much more intuitive fashion than using cookies. This API provides two mechanisms:

  • Session Storage: maintains a separate storage area for each given origin that’s available for the duration of the page session (as long as the browser is open, including page reloads and restores)
  • Local Storage: does the same thing, but persists even when the browser is closed and reopened.

This article covers the use of localStorage as it is often used in CAPI surveys, where you want the agent to avoid re-entering the same data twice. A typical use case is an agent interviewing passengers on a single bus line. Once the agent has entered the bus line during the first interview, we want to pre-fill this question for new interviews, while leaving the agent the possibility to edit it at a later stage.

Check out the article for more details and access to the example questionnaires.

Capture browser’s user agent after every survey screen

The user agent is basically an application that acts on behalf of a user. In the case of web browsers, it provides the website / web application with information about the browser, browser version, operating system, and so on.

Askia captures only one instance of the browser’s user agent in the SQL database, meaning that any time you use the Browser.UserAgent keyword, it references the user agent that was captured in the database – that of the last device to enter the survey or answer a question. The keyword does not keep track of every device/user agent that partook in the survey. If you want to keep track of which user agent was used to answer which question, you’ll need to use the snippet of JavaScript included in the article to pull the user agent into an open-ended variable after every screen.

Improve speed of large Surf set-ups

This article sets out the steps needed when using Askia Analyse & Surf to improve the (metadata) speed of Surf set-ups with a large number of .qes (wave) files.

We already had some more general tips to improve such rendering that would be useful for standalone datasets. However, these would not suffice in the case of complex AskiaSurf set-ups that comprise a large number of waves. The article therefore details the use of AskiaSurf’s Improve Metadata Speed feature.