AskiaField 5.4 is ready for you

We are thrilled to announce that Askia’s entire software suite is ready for deployment in version 5.4.

Among the many features of this major version, we want to highlight the very reason we switched to a new version track.

Quota revamp

quota54

5.4 introduces a total revamp of the quota system. Among the main features:

  • Quotas can now be set up on multi-coded questions and numeric questions.
  • Minimum and maximum quota targets.
  • Easy crossed-quotas.
  • Quota definition, monitoring and quota breakdown by interviewing mode are now on one single view.
  • Copy-paste large quota structures from Excel.
  • New keywords to master least-filled setups with a single routing.

This means a lot more control during the crucial moments of fieldwork: balancing the last interviews so that all quota targets are filled together, neatly and fast.

You can check all new quota specs in the Knowledge Base.

And for the curious crowd, if you’re interested in discovering what’s behind the scenes, here is a good read from a previous blog post.

CAWI to CATI, back and forth

5.4 is our first fully bi-directional multimode version. You can now natively switch interviews from CAWI to CATI and vice versa.

multimode

Field API

5.4 lays the groundwork for extensibility with Field API, allowing you to build ambitious end-to-end automation systems. Upon checking an extra checkbox during setup, AskiaField can now receive orders from the outside. This also leaves the door open for various integrations (with CRMs, CMSs, productivity systems… you decide).

Here is our developer reference if you’re interested.

Getting started

Like what you see? Check out our full KB section to start mastering your new toolbox, and contact our support team to schedule the installation.

Askia is back at Insight Show

The Insight Show strikes back in 2017: new date – 8th & 9th March, new location – Olympia West, and a brand new event organisation. Askia couldn’t miss this opportunity to come back to this major event as an exhibitor once again.

The timing couldn’t be better as we have a wealth of new developments to share with visitors. For starters, we’ll be showcasing our brand new stand layout, reflecting Askia’s new identity; we will also bring our uber-cool Engager for a refreshingly interactive experience; our fantastic sales team will be on hand for live demos and to discuss Askia’s latest product releases and developments.

Here’s some info about Insight. Askia will be on stand ID 604 right at the entrance of the show. Get in touch now to schedule an introductory chat or a demo with our team.

askiavista 6.0.3.9

As the askiavista user base keeps growing, vista Administrators have asked if we could provide broader and more flexible usage statistics. Well, as we start this new year, we’re delivering brand new Activity monitoring with vastly improved capabilities.

Available by default to askiavista Administrators, it will enable them to monitor surveys, users, groups, companies, servers and errors, as well as the different actions and types of analyses that their end-users are running on the platform. Furthermore, for those administering multi-server askiavista instances, per-server statistics will further improve your ability to manage performance across survey and user loads, as well as plan for maintenance or further server provisioning to scale up.

Check out the following screenshot to get an idea of the information you can expect to see on your server:

AskiaVista Activity monitor

And the good news doesn’t stop here… We decided to enable all these new Activity monitoring capabilities in the API, which means developers can include these Activity statistics in their web applications and dashboards!

For a full overview of the options available for the Activity monitoring please refer to the specifications; developers can refer to the updated API documentation.

Another area we’ve worked on in 6.0.3.9 is load-balancing for multi-server architectures. When an end-user logs in, askiavista can now “automagically” assign the user session’s analytical work to the “best” AVS server available in the pool. Basically, askiavista monitors the work-load of the servers available to it and assigns a user session’s analyses to the server that is least solicited at that point in time, thereby significantly improving the response time for returning analysis results to the user.
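As a loose sketch of the idea (the server names and load figures below are invented for illustration, not askiavista’s actual internals), the assignment boils down to picking the least-loaded server in the pool:

```python
# Illustrative sketch of least-loaded server selection; names and
# load figures are invented, not askiavista's actual internals.
def pick_server(servers):
    """Return the server currently handling the least work."""
    return min(servers, key=lambda s: s["load"])

pool = [
    {"name": "AVS-1", "load": 12},
    {"name": "AVS-2", "load": 3},
    {"name": "AVS-3", "load": 7},
]
print(pick_server(pool)["name"])  # AVS-2
```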

Finally, 6.0.3.9 brings many new features and various bug fixes (see our version history for full details):

  • Added support for Nested Edges
  • Added support to pick-up a question’s Long caption when the Short caption is empty
  • Added support for dynamic Universe captions for ??U??
  • Added a select|deselect all button in the Create filter / response window
  • Added the option Appears like a stat calculation for Calculations by script
  • Added a place-holder page which is displayed when accessing deleted Portfolios
  • Added support to show Raw Data for Date variables
  • Added a new Activity page to monitor application usage statistics (accessible via the API)
  • Added support to set a Survey’s Properties’ default Level to: {Row level} or {Column level}
  • Added a warning message when switching from AskiaScript‘s Advanced mode to Assisted mode
  • Improved fail-over and load-balancing features for AVS farming
  • Improved Calculation by script to show a Column total for a variable outside the Level (Wave), i.e. return the Interview Level Column total
  • Improved Number formatting, by ensuring it is not applied on Significance and Column Significativity
  • Improved support for Cleanup scripts’ search & replace functionality
  • Updated the URL-link from within askiavista to our online help documentation
  • Updated askia.config documentation
  • Upgraded LDAP SSO from v5 to v6

New in Askiaface for iOS

We’re pleased to announce that we have released a major update to askiaface for iOS on the App Store; version 3.3.0 is jam-packed with exciting new features and many dead bugs:

Askiafield 5.4 quota management

With askiafield 5.4, we have implemented four new quota keywords which dramatically simplify the setup of your surveys when you want balanced or least-filled quotas.

  1. AvailableQuota: returns the indexes of the TargetQuestion’s responses that are still available (to do > 0), sorted from the largest to-do count to the smallest.
  2. AvailableBalancedQuotas: returns the indexes of the TargetQuestion’s responses that are still available (to do > 0), sorted from max to min using the following formula: (Target% - Observed%) / Target%
  3. QuotaList: returns the complete list of indexes of the TargetQuestion’s responses, sorted from the largest to-do count to the smallest.
  4. BalancedQuotaList: returns the complete list of indexes of the TargetQuestion’s responses, sorted from max to min using the following formula: (Target% - Observed%)
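To illustrate the balancing formula, here is a Python sketch of the ordering used by AvailableBalancedQuotas (the data and field names are invented for illustration; AskiaField computes this internally):

```python
# Illustrative sketch of the AvailableBalancedQuotas ordering, using
# the formula (Target% - Observed%) / Target%. All data is invented.
def available_balanced(responses):
    """Return indexes of responses still open (to do > 0), sorted from
    the furthest behind target to the closest."""
    open_responses = [r for r in responses if r["todo"] > 0]
    def deficit(r):
        return (r["target_pct"] - r["observed_pct"]) / r["target_pct"]
    return [r["index"] for r in sorted(open_responses, key=deficit, reverse=True)]

quota = [
    {"index": 1, "todo": 10, "target_pct": 50.0, "observed_pct": 30.0},
    {"index": 2, "todo": 0,  "target_pct": 25.0, "observed_pct": 25.0},
    {"index": 3, "todo": 5,  "target_pct": 25.0, "observed_pct": 20.0},
]
print(available_balanced(quota))  # [1, 3] - index 2 is already full
```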

Askiafield 5.3.5 quota management

We’ve finally added support for manual quota management! You can now design your mobile Askia surveys while leveraging keywords such as QuotaToDo, MaxQuotaToDo and IsQuotaFullFor to better apply your quota methodology within your questionnaires!

Added Crashlytics support

We have integrated the Crashlytics engine in order to improve our error logging. Crashlytics allows us to locate, down to the exact line of code, any issue that causes application instability. This will greatly improve our response time and efficiency when dealing with pesky bugs!

New 5.4 AskiaScript

We have added support for askiadesign’s 5.4 AskiaScript! You can now make use of many new goodies such as:

  • Run Askia Script: with this action, it is now possible to run a script to complete multiple actions in one go. The items below are among the new methods and properties available on question and interview objects.
  • HasParentChapter: checks whether the question has a parent chapter
  • ParentChapter: returns a string containing the name of the parent chapter
  • AgentID: Identifier of the interviewing Agent
  • AgentName: Name of the interviewing Agent
  • EndTime: Indicates the end date/time of the current interview
  • StartTime: Indicates the start date/time of the current interview
  • Language: Returns the current respondent language
  • Scenario: Returns the current respondent scenario
  • and many more!

Improved memory management

The application has been completely transitioned to Automatic Reference Counting (ARC) in order to improve memory management. Simply put, this allows askiaface for iOS to allocate and deallocate objects automatically, without requiring our developers to manually release unused or obsolete objects from memory.

For you, this will translate into a smoother and snappier experience! Win :)

Miscellaneous fixes

We have also taken the time to fix some issues that some of our users were experiencing:

  • Fixed various issues with routings
  • Fixed issue with ADCs not rendering
  • Fixed issue when syncing interviews with quota data
  • Fixed an issue with AWS resource upload
  • Fixed a memory leak when syncing interviews with AWS resources
  • Fixed an issue where interview description was no longer displayed in Modify interviews
  • Fixed missing interview date in Modify interviews
  • Fixed an issue that would prevent the interview file name from being displayed when the Askiaface Description was missing

You can download askiaface for iOS now or update directly from your iPhone or iPad!

Of Askia Scripts and Functions

Introduction: What are Askia Scripts for? Or should I say, what is their function?

AskiaScripts were designed to evaluate conditions within a survey – at first to branch the survey and then to set values to (often dummy) questions. They needed to be easy to write (and re-read!) and the user should know at creation time if the script was going to succeed or not.

The need to improve AskiaScripts came as our clients’ surveys became incredibly complex – and as we started using our language to produce our ADCs.

Lately, AskiaScripts have been used to run very complex routings – like the post-codification of open-ended responses. We had a request to optimise a routing which had hundreds of lines…

AskiaScripts are also used in Tools to verify the quality of data at the end of collection. It’s here that the demand for functions came loudest – where there is a need, for instance, to standardise the way straight-lining is evaluated for grid questions. Here again we have seen scripts which have thousands of lines.

Finally, AskiaScripts are also used in Analyse to achieve increasingly complex calculations on the fly – and aggregating data while being at interview level.

From the feedback we received, we believe 2.0 is a success, although uptake has been slow (even internally).

I believe AskiaScripts will be used for weird custom adaptive conjoint, very complex calculations at run-time (segmentation) – I think it will also be used in defining and running super portfolios at a later stage.

Let’s summarise the core values of AskiaScripts – knowing some of them can pull in opposite directions:

  • Simplicity
  • Adapted to survey research
  • Reliable: minimise the likelihood of runtime errors
  • Powerful: the competition often uses JavaScript (which offers none of the three previous points)
  • And finally extensible – by Askia and by users

Functions: extending Askia Scripts

Rather than us adding functions whenever they are needed (which will still happen), we have decided to let users create their own functions. Teach a man to fish and you have saved yourself a fish.

A function is a piece of code that you can call with different parameters.

By default, the parameters of a function will be passed by value for our basic atomic types: numbers, strings, and dates. The arrays (and all complex objects) will be passed by reference.

Script Value Reference screenshot

If we want to change the way the parameters work, we can use the keyword ByVal or ByRef to force passing the parameter by value or by reference respectively.

Script By Val By Ref screenshot
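As a rough analogy only (Python has no ByVal or ByRef keywords, so this only mirrors the default behaviour described above): numbers behave like our by-value default, while lists, like AskiaScript arrays, are effectively passed by reference.

```python
# Rough analogy of the by-value / by-reference defaults described above.
def increment(n):
    n = n + 1          # rebinds a local number: the caller is unaffected

def append_item(items):
    items.append(4)    # mutates the list in place: the caller sees it

x = 1
increment(x)
arr = [1, 2, 3]
append_item(arr)
print(x, arr)  # 1 [1, 2, 3, 4]
```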

Let’s talk about scope, baby

Scopes of variables screenshot

A scope defines where a variable is available.

Variable1 is available throughout your script. Referring to Variable2 will generate an error if it’s after the Else statement.

AskiaScript hit the same problem that most scripting languages have had (JavaScript, old VBA, … ). Every variable created is global – unless it’s within a For or an If – or a function.

This might not be a problem when you write a routing condition. It will be if you write an Adaptive Conjoint or a full-on survey analyser. You will need to remember which variables you have already used and name them differently and it will make it very hard to re-use code (the holy grail of any programmer). It also makes IntelliSense (automatic code completion) absolutely unusable.

Every language came up with a different solution to that problem. The original 1960s languages had global variables. Then functions were invented (with parameters passed by value or by address). Then classes and namespaces were invented. JavaScript went another way – it used nested functions to make sure that variables (and sub-functions) were not visible everywhere.

To be or not to be typed, that is the question…

Any variable or method in Askia is strongly typed – this means that at compilation time, we already know the type of the variable. This allows us to know if you can use a method or not for every object.

For questions, this means that we know that Gender.Value is a number (1, 2 or DK) and that FavouriteNewspapers.Value is an array of numbers.

But if we have a function that takes a question as a parameter, we do not know the type of its value: it could be a number, an array of numbers, a string or a date…

Script Typed Question screenshot

Within the function, we say that the question is anonymous. And we have defined its Value to be a Variant. A variant is an object whose type we only know at run-time. For this, you have a few properties that you can use to convert a Variant into something more useful.

A variant has the property InnerType which indicates what it holds. You can convert any Variant into something else with the following methods: ToNumber(), ToString(), ToDate(), ToNumberArray().

Script Variant To String screenshot
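As a hypothetical illustration in Python (not AskiaScript syntax), a Variant boils down to a value whose type is only inspected, and converted, at run-time:

```python
# Hypothetical sketch of a Variant-like wrapper: the held type is only
# known at run-time; conversion methods turn it into something useful.
class Variant:
    def __init__(self, value):
        self.value = value

    @property
    def inner_type(self):
        # Plays the role of AskiaScript's InnerType property
        return type(self.value).__name__

    def to_number(self):
        return float(self.value)

    def to_string(self):
        return str(self.value)

v = Variant(42)
print(v.inner_type, v.to_string())  # int 42
```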

Mods rule!

After a lot of internal discussion, we have decided to define Modules – or namespaces. You will be able to group a set of variables and functions together. By default – unless you specify otherwise – these variables and functions will not be accessible from outside the module; in Object-Oriented Programming, this is called encapsulation.

You will be able to make some of the variables and functions available from outside the module – they will need to be prefixed by the keyword Export.

To clarify everything, let’s have some sample code:

Script Module screenshot

Inside the module, you can refer to the variables MaxAnswers and Pi from everywhere. And you can call any function defined in there.

Outside the module, you will have to write SampleModule1::DoTheCalculation or SampleModule1::MaxAnswers to access the public members.

The default way to create a module is with Module XX / EndModule. You can either include the definition of your module in your condition script OR write it in a file that you add as a resource. These files must have a .asx extension (Askia Script eXtension). To use a module in a routing, you need to call Import + name of the module.

Script Import Module screenshot

Note that a call to SampleModule::PI or SampleModule::DoTheCalculation would return an error.

When Import SampleModule1 is called, all the code which is outside of the function will be run – that is everything in Initialisation a) and Initialisation b) in the example above.
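As a loose Python analogy of the encapsulation described above (the names are invented; Python’s underscore convention stands in for non-exported members, and everything else for members prefixed with Export):

```python
# Loose analogy of module encapsulation: _PI plays a non-exported
# member; MaxAnswers and DoTheCalculation play Export-ed members.
class SampleModule1:
    _PI = 3.14159          # not exported: for internal use only
    MaxAnswers = 5         # exported

    @staticmethod
    def DoTheCalculation(radius):   # exported
        return 2 * SampleModule1._PI * radius

# Outside the "module", only the exported members are meant to be used:
print(SampleModule1.MaxAnswers)           # 5
print(SampleModule1.DoTheCalculation(1))  # 6.28318
```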

AskiaScripts evolve all the time… and we might create a function which conflicts with a user defined one. The user defined one should still work (and be called) once the new version is released – back compatibility is important.

One side effect of what we have decided to do with modules is that variables declared in the main scope will be global to the whole script if modules are not used. We are hoping we won’t regret this in the future, but the aim of AskiaScripts is not to build full-on applications… yet!

Conclusion

Functions and modules will be available in 5.4.5 – released in askiafield in February 2017. We will – at a later stage – introduce newer concepts: true OOP, lambda functions. Imagine two instances of the same module – it’s pretty much like two objects! We might have something like Dim myObject As Module1 somewhere down the line.

I also believe that we will want to add methods to Askia objects – for example Array.RemoveDuplicates().

Script Remove Duplicates screenshot

Note the possible keywords Extends and This (should we call it This or Self?)

But in the meantime, what we have added should make most advanced users happier. We’d love to hear what you think and have you suggest what we do next.

Quota: sticking to the script

Nobody likes quota. They have the off-putting echo of a well-wishing community reluctantly leaving Apartheid behind. If researchers mention quota, it’s because you did not hit the targets. If a financial director mentions them, it’s to tell you how you went over and blew the budget. You do not like quota – and us, programmers, well, it was never our favourite part of the job.

But with askiafield 5.4, we have put that behind us and made quota sexy. We have rebuilt the quota interface and the quota distribution engine.  Upgrading an interface – although time consuming – is rarely a problem. Well, we made it look cool which was quite a bit of work.

Changing the entire quota engine is not something that one should approach lightly. We did it with extra care: we put together hundreds of unit tests (where we predict and verify the output of code) and integration tests (where a full automated run of CCA is monitored and the results analysed).

This refactoring had a few goals:

  • Simplify interface(s): quota definition and the quota monitoring could be done in the same window
  • Add functionality: multiple questions, numeric, grouping responses, remove all limitations on the size of the quota tree
  • Expose through an API: the quota can be defined and monitored from a web interface – or automated from an external system (like Platform One)
  • Clarify quota scripting

This article does not focus on the actual functionalities of the quota – they are documented here – but on the impact of scripting quota through routing.

Why script quota?

Scripts are not usually used for screen-out quotas. These are usually dealt with automatically (by the dialler in CATI or by the automatic settings in quota). You want 500 males in region X – once you have them, the interview is simply terminated.

Typically, you need a script when you have to take a decision about which concept(s) you want to test. You first ask which ads respondents have seen, then randomly pick 2 of them to question them about.

Ideally you want to select the ones that are the least filled – the ones furthest away in counts or lowest compared to the target percentage. And you might have weird priorities to take into account (always test your client’s brand against another one, etc…).

The rules can be complicated but we have provided simple functions for this.

5.3: the unbearable weakness of strings

In 5.3, you had the possibility of querying the state of the quota by using IsQuotaFullFor, QuotaToDo, MaxQuotaToDo, and AvailableQuota.

It did the trick for a while but there were problems:

  • It was dependent on a string (e.g. QuotaToDo(“Region:1; Product”)). It was easy to spell it wrong and only realise near the end of fieldwork that you had misspelled a question.
  • It assumed you knew your quota tree – if you had not nested the Product within the Region (or decided to relax the rules near the end), you would get the wrong result.
  • The returned result was only looking at one quota row at a time.
  • The target counts were not taken into account to prioritise your selection.

Quota in 5.4? Sorted!

Enter 5.4 – well 5.4.4 really. We have introduced new keywords: they are methods of questions instead of functions. In other words, you write something like Gender.AvailableQuota() instead.

  • AvailableQuota: returns an ordered list of responses for the quota which are still open. The ordering is done according to the count: the first element is the response where the highest number of interviews are to be found.
  • AvailableBalancedQuota: Same as AvailableQuota but the ordering is done by the difference between targets and observed.
  • QuotaList: Same as AvailableQuota but all responses are returned (even the ones over quota).
  • BalancedQuotaList: Same as AvailableBalancedQuota but all responses are returned (even the ones over quota).

If you want to specify some additional information about the tree, you can. It works like this: Product.AvailableQuota(Gender: 1, Region: 3). This means no more spelling mistakes will get in the way, as the compiler will pick up on the fact that you have specified an incorrect question.

Another thing: if the gender and the region are specified in the interview, you do not need to indicate them but you could get information about another region for instance.

But from now on, if you need to pick 2 products to test and regardless of the nightmare of a quota tree you may have defined, you should simply write:

Dim arrProductsToTest = Product.AvailableBalancedQuota()

Return {} + arrProductsToTest[1] +  arrProductsToTest[2]

Back compatibility – what is it good for?

You know we care about it. We really wanted to make sure that scripted surveys would work as usual. But we also wanted to ensure that the old weaknesses were gone. So all previous quota functions will work with the old string… but we took the liberty of sorting the result for your convenience… and of checking the whole quota tree in case a priority at top level interfered with one of the nested quotas.

So we have back-compatibility, but not quite: it’s simply better and more flexible – and where the old quota tree was failing, you will now get the expected results. We hope you agree.

Quota categories

The algorithm to know if a quota target applies to a given interview is actually quite complicated but we are going to explain it as simply as we can… feel free to skip this (and trust us).

Let’s imagine we have a quota tree like:

Root                     TO DO
1   Male                  50
2     Product A           40
3     Product B            0
4   Female                40
5     Product A           15
6       Region 1          10
7       Region 2           5
8     Product B           15

Let’s look at how we run the following Product.AvailableQuota (Gender: 2) call:

  1. We look for the availability of the first response (then the second…) – so first we look at Product A.
  2. We count the number of targets we need to attain: one for the question object and one for each of the questions passed as a parameter (Product.AvailableQuota (Gender: 2) would mean 2 targets, Product.AvailableQuota (Gender: 1, Region: 2) would mean 3).
  3. We create a quota category where we set the Product (according to step 1) and we also set the parameters.
  4. For all questions used in the quota, we look in the interview to see if we have data, and we set it in the category.
  5. We iterate through the tree – starting at the root.
  6. When we hit a response for a question that’s defined in the quota category, we either explore the sub-tree or skip the branch. For example, for Product.AvailableQuota (Gender: 2), when we arrive at row 2, we skip that entire sub-tree and continue at row 4.
  7. We count the number of questions we have found which are part of our targets (as defined in step 2). If we are looking for Product A in Product.AvailableQuota (Gender: 2), we hit that target on row 5.
  8. Once we have hit the target, we add all the sub-quota rows. So for Product A in Product.AvailableQuota (Gender: 2) we select rows 5, 6 and 7. All the quota rows? Not quite! If Region 1 was set in the interview, we would not add row 7.
  9. Once the whole tree is scanned, if we have selected 0 rows, we remove one of the targets (like Gender or Region in the step 2 example) and start again at step 2.
  10. We go through all the selected rows and return the To Do with the most constraining value (the maximum of the minimum To Do and the minimum of the maximum To Do). Yes, you might have to re-read that last sentence.
  11. We move on to the next response (Product B) and re-start at step 1.

There is added complexity for groups… if a response is in a group and has no targets, we use the first parent group that has a target.

If a response does not have a target, we assume that the To Do value is 1.
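Here is a much-simplified Python sketch of that selection (it ignores interview data, target removal and separate min/max targets, and hard-codes the example tree above – so it is a toy, not the real engine):

```python
# Much-simplified sketch of the quota-category lookup: each row of the
# example tree is keyed by its path of responses, valued by its To Do.
TREE = {
    ("Male",): 50,
    ("Male", "Product A"): 40,
    ("Male", "Product B"): 0,
    ("Female",): 40,
    ("Female", "Product A"): 15,
    ("Female", "Product A", "Region 1"): 10,
    ("Female", "Product A", "Region 2"): 5,
    ("Female", "Product B"): 15,
}

def todo_for(product, gender):
    """Most constraining To Do among rows matching both responses."""
    rows = [todo for path, todo in TREE.items()
            if gender in path and product in path]
    return min(rows) if rows else 1  # no target found: assume To Do of 1

# Rough equivalent of Product.AvailableQuota (Gender: 2) for Product A
# (rows 5, 6 and 7 above, the most constraining being row 7):
print(todo_for("Product A", "Female"))  # 5
```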

That’s it folks!

Conclusion

We think that AvailableQuota and AvailableBalancedQuota should cover 99% of the scripting needs. We’d love to have your feedback on this of course. We might later introduce a quota object where you will be able to query the actual min and max target or the priority… let us know when you need that and how you think it should work!

Big Data with just one digit

I know some of you think I only attend conferences for the free food, the drinks and the social scene. They are right – no point in me denying. But in-between parties, I tend to heal my hang-overs in the semi-darkness of conferences.

Coming back from the ASC and ESOMAR, there are a few new tendencies in the Autumn/Winter MRX fashion. Forget MROCs, gamification, mobile research, Big Data – that’s so last year… it’s mainstream, dude.

These days the cool kids talk about Automation, Data fusion, Artificial Intelligence… and the Tinderisation of research.

Automation – if you’re an assiduous reader of this blog, you know it’s coming and fully available at a software provider near you. I am not going to ramble anymore about this for now but watch this space.

Artificial Intelligence is the next big thing in Research. It has been successfully used to post-code and (less successfully) to measure sentiment in open-ended questions and tweets. It’s also good at recognising logos and objects in pictures and films, building accurate predictive models and beating me at Go (well, the latter is not news and not strictly research)… but now AI is also used to merge data. There is an inconvenient truth about convenience panels… and MR data in general. If your survey is 40 minutes long (or 20 minutes on a mobile device), the resulting data will be awful: the participants are either too unusual to be trusted or they don’t care because they are not incentivised.

Although there is no evidence of survey length diminishing (according to SSI), everyone agrees that it needs to happen. One way is to… well… make up data. You do not ask all the questions to everyone, and you copy the data around between similar-looking interviews – this is called ascription (it has been around for some time). For you stats geeks out there, it’s traditionally done using the Mahalanobis distance. The new thing is to use machine learning to infer missing data. Mike Murray and James Eldridge from Research Now had a great paper about automating the splitting of surveys into chunks from their XML definition. Annelies Verhaeghe from Insites and John Colias from Decision Analyst also presented two great papers about enriching surveys with open big data.
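For the curious, here is a toy Python sketch of ascription (all data invented; for simplicity it uses a diagonal-covariance Mahalanobis distance, i.e. variance-scaled Euclidean, whereas real ascription uses the full covariance matrix):

```python
# Toy sketch of ascription: copy a missing answer from the "nearest"
# complete respondent. Diagonal-covariance Mahalanobis distance only;
# a real implementation would use the full covariance matrix.
def mahalanobis_diag(a, b, variances):
    return sum((x - y) ** 2 / v for x, y, v in zip(a, b, variances)) ** 0.5

def ascribe(target, donors, variances):
    """Return the answer of the donor closest to the target profile."""
    nearest = min(donors,
                  key=lambda d: mahalanobis_diag(target, d["profile"], variances))
    return nearest["answer"]

donors = [
    {"profile": (25, 1), "answer": "brand A"},
    {"profile": (60, 2), "answer": "brand B"},
]
variances = (100.0, 0.25)  # per-variable variances (age, segment)
print(ascribe((28, 1), donors, variances))  # brand A
```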

And finally, after the uberisation of research which has seen the arrival of monkeys, gizmos, nuts & limes, the new trend is the tinderisation of research. Millennials (there were boos whenever the term was used – and that was every 47 seconds) take decisions with their index finger. Left means no, right means yes… and survey research should follow. It’s easy to understand, fast to answer and it’s your System 1 talking… And the index finger is not just for decisions… the navigation of a survey should be done through flicks of the index. Almost being a millennial myself (the NSA has the names of those who are laughing), I see the attraction… and we are soon to release something code-named Jupiter that might just make Askia (or keep it) the best software for Generations Y and Z.

Latest resources for our users

Even though we’ve been on holidays, some of our dedicated support staff have been working hard to provide you with some great resources for your upcoming surveys: help articles, new survey controls and more!

Here’s an overview of all the new goodies you’ll be able to find on our help centre:

Universe settings

This short article covers all the basics on Universes, an often misunderstood feature of askiaanalyse. It is indeed often confused with another feature: Filters / Sub-populations.

Universe settings screenshot

The article details how Universes do not change the counts but do change the percentages in your tables, and describes each of the values that can be assigned to a Universe for a given table or set of tables:

  1. All interviews
  2. Use selected responses
  3. Use question base
  4. Use answering base

Check out the full article.

Simplify your data analysis with myView

This comprehensive article describes the purpose and use of the myView feature in askiaanalyse. This feature allows users to create an alternative view of the questionnaire tree. In this view you can:

  • Re-order questions / responses
  • Hide questions / responses
  • Change overall structure / indent & create chapters
  • Change captions of questions and responses
  • Create grouped or calculated responses which are assigned to questions in the myView questionnaire tree

Setting up a myView comes in handy in cases such as:

  • Survey files with long structure e.g. when the data file has lots of loops or historic questions and has become difficult to navigate through
  • Files shared between a data processing (DP) team and researchers can contain many variables that are not needed by the researchers.
  • A data file might have sections of it allocated to different researchers e.g. country specific sections. A different .mlv (myView definition file) file can be supplied to each team to show only the profile of questions they will be dealing with.

Read the full article here.

Programming Col Sig (part 4)

Part 4 of the meticulous series of articles on Column Significance is once again a very thorough piece on this calculation and its various settings.

This article is specific to the Student Test using estimator & efficiency coefficient. It demonstrates how to create a table which shows the pre-set Col Sig calculation side by side with the programmed calculation for the same test. It provides all steps as well as an example questionnaire and dataset that you can access directly in the source article.

Cross video survey control

This survey control is similar to our Video control but provides extra support for all browsers and will even fall back to Flash video for those still using legacy browsers.

Cross Video ADC 2.0 header image

You can check out the demo or go grab the survey control!

Light gallery

This brand new ADC 2.0 survey control allows you to add lightboxed image galleries within your surveys. This mobile friendly gallery is fully packed with features such as: zoom in, zoom out, full-screen, keyboard controls, …

Lightbox image gallery

Go take a look at the demo or head to the article to download it!

Target

Another brand new ADC 2.0, Target provides your respondents with a playful survey control for numeric loops, allowing them to drag and drop elements onto a target to assign values to each.

Target survey control

You can play around with the demo or go check the article for more information!

Enter the automation era!

It’s not news: Market Research is doing badly.

A few years back, to improve profitability, most major MR institutes began sub-contracting Survey Programming and Data Processing to Eastern Europe or Asia. That has not been enough. The next step to increase productivity is automation, and the successful launch of Zappi Store has made everyone acutely aware of it.

Zappi Store uses Millward Brown’s or Brainjuicer’s methodology to run very formatted studies, entirely automated, at unbeatable costs. They have a survey with a few customisable parameters – say the name of the brand, the logo and a list of competitors. With that, they purchase the sample and produce a PowerPoint presentation with all the key (automated) findings. Who needs researchers and analysts anymore? Actually, you only need them once – to design the methodology.

At Askia, we have always known that automation was key to improving performance. At our last user conference, we presented what clients did to automate our system… and our system has always been very easy to integrate into a larger enterprise ecosystem, because not all of our clients use our full range. Some just collect data with Askia, some just analyse data with it. So we have always conceived our software as bricks in a very heterogeneous Market Research wall.

The cement between these bricks is import and export to open standards, but also producing and documenting APIs: Application Programming Interfaces – that is, entry points for your own geeks to play with our toys. And if you do not have your own geeks, don’t despair: some independent geeks have decided to integrate our APIs so you can automate Askia tasks.

With the new version of AskiaField (its very sexy name is 5.4.2), we have pushed automation to a new level. You can entirely control AskiaField from a custom-made application – a standalone app or even a web application. Eventually this means that the AskiaField Supervisor will be a web page. But in the meantime, you can write a piece of software that creates a survey script in XML, uploads it to AskiaField, makes it live as a web survey, creates a list from your customer database, emails them, cleans the data by removing speeders with AskiaTools, analyses the data, and produces a report that you can email every morning to your stakeholders – or provides them with a dashboard.
Automation is the future – you are going to need a few more geeks… oh, and some air freshener!
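The first step of such a pipeline – generating a survey script in XML from code – could look something like this minimal Python sketch. The element and attribute names below are invented for illustration; the real schema and upload calls are documented in our Field API developer reference.

```python
# Hypothetical sketch: building a survey script as XML for upload to
# AskiaField. Element names ("Survey", "Question", "Caption") are
# illustrative, not the actual AskiaField schema.
import xml.etree.ElementTree as ET

def build_survey(name, questions):
    """Build a survey definition from (shortcut, caption) pairs."""
    survey = ET.Element("Survey", attrib={"Name": name})
    for idx, (shortcut, caption) in enumerate(questions, start=1):
        q = ET.SubElement(survey, "Question",
                          attrib={"Id": str(idx), "Shortcut": shortcut})
        ET.SubElement(q, "Caption").text = caption
    return ET.tostring(survey, encoding="unicode")

xml = build_survey("CustomerSat", [
    ("q1_age", "How old are you?"),
    ("q2_brand", "Which brand do you use?"),
])
```

A scheduler (cron, Windows Task Scheduler, …) could then run the upload, cleaning and reporting steps against the Field API every morning.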

AskiaAnalyse: team me up, Scotty!

The applications in the Askia range have been designed to work alongside each other. If you know how your survey tree looks in Design, you will not be surprised by how it looks in Analyse. That’s the whole point of an integrated suite.

Analyse was mainly designed for a single user, creating their weightings, calculated variables and filters in their QES file.

When we realised that a lot of our users were working on continuous surveys, we introduced Surf files: the analysis definitions were stored in the Surf files so there was no problem whenever you were adding more data.

However, there were two types of user for whom AskiaAnalyse was not performing well:

1. People importing their data from an external source (e.g. triple-S, SAV, Dimensions etc)

They would import their data, maybe create a question tree structure in Design and create multiple questions or loops in Tools. Then they would create variables, weightings and sub-populations in Analyse and save their portfolios.

Now, if there was any problem with the source data – such as additional data or cleaning errors – the whole import would have to be done again. Of course, we provided ways to speed up the process:

…but we wanted to make things quicker.

2. People within large teams

In large teams, a portfolio can be created by one person and run by another; it’s not always the same people who create the Surf file and use it. Again, we had made sure that portfolios could be shared. If someone created a sub-population within a portfolio, it would be re-created in Analyse – but what about calculated variables, recodes and weightings?

To please all these users, we have implemented a new range of developments:

myView

Firstly, we have implemented myView: it lets users re-organise and hide variables, hide responses, and automatically associate calculated or grouped responses with questions. It’s something we have given a lot of TLC. Since the ‘my limited view’ definition is stored outside the QES or QEW (in an .mlv file), if your survey changes (after a re-import), the myView file can be re-used (and opened automatically).
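To see why keeping the view definition outside the survey file pays off, here is a purely illustrative Python sketch (the real .mlv format is Askia-specific and not reproduced here): because the view matches variables by name rather than by position, it still applies after a re-import adds or reorders questions.

```python
# Purely illustrative: a toy "view definition" kept separately from the
# survey, in the spirit of myView. The real .mlv format is Askia's own.
view = {
    "hidden": {"q_internal_id"},      # variables the researcher never sees
    "order": ["q2_brand", "q1_age"],  # re-organised question order
}

def apply_view(survey_questions, view):
    # Matching by shortcut (name), not position, means a re-import that
    # adds or reorders source variables does not break the view.
    visible = [q for q in survey_questions if q not in view["hidden"]]
    ordered = [q for q in view["order"] if q in visible]
    # New variables the view does not mention yet are appended at the end
    return ordered + [q for q in visible if q not in ordered]

result = apply_view(["q1_age", "q2_brand", "q_internal_id", "q3_new"], view)
# → ["q2_brand", "q1_age", "q3_new"]
```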

Portfolio improvements

Secondly, we have stored a few more things in the portfolio: the weightings and even the calculated variables & recodes. This means that if you use any weighting, calculated variable or recode (as well as any tab-template or sub-population), its definition will be saved in the portfolio. If you re-open that portfolio with a different QES or QEW, all these definitions will be re-created automatically! If they already exist in the survey, the system will warn you if they are different.

Creating loops

Finally, an ambitious development scheduled for 5.3.5.5: we have decided to let people create loops in Analyse (well, they are called levels, aren’t they?). This means that even if your data is in a Surf file, you will easily be able to bring questions into loops without writing edits or transforming several files in Tools.