Case Study - designing new architecture to allow real-time querying across historical data assets

The client had accumulated decades of data for an insights product that was sold to multiple end clients. Each of those clients had bespoke reporting requirements and reporting intervals.

The client was managing their processes and data using a combination of legacy technologies and tools, including IBM SPSS Data Collection (Dimensions), IBM SPSS Base Professional, SPSS files, Quantum, Sawtooth, SQL, Excel, PowerPoint, and Tableau.

The system was inflexible and inefficient. Files had become too large to manage, looped data structures slowed processing, and adding constructions took more than 10 hours, making the process impossible to scale. FTE counts were rising, and staff time was consumed by ‘low value’ tasks.

The client had previously engaged IT experts to overhaul the system, but scope creep caused by the unforeseen complexity of survey data led to the project being abandoned. The client had also experimented internally with alternative solutions, including Snowflake’s data cloud, but these environments did not support the inherently unique requirements of questionnaires and their related data outputs.

The client’s brief was to:

  • Create a centralized repository of all historical and future data 

  • Automate data cleaning and weighting

  • Automate data integration 

  • Create the ability to run real-time queries across the entire data repository

  • Remove the ETL process for conversion to necessary formats, including Excel, SPSS, SQL, and JSON

  • Create a cloud-based dashboard system

  • Automate PowerPoint outputs 

The Sequence team helped the client design a new architecture from the ground up, with DATA(Q) technology at the heart of the process, and delivered the solution, including the necessary development work.
