Getting data activation off the ground is still harder than it should be. Despite having access to an ever-growing range of smart and specialized solutions, data implementations regularly hit a capability wall when it comes to meeting specific company and individual needs.
So, is the old classic of user error at play here, or are tools actually to blame?
The answer is both. Technology that only delivers on some requirements clearly isn’t good enough, but it’s also important for those managing data pipelines to consider construction more carefully from the start — specifically by asking the right questions before sinking in resources.
Prioritization isn’t driving productivity
With more CEOs recognizing that the right mix of data, technology, and people is essential to driving productivity and growth, ensuring insight-driven efficiency has become a near-universal priority.
But while most organizations have strived to achieve this by adopting some form of data funnel or transformation tool, the expected benefits still aren’t materializing. Just 5% of those same CEOs report achieving this winning blend, while other studies show only 39% of senior executives, including chief data and analytics officers, say their company treats data as a business asset.
Issues scuppering data ambitions are diverse. Firms may find their system fails to support a vital data cleansing method, can’t keep up with data schema changes, or lacks the capacity to run transformations at scale. Typically, however, these challenges point to one key problem: after a sizeable investment of time and resources, data structures have hit a capability wall.
Often, the reasons for that are twofold. Firstly, firms haven’t scrutinized potential purchases against all of the capabilities and features needed to realize data goals. And secondly, they have also neglected to ensure basic structural setup is sound.
Piling pressure onto data experts
The boom in high-capacity, cloud-based processing and hosting has spawned dozens of niche software solutions for almost every data use case. Although this specialization shift has given companies wider choice, picking tools by feature risks collating multiple micro-services that do one thing extremely well… and little else. For many, addressing the disorder and divisions this produces involves adopting an ELT (extract, load, and transform) system, or relying on data professionals — with both routes usually reaching a similarly disappointing destination.
While ELT systems may seem like the ideal way to boost flexibility and tackle transformation bottlenecks, most require intensive work to sort through their muddled, amalgamated output and produce usable insights. So, data experts end up under just as much pressure as if they had been tasked with full manual management from the get-go: struggling to juggle implementation and maintenance with increasingly overwhelming stack shuffling, alongside parsing and organizing ever-growing volumes of incoming data.
As a result, data stacks are riddled with capability gaps, and there simply isn’t time to plug them effectively. Many analysts and engineers take the easy route of resolving today’s problems with further bespoke tools, creating new challenges for the future in the process.
Mapping a route to better efficiency
Before blowing technology budgets, the obvious first step is asking targeted questions. That means determining whether tools can meet broad requirements, such as applying transformations across hundreds of profiles without breaking, as well as unique user needs. It’s equally critical to check whether they can prevent time-consuming complications, for example by offering built-in features for flagging and consolidating inconsistently labelled data.
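To make the label-consolidation check concrete, here is a minimal sketch of what such a feature does under the hood. The field names and variants are hypothetical examples, not any vendor’s actual schema:

```python
# Hypothetical canonical names and the raw variants that map to them.
CANONICAL = {
    "cost": {"cost", "spend", "media_spend", "spend (usd)"},
    "impressions": {"impressions", "imps"},
}

def normalize_label(raw: str) -> str:
    """Map a raw column label to its canonical name, or flag it for review."""
    cleaned = raw.strip().lower().replace(" (usd)", "")
    for canonical, variants in CANONICAL.items():
        if cleaned in variants:
            return canonical
    return f"UNRECOGNIZED:{raw}"  # surface unknown labels instead of guessing

print(normalize_label("Spend (USD)"))  # -> cost
print(normalize_label("clicks"))       # -> UNRECOGNIZED:clicks
```

The key design point is the fallback: unknown labels are flagged loudly rather than silently passed through, which is exactly the kind of time-consuming complication the questions above are meant to catch before purchase.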
Insight gained about specific requirements can also be put to smarter use in strengthening existing data frameworks. While composition will vary, designing and developing pipelines to cater for each cross-company use case will deliver better value for all. A pre-configuration model (the ETL approach) ensures individuals can access the right performance metrics, order fulfilment rates, or customer satisfaction scores to make accurate and swift decisions.
Covering certain fundamentals will play an important role in facilitating this efficiency. For instance, building systems that automatically cleanse, merge, connect, and transform multi-source data (instead of just onboarding and dumping it into data lakes and warehouses) will make tailored delivery easier. Moreover, as operations evolve and different needs inevitably emerge, having pipelines that can be adjusted and updated in an instant is also useful to allow room for growth.
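The cleanse-and-merge fundamentals above can be sketched in a few lines. This is an illustrative toy, assuming two hypothetical sources (a CRM export and an orders feed) joined on a shared `id` field, not a production pipeline:

```python
from typing import Iterable

def cleanse(rows: Iterable[dict]) -> list[dict]:
    """Drop rows missing an id and strip whitespace from string fields."""
    out = []
    for row in rows:
        if not row.get("id"):
            continue  # reject unusable records before they reach the warehouse
        out.append({k: v.strip() if isinstance(v, str) else v
                    for k, v in row.items()})
    return out

def merge(primary: list[dict], secondary: list[dict]) -> list[dict]:
    """Join two sources on 'id'; primary fields win on conflicts."""
    by_id = {r["id"]: r for r in secondary}
    return [{**by_id.get(row["id"], {}), **row} for row in primary]

crm = cleanse([{"id": "a1", "name": " Acme "}, {"id": None, "name": "?"}])
orders = cleanse([{"id": "a1", "orders": 3}])
print(merge(crm, orders))  # -> [{'id': 'a1', 'orders': 3, 'name': 'Acme'}]
```

The contrast with the "dump it into the lake" approach is that cleansing and merging happen before delivery, so consumers receive connected records rather than raw fragments to reconcile themselves.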
It’s not hard to see why the rising buzz around modern data tools is bewildering businesses. Suffering from the paralysis of constantly expanding choice, plumping for features they already know and understand seems to make sense. Such a piecemeal approach, however, means firms are engineering their own problems and bear much of the blame for poor tech choices.
Pausing to consider the full picture of organizational data requirements will give them a much better chance of picking the right tools for the job. Moreover, taking care to define specific needs and cultivate a robust base layer of integrated, streamlined, and flexible management will go some way towards avoiding the creation of potential capability walls.
About the Author
Cameron Benoit is Director of Solutions Consulting US at Adverity, which he joined in May 2020. Previously, Cameron provided consulting services for some of the world’s biggest brands, overseeing large-scale process mining projects focused on minimising manual tasks by pinpointing opportunities for enhanced automation. His work on bolstering technology-powered productivity has allowed clients to concentrate on more fulfilling work and gain deeper insights.