Magic 8-Ball – Data Predictions
May 7, 2018

by Johannes Sundén, Regional Presales Manager, Qlik

The Magic 8-Ball was a popular toy when I was young. You'd ask a question, shake the ball, and get an answer. "Will I do well in my presentation at school today?" – "It is decidedly so".

Of course, there was little actual prediction happening; at best, you'd get a laugh out of the answer.

These days, we're looking at data and concepts like machine learning and AI to help us make smart and meaningful predictions. There are many brilliant data scientists in the world who are fine-tuning algorithms and making well-planned, statistically supported decisions about what data to analyze and how to interpret the output.

No doubt we're making fantastic strides in processing data and identifying patterns to help us run our businesses, like predicting which telco customers are likely to discontinue their subscriptions or what the next best offer to suggest to an existing customer might be.

Business users need statistical insights from their data, but turnaround times from overloaded data science teams keep growing, and in many places this has become an insight bottleneck. It resembles the shift from traditional to modern BI, where IT organizations were bogged down with report generation before moving to a more self-service-based BI environment.

Several software companies are trying to bridge this gap by democratizing machine learning and AI with AutoML tools, which test many algorithms and parameter combinations against a set of data to see which combination yields the best prediction.
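At its core, the search an AutoML tool automates can be sketched very simply: score each algorithm/parameter combination on the same data and keep the winner. The toy threshold classifiers below are illustrative stand-ins; real tools search over whole libraries of algorithms and far larger parameter grids.

```python
# Minimal sketch of an AutoML-style search: try several candidate
# models against the same data and keep the one that scores best.
# The candidates here are toy threshold classifiers on one feature.

# Toy dataset: (feature, label) pairs — label is 1 when feature > 5.
data = [(x, int(x > 5)) for x in range(11)]
train, holdout = data[:8], data[8:]

def threshold_clf(threshold):
    """Predict 1 when the feature exceeds `threshold`."""
    return lambda x: int(x > threshold)

def accuracy(model, rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

# The "search space": one algorithm family, several parameter values.
candidates = {f"threshold>{t}": threshold_clf(t) for t in range(10)}

# Pick the candidate with the best training accuracy, then check it
# on held-out data — the step that guards against a lucky fit.
best_name, best_model = max(
    candidates.items(), key=lambda kv: accuracy(kv[1], train)
)
print(best_name, accuracy(best_model, holdout))
```

The held-out check at the end matters: as the data scientists quoted below point out, a search that only optimizes a score can lend unearned confidence to a bad model.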

Among the data scientists I've talked to, reactions are mixed. These tools are great for testing many combinations to narrow in on a suitable family of algorithms for a problem. At the same time, they risk trivializing problems and skipping the main chunk of work data scientists generally engage in: thinking about a problem, planning an approach, preparing data, and accounting for biases and other factors that can muddle the results. At worst, an algorithm gives you all the confidence needed to make a potentially bad decision.

I do believe we're quickly commoditizing areas of business, like churn and next-best-offer solutions, where over time we'll be able to buy more off the shelf rather than rely on specialist in-house capabilities to create one-off solutions.

The real value of putting these algorithms and tools in the hands of knowledge workers is the democratization, where a user can quickly narrow down the set of data she wants to process instead of relying on a preprocessed scenario. For example, when predicting the risk of attrition, or staff leaving, wouldn't it be amazing if an analyst could choose to filter down to certain years of data and omit certain parameters, like age or pay, when trying to answer a data question, without having to ask for a data refresh? As more people become data literate and as machine learning improves, this can soon be a reality.
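The attrition example above amounts to two self-service operations before any model runs: filtering rows to chosen years and dropping omitted columns. A hypothetical sketch, with illustrative field names (`year`, `age`, `pay`, `tenure`, `left`) that stand in for whatever the analyst's real dataset contains:

```python
# Sketch of the analyst's self-service data prep: keep only chosen
# years and drop sensitive parameters (age, pay) before the rows
# reach a prediction model — no data-refresh request needed.
records = [
    {"year": 2016, "age": 41, "pay": 72000, "tenure": 6, "left": 0},
    {"year": 2017, "age": 29, "pay": 55000, "tenure": 2, "left": 1},
    {"year": 2018, "age": 35, "pay": 61000, "tenure": 4, "left": 0},
    {"year": 2018, "age": 24, "pay": 48000, "tenure": 1, "left": 1},
]

def prepare(rows, years, omit):
    """Keep only the chosen years and drop the omitted parameters."""
    return [
        {k: v for k, v in row.items() if k not in omit}
        for row in rows
        if row["year"] in years
    ]

# The analyst's choices: 2018 data only, without age or pay.
model_input = prepare(records, years={2018}, omit={"age", "pay"})
print(model_input)
```

The point is not the code itself but who runs it: when these two choices sit behind a self-service tool instead of a ticket to the data team, the insight bottleneck described earlier disappears for this class of question.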