When machine learning goes wrong

Tom Weiss, Fri 08 February 2019

We read (and have written) a lot about how companies have used machine learning successfully - but what about when they have used it unsuccessfully? There are plenty of examples of that, too. We've all been there. Like anything in life, data science is something we get better at through practice and trial and error - the whole discipline rests on testing things repeatedly, including many things that fail, until we get it right.

So as an antidote to believing too much in popular memes and boot-camp thinking about the magical power of machine learning, we thought we'd look at some of the reasons we have seen machine learning projects fail - and, more importantly, what we can learn from them.

Why machine learning fails

Our data science consulting team thinks there are two broad types of machine learning failure: those your data science team is aware of, and those it isn't. The former are part and parcel of the machine learning process, and of the learning curve of the team running it. The latter can be much more severe.

When you test your model and it hasn't worked, you should (hopefully) be able to learn what has gone wrong - with your model, your data, or the questions you are asking. At any rate, you know there are problems and can begin the task of fixing them. If everything appears to be performing fine, however, a team will go on making the same mistakes, the machine learning will stay flawed, and the model won't do what you want it to do.

In short, there are dozens of reasons projects can fail. Sometimes it starts at the very beginning - by not prioritizing the right business questions. The difference between a good and a bad question can be subtle. "I want to improve retention" and "My churn is on the rise, and I need to stop it" are similar statements - but one is more reactive than the other, and approaching the same problem from these two starting points will result in two entirely different models.
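To make that concrete, here is a minimal sketch in pandas; the file and column names (subscriber_activity.csv, cancelled_next_month, sessions_this_month, avg_sessions_prev_6_months) are hypothetical, and the threshold is purely illustrative:

```python
import pandas as pd

# Hypothetical monthly snapshot of subscriber activity.
df = pd.read_csv("subscriber_activity.csv")

# Reactive framing: "my churn is rising, and I need to stop it"
# -> predict who cancels next month, then scramble to save them.
df["label_reactive"] = (df["cancelled_next_month"] == 1).astype(int)

# Proactive framing: "I want to improve retention"
# -> flag declining engagement long before any cancellation signal.
df["label_proactive"] = (
    df["sessions_this_month"] < 0.5 * df["avg_sessions_prev_6_months"]
).astype(int)

# Same raw data, two different targets - and two entirely different models.
```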

Sometimes it's about the data science team asking the right questions. Unfortunately, a lot of data science boot camps, while excellent at showing students how to proceed once a question has been asked, fail when it comes to teaching how to ask the right questions.

Very similar to asking the wrong question is trying to use machine learning to solve the wrong problems. The typical reason for this is not focusing on business value. Unless the problem you are trying to solve has an obvious value to the business, it is almost certainly going to be a waste of time and energy. Let's look at retention as an example again. Since companies only have a limited retention budget each year, they should focus a model on retaining higher-ARPU (average revenue per user) subscribers, rather than simply trying to retain as many subscribers as possible. Some of those customers may be lower-ARPU subscribers, or subscribers who are fundamentally disloyal (e.g. people who subscribe for a sweetener deal, and then unsubscribe when that deal expires). Focusing on trying to save the highest number of customers, rather than the best customers, is a classic example of a team addressing a problem in a way that looks sensible, but that doesn't solve the issue as a whole.
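As an illustrative sketch (the subscriber data below is invented, and churn_prob stands in for the output of an upstream model), ranking by expected revenue at risk - churn probability multiplied by ARPU - points the retention budget at the best customers, where ranking by churn probability alone picks up the deal-chasers:

```python
import pandas as pd

# Invented scored base: churn_prob would come from an upstream model.
subscribers = pd.DataFrame({
    "subscriber_id": [1, 2, 3, 4],
    "churn_prob":    [0.90, 0.85, 0.40, 0.35],
    "arpu":          [5.0, 8.0, 60.0, 55.0],
})

# Naive targeting: spend the budget on whoever is most likely to churn.
by_churn = subscribers.sort_values("churn_prob", ascending=False)

# Value-aware targeting: rank by expected revenue at risk instead.
subscribers["revenue_at_risk"] = subscribers["churn_prob"] * subscribers["arpu"]
by_value = subscribers.sort_values("revenue_at_risk", ascending=False)

print(by_churn.head(2))  # the low-ARPU, deal-chasing subscribers
print(by_value.head(2))  # the high-ARPU subscribers worth saving
```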

We've seen plenty of examples of data science teams using the wrong tool or technology to solve a problem. Neural networks are a classic example of this - they're de rigueur for data science teams today, but they don't fix everything. They don't work well for subscriber analytics projects, because the business is interested in the inputs to the model (why is subscriber X rated as more likely to churn than subscriber Y, and what does that say about our retention strategy and customer base overall?).
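A minimal sketch of the interpretable alternative, using scikit-learn with invented features and labels: a logistic regression exposes a coefficient per feature, so the business can see which inputs push a subscriber's churn score up - something a neural network doesn't offer out of the box:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features: tenure in months, support calls, on an intro deal (0/1).
X = np.array([
    [24, 0, 0],
    [2,  5, 1],
    [36, 1, 0],
    [3,  4, 1],
    [12, 2, 0],
    [1,  6, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = churned

model = LogisticRegression().fit(X, y)

# The coefficients answer "why": positive values push the churn score up.
for name, coef in zip(["tenure_months", "support_calls", "intro_deal"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```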

But the most common mistake of all? You guessed it - bad data. Either there is not enough data, it is used carelessly (overfitting at the modeling stage by modeling on the entire base with nothing held out), it is the wrong data, or it is of such poor quality that it is unusable. The real issue with dirty data is that it may produce reasonable-looking results, and it might be months or years until someone realizes that something is wrong.
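The overfitting case in particular is cheap to guard against: hold out part of the base and compare train and holdout scores. A sketch on deliberately meaningless synthetic data shows the warning sign - a large gap between the two:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data where the labels are pure noise - there is nothing to learn.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A flexible model memorizes the training set but cannot generalize:
print("train accuracy:  ", model.score(X_train, y_train))  # close to 1.0
print("holdout accuracy:", model.score(X_test, y_test))    # close to 0.5
```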

Machine learning may look like it can produce "magical" results, but it's the much less magical, good-quality data pipeline that is the basis for any successful model. You need to be confident in your data quality before testing your models: only high-quality data will let you know whether the model actually works.
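What that confidence looks like in practice varies by pipeline, but a minimal sketch of pre-modeling checks in pandas (the file and column names - subscribers.csv, subscriber_id, arpu - are hypothetical) might be:

```python
import pandas as pd

def basic_quality_checks(df: pd.DataFrame) -> list:
    """Return a list of data-quality problems found in a subscriber table."""
    problems = []
    if df["subscriber_id"].duplicated().any():
        problems.append("duplicate subscriber_id rows")
    null_rates = df.isna().mean()
    for col, rate in null_rates[null_rates > 0.05].items():
        problems.append(f"{col}: {rate:.0%} missing values")
    if (df["arpu"] < 0).any():
        problems.append("negative ARPU values")
    return problems

df = pd.read_csv("subscribers.csv")  # hypothetical extract
issues = basic_quality_checks(df)
if issues:
    raise ValueError("fix the pipeline before modeling: " + "; ".join(issues))
```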


