Should You Use Machine Learning As A Hammer For Every Nail?

  • Milos Dunjic, AVP, Payments Innovation Technology at TD Bank Group

  • 19.02.2018 07:00 am
  • Machine Learning, Artificial Intelligence, ML, AI
  • Milos Dunjic is a technology executive fascinated with and focused on digital payments innovation. He currently leads and is responsible for TD Bank’s Payments Innovation Technology practice. All opinions are purely his own and not those of his past or current employers.


Machine learning, an important subfield of the broader field of artificial intelligence, is clearly in vogue and all the rage these days. It is undeniably becoming the shiny darling of the tech press. Every day we hear about a new hot startup, proudly carrying ‘machine learning’ and/or ‘artificial intelligence’ in its name, that has either been acquired or showered with a head-spinning amount of fresh VC funding. It seems that even blockchain, after several years in the prime spotlight on its own, can’t compete – especially after showing signs of struggling to visibly disrupt traditional enterprises, mainly due to a lack of standards and a complex set of integration requirements.

I believe that introducing machine learning into an enterprise will prove to be a much smoother journey than trying to introduce a fundamental change such as distributed ledger technology. In my personal view, machine learning should not face the same set of complex integration challenges as blockchain. The initial wave of machine learning applications can be deployed opportunistically and non-intrusively, right beside the existing core enterprise systems and without impact on them, simply consuming the flow of data and ‘learning’ by fine-tuning their behavior. As such, machine learning applications can be viewed mostly as friendly additions, sitting in the ‘shadows’ of their bigger ‘legacy cousins’, patiently consuming available enterprise data and becoming ‘smarter’ over time.

Understanding Underlying Fundamentals Is The Key

However, as with any technology, understanding the fundamentals behind it is crucial for meaningful use. Being fully clear about a technology’s potential, while also being comfortable with its limitations, is the most important element of any rollout and adoption strategy. Lessons learned in the blockchain space clearly teach us that simply believing in the ‘magic behind it’ does not help at all. Unfortunately, based on what we see and read on a daily basis, many AI and machine learning enthusiasts frequently fall into a similar narrative trap. The hard reality is that there is no ‘magic’ behind any technology. You either understand it or you do not.

Going back to the blockchain example, by now it should be obvious that any distributed consensus-based system will ALWAYS perform visibly slower than an equivalent centralized solution. Does that mean blockchain will have no place in our lives? Not at all. On the contrary, there are areas where blockchain may be the clear winner and the most natural fit, such as trade finance and digital identity. But in areas where high throughput and low latency are the primary requirements (retail and transit payments, mobile telephony, stock exchange matching engines, etc.), blockchain may simply not be the right tool for the job – and there is nothing wrong with admitting that.

Similarly, understanding the fundamentals behind machine learning will help you make educated decisions about whether machine learning is an appropriate solution (or not) for the business problems and use cases you have at hand. Many business problems can be solved or automated better and more efficiently using traditional deterministic programming techniques. Again, the very same examples of retail and transit payments, mobile telephony, and stock exchange matching engines immediately come to mind here as well. An efficient set of clear, deterministic business rules (oversimplified, of course) already does a perfectly good job in these cases.
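
As a purely illustrative sketch, the kind of deterministic rule set that already serves payment authorization well might look like the following; the field names and limits are assumptions made up for illustration, not any real scheme’s rules.

    # Hypothetical deterministic authorization rules (field names and limits are assumed)
    def authorize(amount_cents: int, daily_total_cents: int, card_expired: bool) -> bool:
        # Each condition is explicit, auditable, and gives the same answer every time
        if card_expired:
            return False
        if amount_cents > 500_000:  # assumed single-transaction limit
            return False
        if daily_total_cents + amount_cents > 1_000_000:  # assumed daily spending limit
            return False
        return True

    print(authorize(amount_cents=12_000, daily_total_cents=40_000, card_expired=False))  # True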

On the other hand, machine learning may be the best technology choice for:

  • Tasks whose underlying rules are too complex to be clearly understood, specified and programmed: There are a large number of routine tasks that we perform subconsciously every day, but which are extremely hard, if not impossible, for an analyst to specify as a set of well-defined steps and conditions that could be coded in software. Everyday examples of such human tasks are walking, running, driving a car, recognizing speech, and interpreting images. These activities simply depend on too many interdependent environmental factors and sensory inputs, connected by a large number of often overlapping and not-so-obvious rules. In such a fuzzy and fluid environment, it can be impossible for the human mind to fully grasp and capture the problem domain in a set of deterministic specifications that can be accurately coded. Today’s advanced machine learning programs, which “learn from experience,” usually achieve quite acceptable results in these areas after being trained on a sufficiently large set of training samples.
  • Tasks whose sheer volume exceeds human capabilities, i.e. where we simply cannot scale: Another area that is starting to benefit from machine learning is the analysis of very large and complex data sets. Even with clearly specified rules, we may be able to manually sift through a few hundred sample emails looking for spam, but the task quickly becomes overwhelmingly tedious and impossible to accomplish manually when we are faced with several million emails (see the sketch after this list). Similarly, analyzing large astronomical data sets, turning vast medical archives into structured medical knowledge, optimizing web search engines, and discovering and predicting spending patterns from large sets of historical e-commerce transactions are perfect examples of where machine learning can be used very effectively. The vast amounts of digitally recorded data available today, and in the future, represent untapped and rich treasures of potentially meaningful information, way too large and too complex for humans to make sense of without the help of machine learning.
  • Systems with a highly adaptive nature: One painfully limiting feature of deterministic software systems is their inherent inflexibility – once the software requirements have been understood, captured, designed, implemented and installed, the behavior stays unchanged. For the exact same input values, deterministic software will always produce the exact same output, but it will not flexibly handle slight variations of the input unless its rules have been updated and reprogrammed. And it is prohibitively expensive to change and retest software every time a new, unseen input variation is encountered. Typical examples of problems where a degree of behavioral flexibility is required include interpreting handwritten text, recognizing speech, detecting spam, and detecting mood and emotions. The good news is that machine learning systems, with their ability to adapt their behavior based on the input data, offer a potentially elegant solution to such challenges; they are, by their very nature, adaptive to reasonable changes in the data they process.
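
To make the spam example above concrete, here is a minimal, hypothetical sketch contrasting the two approaches: a hand-written deterministic rule versus a classifier that learns the pattern from labeled examples. It uses scikit-learn purely for illustration, and the tiny training set, keyword list and labels are all assumptions rather than a production design.

    # Hypothetical comparison: deterministic spam rule vs. learned spam classifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # A deterministic rule an analyst might write by hand; it only catches
    # wording that was anticipated in advance
    def rule_based_is_spam(email: str) -> bool:
        keywords = ("free money", "act now", "winner")
        return any(k in email.lower() for k in keywords)

    # A classifier trained on labeled examples (toy data, assumed)
    train_emails = [
        "Free money!!! Act now and claim your prize",
        "You are a winner, click here immediately",
        "Quarterly payments report attached for review",
        "Lunch meeting moved to 1pm tomorrow",
    ]
    train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

    vectorizer = CountVectorizer()
    model = MultinomialNB().fit(vectorizer.fit_transform(train_emails), train_labels)

    def learned_is_spam(email: str) -> bool:
        return bool(model.predict(vectorizer.transform([email]))[0])

    # The learned model can generalize to wording the rule never anticipated,
    # provided its training set is large and representative enough
    print(rule_based_is_spam("Claim your exclusive prize today"))  # False
    print(learned_is_spam("Claim your exclusive prize today"))     # likely True with this toy data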

What Are The Limits Of Machine Learning?

Even when machines can ‘learn’ by themselves, the preferred patterns of desirable behavior are still set and defined by humans, primarily by supplying the labeled training and test data sets. The majority of machine learning applications are therefore highly specialized for a specific domain and heavily influenced by the skilled engineers who choose the set of input data features and ‘tell’ the machine what is ‘right’ and what is ‘wrong’. For example, a fine-tuned deep neural network that is pre-trained for face recognition is going to be terrible at detecting email spam or financial fraud. It would at least have to be fully re-trained from scratch for the new problem – which limits our ability to apply a pre-trained neural network to brand-new problem domains at will.

Choosing a learning algorithm in fact means deciding which patterns a machine will be good or bad at recognizing. No universal learning algorithm exists. The same holds for the human brain: a highly trained ‘English-speaking’ brain immediately struggles when faced with the task of interpreting or understanding Russian, despite both being instances of well-known human languages.

All in all, machine learning is a very interesting and powerful systems development approach which, for some types of problem domains – especially those identified above – vastly outperforms traditional deterministic programs in both accuracy and flexibility. That is excellent news for the business and software engineering community.

However, despite what you may be told by an over-enthusiastic futurist, a random travelling salesman or an eloquent consultant, machine learning is neither a panacea nor a universal remedy for every type of business problem. Its usefulness for a particular business domain depends heavily on the skill of your data scientists and engineers – i.e. those in charge of identifying the right set of input features, choosing the right type of learning algorithm and properly training your enterprise AI engines.

In other words, highly creative humans will still be required going forward, even in the new world of advanced machine intelligence. Machine learning simply doesn’t eliminate the demand for skilled engineers. Humans still need to understand the nature of the problem first and then instruct the machine on how to teach itself to deal with it. Despite the recent hype, nothing fundamentally changes in the way professional software engineers will approach solving business problems in the future – only the available tools will be different. And tell your HR department that the competition for real technology talent is only going to get tougher.
