Unless you have been living under a rock, you will know that machine learning, and more broadly artificial intelligence, is one of the hottest topics in IT right now. And while the topic is certainly overhyped, there is actually some meat on the bone, i.e. something real underneath all the noise. Because of this, major tech players around the globe are starting to make “bet the company” style investments on the future being a highly AI-centric world.
We have all heard the dream of curing cancer and self-driving cars, and you have probably also heard the negative commentary about mass unemployment and social unrest. But what does machine learning mean to the enterprise now and in the near future?
Firstly, let’s talk quickly about what machine learning actually is. If you based this on popular media, you would think machine learning is sitting down with a computer, who looks like a robot, and teaching it how you do something like accounting. Once done, you’re then free to go off and fire all your accountants (sorry accountants, nothing personal, just an example!). Nothing I am aware of in the AI space today would even come close to this. At the risk of being a downer, machine learning is actually a set of models that use complex mathematics to learn relationships and patterns from data for the purpose of future classification or prediction. That’s it! Machine learning is number-crunching code that takes data in and spits numeric predictions/classifications out.
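To make “data in, predictions out” concrete, here is a toy sketch in plain Python: a one-nearest-neighbour classifier, one of the simplest machine learning models there is. The example data (study/sleep hours predicting pass/fail) is purely illustrative, but the shape is the same as any real model: learn from labelled examples, then classify new inputs.

```python
def predict(training_data, new_point):
    """Return the label of the training example closest to new_point."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda example: distance(example[0], new_point))
    return closest[1]

# Labelled examples: (features, label), e.g. (hours studied, hours slept) -> outcome
training_data = [
    ((1.0, 4.0), "fail"),
    ((2.0, 5.0), "fail"),
    ((8.0, 7.0), "pass"),
    ((9.0, 6.0), "pass"),
]

# A new data point lands near the "pass" cluster, so the model predicts "pass"
print(predict(training_data, (8.5, 6.5)))
```

Real models replace “find the nearest example” with far more sophisticated mathematics, but the interface is the same: numbers in, a prediction out.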
So why all the fuss about machine learning if it is just some form of psychic calculator?
Humans are good at programming computers to do complex things when those complex things can be broken down into a series of steps of reduced complexity that we can “get our head around”. However, we increasingly expect more complexity from our computers. We want to talk to them and have them understand. We want them to recognise images and classify them appropriately. We want them to help diagnose illnesses more effectively, eliminate risk and take burden out of our daily lives. To do these things we have to deal with highly complex relationships that can’t easily be represented in conventional ways. Essentially, we have been trying to have the “human” solve the complexity problem first, then instruct the computer how to replicate our way of thinking so it can solve it too.
But when trying to translate any real-world occurrence into something our computers understand, our efforts have been good, but sometimes not good enough for widespread use. The number of variables and relationships has been too complex (sometimes these problems have thousands or millions of variables), and our limited ability to comprehend them leads to us telling the computer how to do things in ways with inherent flaws and weaknesses. How many times have you used voice recognition that understands some things but acts like you are speaking gibberish at other times? Or written a document where the spell checker fails to find the correct spmelling of a word, as if you’re making up your own words as you go? How many times have you let your car drive itself, only for it to end up in a paddock (ok, bad example)?
The machine learning revolution has come because we have thrown our hands up in the air and said “it’s all too hard, you work it out” to our computers*. Instead of giving our computers specifically coded instructions, we are now giving them data and asking them to “learn” how to best predict the outcomes we need. And computers don’t get confused when dealing with immense complexity in data that may have billions of items and thousands of variables; instead, complexity translates into longer processing time. Enter clever optimisation methods and hardware (GPU/FPGA) acceleration and boom, you have a fundamental change in how we do things.
* Ok more correctly, machine learning builds on 40+ years of research and development, with modern advances in computing power and scalable algorithms making it a practical solution.
Machine learning is a generic approach that we can apply to a vast set of prediction problems where we have sufficient data available to train on. And by prediction I don’t mean trying to guess the lotto numbers; any time a computer is trying to “understand” something, that is a form of prediction. Spell checking is prediction, shopping recommendations are prediction, your credit risk is prediction, which link you will click on a site is prediction, which marketing offers you will respond to is prediction, the identification of fraudulent transactions is prediction. The list goes on and on, including more subtle forms such as the accounting categorisation of a business transaction, the expected delivery time of an order, auto-completing search boxes and so on. All prediction. And by combining this prediction with new forms of input (sensors, devices, IoT) and output (automation, robotics) we can do some pretty cool things.
By giving the computer data and guidance and “letting it learn”, we are often now able to produce a better outcome than if we had tried to program the specific logic ourselves. In some cases, decades of research into problem-specific algorithms have been replaced (or enhanced) by generic machine learning capabilities. For example, in one online machine learning course, one of your first projects is to create a handwriting recognition program which translates images of handwritten letters into their equivalent ASCII codes. In the old world this was a massively difficult problem: not printed text but actual handwriting, with the need to deal with the millions of variances in the way people write by hand. In the new world, armed with a large library of correctly labelled source images, we can train a machine learning model on this data that reliably translates new handwritten images to text. All in a few dozen lines of code.
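In fact it can be done in far fewer than a few dozen lines. A minimal sketch, assuming the scikit-learn library is installed: it trains a support vector classifier on scikit-learn’s bundled digits dataset (8×8 images of handwritten digits 0–9, a smaller cousin of the letter-recognition exercise described above) and checks its accuracy on images it has never seen.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # ~1,800 labelled images of handwritten digits

# Hold back a quarter of the images so we can test on unseen handwriting
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = SVC(gamma=0.001)   # the "learning" step: fit the model to the labelled data
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
print(f"Accuracy on unseen handwriting: {accuracy:.1%}")
```

No handwriting-specific logic appears anywhere in the code; the model learns the shapes of the digits entirely from the labelled examples.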
To support the world’s desire for AI capabilities, we are seeing major AI platform vendors start to commoditize machine learning. Commoditization basically means making it usable to a wider audience than a select group of PhDs, statisticians or quants. It generally also means “black boxing” machine learning in ways that don’t require implementers to understand in great detail why their machine learning models work. Instead, they just need to understand how to plug these models into their applications so the models can learn from the data generated once deployed and use this to guide application functions. As I have mentioned before, this consumerization carries some risk related to the ethics and astuteness of those building/testing these black-box models; however, my general feeling is that this is the way forward for many mainstream requirements.
Ok, so we have covered what machine learning is. What steps will enterprises take to implement it in their organisations?
Well… nothing
I believe many, if not most, organisations will start receiving the benefits of machine learning by doing nothing. Well, not absolutely nothing, but no direct research or investment into machine learning or AI. Instead, they will work with existing software vendors to update their applications. Over time, it will be the software vendors who do the implementing mentioned above and provide unified machine learning capabilities within core application functionality.
Many apps in use today will add machine learning enhanced features and capabilities. Some of this will impact usability: the apps will seem more in tune with what users do and how they do it, and apps providing prediction, alerts, notifications and/or guidance will seem to become more accurate over time and give users less noise to deal with. Largely this will be transparent, other than the IT department reporting fewer monitors pushed off desks and keyboards thrown out windows in frustration. This will be fairly universal across the spectrum of app classes: ERP, CRM, financials, HR, payroll and so on.
Over time, these applications may take this integration further and start to pair machine learning with automation to provide smart workflows that start to fundamentally transform the way organisations do business. This is when things may start to become highly disruptive to the status quo and may begin changing jobs, eliminating some and creating others. However, those implementing remain focused on the functional business outcomes rather than needing an in-depth understanding of the AI technology driving them.
Already, major vendors from Microsoft, SAP and Salesforce to IBM are working to integrate AI into their existing product lines, and it is this “AI inside” approach that, I think, is how most enterprise organisations will be impacted by machine learning in the near term.
New Classes of Apps
Integrating machine learning with existing applications can start to drive improved usage and support better decision making, but new classes of applications are also becoming available to the enterprise, classes that are only possible because of the advancements in AI. These new applications allow organisations to start driving new efficiencies, improving customer service, strengthening security, and getting new product ideas to market faster.
One of the key new classes of apps is bots. A bot is an application that combines natural language processing (NLP) with machine learning to “understand” a request from a customer and then “predict” the most likely correct answer. Bots can be set up to receive questions from customers via email, web form and so on. They process the message and work out their level of confidence in their ability to accurately understand the question. If confidence is high, the bot may answer the question itself; otherwise, it passes it through to the customer service team. This may cover questions ranging from “What time do you close today?” and “What’s your address?” to the more personalised “What’s my account balance?”. Bots can continuously learn from past interactions to improve their ability to answer more questions, more accurately, in the future. This has the potential to significantly reduce customer waiting time for simple questions and allow customer service teams to spend more time on customers with complex questions or issues.
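The confidence-threshold routing above can be sketched in a few lines. This is a deliberately toy illustration: the keyword-overlap “classifier” stands in for a real NLP intent model, and all the names (`answer_or_escalate`, `CANNED_ANSWERS`) are hypothetical, not from any particular bot framework.

```python
import re

# Canned answers keyed by intent; a real bot would have a trained intent model
CANNED_ANSWERS = {
    "opening hours": "We close at 5pm today.",
    "address": "We're at 1 Example Street.",
}

def classify(question):
    """Toy intent classifier: returns (intent, confidence in [0, 1])."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    best_intent, best_score = None, 0.0
    for intent in CANNED_ANSWERS:
        intent_words = intent.split()
        # Confidence = fraction of the intent's keywords present in the question
        overlap = len(words & set(intent_words)) / len(intent_words)
        if overlap > best_score:
            best_intent, best_score = intent, overlap
    return best_intent, best_score

def answer_or_escalate(question, threshold=0.5):
    intent, confidence = classify(question)
    if confidence >= threshold:
        return CANNED_ANSWERS[intent]  # confident: the bot answers directly
    return "Passing you to our customer service team."  # low confidence: escalate

print(answer_or_escalate("What are your opening hours today?"))
print(answer_or_escalate("My order arrived damaged, what do I do?"))
```

The key design point is the threshold: the bot only answers when it is confident, and everything else falls through to a human, which is what keeps a simple bot safe to deploy.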
More broadly, new AI-enhanced apps are becoming available to support most key forms of enterprise decision making. From HR through marketing, finance and general productivity, new application classes are being created to ensure that when decisions are made, they are the best, unbiased decisions given all the available information.
Some organisations may want to go further than this and look to drive an enhanced competitive advantage using AI. Maybe the organisation is of such complexity that it is better served by in-house built solutions rather than off-the-shelf products. Maybe these in-house applications involve complex risk calculations, classification and/or segmentation of customers, credit risk, fraud detection, churn prediction, procurement and logistics planning and so on.
Benefit from machine learning may be achieved by taking another look at existing prediction logic that has been “programmed” in traditional ways using business rules and complex logic. However, machine learning is not a magic solution. To best solve problems you still need a detailed understanding of what the problems are and the impact they cause, and this comes best from those with experience and domain knowledge in the business. Leveraging these people to home in on where the real challenges are, and pairing them with people who have skills in modern data science, could in my opinion provide much benefit.
To support this, vendors of enterprise infrastructure software and platforms are busy adding AI capabilities. Microsoft has already included R support in SQL Server and has recently announced upcoming Python support. Microsoft also has its Cortana and Azure AI services, all orientated towards mainstream use and deployment. Amazon AWS has extensive AI platform capabilities, including the recent release of its Alexa voice recognition capabilities for mainstream use. Products such as MATLAB, which organisations have used for many years to understand data, have been enhancing their AI capabilities. More broadly, Python and R have already become de facto standards as the languages of choice for machine learning, and decent-sized talent pools are starting to form, made up of new graduates and existing BI/data professionals who have cross-skilled to round out their data science capabilities.
For the most part we have been talking about AI technology supporting existing businesses and making them more effective in the marketplace. But what about enterprises who believe the future of their business is in their ability to find new insight in data, or to solve problems that haven’t been solvable before? Maybe they’re a drug company in a race to help cure or treat certain afflictions. Maybe they are a hedge fund that always needs to be one step ahead of the market. These may require a different approach to how machine learning is leveraged.
Organisations who want to go “all in” on machine learning may see a very different level of investment and return to the approaches I have indicated above. They may need to hire top global talent, build numerous data science teams, invest in data-orientated solutions and perhaps even build products and services whose primary purpose is generating relevant data to feed into AI processes. I won’t go into more detail here, but needless to say they would have a critical need for strong teams and top-down support.
Machine learning is coming to the enterprise, and in some forms it is already here. Benefiting from machine learning does not necessarily mean building large teams of data scientists and making huge investments. Often machine learning will be implemented by software vendors who are continuously searching for ways to add value and improve the gains provided by their platforms. However, establishing a leading competitive advantage through machine learning may be more involved, requiring careful introduction into existing applications and, in some cases, shooting for the stars.