Artificial Intelligence vs Machine-Learning – What’s the Difference?
- Tom Myles, CTO at The ai Corporation
- 21.06.2017 07:15 am
Artificial Intelligence (A.I) is a hot topic in many industries. Most of us, without realizing it, interact with it on a daily, if not hourly, basis. Robots have been used in factories for decades, Siri has been telling us bad jokes for nearly six years and robot hoovers have been transporting cats around kitchens for, well, too long.
When people hear the term A.I, it might conjure up the image of a robotic child longing to be a real boy (I’m looking at you, Steven Spielberg), or of a robot army lined up in a warehouse waiting for Will Smith to appear.
While these Hollywood tales touch upon areas of A.I, what does A.I really mean and is machine learning the same thing?
Artificial Intelligence vs Machine-Learning
A.I is a platform or a solution that appears to be intelligent and can often exceed the performance of humans. It is a broad description of any device that mimics human functions, such as mechanical movement, reasoning or problem solving.
This starts to ring alarm bells in many people’s minds, but we should not be frightened: the machines are not about to take over.
Recent examples, such as the latest attempt in May by Google’s DeepMind division to defeat the world champion of the Chinese board game Go, have shown that the machine only narrowly beats the human; and other projects, such as self-driving cars or image-recognition platforms, have seen issues with reliability and accuracy.
To be clear, machine-learning (ML) is one example of A.I. It is a statistical, data-driven approach to creating A.I, in which a computer program learns from data to improve its performance at completing a task. As a result, ML is dependent on data, and with these approaches the quality of the data, or of the process used to create it, is often vital to the success of the machine.
If we look at a seemingly simple example of detecting one image amongst many others, we soon start to see the complications that can arise.
For instance, if we created a programme and told it to look for pictures of apples (because it’s arguably the best fruit) and compare them to pictures of all the other foods on a supermarket website, we would need to start collecting data points about apples (apple = green or red, round, stalk). Just as importantly, the programme would need to collect data points about foods that aren’t apples (banana = yellow, long and narrow; pear = green, oblong, stalk), so it does not accidentally select the wrong fruit.
If this data is unavailable, or a food item is misclassified, this would lead to errors and you could end up with a mango, an avocado or, even worse, Brussels sprouts.
Now that we have the data, we need a machine that can start to tag or classify the images, much like performing a move in a board game. ML starts off by making lots of mistakes, misclassifying an image or losing a game; the machine then learns not to make these mistakes again and improves its performance on the next attempt.
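To make that concrete, here is a minimal sketch (purely illustrative, not any production system): a perceptron-style learner that decides apple or not-apple from hand-made feature vectors. The features and example items are invented for this article; the point is that the learner adjusts its weights every time it makes a mistake, so the same mistake becomes less likely on the next pass.

```python
# Illustrative features: [is_green_or_red, is_round, has_stalk, is_elongated]
training_data = [
    ([1, 1, 1, 0], 1),   # apple
    ([0, 0, 0, 1], 0),   # banana: yellow, long and narrow
    ([1, 0, 1, 1], 0),   # pear: green, oblong, stalk
    ([0, 1, 0, 0], 0),   # orange
]

weights = [0.0, 0.0, 0.0, 0.0]
bias = 0.0

def predict(features):
    """Score the item and call it an apple (1) if the score is positive."""
    score = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

# Repeatedly show the examples; each misclassification nudges the weights,
# so the machine gradually stops repeating its mistakes.
for epoch in range(10):
    mistakes = 0
    for features, label in training_data:
        error = label - predict(features)
        if error != 0:
            mistakes += 1
            weights = [w + error * x for w, x in zip(weights, features)]
            bias += error
    print(f"pass {epoch + 1}: {mistakes} mistakes")
    if mistakes == 0:
        break
```

After a few passes over this tiny data set the learner stops making mistakes, which is the learning loop described above in miniature.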
I like to think of ML as one cog in the A.I machine. When a computer with ML capabilities is tasked with learning a problem, the first thing it will do is observe historical data, so that it can adapt its own processes and understanding to predict future occurrences of a particular scenario. A computer that has learned something from data in this way can be considered intelligent, and so is an example of A.I.
Learning from historical data in this way is the most successful approach to generating many different types of A.I, as it relies upon known information or knowledge, which inevitably comes from a human.
The A.I effect
To some, the robot uprising I mentioned earlier is what true A.I looks like, because to them A.I is a machine’s ability to do ‘whatever hasn’t been done yet.’
This is known as the A.I effect. When A.I is successful at a particular task, such as Netflix predicting new films or shows to match your preferences, or Amazon recommending products and services based on what you have bought and browsed in the past, it becomes the standard.
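For illustration only (this is not how Netflix or Amazon actually build their recommenders), a very naive version of “customers who bought this also bought” can be built from nothing more than co-occurrence counts over past purchases:

```python
from collections import Counter
from itertools import combinations

# Made-up purchase histories, one basket of items per customer.
purchase_histories = [
    {"apples", "flour", "butter"},
    {"apples", "flour", "sugar"},
    {"bananas", "flour", "sugar"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in purchase_histories:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(item, top_n=2):
    """Suggest the items most often bought alongside `item`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("apples"))   # e.g. ['flour', 'butter']
```

Once you have seen how mechanical this kind of pattern counting is, the recommendation stops feeling like intelligence, which is exactly the A.I effect at work.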
This means the platform is not perceived as ‘intelligent’ (in the human sense), as users are educated in the workings of the process, and the platform is no longer considered A.I: once the magician’s trick is revealed, it is no longer magic.
The progression of technology means that traditional A.I devices are no longer considered A.I. In the 1960s, Hollywood imagined A.I to be talking robots or machines that serve our every need; we now have commodities such as Siri and Alexa, which are deemed clever but no longer A.I.
Another example would be large rule-sets processed in milliseconds to identify patterns, mimicking a human assessment of a scenario (and so strictly A.I), but such tasks are now standard computerized tasks and no longer appear intelligent, merely routine.
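As a toy illustration of this kind of rule-set (the rule names, fields and thresholds below are invented for this article and are not RiskNet’s), a handful of if-then checks applied to a transaction can mimic the questions a human reviewer would ask:

```python
# Hypothetical rules over a hypothetical transaction record.
rules = [
    ("high_value",      lambda t: t["amount"] > 1000),
    ("unusual_country", lambda t: t["country"] != t["home_country"]),
    ("rapid_repeat",    lambda t: t["tx_in_last_hour"] > 5),
]

def assess(transaction):
    """Return the names of all rules the transaction triggers."""
    return [name for name, check in rules if check(transaction)]

transaction = {
    "amount": 1500,
    "country": "FR",
    "home_country": "GB",
    "tx_in_last_hour": 2,
}

triggered = assess(transaction)
print(triggered)                                           # ['high_value', 'unusual_country']
print("flag for review" if len(triggered) >= 2 else "pass")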
So, what does this mean?
Machine Learning is an example of A.I and a piece of a much larger puzzle. The technology that my team and I have developed, and use on a daily basis, uses ML to create predictive rule-sets and models that identify patterns of behavior.
In summary, A.I is the term for any platform that appears to be intelligent; it can include ML solutions, as I’ve explained above, but can also include platforms that go beyond ML or do not use ML at all. Examples include route planning, scheduling systems, assembly-line robots and our own fraud platform, RiskNet®.
For me, the pinnacle of an A.I solution COULD look like the androids you see in Hollywood that are able to interact with you, recollect information relevant to the conversation, provide reasoned responses to questions, remember the conversation and its outcome and learn from these outcomes.
All of these components exist today. We have services and platforms, all with varying degrees of sophistication and public acceptance, but there is still a lot more work to be done before WALL-E is brought in to clean up the trash.