No, it’s not the lead-up to the punchline of a predictable joke, but rather a practical situation to give me a reference point to explore the differences between natural human or animal intelligence, artificial intelligence (AI), and machine learning (ML) – and, what the hell, all good stories start (or end up) in a bar, right?

Let’s start with the idea that the three who enter this symbolic unifying venue, ready to imbibe, are set a challenge to decide if it’s a “good pub”.

This immediately highlights one of the key challenges with AI – define “good”, set the goals. In Computational Intelligence: A Logical Approach [1] (paraphrasing): “AI is a system that acts intelligently: what it does is appropriate to its circumstances and its goals”.

So how do we define the goals? This is where perhaps there should have been a philosopher walking into the bar too, as a person’s goals are surely the subject of more than a short article!

We need a way to define goals and an unarguable way to measure success – whether that’s binary goals you either meet or you don’t, or a definitive score measuring progress against a goal – for AI to work in a demonstrable way.

It is this need – the need for goals – that has been and will continue to be at the bleeding edge of AI, ML, and all non-natural intelligence systems.

This is perhaps why many early AI demonstrations of public notoriety focused on games like Chess – you win, or you lose. The goal or outcome is well defined.

But what about measuring whether our pub is good or not? There’s a great deal of subjectivity, morality, and even ethics in there. It might have great beer, a great atmosphere, and first-class food; but if the staff are poorly treated, is it ethical? So what defines good? What defines the goal? As with much of life, it’s complicated!

So, let’s digress and look at one of modern AI’s lofty and complex ambitions: Self-driving cars.

On the surface of it, “You have arrived at your destination” is a binary goal: it’s inarguable – you got there or you didn’t. It’s also something that – thanks to GPS – is an easy outcome to test for.
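To make that concrete, here’s a minimal sketch of what such a binary goal test could look like, assuming the GPS hands us (latitude, longitude) pairs; the function name and the 25-metre threshold are illustrative choices of mine, not anyone’s production logic:

```python
import math

def arrived(current, destination, threshold_m=25.0):
    """True if we are within threshold_m metres of the destination."""
    lat1, lon1 = map(math.radians, current)
    lat2, lon2 = map(math.radians, destination)
    # Haversine formula: great-circle distance between two points on Earth
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_m = 2 * 6371000 * math.asin(math.sqrt(a))  # Earth radius ~6,371 km
    return distance_m <= threshold_m

print(arrived((51.5007, -0.1246), (51.5007, -0.1246)))  # True – we got there
print(arrived((51.5007, -0.1246), (48.8584, 2.2945)))   # False – keep driving
```

The goal itself is trivially checkable; everything hard about self-driving lives in the side goals along the way.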

But immediately there are side goals: don’t damage the car or (eek) hurt someone, stop at the traffic lights, obey the rules of the road. But even knowing the rules, as a human with 20+ years of driving experience, I still struggle with the whole “priorité à droite” rule when driving in France or the Czech Republic [2], because I don’t encounter it very often.

I haven’t been trained enough for that occasional scenario, and so it will be for machines doing the driving.

A typical human in the UK receives around 45 hours of formal positive training before they pass their driving test. After that, they learn from both positive and negative experiences. I’ve seen cars navigate roundabouts incorrectly, change lanes suddenly without looking, and come barrelling down the outside lane at twice the speed limit. These all violate the strict rules of the road, yet they happen every day. Having driven thousands of miles, I have developed some trained intuition of the form “in this circumstance, this is likely to happen, so take action to avoid it”.

This brings us nicely on to machine learning: training the machine with both positives and negatives, with inputs and goal scoring.

Imagine how good a well-trained self-driving car could become with enough pooled, collective experience of how every real-world road scenario unfolds. It would have the combined experience of every like-minded vehicle globally to lean upon for its analysis of what it’s seeing right now.

Machine learning focuses on recognising patterns – often across datasets so broad and large that humans struggle to see the patterns in them – and linking those patterns to the goals or outcomes being sought. It identifies correlating factors and uses them to determine relationships that help predict an outcome. This is a powerful and compelling technology, but…
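As a toy illustration of that correlation-to-prediction idea, here’s a minimal sketch using scikit-learn; the pub features, scores, and labels are entirely made up for the example:

```python
from sklearn.linear_model import LogisticRegression

# Each row scores one pub: [beer quality, staff friendliness, noise level], 0-10
pub_features = [
    [9, 8, 3],
    [4, 2, 9],
    [7, 9, 4],
    [3, 3, 8],
]
is_good_pub = [1, 0, 1, 0]  # the outcome we scored each pub against

# Fit a model that links the feature patterns to the "good pub" outcome
model = LogisticRegression().fit(pub_features, is_good_pub)

# Predict the outcome for a pub the model has never seen
print(model.predict([[8, 7, 5]]))        # e.g. [1] - probably a good pub
print(model.predict_proba([[8, 7, 5]]))  # how confident it is in each outcome
```

The model never “understands” the pub; it has simply learned which measured factors correlate with the outcome we labelled.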

…it’s not a fully cognitive, general intelligence. We don’t want our cars jamming on the brakes every time someone twitches at the steering wheel in the next lane. You have to learn the negatives as well as the positives.

Machine learning can be trained. Give it plenty of relevant data, along with the outcomes that data implies, and you can build some incredibly powerful models. But those models are also potentially biased, and we need to ensure data is used with empathy [3].

Let’s leave the car at home and pop back down the pub (bring on the day when the car can drive you home). Consider vision AI, which identifies and classifies objects in an image: machine learning can perform statistical analysis of the patterns it finds in a broad, high-quality set of data that does and doesn’t contain whatever we’re trying to train it on.

Sticking to our original story: when trained on “in this image there is a beer glass” and “in this image there is no beer glass”, with a big enough sample of pictures with and without beer glasses, machine learning can build a pretty good predictor of whether the picture it’s looking at contains a beer glass or a “not beer glass”.
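Here’s a minimal sketch of how that training could look, assuming two folders of labelled photos; the folder paths, image size, and network shape are all illustrative assumptions of mine:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical layout: data/beer_glass/*.jpg and data/no_beer_glass/*.jpg
transform = transforms.Compose([
    transforms.Resize((64, 64)),  # shrink every photo to a common size
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data", transform=transform)  # labels from folder names
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A deliberately small network: enough to pick up "beer glass" patterns,
# nowhere near a general intelligence
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two outputs: beer glass / not beer glass
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # a handful of passes over the training pictures
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()   # learn from positives *and* negatives alike
        optimiser.step()
```

Crucially, a model like this can only ever distinguish the two categories it was shown.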

But it might never have been shown a wine glass before, and if it braved a hipster bar in a trendy part of London serving cocktails in old jam jars, it would stand no chance of assessing the drinks.

You or I, sat at the bar, would still (until we’d maybe had one too many) have the cognitive realisation that, to meet our goal of assessing whether this pub is “good” or not, we should probably stick to drinking from the beer glasses – or other variously shaped vessels – but steer clear of the vases.

So, as with many things in life, AI and machine learning will require ever more sophisticated definitions of their goals if they are to gain the sophistication and complexity that approaches a broad natural intelligence.

But when set the right objectives and goals, and well trained for them – learning from both the positive and negative outcomes it has seen before – AI has the potential to be a powerful tool that reliably and consistently acts to achieve the goals it’s been trained for. The art/science that is software development famously builds things that aren’t quite what the customer actually wanted; hopefully we’ll work out how to set the goals we actually want.

AI has taken a giant leap forward in recent years thanks to the nexus of abundant data and low-cost processing. With well-defined goals, good data ethics, consideration of where lessons are learnt (training on the negative outcomes, not just the positive), and likely a few deep and meaningful conversations down the pub, AI, machine learning, and their brother and sister technologies should be well worthy of a clink of glasses and a “Cheers!”

[1] Computational Intelligence: A Logical Approach. Poole, Mackworth & Goebel, 1998. Oxford University Press – http://www.cs.ubc.ca/~poole/ci.html The terms “AI” and “computational intelligence” are used synonymously.

[2] http://www.vendee-guide.co.uk/priority-a-droite.htm

[3] Phil Harvey and Noelia Jiménez Martínez have proposed a book on Unbound examining the humanity of data – https://unbound.com/books/data/