I was inspired for this month’s article by this post (https://www.linkedin.com/feed/update/urn:li:activity:6415123302649466880) about an interesting technique Porsche are experimenting with: applying AI to engine sound to diagnose issues.

It spurred me to look for analogies in other disciplines that could, should, or perhaps already are, exploring the benefits of this approach.

If we understand machine learning as the ability to train on a set of known positive and negative scenarios, and to use that training to predict the presence or absence of those scenarios in new data, then from a given subtle variation in ‘normal’ we can infer what the root cause is (or might be).
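As a minimal sketch of that idea (the data and model choice here are my own illustrative assumptions, using scikit-learn, not anything Porsche has published):

```python
# Minimal sketch of learning from known good/bad scenarios.
# Assumes each scenario is summarised as a numeric feature vector;
# the data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 200 'normal' scenarios and 200 'faulty' ones, each with 8 features
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 8))
faulty = rng.normal(loc=0.6, scale=1.2, size=(200, 8))  # subtly shifted

X = np.vstack([normal, faulty])
y = np.array([0] * 200 + [1] * 200)  # 0 = normal, 1 = fault present

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Given a new, subtle variation on 'normal', predict whether a fault
# is present and how confident the model is.
new_scenario = rng.normal(loc=0.5, scale=1.1, size=(1, 8))
print(model.predict(new_scenario), model.predict_proba(new_scenario))
```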

In Porsche’s example, variation in engine noise can indicate wear and tear for proactive maintenance, warn of component failure (or potential failure) and point to where in the system the fault might lie.
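Porsche haven’t published the details of their pipeline, but as a hedged sketch of how sound becomes something a model can learn from, one common approach is to summarise a recording as MFCC features (librosa and the synthetic signal below are my assumptions, not their stack):

```python
# Hedged illustration: turn an engine-like audio signal into compact
# features (MFCCs) that a classifier could learn from.
import numpy as np
import librosa

sr = 22050  # sample rate in Hz
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)

# A crude stand-in for engine sound: a fundamental plus a harmonic and noise.
signal = (np.sin(2 * np.pi * 80 * t)
          + 0.5 * np.sin(2 * np.pi * 160 * t)
          + 0.1 * np.random.default_rng(0).normal(size=t.shape))

# Summarise the clip as mean MFCCs: one fixed-length vector per recording,
# ready to feed into a classifier like the earlier sketch.
mfcc = librosa.feature.mfcc(y=signal.astype(np.float32), sr=sr, n_mfcc=13)
features = mfcc.mean(axis=1)
print(features.shape)  # (13,) - one compact fingerprint of the sound
```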

This can of course help reduce diagnostic times, making it quicker to resolve a known problem or to predict a failure before it happens. Excellent outcomes, but focusing on them overlooks an ingenious part of this approach.

Engines have long been data-generating devices, with a plethora of sensors embedded throughout to feed the car’s on-board diagnostics and to give mechanics data to review come service time, so why not simply use those sensors?

Having thousands of sensors in every possible component becomes impractical at some point: they add weight, they consume power, they need connecting, and they can themselves go wrong.

By looking at the output rather than the low-level details, and using AI to make sense of what we’re hearing, we can potentially simplify the system itself and avoid introducing further points of failure. As I thought more about it, it became apparent that this is fundamentally the same approach your GP might take.

After all, we aren’t wired with sensors that tell the GP exactly what’s wrong (yet). They listen to our output: where it hurts, what it looks like. Then, through their experience, they hypothesize and test to confirm or rule out that the symptoms they’re seeing, hearing and testing for point to root cause X or Y.

This approach of validated learning, learning based upon symptoms we’ve seen (be it a patient’s complaint or engine noise) and training someone or something to recognize the patterns, could potentially be applied to many disciplines.

Software development is a fine example. Today, software typically has instrumentation and telemetry data to augment the engineer’s intuition about what’s wrong, but outcome-based predictions, pattern watching and intelligent analysis of changing trends and behaviours empower that engineer to do more, faster, and with the confidence that they’re barking up the right tree.
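As a hedged illustration of what watching for changing behaviour might look like (a toy rolling z-score detector of my own, not a specific product or technique from the post):

```python
# Minimal sketch: flag a drifting trend in service telemetry with a
# rolling z-score. Window size and threshold are illustrative, not tuned.
import numpy as np

def anomalies(series, window=50, threshold=3.0):
    """Return indices where a point deviates strongly from the recent past."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Synthetic latency telemetry: steady behaviour, then a regression creeps in.
rng = np.random.default_rng(1)
latency = np.concatenate([
    rng.normal(100, 5, 500),   # healthy baseline (ms)
    rng.normal(140, 5, 100),   # a deploy makes things slower
])
print(anomalies(latency))  # points where behaviour departs from 'normal'
```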

We’ve seen this done with human intelligence for centuries, and today we’re starting to see machine intelligence applied to augment it, but I wonder: what other problems could this technique be applied to?

No matter how long or how extensive your mechanic’s, your doctor’s or your software engineer’s experience, they won’t have seen every scenario everywhere, and even if they have, cognitive dissonance may bias their objectivity[1]. If we can augment their experience with that of a machine-trained model, we can amplify their abilities.

This approach to AI and machine learning isn’t about doing away with human input, but boy, could it help make us more efficient, more effective and more enlightened. This is, in my humble opinion, where the great uses of machine learning and AI will come from in the weeks, years and decades ahead.

[1] See “Black Box Thinking” by Matthew Syed for thoughts on cognitive dissonance: http://www.matthewsyed.co.uk/books/