I try to focus my ramblings on current trends in data & AI and to avoid Sci-Fi projections and quotes that expose my inner Trekkie. But reading Martin Rees's piece in the FT in September, entitled "The Next Giant Leap", made me think about the longer-term vision for where AI could be most valuable in achieving our loftiest goals.

Robotics has served us well in environments where risks are high, or where people simply can't go. From bomb-disposal robots, to machines working inside a nuclear power plant, to more everyday workloads like moving parts through the paint oven in a car factory, locally controlled robotics are invaluable.

As we seek to gather more data about the environments beyond our immediate home – satellites, probes exploring the outer solar system and more down-to-Mars explorers like the Mars Rover – these represent (relatively) cost-effective ways to gather insights into the wider universe. Whilst there's still risk – an error might cause us to lose a very expensive 'bot – it doesn't pose quite the same moral dilemmas as the human disaster possible with real people on board.

In environments we're so ill-evolved for, it makes absolute sense to send machines first – to take a peek, gather data and perhaps even do the slow, painstaking groundwork before we venture out in person. But even here, the immense realities of the scale of space change the game.

Where we might successfully 'remotely' manage a robot on Earth, or even in low orbit, the sheer distances to Mars and planets further afield make interactive controls impractical. A radio wave from Earth to Mars takes between four and twenty-four minutes each way[1], depending on where we are in our relative orbits – and that's just our nearest neighbour. Not to mention the bandwidth and power consumption required to stream volumes of data back home constantly, and the possibility of temporary, or permanent, loss of communications.
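The delay is simple to sketch on the back of an envelope: it's just distance divided by the speed of light. Below is a minimal Python illustration, using deliberately round illustrative figures for the Earth–Mars distance (roughly 55 million km at closest approach, roughly 400 million km when the planets are on opposite sides of the Sun) rather than precise ephemeris values, so the exact minutes differ slightly from the ESA figures cited above.

```python
# Back-of-the-envelope one-way signal delay between Earth and Mars.
# Distances are illustrative round numbers, not ephemeris values.

SPEED_OF_LIGHT_KM_S = 299_792  # km per second, in vacuum


def one_way_delay_minutes(distance_km: float) -> float:
    """Minutes for a radio signal to cover distance_km at light speed."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60


closest = one_way_delay_minutes(55e6)    # near closest approach
farthest = one_way_delay_minutes(400e6)  # near maximum separation
print(f"Closest approach:     ~{closest:.1f} min each way")
print(f"Farthest separation: ~{farthest:.1f} min each way")
```

Even at closest approach, a "stop!" command sent from mission control arrives minutes after the event that prompted it – which is the whole argument for moving routine decisions on board.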

So, if interactive centralised human control is impractical, we need to empower much of the routine decision making on site, at "the edge". The greater the autonomy of the remote device – be it a satellite, an exploration robot like the Mars Rover, or in years to come a construction or manufacturing bot stockpiling resources and infrastructure ready for their meat-based overlords to arrive – the greater its effectiveness will be.

In fact, the same principles are starting to be applied terrestrially today – pushing AI to the edge in remote, limited-connectivity environments is helping increase the reach of AI. No longer needing permanent connectivity to the scale of the cloud, AI models can be deployed remotely in scenarios from power line monitoring, to farmland monitoring, to supporting disaster relief efforts – Ruggedized Azure at the Edge – a precursor to our ability to deploy AI where no man has gone before.

The next steps are to make the machinery more self-sustaining and self-replicating. To be clear, I’m not talking generalised intelligence or self-aware Skynet Sci-Fi, but the ability for a machine to tackle specific classes of situation it hasn’t been explicitly trained on, to diagnose and repair itself and to even assemble other robots to scale out production – without the driver pitching in – would be giant leaps forward.

If we want to explore, assess, understand and perhaps prepare environments that our short-lived, gravity-loving, radiation-hating, acceleration-sensitive bodies aren't ready to handle, then sending lighter, hardier and less demanding devices with intelligence baked in – not to mention the ability to take a long, low-power nap on the journey – offers a balance of cost, risk and practical benefit that surely precedes our next giant leap.

For transparency, I work for Microsoft as a Cloud Solution Architect, so naturally I tend towards Microsoft’s cloud-scale services, IoT and Edge capabilities when giving examples.

[1] http://blogs.esa.int/mex/2012/08/05/time-delay-between-mars-and-earth/