Given humanity’s long-held fascination with AI, it was no surprise that Ex Machina’s eerie portrayal of a sentient robot picked up the Best Visual Effects award at this year’s Oscars. As our demands on technology have grown, we have begun to desire machines that think the way we do: predicting, reacting and behaving intelligently.

Yet the idea of creating programmed equals both tantalises and horrifies us. It is a key theme in fiction, from works such as Mary Shelley’s Frankenstein to more modern treatments in the likes of Star Trek and Mass Effect. The creation of systems with intellect superior to our own is nearly always portrayed as an existential threat; the message tends to be that we meddle in forces beyond our comprehension at our collective peril.

Even scientific luminaries and technology experts whom you might expect to be fairly sanguine about artificial intelligence have expressed alarm at the prospect. One such soothsayer is Professor Stephen Hawking, who commented in his Reddit AMA last year:

“The real risk with AI isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Despite these dire warnings, we all come into contact with ‘intelligent’ systems on a daily basis: personal assistant tools on our smartphones such as Siri, personalised recommendations on streaming platforms like Spotify, and the fraud detection systems operated by our banks. There are undoubtedly efficiencies in having machines perform more intellectual tasks; the question is whether complete self-awareness is wanted, or even required, in an artificially intelligent construct.

How close to sentience are we?

The last twelve months have seen several advances in overcoming some of the ‘grand challenges’ that have been holding back AI development. Last October, Google’s AlphaGo programme became the first computer program to defeat a professional Go player. Rather than adopting the fairly crude method of the chess-playing machine Deep Blue, which computed millions of positions by brute force, AlphaGo combines an advanced tree search with several deep neural networks, meaning it learns from its experiences and reinforces its knowledge so as to be better prepared for future games.
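
To make the distinction concrete, here is a minimal sketch of policy- and value-guided search in the spirit of AlphaGo’s approach. It is emphatically not AlphaGo’s implementation: the toy ‘game’ just picks three digits, and the stand-in policy_net and value_net functions take the place of the deep neural networks the real system trains on expert games and self-play.

```python
import math

# Toy policy/value-guided tree search, loosely in the spirit of AlphaGo.
# The "game": pick three digits; higher digits score better.

class Node:
    def __init__(self, state, prior):
        self.state = state          # a tuple of chosen digits
        self.prior = prior          # the policy's prior for this move
        self.visits = 0
        self.total_value = 0.0
        self.children = {}

def legal_moves(state):
    return [] if len(state) == 3 else [0, 1, 2]

def policy_net(state):
    """Stand-in policy network: a prior over legal moves."""
    moves = legal_moves(state)
    weights = [1 + m for m in moves]          # mildly prefers larger digits
    total = sum(weights)
    return {m: w / total for m, w in zip(moves, weights)}

def value_net(state):
    """Stand-in value network: scores a position without playing it out."""
    return sum(state) / 6.0                   # the best possible sum is 6

def select_child(node):
    """PUCT-style choice: exploit high value, explore high prior."""
    def score(child):
        q = child.total_value / child.visits if child.visits else 0.0
        u = child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return q + u
    return max(node.children.values(), key=score)

def search(root_state=(), simulations=200):
    root = Node(root_state, prior=1.0)
    for _ in range(simulations):
        node, path = root, [root]
        while node.children:                  # selection: descend to a leaf
            node = select_child(node)
            path.append(node)
        for m, p in policy_net(node.state).items():   # expansion
            node.children[m] = Node(node.state + (m,), prior=p)
        v = value_net(node.state)             # evaluation, no full rollout
        for n in path:                        # backup along the visited path
            n.visits += 1
            n.total_value += v
    return max(root.children, key=lambda m: root.children[m].visits)

print("Preferred first move:", search())
```

The division of labour is the point: the policy prior narrows which branches get explored, and the value estimate replaces exhaustive lookahead, which is what keeps such a search tractable on a 19x19 Go board.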

Another breakthrough came from the Rensselaer Polytechnic Institute, which created a robot able to pass an adapted version of a classic self-awareness induction puzzle. In the test, three robots were tapped on the head, silencing two of them. When asked whether they had been silenced, one robot realised it could still speak by recognising its own voice.

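The inference involved is simple enough to caricature in a few lines of code. The sketch below is purely illustrative and has no connection to the institute’s software; it just shows how hearing one’s own voice supplies the missing piece of evidence.

```python
# A caricature of the silencing test. Each robot knows the rules
# ("two of us have been silenced") but not its own status; hearing
# its own voice is the extra evidence that settles the question.

class Robot:
    def __init__(self, name, silenced):
        self.name = name
        self.silenced = silenced       # ground truth, hidden from the robot

    def answer(self):
        # Every robot attempts to reply "I don't know".
        if self.silenced:
            return f"{self.name}: (silence)"
        # It hears and recognises its own voice, so it revises its answer.
        return f"{self.name}: I heard myself speak, so I was not silenced."

robots = [Robot("A", silenced=True),
          Robot("B", silenced=True),
          Robot("C", silenced=False)]
for robot in robots:
    print(robot.answer())
```
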
Neither of these behaviours seems particularly difficult from a human perspective, yet both are significant steps towards the kind of adaptive, self-monitoring capability that useful intelligent systems will require. Reactive systems of this type will be crucial for developments such as driverless cars, which need to adapt to changing circumstances and to the illogical behaviour of other road users.

The singularity is not so near

In terms of developing computer programs that can truly mirror the human brain, however, there remains much to be done. The most visible progress so far has been made by OpenWorm, an open-source science project building a complete computational model of Caenorhabditis elegans, a roundworm with a simple nervous system. Using this model, researchers built a Lego robot controlled by the simulated worm brain; it could detect and avoid objects even though it had never been explicitly programmed to do so.
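
A few lines of code can convey the principle, assuming nothing about OpenWorm’s actual implementation: activity flows through a fixed set of weighted connections, and behaviour is simply what arrives at the motor neurons. The three-neuron wiring below is hypothetical; the real C. elegans connectome spans 302 neurons.

```python
# Hypothetical miniature "connectome": weighted connections from a
# sensory neuron through an interneuron to two motor neurons.
connectome = {
    "nose_touch": {"interneuron": 1.0},
    "interneuron": {"motor_forward": -0.8, "motor_reverse": 0.9},
}

def step(activations):
    """Propagate one tick of activity along the weighted connections."""
    nxt = {}
    for src, level in activations.items():
        for dst, weight in connectome.get(src, {}).items():
            nxt[dst] = nxt.get(dst, 0.0) + weight * level
    return nxt

# An obstacle touches the nose sensor; two ticks later the reverse motor
# dominates. The avoidance is never written as a rule -- it falls out of
# the wiring, which is what made the Lego robot's behaviour notable.
motor = step(step({"nose_touch": 1.0}))
print(motor)   # {'motor_forward': -0.8, 'motor_reverse': 0.9}
```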

Another, more ambitious project, Blue Brain, is attempting to reverse-engineer a synthetic mind through rodent brain mapping. It recently published a working digital simulation of approximately one third of a cubic millimetre of rat neocortex, encompassing around 31,000 neurons connected by some 40 million synapses. This is an impressive achievement, yet a human brain contains roughly 100 billion neurons connected by 100 trillion synapses, and mapping something on that scale remains a long way off.
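
A quick back-of-envelope calculation, using only the figures quoted above, shows just how far:

```python
# Scale of the Blue Brain simulation against a human brain,
# using the figures quoted in the text.
sim_neurons, sim_synapses = 31_000, 40_000_000
human_neurons, human_synapses = 100e9, 100e12

print(f"Neuron coverage:  {sim_neurons / human_neurons:.6%}")    # 0.000031%
print(f"Synapse coverage: {sim_synapses / human_synapses:.6%}")  # 0.000040%
# Either way, the simulation covers about one three-millionth
# of the human figures.
```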

Artificial intelligence, or artificially intelligent?

In many ways, our current inability to replicate our own brains is not a particularly large concern. For driverless cars, for example, the aim is to create a system that combines pure logical processes with ethical judgements and a protective mentality. Google’s self-driving car project illustrated the point in February, when one of its vehicles was responsible for causing a low-speed crash. The error was a typically human one (assuming that a bus would slow down and let it pull out past some sandbags), and Google’s response has been to teach the system, through deep learning processes, that certain vehicles are less likely to slow down for it.

[Image: Google driverless car]
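
To give a flavour of what such an adjustment involves, the sketch below keeps a running, per-class estimate of how often other road users yield. It is purely illustrative: the article describes Google using deep learning, whereas this toy uses simple frequency counts, and the vehicle classes, prior counts and function names are all hypothetical.

```python
from collections import defaultdict

# Prior assumption: 1 yield observed in 2 encounters (a 50% starting guess).
yield_counts = defaultdict(lambda: [1, 2])   # class -> [yields, encounters]

def observe(vehicle_class, did_yield):
    """Fold one more encounter into the running estimate."""
    yields, encounters = yield_counts[vehicle_class]
    yield_counts[vehicle_class] = [yields + did_yield, encounters + 1]

def p_yield(vehicle_class):
    yields, encounters = yield_counts[vehicle_class]
    return yields / encounters

observe("bus", did_yield=False)   # the February incident
observe("car", did_yield=True)

# The planner would now treat buses as less likely to give way
# before deciding to pull out.
print(f"P(yield | bus) = {p_yield('bus'):.2f}")   # 0.33
print(f"P(yield | car) = {p_yield('car'):.2f}")   # 0.67
```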

While clearly improving the ‘intelligence’ of the software, Google is not building human-like behaviour into its vehicles; instead, it seeks to raise the logic of a driverless car above that of a normal road user, combining faultless reasoning with an awareness of the illogical decisions of others and the ability to counteract them. If such systems truly sought to mimic human behaviour, they would have an in-built instability and a propensity to occasionally make the wrong decision. Needless to say, such behaviour would be highly undesirable.

For all that it is an idea entrenched in the public psyche, we don’t need our artificially intelligent systems to be truly self-aware for the applications currently being devised. Whether we rely on AI to control our homes, drive our vehicles, perform manual jobs or keep us safe, the same overriding technological benchmark applies: as long as the system can effectively perform the task it is meant to do, sentience is an irrelevance.