
Tuesday, 12 December 2017

Robotic AI: key to utopia or instrument of Armageddon?

Recent surveys around the world suggest the public feel they don't receive enough science and non-consumer technology news in a format they can readily understand. Despite this, one area of STEM that does capture the public imagination is the development of self-aware robots, a source of ever-growing concern. Perhaps Hollywood is to blame. Although there is a range of well-known cute robot characters, from WALL-E to BB-8 (both surely designed with a firm eye on the toy market), Ex Machina's Ava and the synthetic humans of the Blade Runner sequel appear to be shaping our suspicious attitudes towards androids far more than any real-life project.

Then again, the idea of thinking mechanisms and the fears they bring out in us organic machines has been around far longer than Hollywood. In 1863 the English novelist Samuel Butler wrote an article entitled Darwin among the Machines, wherein he recommended the destruction of all mechanical devices since they would one day surpass and likely enslave mankind. So perhaps the anxiety runs deeper than our modern technocratic society. It would be interesting to see - if such concepts could be explained to them - whether an Amazonian tribe would rate intelligent, autonomous devices as dangerous. Could it be that it is the humanoid shape that we fear rather than the new technology, since R2-D2 and co. are much-loved, whereas the non-mechanical Golem of Prague and Frankenstein's monster are pioneering examples of anthropoid-shaped violence?

Looking in more detail, this apprehension appears to be split into two separate concerns:

  1. How will humans fare in a world where we are not the only species at our level of consciousness - or possibly even the most intelligent?
  2. Will our artificial offspring deserve or receive the same rights as humans - or even some animals (i.e. appropriate to their level of consciousness)?

1) Utopia, dystopia, or somewhere in the middle?

The development of artificial intelligence has had a long and tortuous history, with the top-down and bottom-up approaches (plus everything in between) still falling short of the hype. Robots as mobile mechanisms, however, have recently begun to catch up with fiction, achieving full autonomy in both two- and four-legged varieties. Humanoid robots and their three principal behavioural laws have been popularised since 1950 via Isaac Asimov's I, Robot collection of short stories. In addition, fiction has presented many instances of self-aware computers with non-mobile extensions into the physical world. In both types of entity, unexpected programming loopholes prove detrimental to their human collaborators. Prominent examples include HAL 9000 in 2001: A Space Odyssey and VIKI in the Asimov-inspired film I, Robot. That these decidedly non-anthropomorphic machines have been promoted in dystopian fiction runs counter to the idea above concerning humanoid shapes - could it instead be that a human-like personality is the deciding fear factor?

Although similar attitudes might be expected of a public with limited knowledge of the latest science and technology (except where it is given the gee-whiz or Luddite treatment by the right-of-centre tabloid press), some famous scientists and technology entrepreneurs have also expressed doubts and concerns. Stephen Hawking, who appears to be getting negative about a lot of things in his old age, has called for comprehensive controls around sentient robots and artificial intelligence in general. His fear is that we may miss something when coding safeguards, leading to our unintentional destruction. This is reminiscent of HAL 9000, who became stuck in a Moebius loop after being given instructions that ran counter to his primary programming.

Politics and economics are also a cause for concern in this area. A few months ago, SpaceX and Tesla's Elon Musk stated that global conflict is the almost inevitable outcome of nations attempting to gain primacy in the development of AI and intelligent robots. Both Mark Zuckerberg and Bill Gates promote the opposite opinion, with the latter claiming such machines will free up more of humanity - and finances - for work that requires empathy and other complex emotional responses, such as education and care for the elderly.

All in all, there appears to be a very mixed bag of responses from sci-tech royalty. However, Musk's case may not be completely wrong: Vladimir Putin recently stated that the nation that leads in AI will rule the world. Although China, the USA and India may be leading the race to develop the technology, Russia is prominent amongst the countries engaged in sophisticated industrial espionage. It may sound too much like James Bond, but clearly the dark side of international competition should not be underestimated.

There is a chance that attitudes are beginning to change in some nations, at least among those who work in the most IT-savvy professions. An online survey across the Asia Pacific region in October and November this year compiled some interesting statistics. In New Zealand and Australia only 8% of office professionals expressed serious concern about the potential impact of AI. This was in stark contrast to China, where 41% of interviewees claimed they were extremely concerned. India lay between these two groups at 18%. One factor all four countries had in common was a very high level of interest in using artificial intelligence to free humans from mundane tasks, with figures varying from 87% to 98%.

Talking of which, if robots do take on more and more jobs, what will everyone do? Most people just aren't temperamentally suited to the teaching or caring professions, so could it be that those who previously did repetitive, low-initiative tasks will be relegated to a life of enforced leisure? This appears reminiscent of the far-future, human-descended Eloi encountered by the Time Traveller in H.G. Wells' The Time Machine; some wags might say that you only have to look at a small sample of celebrity culture and social media to see that this has already happened...

Robots were once restricted to either the factory or the cinema screen, but now they are becoming integrated into other areas of society. In June this year Dubai introduced a wheeled robot policeman onto its streets, with the intention of making one quarter of the police force equally mechanical by 2030. It seems to be the case that wherever there's the potential to replace a human with a machine, at some point soon a robot will be trialling that role.

2) Robot rights or heartless humans?

Hanson Robotics' Sophia gained international fame when Saudi Arabia made her the world's first silicon citizen. Now a person in her own right, Sophia is usually referred to as 'she' rather than 'it' - or at least as a 'female robot' - and one who has professed the desire to have children. But would switching her off constitute murder? So far, her general level of intelligence (as opposed to specific skills) varies widely, so she's unlikely to pass the Turing test in most subjects. One thing is for certain: to an audience used to the androids of the Westworld TV series or Blade Runner 2049, Sophia is more akin to a clunky toy.

However, what's interesting here is not so much Sophia's level of sophistication as the human response to her and other contemporary human-like machines. The British tabloid press have, perhaps predictably, decided that the notion of robots as individuals is 'bonkers', following appeals to give rights to sexbots - which are presumably well down the intellectual chain from the likes of Sophia. However, researchers at the Massachusetts Institute of Technology and officers in the US military have shown an aversion to causing damage to their robots, which in the latter case was termed 'inhumane'. This is thought-provoking, since the army's tracked robot in question bore far greater resemblance to WALL-E than to a human being.

A few months ago I attended a talk given by New Zealand company Soul Machines, which featured a real-time chat with Rachel, one of their 'emotionally intelligent digital humans'. Admittedly Rachel is entirely virtual, but her ability to respond to words (both their meaning and the tone in which they are said) as well as to physical and facial gestures presented an uncanny facsimile of human behaviour. Rachel is a later version of the AI software first showcased in BabyX, who easily generated feelings of sympathy when she became distraught. BabyX is perhaps the first proof that we are well on the way to creating a real-life version of David, the child android in Spielberg's A.I. Artificial Intelligence; robots may soon be able to generate powerful, positive emotions in us.

Whilst Soul Machines' work is entirely virtual, the mechanical shell of Sophia and other less intelligent bipedal robots shows that the physical problem of subtle, independent movement has been almost solved. This raises the question: when Soul Machines' 'computational model of consciousness' is fully realised, will we have any choice but to extend human rights to such entities, regardless of whether they have mechanical bodies or only exist on a computer screen?

To some extent, Philip K. Dick's intention in Do Androids Dream of Electric Sheep? - to show that robots will always be inferior to humans due to their facsimile emotions - was reversed by Blade Runner and its sequel. Despite their actions, we felt sorry for the replicants: although capable of both rational thought and human-like feelings, they were treated as slaves. The Blade Runner films, along with the Cylons of the Battlestar Galactica reboot, suggest that it is in our best interest to discuss robot rights sooner rather than later, both to prevent the return of slavery (albeit of an inorganic variety) and to limit a prospective AI revolution. It might sound glib, but any overly rational self-aware machine might consider itself the second-hand product of natural selection and therefore the successor to humanity. If so, what does one do with an inferior predecessor that is holding it back from its true potential?

One thing that is certain: AI robot research is unlikely to slow down any time soon. China is thought to be on the verge of catching up with the USA, whilst an Accenture report last year suggested that within the next two decades the implementation of such research could add hundreds of billions of dollars to the economies of participating nations. Perhaps for peace of mind AI manufacturers should follow the suggestion of a European Union draft report from May 2016, which recommended an 'opt-out mechanism' - a euphemism for a kill switch - to be installed in all self-aware entities. What with human fallibility and all, isn't there a slight chance that a loophole could be found in Asimov's Three Laws of Robotics, after which we would find out whether we have created partners or successors...?