
Saturday, 3 March 2018

Hi-tech roadblock: is some upcoming technology just too radical for society to handle?

Many people still consider science to be a discipline wholly separate from other facets of human existence. If there's one thing I've learnt during the eight years I've been writing this blog it's that there are so many connections between STEM and society that much of the scientific enterprise cannot be considered in isolation.

Cutting-edge theories can take a long time to be assimilated into mainstream society: in some cases their complexity (quantum mechanics, for example) leads to misinterpretation, while their emotive implications (most obviously, natural selection) lead to rejection. New technologies emerge out of scientific principles and methodology, if not always from the archetypal laboratory. STEM practitioners are sometimes the driving force behind new devices aimed at the mass market; could it be that their enthusiasm and in-depth knowledge prevent them from realising that the world isn't yet ready for their brainchild? In some cases the "Hey, wow, cool, look what we can do!" excitement masks the elaborate web of socio-economic factors that mean the invention will never be suitable for a world outside the test environment.

There are plenty of examples of pioneering consumer-oriented technology that either could never fit into its intended niche (such as the UK's Sinclair C5 electric vehicle of the mid-1980s) or misjudged public demand: the Sony Betamax video recorder was aimed at home movie makers rather than audiences who just wanted to watch pre-recorded material, hence losing out to the inferior-quality VHS format.

At the opposite pole, mobile phone manufacturers in the early 1990s completely underestimated the public interest in their products, which were initially aimed at business users. Bearing in mind that there is considerable worldwide interest in certain new radical technologies that will presumably be aimed at the widest possible market, I thought I'd look at their pros and cons so as to ascertain whether non-STEM factors are likely to dictate their fortunes.

1) Driverless automobiles

There has been recent confirmation that in the next month or so vehicle manufacturers may be able to test their autonomous cars on California's state highways. With Nissan poised to test self-driving taxis in time for a 2020 launch, the era of human drivers could be entering its last few decades. Critics of the technology usually focus on the potential dangers, as shown by the first fatality involving a semi-autonomous car (a Tesla operating on Autopilot) in May 2016.

But what of the reverse? Could the widespread introduction of driverless road vehicles - once the public is convinced of their superior safety attributes - be opposed by authorities or multinational corporations? After all, in 2016 almost 21% of drivers in the USA received a speeding ticket, generating enormous revenue. Exact figures for these fines are unknown, but estimates for annual totals usually centre around six billion dollars. In addition to the fines themselves adding to national or local government coffers (for all sorts of traffic misdemeanours including parking offences), insurance companies benefit from the increase in premiums for drivers with convictions.
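As a back-of-the-envelope check on those figures, the implied average fine is surprisingly modest. The sketch below assumes roughly 220 million licensed US drivers - a number that doesn't appear above and is only an approximation - and takes the 21% and six-billion-dollar figures at face value:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed input: ~220 million licensed US drivers (my estimate, not
       from the post). The 21% ticketed rate and $6bn annual total are
       the figures quoted above. */
    double drivers       = 220e6;
    double ticketed_rate = 0.21;
    double total_fines   = 6e9;

    double tickets      = drivers * ticketed_rate;   /* ~46 million tickets */
    double average_fine = total_fines / tickets;     /* ~$130 per ticket    */

    printf("Tickets per year : %.0f\n", tickets);
    printf("Average fine     : $%.0f\n", average_fine);
    return 0;
}
```

In other words, even a modest average fine of around $130 adds up to an income stream that someone, somewhere, would miss.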

Whether vested interests would find the economic losses suitably offset by the prevention of thousands of deaths due to driver error remains to be seen. This stance might seem unjustly anti-corporate, but when the past half-century's history of private profit ahead of public interest is examined (for example, the millions paid by the fossil fuel and tobacco industries to support their products) there are obvious precedents.

One key scientific principle is parsimony, a.k.a. Occam's razor. According to this principle, the simplest explanation is usually the correct one, at least in classical science; quantum mechanics plays by its own rules. A counter-example to this line of thought can be seen in the work of the statistician, geneticist and tobacco industry spokesman R.A. Fisher, a keen pipe smoker who argued that rather than a cause-and-effect relationship between smoking and lung cancer, there was a more complicated correlation among people who were both genetically susceptible to lung disease and hereditarily predisposed to nicotine addiction! Cigarette, anyone?
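Fisher's argument is essentially one about confounding, and it is easy to see how it could work in principle: give a hidden trait a hand in both the urge to smoke and the susceptibility to lung disease, and the two will appear correlated even if neither causes the other. The toy simulation below makes the point - the probabilities are invented purely for illustration and bear no relation to real epidemiology:

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy illustration of Fisher's confounding argument: a hidden trait
   raises both the probability of smoking and of lung disease, so the
   two appear correlated even though neither causes the other here.
   All probabilities are invented for the example. */

static int bernoulli(double p) { return ((double)rand() / RAND_MAX) < p; }

int main(void)
{
    const int N = 1000000;
    long smokers = 0, sick_smokers = 0;
    long nonsmokers = 0, sick_nonsmokers = 0;

    srand(42);
    for (int i = 0; i < N; i++) {
        int gene   = bernoulli(0.30);                  /* hidden common cause    */
        int smokes = bernoulli(gene ? 0.80 : 0.20);    /* gene -> likes smoking  */
        int sick   = bernoulli(gene ? 0.30 : 0.05);    /* gene -> lung disease   */

        if (smokes) { smokers++;    sick_smokers    += sick; }
        else        { nonsmokers++; sick_nonsmokers += sick; }
    }

    printf("Disease rate, smokers    : %.3f\n", (double)sick_smokers / smokers);
    printf("Disease rate, non-smokers: %.3f\n", (double)sick_nonsmokers / nonsmokers);
    return 0;
}
```

Run it and the smokers show roughly three times the disease rate of the non-smokers, despite smoking having no causal effect whatsoever in the model - which is exactly why Fisher's alternative could not be dismissed on the statistics alone, even though it turned out to be wrong.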

As for relinquishing the steering wheel to a machine, I think that a fair proportion of the public enjoy the 'freedom' of driving and that a larger contingent than just boy racers won't give up manual control without a fight, i.e. state intervention will be required to put safety ahead of individuality.

2) Extending human lifespan

It might seem odd that anyone would want to oppose technology that could increase longevity, but there would have to be some fairly fundamental changes to society to accommodate anything beyond the most moderate of extended lifespans. According to a 2009 report in The Lancet medical journal, about half of all children born since 2000 could reach their hundredth birthday.

Various reports state that from 2030 to 2050 - about as far into the future as anyone can offer realistic prognostication - the growing proportion of retirees, including far greater numbers of Alzheimer's and dementia sufferers, will require many times more geriatricians than are practising today. The ratio of working-age population to retirees will also drop, from 5:1 to 3:1 in the case of the USA, implying a far greater pensions crisis than the one already looming. Numerous companies are using cutting-edge biotech to find cell renewal techniques, including the fifteen teams racing for the Palo Alto Longevity Prize, so the chances of a breakthrough are fairly high.
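To see why that ratio matters, treat pensions as a crude pay-as-you-go transfer in which a fixed benefit per retiree is split evenly among the workers supporting them - a deliberate oversimplification, with only the 5:1 and 3:1 ratios taken from the reports above and the benefit amount entirely arbitrary:

```c
#include <stdio.h>

int main(void)
{
    /* Oversimplified pay-as-you-go arithmetic: a fixed benefit per
       retiree is split evenly among the workers supporting them. Only
       the 5:1 and 3:1 ratios come from the post; the benefit is arbitrary. */
    double benefit = 1000.0;             /* arbitrary benefit per retiree   */
    double now     = benefit / 5.0;      /* contribution per worker at 5:1  */
    double later   = benefit / 3.0;      /* contribution per worker at 3:1  */

    printf("Per-worker cost at 5:1 : %.2f\n", now);
    printf("Per-worker cost at 3:1 : %.2f\n", later);
    printf("Increase               : %.0f%%\n", 100.0 * (later / now - 1.0));  /* ~67% */
    return 0;
}
```

Even in this stripped-down model, each worker's share rises by roughly two-thirds before healthcare costs are even considered.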

Japan offers a hint of how developed nations will alter once extended lifespans are available on a widespread basis: one-third of its population is over sixty and one in eight over seventy-five. In 2016 its public debt was more than double its GDP, and Japan also faces low labour productivity compared to other nations within the OECD. Figures such as these show that governments will find it economically challenging to support the corresponding population demographics, even if many of the healthcare issues usually associated with the elderly are diminished.

However, unlike driverless cars it's difficult to conceive of arguments in favour of legislation to prevent extended lifespans. If all nations achieved equilibrium in economy, technology and demographics there would be far fewer issues, but the gap between developed and developing nations is wide enough to deem that unlikely for many decades.

Discussions around quality of life for the elderly will presumably become more prominent as the age group grows as a proportion of the electorate. There are already various types of companion robot for those living alone, shaped like anything from cats to bears to anthropomorphic designs such as the French Buddy and German Care-O-bot, the latter to my mind resembling a giant, mobile chess piece.

3) Artificial intelligence

I've already looked at international attitudes to the expansion and development of AI, but if there's one thing most reports discuss, it is the loss of jobs to even semi-intelligent machines. Even if there is a lower proportion of younger people, there will still be a need to keep the populace engaged, constructively or otherwise.

Surveys suggest that far from reducing working hours, information technology has caused employees in developed nations to spend more time outside work still working. For example, over half of all American and British employees now check their work email while on holiday. Will governments therefore be able to fund and organise replacement activities for an obsolete workforce, involving, for example, lifelong learning and job sharing?

The old adage about idle hands rings true, and unlike during the Great Depression, the sophistication of modern technology doesn't allow for the commissioning of large-scale infrastructure projects utilising an unskilled labour pool. Granted, AI will generate new jobs in novel specialisms, but these will be a drop in the ocean compared to the lost roles. So far, the internet and home computing have created work, frequently in areas largely unpredicted by futurists, but it seems doubtful the trend will continue once heuristic machines and the Internet of Things become commonplace.

So is it possible that governments will interfere with the implementation of cutting-edge technology in order to preserve the status quo, at least until the impending paradigm shift becomes manageable? I could include other examples, but many are developments that are more likely to incur the annoyance of certain industries rather than of governments or societies as a whole. One of the prominent examples used for the upcoming Internet of Things is the smart fridge, which would presumably reduce grocery wastage - and therefore lower sales - via its cataloguing of use-by dates.

Also, if people can buy cheap (or dare I mention pirated?) plans for 3D printing at home, they won't have to repeatedly pay for physical goods, plus in some cases their delivery costs. Designs currently available for printing items for use around the home and garage range from soap dishes and measuring cups to flower vases and car windscreen ice scrapers. It therefore seems likely that a lot of companies producing less sophisticated household goods are in for a sticky future as 3D printers become ubiquitous.

If these examples prove anything, it's that scientific advances cannot be treated in isolation when they have the potential for direct implementation in the real world. It's also difficult to predict how a technology developed for a single purpose can end up being co-opted into wholly different sectors, as happened with ferrofluids, designed to pump rocket fuel in the 1960s and now used in kinetic sculptures and toys. I've previously discussed the problems of attempting to predict upcoming technology and its implementation, and as such would suggest that even if an area of technological progress follows some sort of predictable development, the wider society that encapsulates it may not be ready for it.

It may not be future shock per se, but there are vested interests who like things just the way they are - certain technology may simply be too good for the public. Did someone mention how much the fossil fuel industry has spent denying man-made climate change? Or could it be time to consider Occam's razor again?

Tuesday, 12 December 2017

Robotic AI: key to utopia or instrument of Armageddon?

Recent surveys around the world suggest the public feel they don't receive enough science and non-consumer technology news in a format they can readily understand. Despite this, one area of STEM that does capture the public imagination is the development of self-aware robots, about which concern is ever growing. Perhaps Hollywood is to blame. Although there is a range of well-known cute robot characters, from WALL-E to BB-8 (both surely designed with a firm eye on the toy market), Ex Machina's Ava and the synthetic humans of the Blade Runner sequel appear to be shaping our suspicious attitudes towards androids far more than real-life projects are.

Then again, the idea of thinking mechanisms and the fears they bring out in us organic machines has been around far longer than Hollywood. In 1863 the English novelist Samuel Butler wrote an article entitled Darwin among the Machines, wherein he recommended the destruction of all mechanical devices since they would one day surpass and likely enslave mankind. So perhaps the anxiety runs deeper than our modern technocratic society. It would be interesting to see - if such concepts could be explained to them - whether an Amazonian tribe would rate intelligent, autonomous devices as dangerous. Could it be that it is the humanoid shape that we fear rather than the new technology, since R2-D2 and co. are much-loved, whereas the non-mechanical Golem of Prague and Frankenstein's monster are pioneering examples of anthropoid-shaped violence?

Looking in more detail, this apprehension appears to be split into two separate concerns:

  1. How will humans fare in a world where we are not the only species at our level of consciousness - or possibly even the most intelligent?
  2. Will our artificial offspring deserve or receive the same rights as humans - or even some animals (i.e. appropriate to their level of consciousness)?

1) Utopia, dystopia, or somewhere in the middle?

The development of artificial intelligence has had a long and tortuous history, with the top-down and bottom-up approaches (plus everything in between) still falling short of the hype. Robots as mobile mechanisms, however, have recently begun to catch up with fiction, gaining complete autonomy in both two- and four-legged varieties. Humanoid robots and their three principal behavioural laws have been popularised since 1950 via Isaac Asimov's I, Robot collection of short stories. In addition, fiction has presented many instances of self-aware computers with non-mobile extensions into the physical world. In both types of entity, unexpected programming loopholes prove detrimental to their human collaborators. Prominent examples include HAL 9000 in 2001: A Space Odyssey and VIKI in the Asimov-inspired feature film I, Robot. That these decidedly non-anthropomorphic machines have been promoted in dystopian fiction runs counter to the idea above concerning humanoid shapes - could it be instead that it is a human-like personality that is the deciding fear factor?

Although similar attitudes might be expected of a public with limited knowledge of the latest science and technology (except where given the gee-whiz or Luddite treatment by the right-of-centre tabloid press), some famous scientists and technology entrepreneurs have also expressed doubts and concerns. Stephen Hawking, who appears to be getting negative about a lot of things in his old age, has called for comprehensive controls around sentient robots and artificial intelligence in general. His fear is that we may miss something when coding safeguards, leading to our unintentional destruction. This is reminiscent of HAL 9000, who became stuck in a Moebius loop after being given instructions counter to his primary programming.

Politics and economics are also a cause for concern in this area. A few months ago, SpaceX and Tesla's Elon Musk stated that global conflict is the almost inevitable outcome of nations attempting to gain primacy in the development of AI and intelligent robots. Both Mark Zuckerberg and Bill Gates promote the opposite opinion, with the latter claiming such machines will free up more of humanity - and finances - for work that requires empathy and other complex emotional responses, such as education and care for the elderly.

All in all, there appears to be a very mixed bag of responses from sci-tech royalty. However, Musk's case may not be completely wrong: Vladimir Putin recently stated that the nation that leads in AI will rule the world. Although China, the USA and India may be leading the race to develop the technology, Russia is prominent amongst the countries engaged in sophisticated industrial espionage. It may sound too much like James Bond, but clearly the dark side of international competition should not be underestimated.

There is a chance that attitudes are beginning to change in some nations, at least among those who work in the most IT-savvy professions. An online survey across the Asia-Pacific region in October and November this year compiled some interesting statistics. In New Zealand and Australia only 8% of office professionals expressed serious concern about the potential impact of AI. However, this was in stark contrast to China, where 41% of interviewees claimed they were extremely concerned. India lay between these two groups at 18%. One factor these four countries had in common was very high interest in the use of artificial intelligence to free humans from mundane tasks, with the figures varying from 87% to 98%.

Talking of which, if robots do take on more and more jobs, what will everyone do? Most people just aren't temperamentally suited to the teaching or caring professions, so could it be that those who previously did repetitive, low-initiative tasks will be relegated to a life of enforced leisure? This appears reminiscent of the far-future, human-descended Eloi encountered by the Time Traveller in H.G. Wells' The Time Machine; some wags might say that you only have to look at a small sample of celebrity culture and social media to see that this has already happened...

Robots were once restricted to either the factory or the cinema screen, but now they are becoming integrated into other areas of society. In June this year Dubai introduced a wheeled robot policeman onto its streets, with the intention of making one quarter of the police force equally mechanical by 2030. It seems to be the case that wherever there's the potential to replace a human with a machine, at some point soon a robot will be trialling that role.

2) Robot rights or heartless humans?

Hanson Robotics' Sophia gained international fame when Saudi Arabia made her the world's first silicon citizen. Now a person in her own right, Sophia is usually referred to as 'she' rather than 'it' - or at least as a 'female robot' - and she has professed the desire to have children. But would switching her off constitute murder? So far, her general level of intelligence (as opposed to specific skills) varies widely, so she's unlikely to pass the Turing test in most subjects. One thing is for certain: for an audience used to the androids of the Westworld TV series or Blade Runner 2049, Sophia is more akin to a clunky toy.

However, what's interesting here is not so much Sophia's level of sophistication as the human response to her and other contemporary human-like machines. The British tabloid press have perhaps somewhat predictably decided that the notion of robots as individuals is 'bonkers', following appeals to give rights to sexbots - who are presumably well down the intellectual chain from the cutting edge of Sophia. However, researchers at the Massachusetts Institute of Technology and officers in the US military have shown aversion to causing damage to their robots, which in the case of the latter was termed 'inhumane'. This is thought-provoking since the army's tracked robot in question bore far greater resemblance to WALL-E than to a human being.

A few months ago I attended a talk given by New Zealand company Soul Machines, which featured a real-time chat with Rachel, one of their 'emotionally intelligent digital humans'. Admittedly Rachel is entirely virtual, but her ability to respond to words (both their meaning and the tone in which they are said) as well as to physical and facial gestures presented an uncanny facsimile of human behaviour. Rachel is a later version of the AI software that was first showcased in BabyX, who easily generated feelings of sympathy when she became distraught. BabyX is perhaps the first proof that we are well on the way to creating a real-life version of David, the child android in Spielberg's A.I. Artificial Intelligence; robots may soon be able to generate powerful, positive emotions in us.

Whilst Soul Machines' work is entirely virtual, the mechanical shell of Sophia and other less intelligent bipedal robots shows that the physical problem of subtle, independent movement has been almost solved. This raises the question: when Soul Machines' 'computational model of consciousness' is fully realised, will we have any choice but to extend human rights to such entities, regardless of whether they have mechanical bodies or exist only on a computer screen?

To some extent, Philip K. Dick's intention in Do Androids Dream of Electric Sheep? to show that robots will always be inferior to humans due to their facsimile emotions was reversed by Blade Runner and its sequel. Despite their actions, we felt sorry for the replicants since, although they were capable of both rational thought and human-like feelings, they were treated as slaves. The Blade Runner films, along with the Cylons of the Battlestar Galactica reboot, suggest that it is in our best interest to discuss robot rights sooner rather than later, both to prevent the return of slavery (albeit of an organic variety) and to limit a prospective AI revolution. It might sound glib, but any overly rational self-aware machine might consider itself the second-hand product of natural selection and therefore the successor of humanity. If that is the case, then what does one do with an inferior predecessor that is holding it back from its true potential?

One thing is for certain: AI and robotics research is unlikely to slow down any time soon. China is thought to be on the verge of catching up with the USA, whilst an Accenture report last year suggested that within the next two decades the implementation of such research could add hundreds of billions of dollars to the economies of participating nations. Perhaps for peace of mind AI manufacturers should follow the suggestion of a European Union draft report from May 2016, which recommended an opt-out mechanism - a euphemistic name for a kill switch - to be installed in all self-aware entities. What with human fallibility and all, isn't there a slight chance that a loophole could be found in Asimov's Three Laws of Robotics, after which we would find out whether we have created partners or successors...?
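For what it's worth, the software half of such an 'opt-out mechanism' needn't be exotic. Here is a toy sketch - my own illustration, not anything taken from the EU report - of the familiar pattern: a control loop that checks an externally settable flag before each unit of work and disengages the moment the flag is raised. A real safety interlock would, of course, live in hardware rather than in the software it is meant to police.

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Toy 'kill switch' pattern: the control loop does nothing further
   without first checking a flag that an external party (here, Ctrl-C /
   SIGINT) can set at any time. Purely illustrative. */

static volatile sig_atomic_t shutdown_requested = 0;

static void request_shutdown(int sig) { (void)sig; shutdown_requested = 1; }

int main(void)
{
    signal(SIGINT, request_shutdown);   /* the external 'opt-out' channel */

    while (!shutdown_requested) {
        puts("agent: performing one bounded unit of work");
        sleep(1);                        /* stand-in for the actual task  */
    }

    puts("agent: kill switch triggered, actuators disengaged");
    return 0;
}
```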

Tuesday, 7 March 2017

Wrangling robots: encouraging engineers of the next generation

On hearing my daughters regale me with some of the activities and technology at their school, I frequently lament, 'I wish we had that when I was their age'. I was lucky enough as it was for the early 1980s; for example, my school year was the first to actually get computers in the computer science classroom!

But enough of the trip down memory lane. The British Government has recently announced that it is pledging over £17 million towards robotics and artificial intelligence (AI) research in universities. Of course the drive behind this is as much economic as a love of STEM: Accenture's 2016 report Why Artificial Intelligence is the Future of Growth states that AI could contribute up to £654 billion to the UK economy by 2035, if comprehensively integrated into industry and society. Sectors utilising cutting-edge technology such as pharmaceuticals and aerospace will be able to grow markedly thanks to AI and robotics, so now is indeed a great time for children to learn the necessary core skills.

New Zealand too is determined not to be left behind in the development of such technology, which it is hoped will create new jobs whilst stimulating economic growth. One programme aimed in this direction is Kiwibots, home to New Zealand's contenders for the annual Vex Robotics World Championship. This is the largest international robotics competition, with over thirty nations taking part this year. New Zealand's national finals recently took place at Massey University in Albany, north of Auckland. The winning teams have been announced and among those qualifying for the World Championship in Kentucky next month is one from an all-girls school, which is great news.

My daughters attend another all-girls school that competed in the national championships, giving me the opportunity to examine one of their robots in person. Vex EDR primarily consists of metal components, including perforated strips reminiscent of the Meccano toy building system I had as a child - and indeed the construction techniques are not dissimilar - although EDR incorporates battery-driven motors and elastic band 'muscles'. EDR is aimed at senior/high school students, but primary/elementary and intermediate schools are not left out, thanks to the mostly plastic-built Vex IQ system, which is closer to the Lego Mindstorms/Technic ranges.


Vex EDR robot

Vex EDR robots can be either wheeled or tracked and include towers and arms with manipulators. They can be remote controlled or programmed using ROBOTC, a C-based programming language: the students get to be not only engineers but computer programmers too. Younger roboteers can use a drag-and-drop interface to assemble code, whilst older ones may write and test code using an editor. To aid code writing, Robot Virtual Worlds provides, as the name suggests, a simulated environment for testing virtual robots, even including an underwater scenario (which is obviously not achievable with the real thing)!
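For a flavour of what the students actually write, here is a minimal ROBOTC-style sketch (ROBOTC being C-based). The motor port numbers and joystick channels are my assumptions and would depend on how a given team has wired and configured its robot; real competition code is considerably more involved.

```c
/* Minimal ROBOTC-style sketch for a two-motor VEX EDR drive base.
   Assumes the left and right drive motors are plugged into ports 2
   and 3 respectively - adjust to match the actual robot's wiring. */
task main()
{
    // Autonomous snippet: drive forward for two seconds, then stop.
    motor[port2] = 100;        // left drive motor (values run -127..127)
    motor[port3] = 100;        // right drive motor
    wait1Msec(2000);           // keep driving for 2000 ms
    motor[port2] = 0;          // stop both motors
    motor[port3] = 0;

    // Driver-control snippet: simple tank drive from the joystick.
    while (true)
    {
        motor[port2] = vexRT[Ch3];   // left stick (vertical) drives the left side
        motor[port3] = vexRT[Ch2];   // right stick (vertical) drives the right side
    }
}
```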

To encourage more girls to participate in the traditionally male world of engineering, the Robotics Education and Competition Foundation has created Girl Powered, a series of challenges for EDR and IQ systems.

In addition to learning specific technical skills, the experience can generate enthusiasm for STEM subjects - after all, it's rather more exciting than most school lessons - whilst providing practice in general skills such as collaboration and problem-solving. The creativity and teamwork involved in Vex robotics show that some elements of science and engineering are not overly difficult, abstractly mathematical or plain boring. When I was an onlooker at the national finals, the looks of tension and joy on the roboteers' faces said it all.

As Vex themselves state: Think. Create. Build. Amaze.

What better way could there be to encourage children towards STEM careers, especially when AI and robotics will undoubtedly play an ever more important role in the coming decades?