Tuesday, 12 December 2017

Robotic AI: key to utopia or instrument of Armageddon?

Recent surveys around the world suggest the public feel they don't receive enough science and non-consumer technology news in a format they can readily understand. Despite this, one area of STEM does capture the public imagination: an ever-growing concern over the development of self-aware robots. Perhaps Hollywood is to blame. Although there is a range of well-known cute robot characters, from WALL-E to BB-8 (both surely designed with a firm eye on the toy market), Ex Machina's Ava and the synthetic humans of the Blade Runner sequel appear to be shaping our suspicious attitudes towards androids far more than real-life projects are.

Then again, the idea of thinking mechanisms and the fears they bring out in us organic machines has been around far longer than Hollywood. In 1863 the English novelist Samuel Butler wrote an article entitled Darwin among the Machines, wherein he recommended the destruction of all mechanical devices since they would one day surpass and likely enslave mankind. So perhaps the anxiety runs deeper than our modern technocratic society. It would be interesting to see - if such concepts could be explained to them - whether an Amazonian tribe would rate intelligent, autonomous devices as dangerous. Could it be that it is the humanoid shape that we fear rather than the new technology, since R2-D2 and co. are much-loved, whereas the non-mechanical Golem of Prague and Frankenstein's monster are pioneering examples of anthropoid-shaped violence?

Looking in more detail, this apprehension appears to be split into two separate concerns:

  1. How will humans fare in a world where we are not the only species at our level of consciousness - or possibly even the most intelligent?
  2. Will our artificial offspring deserve or receive the same rights as humans - or even some animals (i.e. appropriate to their level of consciousness)?

1) Utopia, dystopia, or somewhere in the middle?

The development of artificial intelligence has had a long and tortuous history, with the top-down and bottom-up approaches (plus everything in between) still falling short of the hype. Robots as mobile mechanisms, however, have recently begun to catch up with fiction, gaining complete autonomy in both two- and four-legged varieties. Humanoid robots and their three principal behavioural laws have been popularised since 1950 via Isaac Asimov's I, Robot collection of short stories. In addition, fiction has presented many instances of self-aware computers with non-mobile extensions into the physical world. In both types of entity, unexpected programming loopholes prove detrimental to their human collaborators. Prominent examples include HAL 9000 in 2001: A Space Odyssey and VIKI in the Asimov-inspired film I, Robot. That these decidedly non-anthropomorphic machines have been promoted in dystopian fiction runs counter to the idea above concerning humanoid shapes - could it be instead that it is a human-like personality that is the deciding fear factor?

Although similar attitudes might be expected of a public with limited knowledge of the latest science and technology (except where given the gee-whiz or Luddite treatment by the right-of-centre tabloid press) some famous scientists and technology entrepreneurs have also expressed doubts and concerns. Stephen Hawking, who appears to be getting negative about a lot of things in his old age, has called for comprehensive controls around sentient robots and artificial intelligence in general. His fears are that we may miss something when coding safeguards, leading to our unintentional destruction. This is reminiscent of HAL 9000, who became stuck in a Moebius loop after being given instructions counter to his primary programming.

Politics and economics are also a cause for concern in this area. A few months ago, SpaceX and Tesla's Elon Musk stated that global conflict is the almost inevitable outcome of nations attempting to gain primacy in the development of AI and intelligent robots. Both Mark Zuckerberg and Bill Gates promote the opposite opinion, with the latter claiming such machines will free up more of humanity - and finances - for work that requires empathy and other complex emotional responses, such as education and care for the elderly.

All in all, there appears to be a very mixed bag of responses from sci-tech royalty. However, Musk's case may not be completely wrong: Vladimir Putin recently stated that the nation that leads in AI will rule the world. Although China, the USA and India may be leading the race to develop the technology, Russia is prominent amongst the countries engaged in sophisticated industrial espionage. It may sound too much like James Bond, but clearly the dark side of international competition should not be underestimated.

There is a chance that attitudes are beginning to change in some nations, at least for those who work in the most IT-savvy professions. An online survey across the Asia-Pacific region in October and November this year produced some interesting statistics. In New Zealand and Australia only 8% of office professionals expressed serious concern about the potential impact of AI. However, this was in stark contrast to China, where 41% of interviewees claimed they were extremely concerned. India lay between these two groups at 18%. One factor these four countries had in common was the very high interest in the use of artificial intelligence to free humans from mundane tasks, with the figures here varying from 87% to 98%.

Talking of which, if robots do take on more and more jobs, what will everyone do? Most people just aren't temperamentally suited to the teaching or caring professions, so could it be that those who previously did repetitive, low-initiative tasks will be relegated to a life of enforced leisure? This appears reminiscent of the far-future, human-descended Eloi encountered by the Time Traveller in H.G. Wells' The Time Machine; some wags might say that you only have to look at a small sample of celebrity culture and social media to see that this has already happened...

Robots were once restricted to either the factory or the cinema screen, but now they are becoming integrated into other areas of society. In June this year Dubai introduced a wheeled robot policeman onto its streets, with the intention of making one quarter of the police force equally mechanical by 2030. It seems to be the case that wherever there's the potential to replace a human with a machine, at some point soon a robot will be trialling that role.

2) Robot rights or heartless humans?

Hanson Robotics' Sophia gained international fame when Saudi Arabia made her the world's first silicon citizen. A person in her own right, Sophia is usually referred to as 'she' rather than 'it' - or at least as a 'female robot' - and one who has professed the desire to have children. But would switching her off constitute murder? So far, her general level of intelligence (as opposed to specific skills) varies widely, so she's unlikely to pass the Turing test in most subjects. One thing is for certain: for an audience used to the androids of the Westworld TV series or Blade Runner 2049, Sophia is more akin to a clunky toy.

However, what's interesting here is not so much Sophia's level of sophistication as the human response to her and other contemporary human-like machines. The British tabloid press have perhaps somewhat predictably decided that the notion of robots as individuals is 'bonkers', following appeals to give rights to sexbots - who are presumably well down the intellectual chain from the cutting edge of Sophia. However, researchers at the Massachusetts Institute of Technology and officers in the US military have shown aversion to causing damage to their robots, which in the case of the latter was termed 'inhumane'. This is thought-provoking since the army's tracked robot in question bore far greater resemblance to WALL-E than to a human being.

A few months ago I attended a talk given by New Zealand company Soul Machines, which featured a real-time chat with Rachel, one of their 'emotionally intelligent digital humans'. Admittedly Rachel is entirely virtual, but her ability to respond to words (both their meaning and the tone in which they are said) and to physical and facial gestures presented an uncanny facsimile of human behaviour. Rachel is a later version of the AI software first showcased in BabyX, who easily generated feelings of sympathy when she became distraught. BabyX is perhaps the first proof that we are well on the way to creating a real-life version of David, the child android in Spielberg's A.I. Artificial Intelligence; robots may soon be able to generate powerful, positive emotions in us.

Whilst Soul Machines' work is entirely virtual, the mechanical shell of Sophia and other less intelligent bipedal robots shows that the physical problem of subtle, independent movement has been almost solved. This raises the question: when Soul Machines' 'computational model of consciousness' is fully realised, will we have any choice but to extend human rights to such entities, regardless of whether they have mechanical bodies or only exist on a computer screen?

To some extent, Philip K. Dick's intention in Do Androids Dream of Electric Sheep? to show that robots will always be inferior to humans due to their facsimile emotions was reversed by Blade Runner and its sequel. Despite their actions, we felt sorry for the replicants since although they were capable of both rational thought and human-like feelings, they were treated as slaves. The Blade Runner films, along with the Cylons of the Battlestar Galactica reboot, suggest that it is in our best interest to discuss robot rights sooner rather than later, both to prevent the return of slavery (albeit of a non-organic variety) and to limit a prospective AI revolution. It might sound glib, but any overly-rational self-aware machine might consider itself the second-hand product of natural selection and therefore the successor of humanity. If that is the case, then what does one do with an inferior predecessor that is holding it back from its true potential?

One thing for certain is that AI robot research is unlikely to be slowing down any time soon. China is thought to be on the verge of catching up with the USA whilst an Accenture report last year suggested that within the next two decades the implementation of such research could add hundreds of billions of dollars to the economies of participating nations. Perhaps for peace of mind AI manufacturers should follow the suggestion of a European Union draft report from May 2016, which recommended an opt-out mechanism, a euphemistic name for a kill switch, to be installed in all self-aware entities. What with human fallibility and all, isn't there a slight chance that a loophole could be found in Asimov's Three Laws of Robotics, after which we find out if we have created partners or successors..?

Tuesday, 28 November 2017

Research without borders: why international cooperation is good for STEM

I've just finished reading Bryan Sykes' (okay, I know he's a bit controversial) The Seven Daughters of Eve, about the development of mitochondrial DNA research for population genetics. One chapter mentioned Dr Sykes' discovery of the parallel work of Hans-Jürgen Bandelt, whose Mathematics Genealogy Project provided a structure diagram perfectly suited to explaining Sykes' own evolutionary branching results. This discovery occurred largely by chance, suggesting that small research groups must either rely on serendipity or keep up with the latest professional papers in order to find other teams whose work might be useful.

This implies that the more international the character of scientific and technological research, the more likely there will be such fortuitous occurrences. Britain's tortuous path out of the European Union has led various organisations on both sides of the Channel to claim that this can only damage British STEM research. The Francis Crick Institute, a London-based biomedical research centre that opened last year, has staff originating from over seventy nations. This size and type of establishment cannot possibly rely on being supplied with researchers from just one nation. Yet EU scientists resident in Britain have felt 'less welcome' since the Brexit referendum, implying a potential loss of expertise in the event of a mass withdrawal.

In recent years, European Union research donations to the UK have exceeded Britain's own contributions by £3 billion, meaning that the additional £300 million newly announced for research and development over the coming four years is only ten percent of what the EU has provided - and the UK Government is clearly looking to the private sector to make up the shortfall. It should also be recognised that although there are high numbers of non-British nationals working in Britain's STEM sector, the country also has a fair number of its own STEM professionals working overseas in EU nations.
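The ten percent figure quoted above is easy to verify; here is a minimal back-of-the-envelope sketch using only the two sums given in the text:

```python
# The EU's research funding to the UK has exceeded Britain's own
# contributions by £3 billion; the newly announced domestic R&D
# funding is £300 million over four years (both figures as quoted).
eu_surplus = 3_000_000_000   # £3 billion
new_funding = 300_000_000    # £300 million

share = new_funding / eu_surplus * 100
print(f"The new funding amounts to {share:.0f}% of the EU surplus")
```

As the text says, the new money covers only a tenth of the EU contribution it would need to replace.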

The United Kingdom is home to highly expensive, long-term projects that require overseas funding and expertise, including the Oxfordshire-based Joint European Torus nuclear fusion facility. British funding and staff also contribute to numerous big-budget international projects, from the EU-driven Copernicus Earth observation satellite programme to the non-EU CERN. The latter is best-known for the Large Hadron Collider, the occasional research home of physicist and media star Brian Cox (how does he find the time?) and involves twenty-two key nations plus researchers from more than eighty other countries. Despite the intention to stay involved in at least the non-EU projects, surveys suggest that post-Brexit there will be greater numbers of British STEM professionals moving abroad. Indeed, in the past year some American institutions have actively pursued the notion of recruiting more British scientists and engineers.

Of course, the UK is far from unique in being involved in so many projects requiring international cooperation. Thirty nations are collaborating on the US-based Deep Underground Neutrino Experiment (DUNE); the recently-successful Laser Interferometer Gravitational-Wave Observatory (LIGO) involves staff from eighteen countries; and the Square Kilometre Array radio telescope project utilises researchers of more than twenty nationalities. Although the USA has a large population when compared to European nations, one report from 2004 states that approaching half of US physicists were born overseas. Clearly, these projects are deeply indebted to non-nationals.

It isn't just STEM professionals that rely on journeying cross-border, either. Foreign science and technology students make up considerable percentages in some developed countries: in recent years, over 25% of the USA's STEM graduate students and even higher numbers of its master's degree and doctorate students were not born there. Canada, Australia, New Zealand and several European countries have similar statistics, with Indian and Chinese students making up a large proportion of those studying abroad.

As a small nation with severely limited resources for research, New Zealand does extremely well out of the financial contributions from foreign students. Each PhD student spends an average of NZ$175,000 on fees and living costs, never mind additional revenue from the likes of family holidays, so clearly the economics alone make sense. Non-nationals can also introduce new perspectives and different approaches, potentially lessening inflexibility due to cultural mindsets. In recent years, two New Zealand-based scientists, microbiologist Dr Siouxsie Wiles and nanotechnologist Dr Michelle Dickinson (A.K.A. Nanogirl) have risen to prominence thanks to their fantastic science communication work, including with children. Both were born in the UK, but New Zealand sci-comm would be substantially poorer without their efforts. Could it be that their sense of perspective homed in on a need that locally-raised scientists failed to recognise?

This combination of open borders for STEM professionals and international collaboration on expensive projects proves, if anything, that science cannot be separated from society as a whole. Publicly-funded research requires not only a government willing to see beyond its short-term spell in office but also a level of state education that satisfies the general populace as to why public money should be granted for such undertakings. Whilst I have previously discussed the issues surrounding the use of state funding for mega-budget research with no obvious practical application, the merits of each project should still be discussed on an individual basis. In addition, as a rule of thumb, it seems that the larger the project, the higher the percentage of non-nationals required to staff it.

The anti-Brexit views of prominent British scientists such as Brian Cox and the Astronomer Royal, Lord Rees of Ludlow, are well known. Let's just hope that the rising xenophobia and anti-immigration feeling behind Brexit doesn't turn it into a 'brain exit'. There's been enough of that already and no nation - not even the USA - has enough brain power or funding to go it alone on the projects that really need prompt attention (in case you're in any doubt, alternative energy sources and climate change mitigation spring to mind). Shortly before the Brexit referendum, Professor Stephen Hawking said: "Gone are the days when we could stand on our own, against the world. We need to be part of a larger group of nations." Well if that's not obvious, I don't know what is!

Thursday, 9 November 2017

Wonders of Creation: explaining the universe with Brian Cox and Robin Ince

As Carl Sagan once said: "if you wish to make an apple pie from scratch, you must first invent the universe." A few nights ago, I went to what its promoters bill as 'the world's most successful and significant science show', which in just over two hours presented a delineation of the birth, history, and eventual death of the universe. In fact, it covered just about everything from primordial slime to the triumphs of the Cassini space probe, only lacking the apple pie itself.

The show in question is an evening with British physicist and presenter Professor Brian Cox. As a long-time fan of his BBC Radio show The Infinite Monkey Cage I was interested to see how the celebrity professor worked his sci-comm magic with a live audience. In addition to the good professor, his co-presenter on The Infinite Monkey Cage, the comedian Robin Ince, also appeared on stage. As such, I was intrigued to see how their combination of learned scientist and representative layman (or 'interested idiot' as he styles himself) would work in front of two thousand people.

I've previously discussed the trend for extremely expensive live shows featuring well-known scientists and (grumble-grumble) the tickets to Brian Cox were similarly priced to those for Neil deGrasse Tyson earlier this year. As usual, my friends and I went for the cheaper seats, although Auckland must have plenty of rich science fans, judging by the almost packed house (I did notice a few empty seats in the presumably most expensive front row). As with Professor Tyson, the most expensive tickets for this show included a meet and greet afterwards, at an eye-watering NZ$485!

When Cox asked if there were any scientists in the audience, there were very few cheers. I did notice several members of New Zealand's sci-comm elite, including Dr Michelle Dickinson, A.K.A. Nanogirl, who had met Ince on his previous Cosmic Shambles LIVE tour; perhaps the cost precluded many STEM professionals from attending. As I have said before, such inflated prices can easily lead to only dedicated fans attending, which is nothing less than preaching to the converted. In which case, it's more of a meet-the-celebrity event akin to a music concert than an attempt to spread the wonder - and rationality - of science.

So was I impressed? The opening music certainly generated some nostalgia for me, as it was taken from Brian Eno's soundtrack for the Al Reinert 1983 feature-length documentary on the Apollo lunar missions. Being almost the same age as Professor Cox, I confess to having bought the album on vinyl in my teens - and I still have it! Unlike Neil deGrasse Tyson's show, the Cox-Ince evening was an almost non-stop visual feast, with one giant screen portraying a range of photographs and diagrams, even a few videos. At times, the images almost appeared to be 3D, seemingly hanging out of the screen, with shots of the Earth and various planets and moons bulging onto the darkened stage. I have to admit to being extremely impressed with the visuals, even though I had seen some of them before. Highlights included the Hubble Space Telescope's famous Ultra-Deep Field of the earliest galaxies and the montage of the cosmic microwave background taken by the WMAP probe.

The evening (okay, let's call it a cosmology lecture with comic interludes) began as per Neil deGrasse Tyson with the age and scale of the universe, then progressed through galaxy formation and a few examples of known extra-solar planets. However, the material was also bang up to date, as it included the recent discoveries of gravitational waves at LIGO and the creation of heavy elements such as gold and platinum in neutron star collisions.

Our universe: a potted history (image: the evolution of the universe)

Professor Cox also took us through the future prospects of the solar system and the eventual heat death of the universe, generating a few "oohs" and "aahs" along the way. Interestingly, there was little explanation of dark matter and dark energy; perhaps it was deemed too speculative a topic to do it justice. Black holes had a generous amount of attention though, including Hawking radiation. Despite having an audience of primarily non-STEM professionals (admittedly, a show of hands found a large proportion of them to be The Infinite Monkey Cage listeners), a certain level of knowledge was presupposed and there was little attempt to explain the basics. Indeed, at one point an equation popped up - and it wasn't E=mc². How refreshing!

Talking of which, there was a brief rundown of Einstein's Special and General Theories of Relativity, followed by the latter's development into the hypothesis of the expanding universe and eventual proof of the Big Bang model. Einstein's Cosmological Constant and his initial dismissal of physicist-priest Georges Lemaître's work were given as examples that even the greatest scientists sometimes make mistakes, showing that science is not a set of inviolable truths that we can never improve upon (the Second Law of Thermodynamics excluded, of course). Lemaître was also held up to be an example of how science and religion can co-exist peacefully, in this case, within the same person.

Another strand, proving that Cox is indeed deeply indebted to Carl Sagan (aren't we all?) was his potted history of life on Earth, with reference to the possibility of microbial life on Mars, Europa and Enceladus. The lack of evidence for intelligent extra-terrestrials clearly bothers Brian Cox as much as it did Sagan. However, Cox appeared to retain his scientific impartiality, suggesting that - thanks to the 3.5 billion year plus gap between the origin of life and the evolution of multi-cellular organisms - intelligent species may be extremely rare.

For a fan of crewed space missions, Cox made surprisingly little mention of future space travel, concentrating instead on robotic probes such as Cassini. The Large Hadron Collider also didn't feature in any meaningful way, although an audience question about the danger of LHC-created black holes was put into perspective alongside the natural black holes that might be produced by cosmic ray interactions with the Earth's atmosphere: the latter's energies of up to 10^8 TeV (tera electron volts) far exceed anything the LHC can generate, and we've not been compressed to infinity yet.
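To put that energy comparison in rough numbers, here is a back-of-the-envelope sketch; note that the LHC's ~14 TeV design collision energy is my own added figure, not one given in the talk:

```python
# Comparing the highest-energy natural cosmic ray collisions (~10^8 TeV,
# as quoted in the text) with the LHC's design collision energy
# (~14 TeV - an assumed round figure).
cosmic_ray_tev = 1e8
lhc_tev = 14.0

ratio = cosmic_ray_tev / lhc_tev
print(f"Nature's collisions are ~{ratio:.0e} times more energetic than the LHC's")
```

On these figures, nature has been running particle collisions millions of times more violent than anything humans can engineer, which is the point of the reassurance.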

Robin Ince's contributions were largely restricted to short, if hilarious, segments, but he also made a passionate plea (there's no other word for it) on the readability of Charles Darwin and his relevance today. He discussed Darwin's earthworm experiments and made short work of the American evangelicals' "no Darwin equals no Hitler" nonsense, concluding with one of his best jokes: "no Pythagoras would mean no Toblerone".

One of the friends I went with admitted to learning little that was new but as stated earlier I really went to examine the sci-comm methods being used and their effect on the audience. Cox and Ince may have covered a lot of scientific ground but they were far from neglectful of the current state of our species and our environment. Various quotes from astronauts and the use of one of the 'pale blue dot' images of a distant Earth showed the intent to follow in Carl Sagan's footsteps and present the poetic wonder of the immensity of creation and the folly of our pathetic conflicts by comparison. The Cox-Ince combination is certainly a very effective one, as any listeners to The Infinite Monkey Cage will know. Other science communicators could do far worse than to follow their brand of no-nonsense lecturing punctuated by amusing interludes. As for me, I'm wondering whether to book tickets for Richard Dawkins and Lawrence Krauss in May next year. They are slightly cheaper than both Brian Cox and Neil deGrasse Tyson. Hmmm…

Saturday, 28 October 2017

Counting kereru: can public surveys and competitions aid New Zealand conservation?

Whilst some other countries - the UK, for example - have dozens of general and specialised wildlife surveys undertaken by members of the public, New Zealand has comparatively few. Whilst this might seem odd, considering the Kiwi penchant for the great outdoors (not to mention the little matter of the endangered status of so many native species) it should be remembered that the nation has a rather small (human) population. In addition, New Zealand is no different from other developed countries, wherein environmentalists often appear at loggerheads with rural landowners, especially farmers.

Since agriculture forms a fundamental component of the New Zealand economy, any anti-farming sentiment can quickly escalate into unpleasantness, as even a cursory look at agriculture versus environmentalists news stories will confirm. Farmers are often reported as resenting what they deem as unrealistic or uninformed opinions by wildlife campaigners. But lest farmers consider this particular post being yet another piece of anti-farming propaganda, it should be noted that campaigns are usually driven by a perceived need for action in the face of government inactivity: after all, New Zealand is second only to Hawaii in the number of introduced species, many of which are in direct competition with, or predate upon, native ones.

Talking of competitions, this year's Bird of the Year contest has just been won by the cheeky, intelligent kea, the world's only alpine parrot. Run by Forest and Bird* and now in its thirteenth year, it aims to raise publicity for the plight of New Zealand's native birds and the wider environment they rely upon. With over 50,000 votes cast, this means approximately 1% of New Zealand citizens and residents entered the competition (assuming of course that non-Kiwis didn't participate).
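The one percent participation claim is easy to check; a rough sketch follows, noting that the ~4.8 million population figure for 2017 is my own assumption, as the post gives only the vote count:

```python
# Roughly 50,000 Bird of the Year votes, set against New Zealand's
# population (approximately 4.8 million in 2017 - an assumed figure).
votes = 50_000
nz_population = 4_800_000

participation = votes / nz_population * 100
print(f"~{participation:.1f}% of the population voted")
```

Around one vote per hundred residents, exactly as the text suggests (assuming, of course, that non-Kiwis didn't participate).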

The international level of awareness about the competition seems to be on the increase too, with the kea's victory even being reported on the website of the UK's The Guardian newspaper, albeit in an article written by a New Zealand-based journalist. The competition doesn't appear to offer much to science, except perhaps the unsurprising observation that the public's fondness for particular wildlife species is based upon their aesthetic qualities, with drab birds, for example, getting less attention than colourful ones. Then again, perhaps Forest and Bird are more interested in spreading their message than in the results; as the old adage goes, there's no such thing as bad publicity. Indeed, the story of a Christchurch-based individual who tried to rig the vote in favour of the white-faced heron was reported by the BBC.

Another prominent example of the New Zealand public's involvement in environmental matters is the annual Garden Bird Survey, which began in 2007 and is run by the Government-owned Landcare Research. The organisers of this more obvious example of citizen science state that the results are used to analyse population trends for both native and introduced bird species and so aid pest control programmes. However, it would be difficult to ascertain the validity of the observations, since less than 0.3% of the nation's gardens (or rather their owners) participate.

Whilst 5000 entries might be considerably more than could be achieved by other means, there are probably all sorts of details that are missed with this level of coverage. I have participated for three years now and have found that my observations do not agree with the reported trends. For example, last year's results show that the silvereye, blackbird and song thrush have declined in my area, whereas I have not noticed any such drop-off for these birds - and it's not as if I particularly encourage the latter two (non-native) species.

A more specific example of bio-recording was last month's Great Kereru Count, which claims to be New Zealand's biggest citizen science project. Clearly, they don't consider the Bird of the Year competition as science! Various organisations run this survey, which gained around 7000 reports this year. There are also continuous monitoring schemes, such as for monarch butterflies (which is interesting, as this is a far-from-endangered, recently self-introduced creature) whilst NatureWatch NZ allows anyone to supply a record of a plant or animal species, or indeed to request identification of one. The latter might not sound particularly necessary, but judging by how little some New Zealanders seem to know about their own environment (for example I've met Kiwis who cannot identify such common organisms as a tree weta or cabbage trees) this resource is probably essential in understanding the spread of non-native species.

With native species protection in mind, there are other, more direct, citizen science projects in the country, with everything from the Great Kiwi Morning Tea fundraiser this month to allocation of funding for predator control tools and traps – including in urban gardens - via the independent trust Predator Free New Zealand.

For an even greater level of public involvement in science and technological research, in 2015 the New Zealand Government initiated the Participatory Science Platform to aid partnerships between professionals and community groups. Three pilot projects are currently under way, with Dr Victoria Metcalf as the National Coordinator (or Queen of Curiosity, as she has been nicknamed). These projects are exciting because they involve the public from project development through to conclusion, rather than just using non-scientists as data gatherers. In addition, the ability to gain first-hand experience on real-world undertakings may even encourage children from lower decile areas to consider STEM careers. That's no bad thing.

Back to surveys. Although science communication (sci-comm) is in vogue, my own feeling is that participation is key to promoting science – the methods as well as the facts – to the wider public. Yes, some science is very difficult to understand, but there's plenty that is also easy to grasp. This includes the dangers facing species pushed to the brink of extinction by habitat loss, pollution, and introduced organisms. By actively involving entire communities, surveys and competitions can also play a part in preserving species whilst allowing a sustainable level of development.

Of course this requires a government with vision, but with New Zealand's Green Party gaining positions in the Jacinda Ardern-led coalition, perhaps the newly-formed New Zealand Government will pick up the slack after years of prevarication and inactivity. That way our grandchildren will be able to experience the cheeky kea and company for real, rather than just via old recordings. How can that fail to make sense? After all, at the lower end of the bio-recording spectrum, all it requires is for someone to make a few taps on their keyboard or smartphone. It's certainly not rocket science!

*Forest and Bird have actively lobbied the New Zealand Government in numerous cases to prevent environmental degradation via land swaps, mining and hydro-electric schemes. They have produced a volume on environmental law and a mobile app called the Best Fish Guide. All in all, they make an immensely valuable contribution, ensuring that development in New Zealand is sustainable and that the public are made aware of schemes that might impact the wider environment.

Thursday, 12 October 2017

The zeal in Zealandia: revealing a lost continent

From an outsider's standpoint, geology appears to be a highly conservative science. As I have mentioned on numerous occasions, it seems astonishing that it took over four decades for Alfred Wegener's continental drift hypothesis to be formalised - via the paradigm-shifting discovery of sea floor spreading - into the theory of plate tectonics. I suppose that like evolution by natural selection, the mechanism, once stated, seems blindingly obvious in hindsight.

Regardless, the geological establishment appears to have been stubbornly opposed to the ideas of an outsider (Wegener was a meteorologist) who was unable to provide proof of an exact mechanism. This was despite the fact that the primary alternative - hypothetical submerged (but extremely convenient) land bridges - appears even more far-fetched.

Over the past few decades geophysical data has been accumulating that should generate rewrites of texts from the most basic level upwards. Namely, that the islands making up New Zealand are merely the tip of the iceberg, accounting for just six per cent of a mostly submerged 'lost' continent. Once part of the Southern Hemisphere's Gondwana, in 1995 the newly discovered continent was given the name Zealandia. Approximately five million square kilometres in size, it broke away from the Australasian region of Gondwana around 70-80 million years ago.

After a decade or two of fairly lacklustre reporting, 2017 seems to be the year in which Zealandia is taking off in the public domain. First, the Geological Society of America published a paper in February stating that Zealandia should be officially declared a continent. Then in July the drill ship Joides Resolution began the two-month-long Expedition 371, a research trip under the International Ocean Discovery Programme (IODP). Scientists from twelve countries undertook deep sea drilling, gaining data on plate tectonics, palaeontology and climate history as well as research directly relevant to understanding the geology of the newest continent.

It is surprising then to learn that geologists first mooted the idea as early as the 1960s but that, apart from some marine core samples collected in 1971, no-one undertook the necessary ocean-based research until very recently. Earth resources satellites have helped somewhat, but nothing could replace the evidence that emerged with deep drilling of the seabed. So what has sparked the sudden interest in an idea that has been around for so long?

One possibility is the large amount of data that the international geological community required to prove the theory beyond doubt, coupled with the fact that this sort of research has little in the way of an obvious immediate practical benefit. It is extremely expensive to undertake deep sea drilling and few vessels are equipped for the purpose. Joides Resolution itself will be forty years old next year, having undergone several years of refits to keep it going. Those areas of sea bed with potential oil or gas deposits may gain high-fidelity surveying, but compared to fossil fuels, fossil biota and sea bed strata research are very much at the whim of international project funding. In the case of the IODP, governments are cutting budgets on what are deemed non-essential projects, so it remains to be seen whether the intended follow-up trips will occur.

It would be disappointing if there were no further research, as despite the acceptance of Zealandia there is still a great deal of disagreement about what is known as the Oligocene Drowning. I first came across the notion of an eighth continent in the excellent 2007 book In Search of Ancient New Zealand, written by geologist / palaeontologist Hamish Campbell and natural history writer Gerard Hutching. Over ninety per cent of Zealandia lies underwater because its continental crust is unusually thin - only 20-30km - making it far less buoyant than other continents.
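As a rough illustration of why thin crust rides low, here is a back-of-envelope sketch of Airy isostasy. The density figures are standard textbook values I have assumed for the purpose, not numbers taken from Campbell and Hutching:

```python
# Airy isostasy: crust floats on the denser mantle like an iceberg on water,
# so a thicker crustal column stands proportionally higher.
RHO_CRUST = 2800.0   # kg/m^3 - typical continental crust (assumed value)
RHO_MANTLE = 3300.0  # kg/m^3 - typical upper mantle (assumed value)

def elevation_drop(thinning_km):
    """Surface height lost (in km) by a crustal column 'thinning_km'
    thinner than its neighbour, assuming isostatic equilibrium."""
    return thinning_km * (1 - RHO_CRUST / RHO_MANTLE)

# If Zealandia's 20-30km crust is roughly 15km thinner than a typical
# 40km-thick continent, its surface sits a couple of kilometres lower:
print(round(elevation_drop(15), 1))  # ~2.3 km - enough to drown most of it
```

The exact densities vary from place to place, but the proportionality is the point: shave a third or more off a continent's crustal thickness and its surface drops by a couple of kilometres, leaving all but the highest ground below sea level.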

But has this submerged percentage varied during the past eighty million years? There are some very divided opinions about this, with palaeontologists, geneticists and other disciplines taking sides with different camps of geologists. These can be roughly summarised as Moa's Ark versus the Oligocene Drowning, or to be more precise, what percentage, if any, of New Zealand's unique plants and animals are locally-derived Gondwanan survivors and how many have arrived by sea or air within the past twenty or so million years?

The arguments are many and varied, with each side claiming that the other has misinterpreted limited or inaccurate data. If Zealandia has at any time been entirely submerged, then presumably next to none of the current fauna and flora can have remained in situ since the continent broke away from Gondwana. The evidence for and against includes geology, macro- and micro-fossils, and genetic comparisons, but nothing as yet provides enough certainty for a water-tight case in either direction. In Search of Ancient New Zealand examines evidence that all Zealandia was under water around twenty-three million years ago, during the event known as the Oligocene Drowning. However, Hamish Campbell's subsequent 2014 book (co-written with Nick Mortimer) Zealandia: Our continent revealed discusses the finding of land-eroded sediments during this epoch, implying not all the continent was submerged.

It's easy to see why experts might be reluctant to alter their initial stance, since in addition to the conservative nature of geology there are other non-science factors such as patriotism at stake. New Zealand's unusual biota is a key element of its national identity, so for New Zealand scientists it's pretty much a case of damage it at your peril! In 2003 I visited the predator-free Karori Wildlife Reserve in Wellington. Six years later it was rebranded as Zealandia, deliberately referencing the eighth continent and with more than a hint of support for Moa's Ark, i.e. an unbroken chain of home-grown oddities such as the reptile tuatara and insect weta. With the nation's reliance on tourism and the use of the '100% Pure New Zealand' slogan, a lot rests on the idea of unique and long-isolated wildlife. If the flightless kakapo parrot for example turns out not to be very Kiwi after all, then who knows how the country's reputation might suffer.

What isn't well known, even within New Zealand, is that some of the best known animals and plants are very recent arrivals. In addition to the numerous species deliberately or accidentally introduced by settlers in the past two hundred years, birds such as the silvereye / waxeye (Zosterops lateralis) and Welcome swallow (Hirundo neoxena) are self-introduced, as is the monarch butterfly.

The volcanic island of Rangitoto in Auckland's Hauraki Gulf is only about six centuries old and yet - without any human intervention - has gained the largest pohutukawa forest in the world, presumably all thanks to seeds spread on the wind and by birds. Therefore it cannot be confirmed with any certainty just how long the ancestors of the current flora and fauna have survived in the locality. A number of New Zealand scientists are probably worried that some of the nation's best-loved species may have arrived relatively recently from across the Tasman; a fossil discovered in 2013 suggests that the flightless kiwi is a fairly close cousin of the Australian emu and so is descended from a bird that flew to New Zealand before settling into an ecological niche that didn't require flight.

Other palaeontological evidence supports the Moa's Ark hypothesis: since 2001 work on a lake bed at St Bathans, Central Otago has produced a wide range of 16 million year-old fossils, including three bones from a mouse-sized land mammal. The diversity of the assemblage indicates that unless there was some uniquely rapid colonisation and subsequent speciation, there must have been above-water regions throughout the Oligocene. In addition, whereas the pro-underwater faction have concentrated on vertebrates, research into smaller critters such as giant land snails (which are unable to survive in salt water conditions) supports the opposite proposition.

So all in all, there is as yet no definitive proof one way or the other. What's interesting about this particular set of hypotheses is the way in which an array of disciplines are coming together to provide a more accurate picture of New Zealand's past. By working together, they also seem to be reducing the inertia that has led geology to overlook new ideas for far too long; Zealandia, your time has come!

Wednesday, 27 September 2017

Cow farts and climate fiddling: has agriculture prevented a new glaciation?

Call me an old grouch, but I have to say that one of my bugbears is the use of the term 'ice age' when what is usually meant is a glacial period. We currently live in an interglacial (i.e. warmer) era, the last glaciation having ended about 11,700 years ago. These periods are part of the Quaternary glaciation, which has existed for almost 2.6 million years with alternating but irregular cycles of warm and cold, and which genuinely deserves the name 'Ice Age'. There, that wasn't too difficult now, was it?

What is rather more interesting is that certain geology textbooks published from the 1940s to 1970s hypothesised that the Earth is overdue for the next glaciation. Since the evidence suggests the last glacial era ended in a matter of decades, the proposed future growth of the ice sheets could be equally rapid. Subsequent research has shown this notion to be flawed, with reliance on extremely limited data leading to over-confident conclusions. In fact, current estimates put interglacial periods as lasting anywhere from ten thousand to fifty thousand years, so even without human intervention in global climate, there would presumably be little to panic about just yet.

Over the past three decades or so this cooling hypothesis has given way to the opposing notion of a rapid increase in global temperatures. You only have to read such recent news items as the breakaway of a six thousand square kilometre piece of the Antarctic ice shelf to realise something is going on, regardless of whether you believe it is manmade, natural or a combination of both. But there is a minority of scientists who claim there is evidence for global warming - and an associated postponement of the next glaciation - having begun thousands of years prior to the Industrial Revolution. This then generates two key questions:

  1. Has there been a genuine steady increase in global temperature or is the data flawed?
  2. Assuming the increase to be accurate, is it due to natural changes (e.g. orbital variations or fluctuations in solar output) or is it anthropogenic, that is caused by human activity?

As anyone with even a vague interest in or knowledge of climate understands, the study of temperature variation over long timescales is fraught with issues, with computer modelling often seen as the only way to fill in the gaps. Therefore, like weather forecasting, it is far from being an exact science (insert as many smileys here as deemed appropriate). Although there are climate-recording techniques involving dendrochronology (tree rings) and coral growth that cover the past few thousand years, and ice cores that go back hundreds of thousands, there are still gaps and assumptions that mean the reconstructions involve variable margins of error. One cross-discipline assumption is that species found in the fossil record thrived in environments - and crucially at temperatures - similar to their descendants today. All in all this indicates that none of the numerous charts and diagrams displaying global temperatures over the past twelve thousand years are completely accurate, being more along the lines of a reconstruction via extrapolation.

Having looked at some of these charts I have to say that to my untrained eye there is extremely limited correlation for the majority of the post-glacial epoch. There have been several short-term fluctuations in both directions in the past two thousand years alone, from the so-called Mediaeval Warm Period to the Little Ice Age of the Thirteenth to Nineteenth centuries. One issue of great importance is just how wide a region these two anomalous periods covered outside of Europe and western Asia. Assuming however that the gradual warming hypothesis is correct, what are the pertinent details?

Developed in the 1920s, the Milankovitch cycles provide a reasonable fit for the evidence of regular, long-term variations in the global climate. The theory states that changes in the Earth's orbit and axial tilt are the primary causes of these variations, although the timelines do not provide indisputable correlation. This margin of error has helped to lead other researchers towards an anthropogenic cause for a gradual increase in planet-wide warming since the last glaciation.

The first I heard of this was via Professor Iain Stewart's 2010 BBC series How Earth Made Us, in which he summarised the ideas of American palaeoclimatologist Professor William Ruddiman, author of Plows, Plagues and Petroleum: How Humans Took Control of Climate. Although many authors, Jared Diamond amongst them, have noted the effects of regional climate on local agriculture and indeed the society engaged in farming, Professor Ruddiman is a key exponent of the reverse: that pre-industrial global warming has resulted from human activities. Specifically, he argues that the development of agriculture has led to increases in atmospheric methane and carbon dioxide, creating an artificial greenhouse effect long before burning fossil fuels became ubiquitous. It is this form of climate change that has been cited as postponing the next glaciation, assuming that the current interglacial is at the shorter end of such timescales. Ruddiman's research defines two major causes for an increase in these greenhouse gases:

  1. Increased carbon dioxide emissions from burning vegetation, especially trees, as a form of land clearance, i.e. slash and burn agriculture.
  2. Increased methane from certain crops, especially rice, and from ruminant species, mostly cattle, sheep and goats.

There are of course issues surrounding many of the details, even down to accurately pinpointing the start dates of human agriculture around the world. The earliest evidence of farming in the Near East is usually dated to a few millennia after the end of the last glaciation, with animal husbandry preceding the cultivation of crops. One key issue concerns the lack of sophistication in estimating the area of cultivated land and ruminant population size until comparatively recent times, especially outside of Western Europe. Therefore generally unsatisfactory data concerning global climate is accompanied by even less knowledge concerning the scale of agriculture across the planet for most of its existence.

The archaeological evidence in New Zealand proves without a doubt that the ancestors of today's Maori, who probably first settled the islands in the Thirteenth Century, undertook enormous land clearance schemes. Therefore even cultures remote from the primary agricultural civilisations have used similar techniques on a wide scale. The magnitude of these works challenges the assumption that until chemical fertilisers and pesticides were developed in the Twentieth Century, the area of land required per person had altered little since the first farmers. In a 2013 report Professor Ruddiman claims that the level of agriculture practiced by New Zealand Maori is just one example of wider-scale agricultural land use in pre-industrial societies.

As for the role played by domesticated livestock, Ruddiman goes on to argue that ice core data shows an anomalous increase in atmospheric methane from circa 3000 BCE onwards. He hypothesises that a rising human population led to a corresponding increase in the scale of agriculture, with rice paddies and ruminants the prime suspects. As mentioned above, the number of animals and size of cultivated areas remain largely conjectural for much of the period in question. Estimates suggest that contemporary livestock are responsible for 37% of anthropogenic methane and 9% of anthropogenic carbon dioxide whilst cultivated rice may be generating up to 20% of anthropogenic methane. Extrapolating back in time allows the hypothesis to gain credence, despite lack of access to exact data.

In addition, researchers both in support and opposition to pre-industrial anthropogenic global warming admit that the complexity of feedback loops, particularly with respect to the role of temperature variation in the oceans, further complicates matters. Indeed, such intricacy, including the potential latency between cause and effect, means that proponents of Professor Ruddiman's ideas could be using selective data for support whilst suppressing its antithesis. Needless to say, cherry-picking results is hardly model science.

There are certainly some intriguing aspects to this idea of pre-industrial anthropogenic climate change, but personally I think the jury is still out (as I believe it is for the majority of professionals in this area). There just isn't the level of data to guarantee its validity, and what data is available doesn't provide enough correlation to rule out other causes. I still think such research is useful, since it could well prove essential in the fight to mitigate industrial-era global warming. The more we know about longer term variations in climate change, the better the chance we have of understanding the causes - and potentially the solutions - to our current predicament. And who knows, the research might even persuade a few of the naysayers to move in the right direction. That can't be bad!

Monday, 11 September 2017

Valuing the velvet worm: noticing the most inconspicuous of species

Most of the recent television documentaries or books I've encountered that discuss extra-terrestrial life include some description of the weirder species we share our own planet with. Lumped together under the term 'extremophiles' these organisms appear to thrive in environments hostile to most other life forms, from the coolant ponds of nuclear reactors to the boiling volcanic vents of the deep ocean floor.

Although this has rightly gained attention for these often wonderfully-named species (from snottites to tardigrades), there are numerous other lifeforms scarcely noticed by anyone other than a few specialists, quietly going about their unassuming business. Yet they may hold a few useful lessons for all of us - not least that rapid yet radical modifications to local environments can generate problems we have yet to recognise.

There is a small, unassuming type of creature alive today that differs little from a marine animal present in the Middle Cambrian period around five hundred million years ago. I first read about onychophorans in Stephen Jay Gould's 1989 exposition on the Burgess Shale, Wonderful Life, and although those fossil marine lobopodians are not definitively onychophorans they are presumed to be ancestral. More commonly known by one genus name, peripatus, or even more colloquially as velvet worms, they number at least several hundred species today, possibly many more. The velvet component of their name is due to their texture, but they bear more resemblance to caterpillars than to worms. They are often described as the 'missing link' between arthropods and worms, but as is usually the case with that phrase, it is wildly inappropriate in the context of biological classification. The key difference from the Burgess Shale specimens is that today's velvet worms are fully terrestrial: there are no known marine or freshwater species.

Primarily resident in the southern hemisphere, the largely nocturnal peripatus shun bright light and require humid conditions to survive. Although there are about thirty species here in New Zealand, a combination of their small size (under 60mm long) and loss of habitat means they are rarely seen. The introduction of predators such as hedgehogs - which of course never encounter peripatus in their northern hemisphere home territory - means that New Zealand's species have even more to contend with. Although I frequently (very carefully) look under leaf litter and inside damp logs on bush walks in regions known to contain the genus Peripatoides - and indeed where others have told me they have seen them - I have yet to encounter a single specimen.

There appears to be quite limited research, with less than a third of New Zealand species fully described. However, enough is known about two species to identify their population status as 'vulnerable'. One forest in the South Island has been labelled an 'Area of Significant Conservation Value' thanks to its population of peripatus, with the Department of Conservation relocating specimens prior to road development. Clearly, they had better luck locating velvet worms than I have had! It isn't just New Zealand that lacks knowledge of home-grown onychophorans either: in the past two decades Australian researchers have increased the number of their known species from just seven to about sixty.

Their uncanny resemblance to the Burgess Shale specimens, despite their transition from marine to terrestrial environments, has led velvet worms to be described by another well-worn phrase, 'living fossils'. However, is this short-hand in any way useful, or is it a lazy and largely inaccurate term? The recent growth in sophisticated DNA analysis suggests that even when outward anatomy changes little, the genome itself may vary widely. Obviously DNA doesn't preserve in fossils and so any such changes cannot be tracked from the Cambrian specimens, but the genetic variation found in other types of organisms sharing a similar appearance shows that reliance on just external anatomy can be deceptive.

Due to lack of funding, basic taxonomic research - the bedrock of cladistics - is sadly neglected. In the case of New Zealand, some of the shortfall has been made up for by dedicated amateurs, but there are few new taxonomists learning the skills to continue this work - which is often seen as dull and plodding compared to the excitement of, for example, genetics. Most people might ask what interest there could be in such tiny, insignificant creatures as peripatus. After all, how likely would you be to move an ant's nest in your garden before undertaking some re-landscaping? But as shown by the changing terminology from 'food chains' to 'food webs', in most cases we still don't understand how the removal of one species might generate a domino effect on a local ecosystem.

I've previously discussed the over-reliance on 'poster' species such as giant pandas for environmental campaigns, but mere aesthetics don't equate to importance, either for us or ecology as a whole. It is becoming increasingly clear that by weight the majority of our planet's biomass is microbial. Then come the insects, with the beetles prominent both by number of species and individuals. Us large mammals are really just the icing on the cake and certainly when it comes to Homo sapiens, the rest of the biosphere would probably be far better off without us, domesticated species aside.

It would be nice to value organisms for themselves, but unfortunately our market economies require the smell of profit before they will lift a finger. Therefore if their usefulness could be ascertained, it might help generate greater financial incentive to support the wider environment. Onychophorans may seem dull, but there are several aspects to them that are both interesting in themselves and might prove fruitful for us humans.

Firstly, they have an unusual weapon in the form of a mechanism that shoots adhesive slime at prey. Might this slime, like spider silk, prove an interesting line of research for the materials or pharmaceutical industries? After all, it was the prickly burrs of certain plants that inspired the development of Velcro, whilst current studies of tardigrades (the tiny 'water bears' living amongst the mosses) are investigating their near indestructibility. If even a single, tiny species becomes extinct, that genome is generally lost forever: who knows what insights it might have led to? Although museum collections can be useful, DNA does decay and contamination leads to immense complexities in unravelling the original organism's genome. All in all, it's much better to have a living population to work on than to rely on what can be pieced together post-extinction.

In addition, for such tiny creatures, velvet worms have developed complex social structures; is it possible that analysis of their brains might be useful in computing or artificial intelligence? It's a long shot, of course - and extinction is nothing if not natural - but the current rate is far greater than it has been outside of mass extinctions. Losing a large and obvious species such as the Yangtze River dolphin (and that was despite it being labelled a 'national treasure') is one thing, but how many small, barely-known plants and animals are going the same way without anyone noticing? Could it be that right now some minute, unassuming critter is dying out and that we will only find out too late that it was a vital predator of crop-eating pests like snails or disease vectors such as cockroaches?

It has been said that ignorance is bliss, but with so many humans needing to be fed, watered and treated for illness, now more than ever we need as much help as we can get. Having access to the complex ready-made biochemistry of a unique genome is surely easier than attempting to synthesise one from scratch or recover it from a long-dead preserved specimen? By paying minimal attention to the smallest organisms that lie all around us, we could be losing so much more than just an unobtrusive plant, animal or fungus.

We can't save every species on the current endangered list but more attention could be given to the myriad of life forms that get side-lined by the cute and cuddly flagship species, usually large animals. Most of us would be upset by the disappearance of the eighteen hundred or so giant pandas still left in the wild, but somehow I doubt their loss would have as great an impact on the surrounding ecosystem as that of some far less well-known flora or fauna. If you think that's nonsense, then consider the vital roles that bees and dung beetles play in helping human agriculture.

Although the decimation of native New Zealand wildlife has led to protective legislation for all our vertebrates and a few famous invertebrates such as giant weta, the vast majority of other species are still left to their own devices. Not that ecosystems in most other countries are given any more support, of course. But without funding for basic description and taxonomy, who knows what is even out there, never mind whether it might be important to humanity? Could this be a new field for citizen scientists to move into?

Needless to say, the drier climes brought on by rising temperatures will not do peripatus any favours, thanks to its need to remain in damp conditions. Whether by widespread use of the poison 1080 (in the bid to create a pest-free New Zealand by 2050) or the accidental importation of a non-native fungus such as those decimating amphibians worldwide and causing kauri dieback in New Zealand, there are plenty of ways that humans could unwittingly wipe out velvet worms et al. So next time you watch a documentary on the demise of large, familiar mammals, why not spare a thought for all those wee critters hiding in the bush, going about their business and trying to avoid all the pitfalls us humans have unthinkingly laid for them?

Tuesday, 29 August 2017

Cerebral celebrities: do superstar scientists harm science?

One of my earliest blog posts concerned the media circus surrounding two of the most famous scientists alive today: British physicist Stephen Hawking and his compatriot the evolutionary biologist Richard Dawkins. In addition to their scientific output, they are known in public circles thanks to a combination of their general readership books, television documentaries and charismatic personalities. The question has to be asked though, how much of their reputation is due to their being easily-caricatured and therefore media-friendly characters rather than what they have contributed to human knowledge?

Social media has done much to democratise the publication of material from a far wider range of authors than previously possible, but the current generation of scientific superstars who have arisen in the intervening eight years appear to be party to a feedback loop that places personality as the primary reason for their media success. As a result, are science heroes such as Neil deGrasse Tyson and Brian Cox merely adding the epithet 'cool' to STEM disciplines as they sit alongside the latest crop of media and sports stars? With their ability to fill arenas usually reserved for pop concerts or sports events, these scientists are seemingly known far and wide for who they are as much as for what they have achieved. It might seem counterintuitive to think that famous scientists and mathematicians could be damaging STEM, but I'd like to put forward five ways by which this could be occurring:

1: Hype and gossip

If fans of famous scientists spend their time reading, liking and commenting at similarly trivial levels, they may miss important material from other, less famous sources. A recent example that caught my eye was a tweet by British astrophysicist and presenter Brian Cox, containing a photograph of two swans he labelled 'Donald' and 'Boris'. I assume this was a reference to the current US president and British foreign secretary, but with over a thousand 'likes' by the time I saw it, I wonder what other, more serious, STEM-related stories might have been missed in the rapid ebb and flow of social media.

As you would expect with popular culture fandom, the science celebrities' material aimed at a general audience receives the lion's share of attention, leaving the vast majority of STEM popularisations under-recognised. Although social media has exacerbated this, the phenomenon does pre-date it. For example, Stephen Hawking's A Brief History of Time was first published in 1988, the same year as Timothy Ferris's Coming of Age in the Milky Way, a rather more detailed approach to similar material that was left overshadowed by its far more famous competitor. There is also the danger that celebrities with a non-science background might try to cash in on the current appeal of science and write poor-quality popularisations. If you consider this unlikely, you should bear in mind that there are already numerous examples of extremely dubious health, diet and nutrition books written by pop artists and movie stars. If scientists can be famous, perhaps the famous will play at being science writers.

Another result of this media hubbub is that in order to be heard, some scientists may be guilty of the very hype usually blamed on the journalists who publicise their discoveries. Whether to guarantee attention or to self-promote in pursuit of further funding, an Australian research team recently came under fire for discussing a medical breakthrough as if a treatment were imminent, despite having so far only experimented on mice! This sort of hyperbole both damages the integrity of science in the public eye and can lead to such dangerous outcomes as the MMR scandal, which resulted in large numbers of children not being immunised.

2: Hero worship

The worship of movie stars and pop music artists is nothing new, and the adulation accorded them reminds me of the not dissimilar veneration shown to earlier generations of secular and religious leaders. The danger here, then, is for impressionable fans to accept the words of celebrity scientists as if they were gospel and so refrain from any form of critical analysis. When I attended an evening with astrophysicist Neil deGrasse Tyson last month I was astonished to hear some fundamental misunderstandings of science from members of the public. It seemed as if Dr Tyson had gained a personality cult whose members hung on his every utterance but frequently failed to understand the wider context or key issues regarding the practice of science. By transferring hero worship from one form of human activity to another, the very basis - and differentiation - that delineates the scientific enterprise may be undermined.

3: Amplifying errors

Let's face it, scientists are human and make mistakes. The problem is that if the majority of a celebrity scientist's fan base are prepared to lap up every statement, then the lack of critical analysis can generate further issues. There are some appalling gaffes in the television documentaries and popular books of such luminaries as Sir David Attenborough (as previously discussed) and even superstar Brian Cox is not immune: his 2014 book Human Universe described lunar temperatures dropping below -2000 degrees Celsius - an impossible figure, given that absolute zero is -273.15 degrees Celsius! Such basic errors imply that the material is ghost-written or edited by authors with little scientific knowledge and no time for fact-checking. Of course this may embarrass the science celebrity in front of their potentially jealous colleagues, but more importantly it can serve as ammunition for politicians, industrialists and pseudo-scientists in their battles to persuade the public of the validity of their own pet theories - post-truth will out, and all that nonsense.

4: Star attitude

With celebrity status comes the trappings of success, most usually defined as a luxury lifestyle. A recent online discussion here in New Zealand concerned the high cost of tickets for events featuring Neil deGrasse Tyson, Brian Greene, David Attenborough, Jane Goodall and, later this year, Brian Cox. Those for Auckland-based events were more expensive than tickets to see Kiwi pop star Lorde and similar in price to those for rugby matches between the All Blacks and British Lions. With tickets this expensive there is little chance of attracting new fans; it seems to be more a case of preaching to the converted.

Surely it doesn't have to be this way: the evolutionary biologist Beth Shapiro, author of How to Clone a Mammoth, gave an excellent free illustrated talk at Auckland Museum a year ago. It seems odd that the evening with Dr Tyson, for example, consisting of just himself, interviewer Michelle Dickinson (A.K.A. Nanogirl) and a large screen, cost approximately double that of the Walking with Dinosaurs Arena event at the same venue two years earlier, which utilised US$20 million worth of animatronic and puppet life-sized dinosaurs.

Dr Tyson claims that by having celebrity interviewees on his StarTalk series he can reach a wider audience, but clearly this approach is not feasible when his tour prices are so high. At least Dr Goodall's profits went into her conservation charity, but if you consider that Dr Tyson had an audience of probably over 8,000 in Auckland alone, paying between NZ$95 and NZ$349 (except for the NZ$55 student tickets), you have to wonder where all this money goes: is he collecting 'billions and billions' of fancy waistcoats? It doesn't look as if this trend will stop soon either, as Bill Nye (The Science Guy) has just announced that he will be touring Australia later this year, with tickets starting at around NZ$77.

5: Skewing the statistics

The high profiles of sci-comm royalty and their usually cheery demeanour imply that all is well in the field of scientific research, with adequate funding for important projects. However, even a quick perusal of less well-known STEM professionals on social media proves that this is not the case. An example that came to my attention back in May was that of the University of Auckland microbiologist Dr Siouxsie Wiles, who had to resort to crowdfunding for her research into fungi-based antibiotics after five consecutive funding submissions were rejected. Meanwhile, Brian Cox's connection to the Large Hadron Collider gives the impression that even such blue-sky research as the LHC can be guaranteed enormous budgets.

As much as I'd like to thank these science superstars for promoting science, technology and mathematics, I can't quite shake the feeling that their cult status is too centred on them rather than the scientific enterprise as a whole. Now more than ever science needs a sympathetic ear from the public, but this should be brought about by a massive programme to educate the public (they are the taxpayers, after all) as to the benefits of such costly schemes as designing nuclear fusion reactors and researching climate change. Simply treating celebrity scientists in the same way as movie stars and pop idols won't help an area of humanity under siege from so many influential political and industrial leaders with their own private agendas. We simply mustn't allow such people to misuse the discipline that has raised us from apemen to spacemen.

Friday, 11 August 2017

From steampunk to Star Trek: the interwoven strands between science, technology and consumer design

With Raspberry Pi computers having sold over eleven million units by the end of last year, consumer interest in older technology appears to have become big business. Even such decidedly old-school devices as crystal radio kits are selling well, whilst replicas of vintage telescopes are proof that not everyone has a desire for the cutting-edge. I'm not sure why this is so, but since even instant Polaroid-type cameras are now available again - albeit with a cute, toy-like styling - perhaps manufacturers are just capitalising on a widespread desire to appear slightly out of the ordinary. Even so, such products are far closer to the mainstream than left field: instant-developing cameras for example now reach worldwide sales of over five million per year. That's hardly a niche market!

Polaroid cameras aside, could it be the desire for a less minimal aesthetic that is driving such purchases? Older technology, especially if it is pre-integrated circuit, has a decidedly quaint look to it, sometimes with textures - and smells - to match. As an aside, it's interesting that whilst on the one hand current miniaturisation has reduced energy consumption for many smaller pieces of technology, from the Frankenstein-laboratory appearance of valve-based computing and room-sized mainframes to the smart watch and its kin, on the other hand the giant scale of cutting-edge technology projects requires immense amounts of energy, with nuclear fusion reactors presumably having overtaken the previous perennial favourite example of space rockets when it comes to power usage.

The interface between sci-tech aesthetics and non-scientific design is a complicated one: it used to be the case that consumer or amateur appliances were scaled-down versions of professional devices, or could even be home-made, for example telescopes or crystal radios. Nowadays there is a massive difference between the equipment in high-tech laboratories and the average home; even consumer-level 3D printers won't be able to reproduce gravitational wave detectors or CRISPR-Cas9 genome editing tools any time soon.

The current trend in favour - or at least acknowledgement - of sustainable development is helping to nullify the pervasive Victorian notion that bigger, faster, noisier (and smellier) equates with progress. It's therefore interesting to consider the interaction of scientific ideas and instruments, new technology and consumerism over the past century or so. To my mind, there appear to be five main phases since the late Victorian period:
  1. Imperial steam
  2. Streamlining and speed
  3. The Atomic Age
  4. Minimalism and information technology
  5. Virtual light

1) Imperial steam

In the period from the late Nineteenth Century's first generation of professional scientists up to the First World War, there appears to have been an untrammelled optimism for all things technological. Brass, iron, wood and leather devices - frequently steam-powered - created an aesthetic that, seemingly without effort, carries an aura of romance to modern eyes.

Although today's steampunk/alternative history movement is indebted to later authors, especially Michael Moorcock, as much as it is to Jules Verne and H.G. Wells, the latter pair are only the two most famous of a whole legion of late Victorian and Edwardian writers who extolled - and occasionally agonised over - the wonders of the machine age.

I must confess I much prefer steam engines to electric or diesel locomotives, despite the noise, smuts and burning of fossil fuels. Although the pistons and connecting rods of these locomotives might be the epitome of the design of this phase, it should be remembered that it was not unknown for Victorian engineers to add fluted columns and cornucopia reliefs to their cast iron and brass machinery, echoes of a pre-industrial past. An attempt was being made, however crude, to tie the might of steam power to the Classical civilisations that never went beyond the aeolipile toy turbine and the Antikythera mechanism.

2) Streamlining and speed

From around 1910, the fine arts and then decorative arts developed new styles obsessed with mechanical movement, especially speed. The dynamic work of the Futurists led the way, depicting the increasing pace of life in an age when humans and machines were starting to interact ever more frequently. The development of heavier-than-air flight even led to a group of 'aeropainters' whose work stemmed from their experience of flying.

Although scientific devices still had some of the Rube Goldberg/Heath Robinson appearance of their Nineteenth Century forebears, both consumer goods and vehicles picked up the concept of streamlining to suggest a sophisticated, future-orientated design. Items such as radios and toasters utilised early plastics, stainless steel and chrome to imply a higher level of technology than their interiors actually contained. This is in contrast to land, sea and aerial craft, whereby the practical benefits of streamlining happily coincided with an attractive aesthetic, leading to design classics such as the Supermarine seaplanes (forerunners of the Spitfire) and the world speed record-holding A4 Pacific Class steam locomotives.

3) The Atomic Age

By the 1950s practically anything that could be streamlined was, whether buildings that looked like ocean liners or cars with rocket-like tailfins and dashboards fit for a Dan Dare spaceship. However, a new aesthetic was gaining popularity in the wake of the development of atomic weapons. It seems to have been an ironic move that somewhere between the optimism of an era of exciting new domestic gadgets and the potential for nuclear Armageddon, the Bohr (classical physics) model of the atom itself gained a key place in post-war design.

Combined with rockets and space travel the imagery could readily be termed 'space cadet', but physics wasn't the only area of science to influence wider society. Biological research was undergoing a resurgence, which may explain why stylised x-ray forms, amoebas and bodily organs became ubiquitous on textiles, furnishings and fashion. Lighting fixtures were a standout example of items utilising designs based on the molecular models used in research laboratories (which famously gave Crick and Watson the edge in winning the race to understand the structure of DNA).

Monumental architecture also sought to represent the world of molecules on a giant scale, culminating in the 102 metre-high Atomium built in Brussels for the 1958 World's Fair. It could be said that never before had science- and technological-inspired imagery been so pervasive in non-STEM arenas.

4) Minimalism and information technology

From the early 1970s the bright, optimistic designs of the previous quarter century were gradually replaced by the cool, monochromatic sophistication of minimalism. Less is more became the ethos, with miniaturisation increasing as solid-state electronics and then integrated circuits became available. A plethora of artificial materials, especially plastics, meant that forms and textures could be incredibly varied yet refined.

Perhaps a combination of economic recession, mistrust of authority (including science and a military-led technocracy) and a burgeoning awareness of environmental issues led to the replacement of exuberant colour with muted, natural tones and basic if self-possessed geometries. Consumers could now buy microcomputers and video games consoles; what had previously only existed in high-tech labs or science fiction became commonplace in the household. Sci-fi media began a complex two-way interaction with cutting-edge science; it's amazing to consider that only two decades separated the iPad from its fictional Star Trek: The Next Generation predecessor, the PADD.

5) Virtual light

With ultra high-energy experiments such as nuclear fusion reactors and the ubiquity of digital devices and content, today's science-influenced designs aim to be simulacra of their professional big brothers. As stated earlier, although consumer technology is farther removed from mega-budget science apparatus than ever, the former's emphasis on virtual interfaces is part of a feedback loop between the two widely differing scales.

The blue and green glowing lights of everything from futuristic engines to computer holographic interfaces in many Hollywood blockbusters represent both the awesome power actually required by the likes of the Large Hadron Collider and the visually unspectacular reality of lasers and quantum teleportation. The ultimate fusion (sorry, couldn't resist that one) is the use of the real National Ignition Facility target chamber as the engine core of the USS Enterprise in Star Trek Into Darkness.

Clearly, this post-industrial/information age aesthetic is likely to be with us for some time to come, as consumer-level devices emulate the cool brilliance of professional STEM equipment; the outer casing is often simple yet elegant, aiming not to distract from the bright glowing pixels that take up so much of our time. Let's hope this seduction by the digital world can be moderated by a desire to keep the natural, material world working.

Friday, 28 July 2017

Navigating creation: A Cosmic Perspective with Neil deGrasse Tyson


I recently attended an interesting event at an Auckland venue usually reserved for pop music concerts. An audience in the thousands came to Neil deGrasse Tyson: A Cosmic Perspective, featuring the presenter of Cosmos: A Spacetime Odyssey and radio/TV show StarTalk. The 'Sexiest Astrophysicist Alive' presented his brand of science communication to an enormous congregation (forgive the use of the word) of science fans aged from as young as five years old. So was the evening a success? My fellow science buffs certainly seemed to have enjoyed it, so I decided it would be worthwhile to analyse the good doctor's method of large-scale sci-comm.

The evening was split into three sections, the first being the shortest: a primer as to our location in both physical and psychological space-time. After explaining the scale of the universe via a painless introduction to exponents, Dr Tyson used the homespun example of how the 'billions' (which of course he declared to be Carl Sagan's favourite word) of Big Macs so far sold could be stacked many times around the Earth's circumference and even then extend onwards to the Moon and back. Although using such a familiar object in such unusual terrain is a powerful way of taking people outside their comfort zone, there was nothing new about this particular insight, since Dr Tyson has been using it since at least 2009; I assume it was a case of sticking to a tried-and-trusted method, especially when the rest of the evening was (presumably) unscripted.

Billions of Big Macs around the Earth and moon
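For the curious, the arithmetic behind the illustration is easy to sketch. The input figures below - total burgers sold, burger width, and the Earth and Moon distances - are rough round-number assumptions of my own for illustration, not Dr Tyson's exact values:

```python
# Back-of-envelope version of the Big Mac scale illustration.
# All input figures are assumptions for illustration purposes only.
BIG_MACS_SOLD = 100e9             # 'billions and billions' served (assumed)
BURGER_WIDTH_M = 0.10             # ~10 cm per burger, laid end to end (assumed)
EARTH_CIRCUMFERENCE_KM = 40_075   # equatorial circumference
EARTH_MOON_DISTANCE_KM = 384_400  # mean Earth-Moon distance

line_km = BIG_MACS_SOLD * BURGER_WIDTH_M / 1000  # total chain length in km
print(f"{line_km:,.0f} km of burgers")
print(f"≈ {line_km / EARTH_CIRCUMFERENCE_KM:.0f} circuits of the Earth")
print(f"≈ {line_km / EARTH_MOON_DISTANCE_KM:.0f} one-way trips to the Moon")
```

Even with deliberately modest inputs, the chain laps the planet hundreds of times with plenty left over for lunar excursions, which is precisely why the image works so well on stage.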

Having already belittled our place in the universe, the remainder of the first segment appraised our species' smug sense of superiority, questioning whether extra-terrestrials would have any more interest in us than we show to most of the biota here on Earth. This was a clear attempt to ask the audience to question the assumptions that science fiction, particularly of the Hollywood variety, has been popularising since the dawn of the Space Age. After all, would another civilisation consider us worthy of communicating with, considering how much of our broadcasting displays obvious acts of aggression? In this respect, Neil deGrasse Tyson differs markedly from Carl Sagan, who argued that curiosity would likely be a mutual connection with alien civilisations, despite their vastly superior technology. Perhaps this difference of attitude isn't surprising, considering Sagan's optimism has been negated by both general circumstance and the failure of SETI in the intervening decades.

Dr Tyson also had a few gibes at the worrying trend of over-reliance on high technology in place of basic cognitive skills, describing how after once working out some fairly elementary arithmetic he was asked which mobile app he had used to gain the result! This was to become a central theme of the evening, repeated several times in different guises: that rather than just learning scientific facts, non-scientists can benefit from practising critical thinking in non-STEM situations in everyday life.

Far from concentrating solely on astrophysical matters, Dr Tyson also followed up on topics he had raised in Cosmos: A Spacetime Odyssey regarding environmental issues here on Earth. He used Apollo 8's famous 'Earthrise' photograph (taken on Christmas Eve 1968) as an example of how NASA's lunar landing programme inspired a cosmic perspective, adding that organisations such as the National Oceanic and Atmospheric Administration and the Environmental Protection Agency were founded during the programme. His thesis was clear: what began for political and strategic reasons had fundamental benefits across sectors unrelated to space exploration; or, as he put it, "We're thinking we're exploring the moon and we discovered the Earth for the first time."

The second and main part of the event was Tyson's discussion with New Zealand-based nanotechnologist and science educator Michelle Dickinson, A.K.A. Nanogirl. I can only assume that there aren't any New Zealand astronomers or astrophysicists as media-savvy as Dr Dickinson, or possibly it's a case of celebrity first and detailed knowledge second, with a scientifically-minded interviewer deemed to have an appropriate enough mindset even if not an expert in the same specialisation.

The discussion/interview was enlightening, especially for someone like myself who knows Neil deGrasse Tyson as a presenter but very little about him as a person. Dr Tyson reminisced about how in 1989 he accidentally became a media expert solely on the basis of being an astrophysicist, without reference to his being an Afro-American, counter to the prevailing culture of featuring Afro-Americans only to present an Afro-American point of view.

Neil deGrasse Tyson: A Cosmic Perspective

Dr Tyson revealed himself to be both a dreamer and a realist, the two facets achieving a focal point with his passion for a crewed mission to Mars. He has often spoken of this desire to increase NASA's (comparatively small) budget so as to reinvigorate the United States by taking humans out of the humdrum comfort zone of low Earth orbit. However, his understanding of how dangerous such a mission would be led him to state he would only go to Mars once the pioneering phase was over!

His zeal for his home country was obvious, as was his frustration at the missed opportunities and the grass-roots rejection of scientific expertise prevalent in the United States, and it would be easy to see his passionate pleas for the world to embrace Apollo-scale STEM projects as naïve and out of touch. Yet there is something to be said for such epic schemes; if the USA is to rise out of its present lassitude, then the numerous if unpredictable long-term benefits of, for example, a Mars mission are a potential call to arms.

The final part of the evening was devoted to audience questions. As I was aware of most of the STEM and sci-comm components previously discussed this was for me perhaps the most illuminating section of the event. The first question was about quantum mechanics, and so not unnaturally Dr Tyson stated that he wasn't qualified to answer it. Wouldn't it be great if the scientific approach to expertise could be carried across to other areas where people claim expert knowledge that they don't have?

I discussed the negative effects that the cult of celebrity could have on the public attitude towards science back in 2009, so it was extremely interesting to hear questions from several millennials who had grown up with StarTalk and claimed Neil deGrasse Tyson as their idol. Despite having watched the programmes and presumably having read some popular science books, they fell into some common traps, from over-reliance on celebrities as arbiters of truth to assuming that most scientific theories, rather than just those at the cutting edge, would be overturned by new discoveries within their own lifetimes.

Dr Tyson went to some lengths to correct this latter notion, describing how Newton's law of universal gravitation for example has become a subset of Einstein's General Theory of Relativity. Again, this reiterated that science isn't just a body of facts but a series of approaches to understanding nature. The Q&A session also showed that authority figures can have a rather obvious dampening effect on people's initiative to attempt critical analysis for themselves. This suggests a no-win situation: either the public obediently believe everything experts tell them (which leads to such horrors as the MMR vaccine scandal) or they fail to believe anything from STEM professionals, leaving the way open for pseudoscience and other nonsense. Dr Tyson confirmed he wants to teach the public to think critically, reducing gullibility and thus exploitation by snake oil merchants. To this end he follows in the tradition of James 'The Amazing' Randi and Carl Sagan, which is no bad thing in itself.

In addition, by interviewing media celebrities on StarTalk Dr Tyson stated how he can reach a far wider audience than just dedicated science fans. For this alone Neil deGrasse Tyson is a worthy successor to the much-missed Sagan. Let's hope some of those happy fans will be inspired to not just dream, but actively promote the cosmic perspective our species sorely needs if we are to climb out of our current doldrums.

Monday, 10 July 2017

Genius: portraying Albert Einstein as a human being, not a Hollywood stereotype

I recently watched the National Geographic docudrama series Genius, presenting a warts-and-all look at the life and work of Albert Einstein. In these post-truth times in which even a modicum of intellectual thought is often regarded with disdain, it's interesting to see how a scientific icon is portrayed in a high-budget, high-profile series.

A few notable examples excepted, Dr Frankenstein figures still inform much of Hollywood's depiction of STEM practitioners. Inventors are frequently compartmentalised as either patriotic or megalomaniac, often with a love of military hardware; Jurassic Park's misguided and naive Dr John Hammond seemingly a rare exception. As for mathematicians, they are often depicted with more than a touch of insanity, such as in Pi or Fermat's Room.

So does Genius break the mould or follow the public perception of scientists as freaky, geeky, nerdy or plain evil? The script is a fairly sophisticated adaptation of real life events, although the science exposition suffers as a result. Despite some computer graphic sequences interwoven with the live action, the attempts to explore Einstein's thought experiments and theories are suggestive rather than comprehensive, the tip of the iceberg when it comes to his scientific legacy. Where the series succeeds is in describing the interaction of all four STEM disciplines: science, technology, engineering and mathematics; and the benefits when they overlap. The appalling attitudes prevalent in the academia of his younger years are also brought to vivid life, with such nonsense as not questioning tutors piled onto the usual misogyny and xenophobia.

Albert Einstein

Contrary to the popular conception of the lone genius - and counter to the series' title - the role of Einstein's friends such as Marcel Grossmann and Michele Besso as his sounding boards and mathematical assistants is given a high profile. In addition, the creative aspect of science is brought to the fore in sequences that show how Einstein gained inspiration towards his special and general theories of relativity.

The moral dimension of scientific research is given prominence, from Fritz Haber's development of poison gas to Leo Szilard's persuasion of Einstein to first encourage and later dissuade the development of atomic weapons. As much as the scientific enterprise might appear to be separate from the rest of human concern, it is deeply interwoven with society; the term 'laboratory conditions' applies to certain processes, not to a wall isolating science from everything else. Scientists in Genius are shown to have the same human foibles as everyone else, from Einstein's serial adultery (admittedly veering towards Hollywood family drama at times, paternal guilt complex and all) to Philipp Lenard's dismissal of Einstein's theories due to his anti-Semitism rather than any scientific evidence. So much for scientific impartiality!

The last few episodes offer a poignant description of how even the greatest of scientific minds lose impetus, passing from the creative originality of young rebels to conservative, middle-aged stuck-in-the-muds, out of touch with the cutting edge. General readership books on physics often claim theoretical physicists do their best work before they are thirty, a common example being that Einstein might as well have spent his last twenty years fishing. Although not as detailed as the portrayal of his early, formative years, Einstein's obsessive (but failed) quest to find fault with quantum mechanics is a good description of how even the finest minds can falter.

All in all, the first series of Genius is a very noble attempt to describe the inspiration and background that led to some revolutionary scientific theories. The irony is that by concentrating on Einstein as a human being it might help the wider public gain a better appreciation, if not comprehensive understanding, of the work of scientists and role of STEM in society. Surely that's no bad thing, especially if it makes Hollywood rethink the lazy stereotype of the crazy-haired scientist seeking world domination. Or even encourages people to listen to trained experts rather than the rants of politicians and religious nutbars. Surely that's not a difficult choice?

Monday, 26 June 2017

The power of pond scum: are microalgae biofuels a realistic proposition?

I've previously discussed some very humble organisms, but they don't get much humbler than microalgae, photosynthetic organisms that generate about half our planet's atmospheric oxygen. Imagine then what potential there might be for their exploitation in a world of genetic manipulation and small-scale engineering. The total number of algal species is unknown, but estimates suggest some hundreds of thousands. With this potential in mind, private companies and government projects around the world have spent the past few decades - and a not inconsiderable amount of funding - attempting to generate a replacement for fossil fuels based on these tiny organisms.

For anyone with even a microgram's worth of common sense, developing eco-friendly substitutes for oil, coal and gas is a consummation devoutly to be wished, but behind the hype surrounding microalgae-derived fuel there is a wealth of opposing opinions and potentially some shady goings-on. Whilst other projects, such as creating ethanol from food crops, are continuing, the great hope - and hype - that surrounded algae-based solutions appears to be grinding to a halt.

Various companies forecast that 2012 would be the year the technology achieved commercial viability, but those predictions now appear to have been rather over-eager. It's therefore worth exploring what happens when hope, high-value commerce and cutting-edge technology meet. There are some big names involved in the research too: ExxonMobil, Shell and BP each pumped tens to hundreds of millions of dollars into microalgae fuel projects, only to either make substantial funding cuts or shut them down altogether since 2011.
Microalgae-derived biofuel
Manufacturing giants such as General Electric and Boeing have been involved in research for new marine and aircraft fuels, whilst the US Navy undertook tests in 2012 whereby algae-derived fuel was included in a 50:50 blend with conventional fossil fuel for ships and naval aircraft. Even shipping companies have become interested, with one boffin-worthy idea being for large cruise ships to grow and process their own fuel on-board. Carriers including United Airlines, Qantas, KLM and Air New Zealand have invested in these kerosene-replacement technologies, with the first two of these airlines having trialled fuel blends including 40% algae derivative. So what has gone wrong?

The issue appears to be one of scale: after initial success with laboratory-sized testing, the expansion to commercial production has encountered a range of obstacles that will most likely delay widespread implementation for at least another quarter century.

The main problems are these:
  1. The algae growing tanks need to occupy millions of acres of flat land, and there are arguments that there simply isn't enough such land in convenient locations.
  2. The growing process requires lots of water, which means large transportation costs to get the water to the production sites. Although waste water is usable, some estimates suggest there is not enough of this - even in the USA - for optimal production.
  3. Nitrogen and phosphorus are required as fertiliser, further reducing commercial viability. Some estimates suggest half the USA's annual phosphorus amount would need to be requisitioned for use in this one sector!
  4. Contamination by protozoans and fungi can rapidly destroy a growing pond's entire culture.
In 2012 the US National Academy of Sciences appeared to confirm these unfortunate issues. Reporting on the Department of Energy's goal to replace 5% of the nation's vehicle fossil fuel consumption with algae-derived biofuel, the Academy stated that production at this scale would place unfeasibly large demands on water and nutrient usage, as well as requiring heavy commitments from other energy sources.
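A quick back-of-envelope calculation shows why the land requirement balloons so quickly. The input figures below - annual US vehicle fuel consumption and per-acre pond yield - are round-number assumptions of my own for illustration, not the Academy's data:

```python
# Rough estimate of pond acreage implied by the DOE's 5% replacement goal.
# Both input figures are assumed round numbers for illustration only.
US_VEHICLE_FUEL_GAL = 140e9   # assumed annual US vehicle fuel use (gallons)
TARGET_FRACTION = 0.05        # DOE goal: replace 5% with algal biofuel
YIELD_GAL_PER_ACRE = 2_000    # assumed optimistic annual yield per acre of ponds

needed_gal = US_VEHICLE_FUEL_GAL * TARGET_FRACTION
acres = needed_gal / YIELD_GAL_PER_ACRE
print(f"{needed_gal / 1e9:.0f} billion gallons/year needs "
      f"~{acres / 1e6:.1f} million acres of ponds")
```

Even with an optimistic yield the answer lands in the millions of acres, before a single litre of water or gram of phosphorus has been accounted for.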

In a bid to maintain solvency, some independent research companies appear to have minimised such issues for as long as possible, finally diversifying when it appeared their funding was about to be curtailed or cut off. As with nuclear fusion research, commercial production of microalgae fuel holds much promise, but those holding the purse strings aren't as patient as the researchers.

There may be a hint of a silver lining to all this, even if wide-scale operations are postponed for many decades. The microalgae genus Chlorella - subject of a Scottish biofuel study - is proving to be a practical source of dietary supplements, from vitamins and minerals to Omega-3. It lacks only vitamin B12, but is an astonishing 50-60% protein by weight. As well as for human consumption, both livestock and aquaculture feed supplements can be derived from microalgae, although as usual there is a wealth of pseudoscientific nonsense in the marketing, such as the notion that it has an almost magical detox capability. Incidentally, Spirulina, the tablets and powder sold in health food outlets to make into green-gloop smoothies, is not a microalga but a B12-rich cyanobacterium, colloquially - and confusingly - known as blue-green algae. Glad we've cleared that one up!

If anything, the research into microalgae-derived biofuels is a good example of how new technology and commercial enterprise uneasily co-exist; each needs the other, but reaching a workable compromise is perhaps just as tricky as the research itself. As for government-funded projects towards a better future for all, I'll leave you to decide where the interests of our current leaders lie...