Sunday, 23 June 2019

Spray and walk away? Why stratospheric aerosols could be saviours or destroyers

My first scientific encounters with aerosols weren't particularly good ones. In my early teens, I read that the CFC propellants used in aerosol sprays were depleting the ozone layer. Therefore, tiny atmospheric particles had negative connotations for me from my formative years. This was further reinforced by Carl Sagan and Richard Turco's 1990 book A Path Where No Man Thought: Nuclear Winter and the End of the Arms Race, which discussed the potentially devastating effects of high-altitude aerosols around the world following a nuclear attack. Strike two against these pesky particles!

Of course, aerosols aren't just man-made. The stratospheric dust particles generated by the Chicxulub impact event 66 million years ago are known to have been instrumental in the global climate disruption that wiped out the dinosaurs and many other life forms. This would have been in addition to the thousands of years of environmental changes caused by sulfur aerosols from the Deccan Traps flood basalt eruptions. Rather more recently, the 1815 eruption of Mount Tambora led to starvation and epidemics around the world for up to three years.

Now that our civilisation is generating a rapid increase in global temperatures, numerous solutions are being researched. One of the most recent areas involves reducing the amount of solar radiation reaching the Earth's surface. Several methods have been suggested for this, but this year sees a small-scale experiment to actually test a solution, namely seeding the atmosphere with highly reflective particles in an artificial recreation of a volcanic event. The Stratospheric Controlled Perturbation Experiment (SCoPEx) is a solar geoengineering project involving Harvard University that will use a balloon to release calcium carbonate in aerosol form at about twenty kilometres above the Earth's surface, analysing the local airspace the following day to assess the effects.

This experiment is controversial for several reasons. Firstly, it doesn't lead to any reduction in greenhouse gases and particulate pollutants; if anything, by sweeping the issue under a stratospheric rug, it could allow fossil fuel corporations to maintain production levels and reduce investment in alternatives. And if recent reports by meteorologists are correct that natural and unintentional man-made aerosols are already mitigating global warming, then the gross effects of heat pollution must be higher than realised!

Next, this minute level of testing is unlikely to pinpoint issues that operational use might generate, given the chaotic nature of atmospheric weather patterns. To date, numerous computer simulations have been run, but bearing in mind how inaccurate weather forecasting is beyond ten days, nothing can be as accurate as the real thing. Therefore, at what point could a test prove that the process is effective and safe enough to be carried out on a global scale? It might require an experiment so large that it becomes both the research and the actual process itself!

The duration that the aerosols remain aloft is still not completely understood, hinting that regular replenishment would be essential. In addition, could the intentionally-polluted clouds capture greater amounts of water vapour, at first holding onto and then dropping their moisture so as to cause drought followed by deluge? Clouds cannot be contained within the boundaries of the testing nation, meaning other countries could suffer these unintended side-effects.

It may be that as a back-up plan, launching reflective aerosols into the stratosphere makes sense, but surely it makes much more sense to reduce greenhouse gas emissions and increase funding of non-polluting alternatives? The main emphasis from ecologists to date has been to remove human-generated substances from the environment, not add new ones in abundance. I'm all for thinking outside the box, but I worry that the only way to test this technique at a fully effective level involves an experiment so large as to be beyond the point of no return. Such chemical-based debacles as ozone depletion via chlorofluorocarbons (CFCs) prove that in just a matter of decades we can make profound changes to the atmosphere - and badly affect regions furthest removed from the source itself. So why not encourage more reducing, reusing and recycling instead?

Monday, 10 June 2019

Defrosting dangers: global warming and the biohazards under the ice

Despite frequent news reports on the thawing of polar and glacial ice, there appears to be less concern shown towards this aspect of climate change than many others. Perhaps this is due to so few humans living in these regions; lack of familiarity with something helps us to ignore its true importance. The most obvious effects of melting ice are said to be the increase in atmospheric carbon, rising sea levels and unpredictable weather patterns, but there is another threat to our species that is only just beginning to be noticed - and as yet has failed to generate any mitigation plans.

A report last year confirmed a frightening cause behind the deaths back in 2015 of approximately half the world's remaining saiga antelope population: thanks to warmer and more humid weather, a type of bacteria usually confined to their noses had spread into the antelopes' bloodstream. Although not the sort of news to attract much attention even from nature-lovers, this ecological David and Goliath scenario looks set to be repeated in colder environments around the globe. Microscopic and fungal life forms that have been trapped or dormant for long periods, possibly millennia, may be on the verge of escaping their frozen confines.

The various film adaptations of John W. Campbell's 1938 novella Who Goes There? show the mayhem caused by an alien organism that has escaped its icy tomb. The real-life equivalents to this fictional invader are unlikely to be of extra-terrestrial origin, but they could prove at least as perilous, should climate change allow them to thaw out. The problem is easy to state: there is an enormous amount of dormant microbial life trapped in ice and permafrost that is in danger of escaping back into the wider ecosystem.

In the first quarter of the Twentieth Century over a million reindeer were killed by anthrax, with subsequent outbreaks occurring sporadically until as late as 1993. Recent years have seen the death of both farmers and their cattle from infection related to the thawing of a single infected reindeer carcass. In various incidents in 2016, dozens of Siberian herders and their families were admitted to hospital while Russian biohazard troops were flown in to run the clean-up operations. One issue is that until recently the infected animals - domesticated as well as wild - have rarely been disposed of to the recommended safety standards. Therefore, it doesn't take much for reactivated microbes to spread into environments where humans can encounter them.

Of course, the number of people and livestock living near glaciers and the polar caps is relatively low, but there are enormous regions of permafrost that are used by herders and hunters. Meltwater containing pathogens can get into local water supplies (conventional water treatment doesn't kill anthrax spores), or even reach further afield via oceans - where some microbes can survive for almost two years. The record high temperatures in some of the Northern Hemisphere's permafrost zones are allowing the spread of dangerous biological material into regions that may not have seen it for centuries - or far longer.

Decades-old anthrax spores aren't the only worry. Potential hazards include the smallpox virus, which caused a Siberian epidemic in the 1890s and may be able to survive in a freeze-dried state in victims' corpses before - however unlikely - reviving due to warmer temperatures. In addition, it should be remembered that many of the diseases that infect Homo sapiens today only arose with the development of farming, being variants of bacteria and viruses that transferred across from our domestic livestock.

This would suggest that permafrost and ice sheets include ancient microbes that our species hasn't interacted with for centuries - and which we may therefore have minimal resistance to. Although natural sources of radiation are thought to destroy about half of a bacterium's genome within a million years, there have been various - if disputed - claims of far older bacteria being revived, including those found in salt crystals that are said to be 250 million years old. In this particular case, their location deep underground is said to have minimised cosmic ray mutations and thus ensured their survival. Sounds like one for the Discovery Channel if you ask me, but never say never...
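To put that half-life argument into rough numbers, here is a minimal back-of-the-envelope sketch. It assumes the figure quoted above - background radiation destroying about half of a dormant genome per million years - behaves as a simple exponential decay, which is a simplification rather than established microbiology:

```python
# Back-of-the-envelope model: treat radiation damage as exponential decay,
# so the intact fraction of a dormant genome after t years is 0.5 ** (t / h),
# where h is the assumed one-million-year "half-life" from the text.

HALF_LIFE_YEARS = 1.0e6  # assumed figure, not a measured constant

def intact_fraction(years: float) -> float:
    """Fraction of the genome still undamaged after the given time."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

for age in (10_000, 1_000_000, 250_000_000):
    print(f"{age:>11,} years: {intact_fraction(age):.3g}")
```

On this crude model almost nothing of a genome should remain intact after 250 million years, which is precisely why the salt-crystal claims are disputed - unless, as their proponents argue, deep underground shielding greatly lengthened the effective half-life.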

Even if this improbable longevity turns out to be inaccurate, it is known that dormant spore-forming bacteria such as those responsible for tetanus and botulism could, like anthrax, be revived after decades of containment in permafrost. Fungal spores are likewise known to survive similar interments; with amphibian, bat and snake populations currently declining due to the rapid spread of fungal pathogens, the escape of such material shouldn't be taken lightly.

So can anything be done to prevent these dangers? Other than reversing the increase in global temperatures, I somehow doubt it. Even the locations of some of the mass burials during twentieth-century reindeer epidemics have been lost, meaning those areas cannot be turned into no-go zones. Anthrax should perhaps be thought of as only one of a suite of biohazards that melting permafrost may be about to inflict on a largely uninformed world. The death of some remote animals and their herders may not earn much public sympathy, but if the revived pathogens spread to the wider ecosystem, there could be far more at stake. Clearly, ignorance is no protection from the microscopic, uncaring dangers now waking up in our warming world.

Tuesday, 28 May 2019

Praying for time: the rise and fall of the New Zealand mantis


While the decline of the giant panda, the great apes and various cetacean species has long garnered headlines, our scale prejudice has meant invertebrates have fared rather less well. Only with the worrying spread of colony collapse disorder (CCD) in bee hives have insect-themed stories gained public attention; yet most of the millions of other small critters remain on the sidelines. I've often mentioned that overlooking these small marvels could backfire on us, considering we don't know the knock-on effect their rapid decline - and possible near-future extinction - would have on the environment we rely on.

One such example here in New Zealand is our native praying mantis Orthodera novaezealandiae, which for all we know could be a key player in the pest control of our farms and gardens. Mantid species are often near the apex of invertebrate food webs, consuming the likes of mosquitoes, moth caterpillars and cockroaches. I admit that they are not exactly discriminating and will also eat useful species such as ladybirds, or decorative ones like monarch butterflies. However, they are definitely preferable to pesticides, a known cause of CCD today and an acknowledged factor in insect decline since Rachel Carson's pioneering 1962 book Silent Spring.

Of course, we shouldn't just support species due to their usefulness: giant pandas aren't being conserved for any particular practical benefit. From a moral perspective it's much easier to convince the public that we should prevent their extinction than that of the rather uncuddly mantis. We still know so little about many insect species that it's difficult to work out which need to be saved in order to preserve our agribusiness (versus all the others that of course should be preserved regardless). I'm not averse to careful extermination of plagues of locusts or mosquitoes, but indiscriminate destruction due to greed or stupidity is, well, stupid, really.

Down but not out: the New Zealand praying mantis Orthodera novaezealandiae

Back to O. novaezealandiae. I've only seen New Zealand's sole native mantis species three times in the 'wild': twice in my garden in the past two years and once in my local reserve before that. What is particularly interesting is that since initial descriptions in the 1870s, hypotheses regarding its origin appear to have evolved due to patriotic trends as much as to factual evidence. Late Nineteenth Century accounts of its spread suggest an accidental importation from Australia by European sailing ship, since it is a clumsy, short-range flier and seabirds are unlikely to carry the insects - and certainly not their cemented oothecae (egg cases) - on their feet.

However, some Victorian naturalists thought the insect was incorporated into Maori tradition, implying a precolonial existence. In contrast, A.W.B. Powell's 1947 book Native Animals of New Zealand refers to the native mantis as Orthodera ministralis (a name today used only for the Australian green mantis) and the author states it may well be a recent arrival from across the Tasman Sea. So the native species may not be particularly native after all! I find this fascinating, inasmuch as it shows how little we understand about our local, smaller-scale wildlife when compared to New Zealand's birds, bats and even reptiles.

The specimens in my garden have lived up to their reputation for being feisty: they seem to size you up before launching themselves directly towards you, only for their wings to rapidly falter and force the insect into an emergency landing. After the most recent observation, I looked around the outside of the house and found three oothecae, two of which were under a window sill built in 2016. These finds are cheering, as it means that at least in my neighbourhood they must be holding their own.

Perhaps their chief enemy these days is the invasive Miomantis caffra. This inadvertently-introduced South African mantis was first seen in 1978 and is rapidly spreading throughout New Zealand's North Island. The intruder - frequently spotted in my garden - has several advantages over O. novaezealandiae: firstly, it is able to survive through winter. Second, it produces rather more nymphs per ootheca; combined with hatching over a longer period, this presumably leads to a larger number of survivors per year. In addition, and most unfortunately, the native male appears to find the (cannibalistic) South African female more attractive than the female of its own species, frequently resulting in its own demise during mating.

Humans too have further aided the decline of the native mantis with the accidental introduction of parasitic wasps and widespread use of pesticides. After less than a century and a half of concerted effort, European settlers have managed to convert a large proportion of the best land in this corner of the Pacific into a facsimile of the English countryside - but at what cost to the local fauna and flora?

Working to the old adage that we won't save what we don't love and cannot love what we don't know, perhaps what is really required is an education piece disguised as entertainment. Promoting mammals in anthropomorphic form has long been a near-monopoly of children's literature (think Wind in the Willows) but perhaps it is about time that invertebrates had greater public exposure too. Gerald Durrell's 1956 semi-autobiographical best-seller My Family and Other Animals includes an hilarious battle in the author's childhood bedroom between Cicely the praying mantis and the slightly smaller Geronimo the gecko, with the lizard only winning after dropping its tail and receiving other injuries. Perhaps a contemporary writer telling tales in a similar vein might inspire more love for these overlooked critters before it is too late. Any takers?


Monday, 13 May 2019

Which side are you on? The mysterious world of brain lateralisation

There are many linguistic examples of ancient superstitions still lurking in open sight. Among the more familiar are sinister and dexterous, which are directly related to being left- and right-handed respectively. These words are so commonplace that we rarely consider the pre-scientific thinking behind them. I was therefore interested last year to find out that I am what is known as 'anomalous dominant'. Sounds ominous!

The discovery occurred during my first archery lesson where - on conducting the Miles test for ocular dominance - I discovered that despite being right-handed, I am left-eye dominant. I'd not heard of cross-dominance before, so I decided to do some research. As Auckland Central City Library didn't have any books on the subject I had to resort to the Web, only to find plenty of contradictory information, often of dubious accuracy, with some sites clearly existing so as to sell their strategies for overcoming issues related to the condition.

Being cross-dominant essentially means it takes longer for sensory information to be converted into physical activity, since signals must make an additional crossing between the hemispheres of the brain to link the dominant eye with the dominant hand. One common claim is that the extra time this requires has an effect on coordination and thus affects sporting ability. I'm quite prepared to accept that idea as I've never been any good at sport, although I must admit I got used to shooting a bow left-handed much quicker than I thought; lack of strength on my left side proved to be a more serious issue than lack of coordination due to muscle memory.

Incidentally, when I did archery at school in the 1980s, no mention was ever made about testing for eye dominance and so I shot right-handed! I did try right-handed shooting last year, only to find that I was having to aim beyond the right edge of the sight in order to make up for the parallax error caused by alignment of the non-dominant eye.

Research over the past century suggests children with crossed lateralisation could suffer a reduction in academic achievement or even general intelligence as a direct result, although a 2017 meta-analysis found little firm evidence to support this. Archery websites tend to claim that the percentage of people with mixed eye-hand dominance is around 18%, but other sources I have found vary anywhere from 10% to 35%. This lack of agreement over so fundamental a statistic suggests that there is still much research to be done on the subject, since anecdotal evidence is presumably being disseminated due to lack of hard data.

There is another type of brain lateralisation, colloquially deemed ambidexterity, although this term covers a wide range of mixed-handedness abilities. Despite descriptions of ambidextrous people as lucky or gifted (frequently-named examples include Leonardo da Vinci, Beethoven, Gandhi and Albert Einstein), parenting forums describe serious issues arising from the lack of a dominant brain hemisphere. Potential problems include dyspraxia, dyslexia and ADHD, even autism or schizophrenia.

While the reporting of individual families can't be considered of the same quality as professional research, a 2010 report by Imperial College London broadly aligns with parents' stories. 'Functional disconnection syndrome' has been linked to learning disabilities and slower physical reaction times, rooted in the communications between the brain's hemispheres. There also seems to be evidence for the opposite phenomenon, in which the lack of a dominant hemisphere causes too much communication between left and right sides, generating noise that impedes normal mental processes.

What I would like to know is why there is so little information publicly available. I can only conclude that this is why there is such a profusion of non-scientific (if frequently first-hand) evidence. I personally know of people with non-dominant lateralisation who have suffered from a wide range of problems from dyslexia to ADHD, yet they have told me that their general practitioners failed to identify root causes for many years and instead suggested conventional solutions such as anti-depressants.

Clearly this is an area that could do with much further investigation; after all, if ambidexterity is a marker for abnormal brain development that arose in utero (there is some evidence that a difficult pregnancy could be the root cause) then surely there is a clearly defined pathway for wide-scale research? This could in turn lead to a reduction in the number of people born with these problems.

In the same way that a child's environment can have a profound effect on their mental well-being and behaviour, could support for at-risk pregnant women reduce the chance of their offspring suffering from these conditions? I would have thought there would be a lot to gain from this, yet I can't find evidence of any medical research seeking a solution. Meanwhile, why not try the Miles test yourself and find out where you stand when it comes to connectivity between your brain, senses and limbs?

Tuesday, 23 April 2019

Lift to the stars: sci-fi hype and the space elevator

As an avid science-fiction reader during my childhood, one of the most outstanding extrapolations for future technology was that of the space elevator. As popularised in Arthur C. Clarke's 1979 novel, The Fountains of Paradise, the elevator was described as a twenty-second century project. I've previously written about near-future plans for private sector spaceflight, but the elevator would be a paradigm shift in space transportation: a way of potentially reaching as far as geosynchronous orbit without the need for rocket engines.

Despite the apparent novelty of the idea - a tower stretching from the Earth's surface (or indeed any planet's) to geosynchronous orbit and beyond - the first description dates back to 1895 and the writings of the Russian theoretical astronautics pioneer Konstantin Tsiolkovsky. Since the dawn of the Space Age, engineers and designers in various nations have either reinvented the elevator from scratch or elaborated on Tsiolkovsky's idea.

There have of course been remarkable technological developments over the intervening period, with carbyne, carbon nanotubes, tubular carbon 60 and graphene seen as potential materials for the elevator, but we are still a long way from being able to build a full-size structure. Indeed, there are now known to be many more impediments to the space elevator than first thought, including a man-made issue that didn't exist at the end of the nineteenth century. Despite this, there seems to be a remarkable number of recent stories about elevator-related experiments and the near-future feasibility of such a project.

An objective look at practical - as opposed to theoretical - studies shows that results to date have been decidedly underwhelming. The Space Shuttle programme started tethered satellite tests in 1992. After an initial failure (the first test achieved a distance of a mere 256 metres), a follow-up four years later deployed a tether that was a rather more impressive twenty kilometres long. Then last year the Japanese STARS-Me experiment tested a miniature climber component in orbit, albeit over a minuscule distance of nine metres. Bearing in mind that a real tower would be over 35,000 kilometres long, it cannot be argued that the technology is almost available for a full-scale elevator.
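Incidentally, the 35,000-plus kilometre figure isn't arbitrary: a full elevator cable must reach at least geostationary altitude, which follows from Kepler's third law for an orbit whose period matches one sidereal day. A minimal sketch of the arithmetic, using standard published values for Earth's gravitational parameter and rotation period:

```python
import math

# The radius of a geostationary orbit satisfies Kepler's third law:
#   r = (GM * T^2 / (4 * pi^2)) ** (1/3)
# where T is one sidereal day (the Earth's true rotation period).

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.0905   # seconds
EARTH_RADIUS_KM = 6378.1    # equatorial radius, km

def geostationary_altitude_km() -> float:
    """Height of geostationary orbit above the equator, in kilometres."""
    r_m = (GM_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
    return r_m / 1000 - EARTH_RADIUS_KM

print(f"{geostationary_altitude_km():,.0f} km")  # about 35,786 km
```

Compare that altitude with the centimetre-scale lengths of nanotube cable produced so far and the scale of the gap becomes obvious.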

This hasn't prevented continuous research by the International Space Elevator Consortium (ISEC), which was formed in 2008 to promote the concept and the technology behind it. It's only to be expected that fans of the space elevator would be enthusiastic, but to my mind their assessment that we are 'tech ready' for its development seems optimistic to the point of straining credibility.

A contrasting view is that of Google X's researchers, who mothballed their space elevator work in 2014 on the grounds that the requisite technology will not be available for decades to come. While the theoretical strength of carbon nanotubes meets the requirements, the total of cable manufactured to date is seventy centimetres, showing the difficulties in achieving mass production. A key stopping point apparently involves catalyst activity probability; until that problem is resolved, a space elevator less than one metre in length isn't going to convince me, at least.

What is surprising then is that in 2014, the Japanese Obayashi Corporation published a detailed concept that specified a twenty-year construction period starting in 2030. Not to be outdone, the China Academy of Launch Vehicle Technology released news in 2017 of a plan to actually build an elevator by 2045, using a new carbon nanotube fibre. Just how realistic is this, when so little of the massive undertaking has been prototyped beyond the most basic of levels?

The overall budget is estimated to be around US$90 billion, which suggests an international collaboration in order to offset the many years before the completed structure turns a profit. In addition to the materials issue, there are various other problems yet to be resolved. Chief among these are finding a suitable equatorial location (an ocean-based anchor has been suggested), capturing an asteroid for use as a counterweight, dampening vibrational harmonics, removing space junk, micrometeoroid impact protection and shielding passengers from the Van Allen radiation belts. Clearly, just developing the construction material is only one small element of the ultimate effort required.

Despite all these issues, general-audience journalism regarding the space elevator - and therefore the resulting public perception - appears as optimistic as the Chinese announcement. How much these two feed back on each other is difficult to ascertain, but there certainly seems to be a case of running before learning to walk. It's strange that China made the claim, bearing in mind how many other rather important things the nation's scientists should be concentrating on, such as environmental degradation and pollution.

Could it be that China's STEM community has fallen for the widespread hype rather than the prosaic reality? It's difficult to say how this could be so, considering the sophisticated internet firewall that blocks much of the outside world's content. Clearly though, the world wide web is full of science and technology stories that consist of parrot-fashion copying, little or no analysis and clickbait-driven headlines.

A balanced, in-depth synthesis of the relevant research is often a secondary consideration. The evolutionary biologist Stephen Jay Gould once labelled the negative impact of such lazy journalism as "authorial passivity before secondary sources." In this particular case, the public impression of what is achievable in the next few decades seems closer to Hollywood science fiction than scientific fact.

Of course, the irony is that even the more STEM-minded section of the public is unlikely to read the original technical articles in a professional journal. Instead, we are reliant on general readership material and the danger inherent in its immensely variable quality. As far as the space elevator goes (currently, about seventy centimetres), there are far more pressing concerns requiring engineering expertise; US$90 billion could, for example, fund projects to improve quality of life in the developing world.

That's not to say that I believe China will construct a space elevator during this century, or that the budget could be found anywhere else, either. But there are times when there's just too much hype and nonsense surrounding science and not enough fact. It's easy enough to make real-world science appear dull next to the likes of Star Trek, but now more than ever we need the public to trust and support STEM if we are to mitigate climate change and all the other environmental concerns.

As for the space elevator itself, let's return to Arthur C. Clarke. Once asked when he thought humanity could build one, he replied: "Probably about fifty years after everybody quits laughing." Unfortunately, bad STEM journalism seems to have joined conservatism as a negative influence in the struggle to promote science to non-scientists. And that's no laughing matter.