Sunday, 23 June 2019

Spray and walk away? Why stratospheric aerosols could be saviours or destroyers

My first scientific encounters with aerosols weren't particularly good ones. In my early teens, I read that the CFC propellants used in aerosol sprays were depleting the ozone layer. Therefore, tiny atmospheric particles had negative connotations for me from my formative years. This was further reinforced by Carl Sagan and Richard Turco's 1990 book A Path Where No Man Thought: Nuclear Winter and the End of the Arms Race, which discussed the potentially devastating effects of high-altitude aerosols around the world following a nuclear attack. Strike two against these pesky particles!

Of course, aerosols aren't just man-made. The stratospheric dust particles generated by the Chicxulub impact event 66 million years ago are known to have been instrumental in the global climate disruption that wiped out the dinosaurs and many other life forms. This would have been in addition to the thousands of years of environmental changes caused by sulfur aerosols from the Deccan Traps flood basalt eruptions. Rather more recently, the Mount Tambora volcanic eruption in 1815 led to starvation and epidemics around the world for up to three years.

Now that our civilisation is generating a rapid increase in global temperatures, numerous solutions are being researched. One of the most recent areas involves reducing the amount of solar radiation reaching the Earth's surface. Several methods have been suggested for this, but this year sees a small-scale experiment to actually test a solution, namely seeding the atmosphere with highly reflective particles in an artificial recreation of a volcanic event. The Stratospheric Controlled Perturbation Experiment (SCoPEx) is a solar geoengineering project involving Harvard University that will use a balloon to release calcium carbonate in aerosol form at about twenty kilometres above the Earth's surface, analysing the local airspace the following day to assess the effects.

This experiment is controversial for several reasons. Firstly, it doesn't lead to any reduction in greenhouse gases and particulate pollutants; if anything, by sweeping the issue under a stratospheric rug, it could allow fossil fuel corporations to maintain production levels and reduce investment in alternatives. If the recent reports by meteorologists that natural and unintentional man-made aerosols are already mitigating global warming are correct, then the gross effects of heat pollution must be higher than realised!

Next, this sort of minute-scale testing is unlikely to pinpoint issues that operational use might generate, given the chaotic nature of atmospheric weather patterns. To date, numerous computer simulations have been run, but bearing in mind how inaccurate weather forecasting is beyond ten days, nothing can be as accurate as the real thing. Therefore, at what point could a test prove that the process is effective and safe enough to be carried out on a global scale? It might require an experiment of such scale that it is both the research and the actual process itself!

The duration that the aerosols remain aloft is still not completely understood, hinting that regular replenishment would be essential. In addition, could the intentionally-polluted clouds capture greater amounts of water vapour, at first holding onto and then dropping their moisture so as to cause drought followed by deluge? Clouds cannot be contained within the boundaries of the testing nation, meaning other countries could suffer these unintended side-effects.

It may be that as a back-up plan, launching reflective aerosols into the stratosphere makes sense, but surely it makes much more sense to reduce greenhouse gas emissions and increase funding of non-polluting alternatives? The main emphasis from ecologists to date has been to remove human-generated substances from the environment, not add new ones in abundance. I'm all for thinking outside the box, but I worry that the only way to test this technique at a fully effective level involves such a large-scale experiment as to be beyond the point of no return. Such chemical-based debacles as ozone depletion via chlorofluorocarbons (CFCs) prove that in just a matter of decades we can make profound changes to the atmosphere - and badly affect regions furthest removed from the source itself. So why not encourage more reducing, reusing and recycling instead?

Monday, 10 June 2019

Defrosting dangers: global warming and the biohazards under the ice

Despite frequent news reports on the thawing of polar and glacial ice, there appears to be less concern shown towards this aspect of climate change than many others. Perhaps this is due to so few humans living in these regions; lack of familiarity with something helps us to ignore its true importance. The most obvious effects of melting ice are said to be the increase in atmospheric carbon, rising sea levels and unpredictable weather patterns, but there is another threat to our species that is only just beginning to be noticed - and as yet has failed to generate any mitigation plans.

A report last year confirmed a frightening cause behind the deaths back in 2015 of approximately half the world's remaining saiga antelope population: thanks to warmer and more humid weather, a type of bacteria usually confined to their noses had spread to the antelopes' bloodstream. Although not the sort of news to attract much attention even from nature-lovers, this ecological David and Goliath scenario looks set to be repeated in colder environments around the globe. Microscopic and fungal life forms that have been trapped or dormant for long periods, possibly millennia, may be on the verge of escaping their frozen confines.

The various film adaptations of John W. Campbell's 1938 novella Who Goes There? show the mayhem caused by an alien organism that has escaped its icy tomb. The real-life equivalents to this fictional invader are unlikely to be of extra-terrestrial origin, but they could prove at least as perilous, should climate change allow them to thaw out. The problem is easy to state: there is an enormous amount of dormant microbial life trapped in ice and permafrost that is in danger of escaping back into the wider ecosystem.

In the first quarter of the Twentieth Century, over a million reindeer were killed by anthrax, with subsequent outbreaks occurring sporadically until as late as 1993. Recent years have seen the deaths of both farmers and their cattle from infections related to the thawing of a single infected reindeer carcass. In various incidents in 2016, dozens of Siberian herders and their families were admitted to hospital while Russian biohazard troops were flown in to run the clean-up operations. One issue is that until recently the infected animals - domesticated as well as wild - have rarely been disposed of to the recommended safety standards. Therefore, it doesn't take much for reactivated microbes to spread into environments where humans can encounter them.

Of course, the number of people and livestock living near glaciers and the polar caps is relatively low, but there are enormous regions of permafrost that are used by herders and hunters. Meltwater containing pathogens can get into local water supplies (conventional water treatment doesn't kill anthrax spores), or even reach further afield via oceans - where some microbes can survive for almost two years. The record high temperatures in some of the Northern Hemisphere's permafrost zones are allowing the spread of dangerous biological material into regions that may not have seen it for centuries - or far longer.

Decades-old anthrax spores aren't the only worry. Potential hazards include the smallpox virus, which caused a Siberian epidemic in the 1890s and may be able to survive in a freeze-dried state in victims' corpses before - however unlikely - reviving due to warmer temperatures. In addition, it should be remembered that many of the diseases that infect Homo sapiens today only arose with the development of farming, being variants of bacteria and viruses that transferred across from our domestic livestock.

This would suggest that permafrost and ice sheets include ancient microbes that our species hasn't interacted with for millennia - and which we may therefore have minimal resistance to. Although natural sources of radiation are thought to destroy about half of a bacterium's genome within a million years, there have been various - if disputed - claims of far older bacteria being revived, including those found in salt crystals that are said to be 250 million years old. In this particular case, their location deep underground is said to have minimised cosmic ray mutations and thus ensured their survival. Sounds like one for the Discovery Channel if you ask me, but never say never...
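Taking the half-a-genome-per-million-years figure at face value, a quick back-of-the-envelope calculation shows why the 250-million-year claim invites such scepticism. The sketch below (in Python, using the illustrative half-life from the paragraph above rather than any rigorous model of DNA damage - deep burial would, as noted, slow the decay) just applies simple exponential halving:

```python
def intact_fraction(million_years: float, half_life_myr: float = 1.0) -> float:
    """Fraction of a genome left undamaged after a given time,
    assuming radiation halves the intact portion every half-life."""
    return 0.5 ** (million_years / half_life_myr)

print(intact_fraction(1))    # 0.5 - half gone after one million years
print(intact_fraction(250))  # ~5.5e-76 - effectively nothing after 250 Myr
```

At face value, then, survival over 250 million years would require the effective half-life to be vastly longer than the surface figure - which is exactly what the deep-underground shielding argument has to claim.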

Even if this improbable longevity turns out to be inaccurate, it is known that dormant spore-forming bacteria such as those leading to tetanus and botulism could, like anthrax, be revived after decades of containment in permafrost. Fungal spores are likewise known to survive similar interments; with amphibian, bat and snake populations currently declining due to the rapid spread of fungal pathogens, the escape of such material shouldn't be taken lightly.

So can anything be done to prevent these dangers? Other than reversing the increase in global temperatures, I somehow doubt it. Even the locations of some of the mass burials during twentieth-century reindeer epidemics have been lost, meaning those areas cannot be turned into no-go zones. Anthrax should perhaps be thought of as only one of a suite of biohazards that melting permafrost may be about to inflict on a largely uninformed world. The death of some remote animals and their herders may not earn much public sympathy, but if the revived pathogens spread to the wider ecosystem, there could be far more at stake. Clearly, ignorance is no protection from the microscopic, uncaring dangers now waking up in our warming world.

Tuesday, 28 May 2019

Praying for time: the rise and fall of the New Zealand mantis


While the decline of the giant panda, great apes and various cetacean species has long garnered headlines, our scale prejudice has meant invertebrates have fared rather less well. Only with the worrying spread of colony collapse disorder (CCD) in bee hives have insect-themed stories gained public attention; yet most of the millions of other small critters remain on the sidelines. I've often mentioned that overlooking these small marvels could backfire on us, considering we don't know the knock-on effect their rapid decline - and possible near-future extinction - would have on the environment we rely on.

One such example here in New Zealand is our native praying mantis Orthodera novaezealandiae, which for all we know could be a key player in the pest control of our farms and gardens. Mantid species are often near the apex of invertebrate food webs, consuming the likes of mosquitoes, moth caterpillars and cockroaches. I admit that they are not exactly discriminating and will also eat useful species such as ladybirds or decorative ones like monarch butterflies. However, they are definitely preferable to pesticides, a known cause of CCD today and an acknowledged factor in insect decline since Rachel Carson's pioneering 1962 book Silent Spring.

Of course, we shouldn't just support species due to their usefulness: giant pandas aren't being conserved for any particular practical benefit. From a moral perspective it's much easier to convince the public that we should prevent their extinction than that of the rather uncuddly mantis. We still know so little about many insect species that it's difficult to work out which need to be saved in order to preserve our agribusiness (versus all the others that of course should be preserved regardless). I'm not averse to careful extermination of plagues of locusts or mosquitoes, but indiscriminate destruction due to greed or stupidity is, well, stupid really.

Down but not out: the New Zealand praying mantis Orthodera novaezealandiae



Back to O. novaezealandiae. I've only seen New Zealand's sole native mantis species three times in the 'wild': twice in my garden in the past two years and once in my local reserve before that. What is particularly interesting is that since the initial descriptions in the 1870s, hypotheses regarding its origin appear to have evolved due to patriotic trends as much as to factual evidence. Late Nineteenth Century accounts of its spread suggest an accidental importation from Australia by European sailing ship, since it is a clumsy, short-range flier and seabirds are unlikely to carry the insects - and certainly not their cemented oothecae (egg sacs) - on their feet.

However, some Victorian naturalists thought the insect was incorporated into Maori tradition, implying a precolonial existence. In contrast, A.W.B. Powell's 1947 book Native Animals of New Zealand refers to the native mantis as Orthodera ministralis (which today is only used to describe the Australian green mantis) and the author states it may well be a recent arrival from across the Tasman Sea. So the native species may not be particularly native after all! I find this fascinating, insomuch as it shows how little we understand about our local, smaller-scale wildlife when compared to New Zealand's birds, bats and even reptiles.

The specimens in my garden have lived up to their reputation for being feisty: they seem to size you up before launching themselves directly towards you, only for their wings to rapidly falter and force the insect into an emergency landing. After the most recent observation, I looked around the outside of the house and found three oothecae, two of which were under a window sill built in 2016. These finds are cheering, as it means that at least in my neighbourhood they must be holding their own.

Perhaps their chief enemy these days is the invasive Miomantis caffra. This inadvertently-introduced South African mantis was first seen in 1978 and is rapidly spreading throughout New Zealand's North Island. The intruder - frequently spotted in my garden - has several advantages over O. novaezealandiae: firstly, it is able to survive through winter. Secondly, it produces rather more nymphs per ootheca; combined with hatching over a longer period, this presumably leads to a larger number of survivors per year. In addition, and most unfortunately, the native male appears to find the (cannibalistic) South African female more attractive than the female of its own species, frequently resulting in its own demise during mating.

Humans have further aided the decline of the native mantis through the accidental introduction of parasitic wasps and the widespread use of pesticides. After less than a century and a half of concerted effort, European settlers have managed to convert a large proportion of the best land in this corner of the Pacific into a facsimile of the English countryside - but at what cost to the local fauna and flora?

Working to the old adage that we won't save what we don't love and cannot love what we don't know, perhaps what is really required is an education piece disguised as entertainment. Promoting mammals in anthropomorphic form has long been a near-monopoly of children's literature (think Wind in the Willows) but perhaps it is about time that invertebrates had greater public exposure too. Gerald Durrell's 1956 semi-autobiographical best-seller My Family and Other Animals includes an hilarious battle in the author's childhood bedroom between Cicely the praying mantis and the slightly smaller Geronimo the gecko, with the lizard only winning after dropping its tail and receiving other injuries. Perhaps a contemporary writer telling tales in a similar vein might inspire more love for these overlooked critters before it is too late. Any takers?


Monday, 13 May 2019

Which side are you on? The mysterious world of brain lateralisation

There are many linguistic examples of ancient superstitions still lurking in plain sight. Among the more familiar are sinister and dexterous, which are directly related to being left- and right-handed respectively. These words are so commonplace that we rarely consider the pre-scientific thinking behind them. I was therefore interested last year to find out that I am what is known as 'anomalous dominant'. Sounds ominous!

The discovery occurred during my first archery lesson where - on conducting the Miles test for ocular dominance (extending both arms, sighting a distant object through a small triangle formed between the hands, then closing each eye in turn to see which keeps the object centred) - I discovered that despite being right-handed, I am left-eye dominant. I'd not heard of cross-dominance before, so I decided to do some research. As Auckland Central City Library didn't have any books on the subject I had to resort to the Web, only to find plenty of contradictory information, often of dubious accuracy, with some sites clearly existing so as to sell their strategies for overcoming issues related to the condition.

Being cross-dominant essentially means it takes longer for sensory information to be converted into physical activity, since the dominant senses and limbs rely on additional signal transmission between the hemispheres of the brain. One common claim is that the extra time this requires has an effect on coordination and thus affects sporting ability. I'm quite prepared to accept that idea as I've never been any good at sport, although I must admit I got used to shooting a bow left-handed much more quickly than I expected; lack of strength on my left side proved to be a more serious issue than lack of coordination due to muscle memory.

Incidentally, when I did archery at school in the 1980s, no mention was ever made about testing for eye dominance and so I shot right-handed! I did try right-handed shooting last year, only to find that I was having to aim beyond the right edge of the sight in order to make up for the parallax error caused by alignment of the non-dominant eye.

Research over the past century suggests children with crossed lateralisation could suffer a reduction in academic achievement or even general intelligence as a direct result, although a 2017 meta-analysis found little firm evidence to support this. Archery websites tend to claim that the percentage of people with mixed eye-hand dominance is around 18%, but other sources I have found vary anywhere from 10% to 35%. This lack of agreement over so fundamental a statistic suggests that there is still much research to be done on the subject, since anecdotal evidence is presumably being disseminated due to lack of hard data.

There is another type of brain lateralisation which is colloquially deemed ambidextrous, but this term covers a wide range of mixed-handedness abilities. Despite the descriptions of ambidextrous people as lucky or gifted (frequently named examples include Leonardo da Vinci, Beethoven, Gandhi and Albert Einstein), parenting forums describe serious issues as a result of a non-dominant brain hemisphere. Potential problems include dyspraxia and dyslexia, ADHD, and even autism or schizophrenia.

While the reporting of individual families can't be considered of the same quality as professional research, a 2010 report by Imperial College London broadly aligns with parents' stories. 'Functional disconnection syndrome' has been linked to learning disabilities and slower physical reaction times, rooted in the communications between the brain's hemispheres. There also seems to be evidence for the opposite phenomenon, in which the lack of a dominant hemisphere causes too much communication between left and right sides, generating noise that impedes normal mental processes.

What I would like to know is why there is so little information publicly available. I can only conclude that this is why there is such a profusion of non-scientific (if frequently first-hand) evidence. I personally know of people with non-dominant lateralisation who have suffered from a wide range of problems from dyslexia to ADHD, yet they have told me that their general practitioners failed to identify root causes for many years and suggested conventional solutions such as anti-depressants.

Clearly, this is an area that could do with much further investigation; after all, if ambidexterity is a marker for abnormal brain development that arose in utero (there is some evidence that a difficult pregnancy could be the root cause), then surely there is a clearly defined pathway for wide-scale research? This could in turn lead to a reduction in people born with these problems.

In the same way that a child's environment can have a profound effect on their mental well-being and behaviour, could support for at-risk pregnant women reduce the chance of their offspring suffering from these conditions? I would have thought there would be a lot to gain from this, yet I can't find evidence of any medical research seeking a solution. Meanwhile, why not try the Miles test yourself and find out where you stand when it comes to connectivity between your brain, senses and limbs?

Tuesday, 23 April 2019

Lift to the stars: sci-fi hype and the space elevator

As an avid science-fiction reader during my childhood, one of the most outstanding extrapolations for future technology was that of the space elevator. As popularised in Arthur C. Clarke's 1979 novel, The Fountains of Paradise, the elevator was described as a twenty-second century project. I've previously written about near-future plans for private sector spaceflight, but the elevator would be a paradigm shift in space transportation: a way of potentially reaching as far as geosynchronous orbit without the need for rocket engines.

Despite the apparent novelty of the idea - a tower stretching from the surface of the Earth (or indeed any planet) to geosynchronous orbit and beyond - the first description dates back to 1895 and the writings of the Russian theoretical astronautics pioneer Konstantin Tsiolkovsky. Since the dawn of the Space Age, engineers and designers in various nations have either reinvented the elevator from scratch or elaborated on Tsiolkovsky's idea.

There have of course been remarkable technological developments over the intervening period, with carbyne, carbon nanotubes, tubular carbon 60 and graphene seen as potential materials for the elevator, but we are still a long way from being able to build a full-size structure. Indeed, there are now known to be many more impediments to the space elevator than first thought, including a man-made issue that didn't exist at the end of the nineteenth century. Despite this, there seems to be a remarkable number of recent stories about elevator-related experiments and the near-future feasibility of such a project.

An objective look at practical - as opposed to theoretical - studies shows that results to date have been decidedly underwhelming. The Space Shuttle programme started tethered satellite tests in 1992. After an initial failure (the first test achieved a distance of a mere 256 metres), a follow-up six years later deployed a tether that was a rather more impressive twenty kilometres long. Then last year the Japanese STARS-me experiment tested a miniature climber component in orbit, albeit at a minuscule distance of nine metres. Bearing in mind that a real tower would be over 35,000 kilometres long, it cannot be argued that the technology is almost available for a full-scale elevator.
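Incidentally, that figure of over 35,000 kilometres isn't arbitrary: it follows directly from Kepler's third law, since the elevator's centre of mass must sit at the radius where an orbit takes exactly one sidereal day. A quick sketch in Python (using standard textbook values for Earth's gravitational parameter, rotation period and equatorial radius) recovers it:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter G*M, m^3/s^2
SIDEREAL_DAY = 86164.1      # Earth's true rotation period, seconds
EARTH_RADIUS_KM = 6378.1    # mean equatorial radius, km

def geostationary_altitude_km() -> float:
    """Altitude of a geosynchronous orbit, from Kepler's third law:
    r^3 = mu * T^2 / (4 * pi^2)."""
    r_metres = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
    return r_metres / 1000 - EARTH_RADIUS_KM

print(f"{geostationary_altitude_km():,.0f} km")  # about 35,786 km
```

Set against that number, a seventy-centimetre nanotube cable and a nine-metre orbital climber test put the scale of the remaining engineering gap into sharp relief.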

This hasn't prevented continuous research by the International Space Elevator Consortium (ISEC), which was formed in 2008 to promote the concept and the technology behind it. It's only to be expected that fans of the space elevator would be enthusiastic, but to my mind their assessment that we are 'tech ready' for its development seems optimistic beyond the point of credibility.

A contrasting view is that of Google X's researchers, who mothballed their space elevator work in 2014 on the grounds that the requisite technology will not be available for decades to come. While the theoretical strength of carbon nanotubes meets the requirements, the total of cable manufactured to date is seventy centimetres, showing the difficulties in achieving mass production. A key stopping point apparently involves catalyst activity probability; until that problem is resolved, a space elevator less than one metre in length isn't going to convince me, at least.

What is surprising, then, is that in 2014 the Japanese Obayashi Corporation published a detailed concept that specified a twenty-year construction period starting in 2030. Not to be outdone, the China Academy of Launch Vehicle Technology released news in 2017 of a plan to actually build an elevator by 2045, using a new carbon nanotube fibre. Just how realistic is this, when so little of the massive undertaking has been prototyped beyond the most basic of levels?

The overall budget is estimated to be around US$90 billion, which suggests an international collaboration in order to offset the many years before the completed structure turns a profit. In addition to the materials issue, there are various other problems yet to be resolved. Chief among these are finding a suitable equatorial location (an ocean-based anchor has been suggested), capturing an asteroid for use as a counterweight, dampening vibrational harmonics, removing space junk, micrometeoroid impact protection and shielding passengers from the Van Allen radiation belts. Clearly, just developing the construction material is only one small element of the ultimate effort required.

Despite all these issues, general audience journalism regarding the space elevator - and therefore the resulting public perception - appears as optimistic as the Chinese announcement. How much these two feed back on each other is difficult to ascertain, but there certainly seems to be a case of running before learning to walk. It's strange that China made the claim, bearing in mind how many other rather important things the nation's scientists should be concentrating on, such as environmental degradation and pollution.

Could it be that China's STEM community have fallen for the widespread hype rather than the prosaic reality? It's difficult to say how this could be so, considering their sophisticated internet firewall that blocks much of the outside world's content. Clearly though, the world wide web is full of science and technology stories that consist of parrot-fashion copying, little or no analysis and clickbait-driven headlines.

A balanced, in-depth synthesis of the relevant research is often a secondary consideration. The evolutionary biologist Stephen Jay Gould once labelled the negative impact of such lazy journalism as "authorial passivity before secondary sources." In this particular case, the public impression of what is achievable in the next few decades seems closer to Hollywood science fiction than scientific fact.

Of course, the irony is that even the more STEM-minded section of the public is unlikely to read the original technical articles in a professional journal. Instead, we are reliant on general readership material and the danger inherent in its immensely variable quality. As far as the space elevator goes (currently, about seventy centimetres), there are far more pressing concerns requiring engineering expertise; US$90 billion could, for example, fund projects to improve quality of life in the developing world.

That's not to say that I believe China will construct a space elevator during this century, or that the budget could be found anywhere else, either. But there are times when there's just too much hype and nonsense surrounding science and not enough fact. It's easy enough to make real-world science appear dull next to the likes of Star Trek, but now more than ever we need the public to trust and support STEM if we are to mitigate climate change and all the other environmental concerns.

As for the space elevator itself, let's return to Arthur C. Clarke. Once asked when he thought humanity could build one, he replied: "Probably about fifty years after everybody quits laughing." Unfortunately, bad STEM journalism seems to have joined conservatism as a negative influence in the struggle to promote science to non-scientists. And that's no laughing matter.

Monday, 1 April 2019

The day of the dolphin: covert cetaceans, conspiracy theories and Hurricane Katrina

One of the late, great Terry Pratchett's Discworld novels mentions a failed attempt by King Gurnt the Stupid to conduct aerial warfare using armoured ravens. Since real life is always stranger than fiction, just how harebrained are schemes by armed forces to utilise animals in their activities?

Large mammals such as horses and elephants have long been involved in the darker aspects of human existence, but the twentieth century saw the beginnings of more sophisticated animals-as-weapons schemes, including for example, research into the use of insects as disease vectors.

Some of the fruitier research projects of the 1960s saw the recruitment of marine mammals, reaching an apotheosis - or nadir - in the work of John Lilly. A controversial neuroscientist concerned with animal (and extraterrestrial) communication, Lilly even gave psychedelic drugs to dolphins as part of attempts to teach them human language and logic: go figure!

Whether this work was the direct inspiration for military programmes is uncertain, but both the Soviet and United States navies sought to harness the intelligence and learning capabilities of marine mammals during the Cold War. Besides bottlenose dolphins, sea lions were also trained in activities such as mine detection, hardware retrieval and human rescue. Although the Russians are said to have discontinued their research some years ago, the US Navy's Marine Mammal Research Program is now in its sixth decade and has funding up until at least next year.

Various sources claim that there is a classified component to the program headquartered in San Diego under the moniker the Cetacean Intelligence Mission. Although little of any value is known for certain, researchers at the University of Texas at Austin have been named as one of the groups who have used naval funding to train dolphins - plus design a dolphin equipment harness - for underwater guard duty. A more controversial yet popular claim is for their use as weapon platforms involving remote-controlled knock-out drug dart guns. If this all sounds a bit like Dr. Evil's request for "sharks with lasers" then read on before you scoff.

In the aftermath of the devastation caused by Hurricane Katrina in August 2005, it was discovered that eight out of fourteen bottlenose dolphins that were housed at the Marine Life Oceanarium in Gulfport, Mississippi, had been swept out to sea. Although later recovered by the United States Navy, this apparently innocent operation has a bearing on a similar escape that was given much greater news coverage soon after the hurricane.

Even respected broadsheet newspapers around the world covered the story, generated by a US Government leak, that thirty-eight United States Navy dolphins had also got free after their training ponds near Lake Pontchartrain, Louisiana, were inundated by Hurricane Katrina. Apart from the concerns of animal rights groups that (a) dolphins shouldn't be used as weapons platforms and (b) they might struggle to cope in the open ocean of the Gulf of Mexico, with its busy shipping lanes, another issue was the notion that the dolphins might attack civilian divers or vessels.

It would be quite easy here to veer into the laughable fantasies that the Discovery Channel tries to pass off as genuine natural history, if it weren't for a string of disconcerting facts. The eight dolphins that escaped from the Marine Life Oceanarium were kept by the navy for a considerable period before being returned to Mississippi. This was explained at the time as a health check by navy biologists, but there is a more sinister explanation: what if the dolphins were being examined to ensure that they were not military escapees from Lake Pontchartrain?

The latter half of 2005 into early 2006 saw the resumption of fishing in the Gulf of Mexico, following the destruction of almost ninety per cent of the region's commercial fleet in the hurricane. However, many of the smaller boats that did make it back to sea returned to port with unusual damage, or in some cases, had to be towed after failing to make it home under their own power. Much of this was put down to hasty repairs in order to resume fishing - a key component of the local economy - as soon as possible.

Reports released by boat yards during this period show inexplicable damage to rudders and propellers, mainly to shrimp boats. Fragments of metal, plastic and PVC were recovered in a few cases, prompting speculation as to where this material had come from. The National Marine Fisheries Service requested access to the flotsam, which was subsequently lost in the chain of bureaucracy; none of the fragments have been seen since. It may not be on the scale of Roswell, but someone in the US military seems to be hiding something here.

It's been over half a century since Dr. Lilly's experiments inspired such fictional cetacean-centred intrigue as The Day of the Dolphin. Therefore, there has been plenty of time for conspiracy theorists to cobble together outlandish schemes on the basis of threadbare rumours. What is certain is that the enormous reduction in the region's fishing that followed in the wake of Hurricane Katrina would have been a boon for the Gulf of Mexico's fish stocks. This would presumably have carried on up the food chain, allowing dolphin numbers to proliferate throughout 2006 and beyond.

Whether the US Navy was able to recover some or all of its underwater army is not known, but it doesn't take much imagination to think of the dolphins enjoying their freedom in the open ocean, breaking their harnesses upon the underside of anchored fishing vessels, determined to avoid being rounded up by their former keepers. The Gulf in the post-Katrina years would have been a relative paradise for the animals compared to their military careers.

Although the United States Navy is said to have spent less than $20 million per annum on the Marine Mammal Research Program - a mere drop in the ocean (you know that one's irresistible) compared to the mega-budgets of many Department of Defense projects - the low cost alone suggests the value of attempting to train dolphins for military purposes. Perhaps the truth will emerge one day, once the relevant files are declassified. Or alternatively, a new John Lilly may come along and finally be able to translate dolphinese. In which case, what are the chances that descendants of the Lake Pontchartrain escapees will recall the transition from captivity to freedom with something along the lines of "So long, and thanks for all the fish!"

Wednesday, 20 March 2019

My family & other animals: what is it that makes Homo sapiens unique?

It's a curious thing, but I can't recall ever having come across a comprehensive assessment of what differentiates Homo sapiens from all other animals. Hence this post is a brief examination of what I have found out over the years. I originally thought of dividing it into three neat sections, but quickly discovered that this would be, as Richard Dawkins once put it, 'a gratuitously manufactured discontinuity in a continuous reality.' In fact, I found a reasonably smooth gradation between these segments:
  1. Long-held differences now found to be false
  2. Possible distinctions - but with caveats
  3. Uniquely human traits
Despite the carefully-observed, animal-centred stories of early civilisations - Aesop's fable of The Crow and the Pitcher springs to mind - the conventional wisdom until recently was that animals are primarily automatons and as such readily exploitable by humanity. Other animals were deemed vastly inferior to us in kind, not just degree, with a complete lack of awareness of themselves as individuals.

The mirror test developed in 1970 has disproved that for a range of animals, from the great apes to elephants, dolphins to New Caledonian crows. Therefore, individuals of some species can differentiate themselves from their kin, leading to complex and fluid hierarchies within groups - and in the case of primates, some highly Machiavellian behaviour.

Man the tool-maker has been a stalwart example of humanity's uniqueness, but a wide range of animals in addition to the usual suspects (i.e. great apes, dolphins and Corvidae birds) are now known to make and use tools on a regular basis. Examples include sea otters, fish, elephants, and numerous bird species, the latter creating everything from fish bait to insect probes. Even octopuses are known to construct fences and shelters, such as stacking coconut shells - but then they do have eight ancillary brains in addition to the main one!

We recognise regional variations in human societies as the result of culture, but some animal species also have geographically-differentiated traits or tools that are the obvious equivalent. Chimpanzees are well known for their variety of techniques used in obtaining food or making tools. These skills are handed down through the generations, remaining different to those used in neighbouring groups.

Interestingly, farming has really only been adopted by the most humble of organisms, namely the social insects. Ants and termites farm aphids and fungi in their complex, air-conditioned cities that have more than a touch of Aldous Huxley's Brave New World about them; in a few species, the colonies may even largely consist of clones!

Although many animals construct nests, tunnels, dams, islets or mounds, these appear to serve purely functional purposes: there is no equivalent of the human architectural aesthetic. Octopus constructions aside, birds, for example, will always build a structure that follows the same blueprint used by the rest of their kind.

Many species communicate by aural, gestural or pheromonal languages, but only humans can store information outside of the body and across generations living at different times. Bird song might sound pretty, but again, this appears to be a series of basic, hard-wired communications. Conversely, humpback whale song may contain artistic values, but we just don't know enough about it to judge it in this light.

Birds and monkeys are happy to hoard interesting objects, but there is little aesthetic sense in animals other than that required to identify a high-quality mate. In contrast, there is evidence to suggest that other species in the hominin line, such as Neanderthals and Homo erectus, created art in forms recognisable today, including geometric engravings and jewellery.

Some of our ancestors' earliest artworks are realistic representations, whereas, when armed with a paintbrush, captive chimps and elephants produce abstract work reminiscent of pre-school children. We should remember that only since the start of the twentieth century has abstract art become an acceptable form for professional artists.

Jane Goodall's research on the Gombe chimps shows that humans are not the only animal to fight and kill members of the same species for reasons other than predation or rivalry. Sustained group conflict may be on a smaller scale and have fewer rules than sanctioned warfare, but it still has enough similarity to our own violence to say that humanity is not its sole perpetrator. One interesting point is that although chimps have been known to use sharpened sticks to spear prey, they haven't as yet used their weapons on each other.

Chimpanzees again have been shown to empathise with other members of their group, for example after the death of a close relative. Altruism has also been observed in the wild, but research suggests there is frequently another motive involved as part of a long-term strategy. This is countered with the notion that humans are deemed able to offer support without the expectation of profit or gain in the future; then again, what percentage of such interactions are due to a profitless motivation remains open to question.

A tricky area is to speculate on the uniqueness of ritual to Homo sapiens. While we may have usurped the alpha male position in domesticated species such as dogs, their devotion and loyalty seem too far removed from deity worship to be a useful comparison; surely the idea of organised religion must be alien to all other species? Archaeological evidence shows what appear to be Neanderthal rituals centred on cave bears, as well as funereal rites, but the DNA evidence for interbreeding with modern humans doesn't give enough separation to allow religion to be seen as anything other than a human invention. What is probably true, though, is that we are the only species aware of our own mortality.

One area in which humans used to be deemed sole practitioners is abstract thought, but even here there is evidence that the great apes have some capability, albeit no greater than that of a pre-schooler. Common chimps and bonobos raised in captivity have learnt - in some cases by observation, rather than being directly taught - how to use sign language or lexigrams to represent objects and basic grammar. It's one thing to see a button with a banana on it and to learn that pressing it produces a banana, but to receive the same reward for pressing an abstract symbol shows a deeper understanding of relationship and causality.

A consideration of a potential future is also shared with birds of the Corvidae family, who are able to plan several steps ahead. Where humans are clearly far ahead, the gain is one of degree rather than kind: we have the ability to consider numerous future paths and act accordingly. This level of sophistication and branch analysis appears to be uniquely human, allowing us to cogitate about possibilities that might occur in the future - or may never be possible at all. Both prose and poetic literature are likely to be uniquely human; at least until we can decipher humpback whale song.

Finally, there is science, possibly the greatest of human inventions. The multifarious aspects of the scientific endeavour, from tentative hypothesis to experimentation, advanced mathematics to working theory, are unlikely to be understood, let alone attempted, by any other species. The combination of creative and critical thinking, rigour and repetition, and objectivity and analysis requires the most sophisticated object in the known universe: the human brain. That's not to say there aren't far more intelligent beings out there somewhere, but for now there is one clear activity that defines us as unique. And thank goodness it isn't war!