Monday, 29 October 2018

Space is the place: did life begin in the cosmic void?

A few weeks ago I was watching a television documentary about the search for intelligent aliens, featuring the usual SETI experts Jill Tarter and Seth Shostak, when I realised that we rarely see any crossover with research into non-intelligent extra-terrestrial life. Whereas the former is often seen by outsiders as pie-in-the-sky work by idealistic dreamers, the latter has more of a down-to-Earth feel about it, even though it too has at times suffered from a lack of credibility.

Based on current thinking, it seems far more probable that life in the universe will mostly be very small and entirely lacking consciousness - in other words, microbial. After all, life on Earth arose pretty much as soon as the environment was stable enough, around 3.7 billion years ago or even earlier. In contrast, lifeforms large enough to be visible without a microscope evolved only around a billion years ago (for photosynthetic plants) and about 580 million years ago for complex marine animals.

The recent publicity surrounding the seasonal variations in methane on Mars has provided ever more tantalising hints that microbial life may survive in ultraviolet-free shelters near the Martian surface, although it will be some years before a robot mission sophisticated enough to visit sinkholes or canyon walls can investigate likely habitats. (As for the oft-talked-about but yet-to-be-planned crewed mission, see this post from 2015.)

It therefore seems worth concentrating on finding biological or pre-biological compounds in extra-terrestrial objects as much as listening for radio signals. The search can proceed via remote sensing (e.g. of molecular clouds, comets and asteroids) as well as by investigating meteorites - bearing in mind that the Earth receives up to one million kilogrammes of extra-terrestrial material per day, although less than one percent of it arrives in pieces large enough to be identified as such.
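To put that daily influx in perspective, a back-of-the-envelope calculation - using only the rough figures quoted above, which are themselves upper estimates - shows just how much material arrives each year, and how little of it is ever recognisable as a meteorite:

```python
# Rough scale of Earth's meteoric intake, using the upper estimate
# quoted above (~1 million kg per day); the true figure is uncertain.
DAILY_INFLUX_KG = 1_000_000
IDENTIFIABLE_FRACTION = 0.01  # under 1% arrives in recognisable pieces

annual_tonnes = DAILY_INFLUX_KG * 365 / 1000
identifiable_tonnes = annual_tonnes * IDENTIFIABLE_FRACTION

print(f"Annual influx: up to ~{annual_tonnes:,.0f} tonnes")
print(f"Potentially identifiable: at most ~{identifiable_tonnes:,.0f} tonnes")
```

Even at the top end, that is hundreds of thousands of tonnes a year, almost all of it arriving as dust.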

The problem is that this area of research has at times had a fairly poor reputation due to the occasional premature claim of success. Stories then become widespread via non-specialist media in such a way that the resulting hype frequently bears little relation to the initial scientific report. In addition, if further evidence reverses that conclusion, the public's lack of understanding of the error-correcting methods of science leads to disillusion at best and apathy at worst.

One key hypothesis that has received more than its fair share of negative publicity is panspermia, which suggests that not just the chemicals of biology but life itself was brought to Earth by cosmic impactors. Its best-known advocates were Fred Hoyle and Chandra Wickramasinghe, but their outspoken championing of a hypothesis severely lacking in evidence did little to promote the idea. For while it is feasible - especially with the ongoing discovery of extremophiles everywhere from deep-ocean vents to the coolant ponds of nuclear reactors - to envisage microbial life reaching Earth in cometary or asteroid material, the notion that these extra-terrestrials have been responsible for various epidemics seems a step too far.

It's long been known that comets contain vast amounts of water; indeed, simulations suggest that until the Late Heavy Bombardment around four billion years ago there may have been far less water on Earth than subsequently. Considering the volumes of water ice now being discovered on Mars and the Moon, the probability of life-sustaining environments off the Earth has gained a respectable boost.

It isn't just water, either: organic compounds that are precursors to biological material have been found in vast quantities in interstellar space, and now they are being found in the inner solar system too. That's not to say that this research has been without controversy as well. From the early 1960s, Bartholomew Nagy stirred debate by announcing the discovery of sophisticated pre-biological material in impactors such as the Orgueil meteorite. Examination by other teams found that contamination had skewed the results, implying that Nagy's conclusions were based on inadequate research. Although more recent investigation of meteorites and spectrophotometry of carbonaceous chondrite asteroids have supplied genuine positives, the earlier mistakes sullied the field.

Luckily, thorough examination of the Australian Murchison meteorite has restored the discipline's standing, with numerous amino acids confirmed as being of non-terrestrial origin. The RNA nucleobase uracil has also been found in the Murchison meteorite, with ultraviolet radiation in the vacuum of outer space deemed responsible for the construction of these complex compounds.

Not that there haven't been other examples of premature results leading to unwarranted hype. Perhaps the best known example of this was the 1996 announcement of minute bacteria-like forms in the Martian ALH84001 meteorite. The international headlines soon waned when a potential non-biological origin was found.

In addition to the examination of these objects, experiments are increasingly being performed to test the resilience of life forms in either vacuum chambers or real outer space, courtesy of the International Space Station. After all, if terrestrial life can survive in such a hostile environment, there is a higher likelihood that alien microbiology could arrive on Earth via meteorite impact or cometary tail (and at least one amino acid, glycine, has been found on several comets).

Unmanned probes are now replicating these findings, with the European Space Agency's Rosetta spacecraft finding glycine in the dust cloud around Comet 67P/Churyumov-Gerasimenko in 2016. Although these extra-terrestrial objects may lack the energy source required to kick-start life itself, some are clearly harbouring many of the complex molecules used in life on Earth.

It has now been proven beyond any doubt that organic and pre-biological material is common in space. The much higher frequency of impacts in the early solar system suggests that key components of Earth's surface chemistry - and its water - were delivered via meteorites and comets. Unfortunately, the unwary publication of provisional results, when combined with the general public's feeble grasp of scientific methodology, has hindered support for what is surely one of the most exciting areas in contemporary science. A multi-faceted approach may in time supply the answers to the ultimate questions surrounding the origin of life and its precursor material. This really is a case of watch (this) space!

Thursday, 11 October 2018

Sonic booms and algal blooms: a smart approach to detoxifying waterways

A recent report here in New Zealand has raised some interesting issues around data interpretation and the need for independent analysis to minimise bias. The study has examined the state of our fresh water environment over the past decade, leading to the conclusion that our lakes and rivers are improving in water quality.

However, some of the data fails to support this: populations of freshwater macroinvertebrates remain low, following a steady decline over many decades. Therefore, while the overall tone of the report is one of optimism, some researchers have claimed that the data has been deliberately cherry-picked in order to present as positive a result as possible.

Of course, there are countless examples of interested parties skewing scientific data for their own ends, with government organisations and private corporations among the most common culprits. In this case, the recorded drop in nitrate levels has been promoted at the expense of the continued low population of small-scale fauna. You might well ask what use these worms, snails and insects are, but even a basic understanding of food webs shows that numerous native bird and freshwater fish species rely on these invertebrates for food. And as I've mentioned so often, the apparently insignificant may play a fundamental role in sustaining human agriculture (yes, some other species practice farming too!)

So what is it that is preventing the invertebrates' recovery? The answer seems to be an increase in photosynthetic cyanobacteria, more commonly - and incorrectly - known as blue-green algae. If they are identified at all, it's as the health food supplement spirulina, available in smoothies and tablet form. However, most cyanobacteria species are not nearly as useful - or pleasant. To start with, their presence in water lowers its oxygen content; and thanks to fertiliser runoff - nitrogen and phosphorus in particular - they bloom exponentially wherever intensive farming occurs close to fresh water courses. Another agriculture-related issue is due to clearing the land for grazing: without trees to provide shade, rivers and streams grow warmer, encouraging algal growth. Therefore, as global temperatures rise, climate change is having yet another negative effect on the environment.

Most species of cyanobacteria contain toxins that can severely affect animals much larger than freshwater snails. Dogs have been reported as dying within as little as a quarter of an hour of eating the bacteria, with New Zealand alone losing over one hundred and fifty pet canines in the past fifteen years; it's difficult to prevent consumption, since dogs seem to love the smell! Kiwis are no strangers to the phylum for other reasons, too: over one hundred New Zealand rivers and lakes have been closed to swimmers since 2011 due to cyanobacterial contamination.

Exposure to contaminated water, or eating fish from such an environment, is enough to cause external irritation in humans and may even damage our internal organs and nervous system. Drinking water containing blue-green algae is even worse; given that young children are comparable in size to some dogs, it is supposed that exposure could prove fatal to them. Research conducted over the past few years also suggests that high-level contamination can lead to Lou Gehrig's disease, A.K.A. amyotrophic lateral sclerosis, the same condition that Stephen Hawking suffered from.

What research, you might ask, is being done to find a solution to this unpleasant organism? Chemical additives including copper sulphate and calcium hypochlorite have been tried, but many are highly expensive, while others are so toxic that fish and crustacean populations also suffer; this is hardly a suitable answer.

A more elegant solution has been under trial for the past two years: the use of ultrasound to sink the blue-green algae too deep for effective photosynthesis, thus slowly killing them. A joint programme between New Zealand and the Netherlands uses a high-tech approach to identify and destroy ninety per cent of each bloom. Whereas previous ultrasonic methods tended to be too powerful, thereby releasing algal toxins into the water, the new technique directly targets the individual algal species. Six tests per hour assess water quality and detect the species to be eradicated. Once identified, the sonic blasts are calibrated for the target species and water condition, leading to a slower death for the blue-green algae that avoids cell wall rupture and so prevents the toxins from escaping.
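The test-identify-calibrate cycle described above can be sketched as a simple control loop. Note that this is purely my own illustration of the logic: the species names, density threshold and frequency values below are invented, not taken from the actual New Zealand-Dutch system.

```python
# Hypothetical sketch of an identify-then-calibrate treatment loop.
# All species names, thresholds and frequencies are invented examples.
CALIBRATION_KHZ = {
    "Microcystis": 28,  # illustrative ultrasound frequency only
    "Anabaena": 40,
}

def treat_sample(species, bloom_density, threshold=1.0):
    """Return the frequency (kHz) to apply, or None if no treatment is needed."""
    if bloom_density < threshold or species not in CALIBRATION_KHZ:
        return None
    return CALIBRATION_KHZ[species]

# One of the six hourly tests detects a dense bloom of a known species:
freq = treat_sample("Microcystis", bloom_density=2.5)
print(f"Apply low-power ultrasound at {freq} kHz")
```

The key design point is the per-species calibration step: a blast too strong for the detected species would rupture cell walls and release the very toxins the method is trying to contain.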

To return to the earlier comment on why the report's conclusions appear to have been given an unwarranted positive spin: the current and previous New Zealand Governments have announced initiatives to clean up our environment and so live up to the tourist slogan of '100% Pure'. The latest scheme aims to make ninety percent of the nation's fresh water environments swimmable by 2040, which seems something of a tall order without radical changes to agriculture, and to the heavily polluting dairy sector in particular. Therefore, the use of finely targeted sonic blasting couldn't come a moment too soon.

Our greed and short-sightedness have allowed cyanobacteria to increase greatly at the expense of the freshwater ecosystem, not to mention domesticated animals. Now advanced but small-scale technology has been developed to reduce them to non-toxic levels, but it has yet to be implemented beyond the trial stage. Hopefully this eradication method will become widespread in the near future: a small victory in our enormous fight to right the wrongs of over-exploitation of the environment. But as with DDT, CFCs and numerous others, it does make me wonder how many more man-made time bombs could be ticking out there...

Thursday, 27 September 2018

The anaesthetic of familiarity: how our upbringing can blind us to the obvious

In the restored Edwardian school classroom at Auckland's Museum of Transport and Technology (MOTAT) there is a notice on the wall stating 'Do not ask your teacher questions.' Fortunately, education now goes some way in many nations to emphasising the importance of individual curiosity rather than mere obedience to authority. Of course, there are a fair number of politicians and corporation executives who wish it wasn't so, as an incurious mind is easier to sway than a questioning one. As my last post mentioned, the World Wide Web can be something of an ally for them, since the 'winner takes all' approach of a review-based system aids the slogans and rhetoric of those who wish to control who we vote for and what we buy.

Even the most liberal of nations and cultures face self-imposed hurdles centred on distinguishing the best solution from the one that is merely most familiar from our formative years. This post therefore looks at another side of the subjective thinking discussed earlier this month, namely a trap that Richard Dawkins has described as the "anaesthetic of familiarity". Basically, this is when conventions are so widely accepted as to be seen as the primary option rather than merely one of a series of choices. Or, as the British philosopher Susan Stebbing wrote in her 1939 book Thinking to Some Purpose: "One of the gravest difficulties encountered at the outset of the attempt to think effectively consists in the difficulty of recognizing what we know as distinguished from what we do not know but merely take for granted."

Again, this mindset is much loved by the manufacturing sector; in addition to such well-known ploys as deliberate obsolescence and staggered release cycles, there are worse examples, especially in everyday consumerism. We often hear how little nutritional value many highly processed foods contain, but think what this has done for the vitamin and mineral supplement industry, whose annual worldwide sales now approach US$40 billion!

Citizens of developed nations today face very different key issues to our pre-industrial ancestors, not the least among them being a constant barrage of decision making. Thanks to the enormous variety of choices available concerning almost every aspect of our daily lives, we have to consider everything from what we wear to what we eat. The deluge of predominantly useless information that we receive in the era of the hashtag makes it more difficult for us to concentrate on problem solving, meaning that the easiest way out is just to follow the crowd.

Richard Dawkins' solution to these issues is to imagine yourself as an alien visitor and then observe the world as a curious outsider. This seems to me to be beyond the reach of many, for whom daily routine appears to be their only way to cope. If this sounds harsh, it comes from personal experience; I've met plenty of people who actively seek an ostrich-like head-in-the-sand approach to life to avoid the trials and tribulations - as well as the wonders - of this rapidly-changing world.

Instead, I would suggest an easier option when it comes to some areas of STEM research: ensure that a fair proportion of researchers and other thought leaders are adult migrants from other nations. Then they will be able to apply an outside perspective, hopefully identifying givens that are too obvious to be spotted by those who have grown up with them.

New Zealand is a good example of this, with arguably its two best known science communicators having been born overseas: Siouxsie Wiles and Michelle Dickinson, A.K.A. Nanogirl. Dr Wiles is a UK-trained microbiologist at the University of Auckland. She frequently appears on Radio New Zealand as well as undertaking television and social media work to promote science in general, as well as for her specialism of fighting bacterial infection.

Dr Dickinson is a materials engineering lecturer and nanomaterials researcher at the University of Auckland who studied in both the UK and USA. Her public outreach work includes books, school tours and both broadcast and social media. She has enough sci-comm kudos that last year, despite not having a background in astronomy, she interviewed Professor Neil deGrasse Tyson during the Auckland leg of his A Cosmic Perspective tour.

The work of the above examples is proof that newcomers can recognise critical needs that their home-grown equivalents have missed. What is interesting is that despite coming from English-speaking backgrounds - and therefore with limited cultural disparity with their adoptive New Zealand - there must have been enough that was different to convince Doctors Wiles and Dickinson of the need for a hands-on, media-savvy approach to science communication.

This is still far from the norm: many STEM professionals believe there is little point in promoting their work to the public except via print-based publications. Indeed, some famous science communicators, such as Carl Sagan and Stephen Jay Gould, were widely criticised during their lifetimes by the scientific establishment for what were deemed undue efforts at self-promotion and the associated debasement of science by combining it with show business.

As an aside, I have to say that as brilliant as some volumes of popular science are, they do tend to preach to the converted; how many non-science fans are likely to pick up a book on say string theory, just for a bit of light reading or self-improvement (the latter being a Victorian convention that appears to have largely fallen from favour)? Instead, the outreach work of the expat examples above is aimed at the widest possible audience without over-simplification or distortion of the principles being communicated.

This approach may not solve all issues about how to think outside the box - scientists may be so embedded within their culture as to not realise that there is a box - but surely by stepping outside the comfort zone we grew up in we may find problems that the local population hasn't noticed?

Critical thinking is key to the scientific enterprise but, it would appear, to little else in human culture. If we can find methods to avoid the anaesthetic of familiarity and acknowledge that what we deem normal can be far from optimal, then these should be promoted with gusto. If the post-modern creed is that all world views are equally valid and science is just another form of culture-biased story-telling, then now more than ever we need cognitive tools to break through the subjective barriers. If more STEM professionals are able to cross borders and work in unfamiliar locations, isn't there a chance they can recognise issues that fall under the local radar and so supply the new perspective we need if we are to fulfil our potential?

Wednesday, 12 September 2018

Seasons of the mind: how can we escape subjective thinking?

According to some people I've met, the first day of spring in the Southern Hemisphere has been and gone with the first day of September. Not coincidentally, there are also some, myself included, who think that it has suddenly started to feel a bit warmer. Apparently, though, the official start date is the spring equinox, during the third week of September. So on the one hand the weather has been warming since the start of the month, but on the other, why should a planet follow neat calendrical conventions, i.e. starting on the first of a month? And just how accurate is the official definition?

There are many who like to reminisce about how much better the summer weather was back in their school holidays. The rose-tinted memories of childhood can seem idyllic, although I also recall summer days of non-stop rain (I did grow up in the UK, after all). Our personal experiences, particularly during our formative years, can promote an emotion-based response so deeply ingrained that we fail to consider it may be inaccurate. Subjectivity and wishful thinking are key to the human experience: how often do we remember the few hits and not the far more numerous misses? As science is practiced by humans, it is subject to the same lack of objectivity as anything else; only its built-in error-checking can steer practitioners onto a more rational course than in other disciplines.

What got me pondering the above was that on meeting someone for the first time a few months ago, almost his opening sentence was a claim that global warming isn't occurring and that instead we are on the verge of an ice age. I didn't have time for a discussion on the subject, so I filed that one for reply at a later date. Now seems like a good time to ponder what leads people to make such assertions, seemingly contrary to the evidence.

I admit to being biased on this particular issue, having last year undertaken research for a post on whether agriculture has postponed the next glaciation (note that this woolly - but not mammoth, ho-ho - terminology is one of my bugbears: we are already in an ice age, but currently in an interglacial stage). Satellite imagery taken over the past few decades shows clear evidence of large-scale reductions in global ice sheets. For example, the northern polar ice cap has shrunk by a third since 1980, with what remains only half its previous thickness. Even so, are three decades a long enough period from which to make accurate predictions? Isn't using a scale comparable to the human lifespan just as bad as relying on personal experience?
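Those two figures compound in a way that is easy to miss. Treating the ice cap as a uniform slab - a deliberate simplification, since real ice geometry is far messier - the combined loss of area and thickness works out as follows:

```python
# Illustrative only: model the polar ice cap as a uniform slab and
# combine the two reductions quoted above (area down by a third since
# 1980; remaining ice at half its former thickness).
area_fraction = 1 - 1/3    # two-thirds of the 1980 extent remains
thickness_fraction = 1/2   # at half the former thickness

volume_fraction = area_fraction * thickness_fraction
print(f"Remaining ice volume: ~{volume_fraction:.0%} of the 1980 figure")
```

In other words, taken together the two headline numbers imply that roughly two-thirds of the ice volume has gone - a far starker statistic than either figure alone suggests.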

The UK's Met Office has confirmed that 2018 was that nation's hottest summer since records began - which in this instance only goes back as far as 1910. In contrast, climate change sceptics use a slight growth in Antarctic sea ice (contrary to its steadily decreasing continental ice sheet) as evidence of climate equilibrium. Now I would argue that this growth is just a local drop in the global ocean, but I wonder if my ice age enthusiast cherry-picked this data to formulate his ideas? Even so, does he believe that all the photographs and videos of glaciers, etc. have been faked by the twenty or so nations that have undertaken Earth observation space missions? I will find out at some point!

If we try to be as objective as possible, how can we confirm with complete certainty the difference between long term climate change and local, short term variability? In particular, where do you draw the line between the two? If we look at temporary but drastic variations over large areas during the past thousand years, there is a range of time scales to explore. The 15th to 18th centuries, predominantly the periods 1460-1550 and 1645-1715, contained climate variations now known as mini ice ages, although these may have been fairly restricted in geographic extent. Some briefer but severe, wide-scale swings can be traced to single events, such as the four years of cold summers following the Tambora eruption of 1815.

Given such variability over the past millennium, in itself a tiny fragment of geological time, how much certainty surrounds the current changes? The public have come to expect yes or no answers delivered with aplomb, yet some areas of science such as climate studies involve chaos mathematics, thus generating results based on levels of probability. What the public might consider vacillation, researchers consider the ultimate expression of scientific good practice. Could this lack of black-and-white certainty be why some media channels insist on providing a 'counterbalancing' viewpoint from non-expert sources, as ludicrous as this seems?
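The probabilistic answers come from sensitive dependence on initial conditions, the hallmark of chaotic systems. A toy illustration - the logistic map, a textbook example of chaos, and nothing like a real climate model - shows how two starting values differing by one part in a billion soon diverge completely:

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r * x * (1 - x), a classic toy model of chaotic behaviour.
r = 4.0
x, y = 0.2, 0.2 + 1e-9  # two almost-identical starting points

max_gap = 0.0
for _ in range(50):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

print(f"Largest divergence after 50 steps: {max_gap:.3f}")
```

Since no measurement of initial conditions is ever perfect, long-range forecasts of such systems can only ever be statements of probability - which is precisely the vacillation the public mistakes for ignorance.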

In-depth thinking about a subject relies upon compartmentalisation and reductionism. Otherwise, we would forever be bogged down in the details and never be able to form an overall picture. But this quantising of reality is not necessarily a good thing if it generates a false impression regarding cause and effect. By suffering from what Richard Dawkins calls the "tyranny of the discontinuous mind" we are prone to creating boundaries that just don't exist. In which case, could a line ever be found between short term local variation and global climate change? Having said that, I doubt many climate scientists would use this as an excuse to switch to weather forecasting instead. Oh dear: this is beginning to look like a 'does not compute' error!

In a sense, of course, we are exceptionally lucky to have developed science at all. We rely on language to define our ideas, so a certain level of linguistic sophistication is needed to achieve this focus; cultures whose number systems contain no precise values beyond two are unlikely to make much headway in, for example, classifying the periodic table.

Unfortunately, our current obsession with generating information of every quality imaginable and then uploading it to all available channels for the widest possible audience inevitably leads to a tooth-and-claw form of meme selection. The upshot of this bombardment of noise and trivia is that an enormous amount of time is required just to filter it, with the knock-on effect that minimal time is left for identifying the most useful or accurate content rather than simply the most disseminated.

Extremist politicians have long been adept at exploiting this weakness to expound polarising phraseology that initially sounds good but lacks depth; they achieve cut-through with the simplest and loudest of arguments, fulfilling the desire most people have to fit into a rigid social hierarchy - as seen in many other primate species. The problem is that, in a similar vein to centrist politicians who can see both sides of an argument but whose rational approach negates emotive rhetoric, scientists are often stuck with the unappealing options of either taking a stand when the outcome is not totally clear, or facing accusations of evasion. There is a current trend, particularly espoused by politicians, to disparage experts, but discovering how the universe works doesn't guarantee hard-and-fast answers supplied exactly when required and which provide comfort blankets in a harsh world.

Where then does this leave critical thinking, let alone science? Another quote from Richard Dawkins is that "rigorous common sense is by no means obvious to much of the world". This pessimistic view of the human race is supported by many a news article but somewhat negated by the immense popularity of star science communicators, at least in a number of countries.

Both the methods and results of science need to find a space amongst the humorous kitten videos, conspiracy theorists and those who yearn for humanity to be the pinnacle and purpose of creation. If we can comprehend that our primary mode of thinking includes a subconscious baggage train of hopes, fears and distorted memories, we stand a better chance of seeing the world for how it really is and not how we wish it to be. Whether enough of us can dissipate that fog remains to be seen. Meanwhile, the ice keeps melting and the temperature rising, regardless of what you might hear...

Monday, 27 August 2018

Hammer and chisel: the top ten reasons why fossil hunting is so important

At a time when the constantly evolving world of consumer digital technology seems to echo the mega-budget, cutting-edge experiments of the LHC and LIGO, is there still room for such an old-fashioned, low-tech science as paleontology?

The answer is of course yes; and while non-experts might see little difference between its practice today and that of its Eighteenth and Nineteenth Century pioneers, contemporary paleontology does on occasion utilise MRI scanners among other sophisticated equipment. I've previously discussed the delights of fossil hunting as an easy way to involve children in science, yet the apparent simplicity of its core techniques masks the key role that paleontology still plays in evolutionary biology.

Since the days of Watson and Crick, molecular biology has progressed in leaps and bounds, yet the contemporary proliferation of cheap DNA-testing kits and television shows devoted to gene-derived genealogy obscures just how tentatively some of their results should be accepted. The levels of accuracy quoted in non-specialist media are often far greater than what can actually be attained. For example, the data on British populations has so far failed to separate those with Danish Viking ancestry from descendants of earlier Anglo-Saxon immigration, leading to population estimates at odds with the archaeological evidence.

Here then is a list of ten reasons why fossil hunting will always be a relevant branch of science, able to supply information that other scientific disciplines cannot:
  1. Locations. Although genetic evidence can show the broad sweeps connecting extant (and occasionally, recently-extinct) species, the details of where animals, plants or fungi evolved and migrated to - and when - rely on fossil evidence.
  2. Absolute dating. While gene analysis can be used to estimate the date of the last common ancestor shared by contemporary species, the results are provisional at best for when certain key groups or features evolved. Thanks to radiometric dating at some fossiliferous locales, paleontologists can assign absolute dates to fossil-bearing strata elsewhere that lack radioactive mineralogy.
  3. Initial versus canonical. Today we think of land-living tetrapods (i.e. amphibians, reptiles, mammals and birds) as having a maximum of five digits per limb. Although these are reduced in many species - as with horses' hooves - five is considered canonical. However, fossil evidence shows that early terrestrial vertebrates had up to eight digits on some or all of their limbs. We know genetic mutation can add extra digits, but this doesn't help reconstruct the polydactyly of ancestral species; only fossils provide confirmation.
  4. Extinct life. Without finding their fossils, we wouldn't know of even such long-lasting and multifarious groups as the dinosaurs: how could we guess about the existence of a parasaurolophus from looking at its closest extant cousins, such as penguins, pelicans or parrots? There are also many broken branches in the tree of life, with such large-scale dead-ends as the pre-Cambrian Ediacaran biota. These lost lifeforms teach us something about the nature of evolution yet leave no genetic evidence.
  5. Individual history. Genomes show the cellular state of an organism, but thanks to fossilised tooth wear, body wounds and stomach contents (including gastroliths) we have important insights into day-to-day events in the life of ancient animals. This has led to fairly detailed biographies of some creatures, prominent examples being Sue the T-Rex and Al the Allosaurus, their remains being comprehensive enough to identify various pathologies.
  6. Paleoecology. Coprolites (fossilised faeces), along with the casts of burrows, help build a detailed environmental picture that zoology and molecular biology cannot provide. Sometimes the best source of vegetation data comes from coprolites containing plant matter, due to the differing circumstances of decomposition and mineralisation.
  7. External appearance. Thanks to the likes of scanning electron microscopes, fossils of naturally mummified organisms or mineralised skin can offer details that are unlikely to be discovered by any other method. A good example that has emerged in the past two decades is the colour of feathered dinosaurs, obtained from the shape of their melanosomes.
  8. Climate analysis. Geological investigation can provide ancient climate data, but fossil evidence, such as the giant insects of the Carboniferous period, helps confirm the hypotheses: dragonflies with seventy-centimetre wingspans couldn't have survived at today's level of atmospheric oxygen.
  9. Stratigraphy. Paleontology can help geologists trying to sequence an isolated section of folded stratigraphy that lacks minerals suitable for radiometric dating. By assessing the relative order of known fossil species, the laying down of the strata can be placed in the correct sequence.
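The principle behind this kind of biostratigraphic correlation can be sketched in a few lines of code. The species names and age ranges below are entirely hypothetical; the idea is simply that a stratum's age is constrained to the overlap of its index fossils' known ranges, and the strata can then be sorted oldest-first:

```python
# Toy biostratigraphy sketch: hypothetical index fossils with known age
# ranges, given as (oldest, youngest) in millions of years ago (Ma).
FOSSIL_RANGES = {
    "trilobite_A": (520, 480),
    "ammonite_B": (200, 150),
    "foram_C": (160, 60),
}

def age_window(fossils):
    """Intersect the age ranges of all index fossils found in one stratum."""
    olds, youngs = zip(*(FOSSIL_RANGES[f] for f in fossils))
    old, young = min(olds), max(youngs)
    if old < young:
        raise ValueError("these fossils never coexisted")
    return old, young

def order_strata(strata):
    """Sort strata oldest-first by the midpoint of each stratum's age window."""
    return sorted(strata, key=lambda s: -sum(age_window(strata[s])) / 2)

beds = {
    "bed_1": ["ammonite_B", "foram_C"],  # window: 160-150 Ma
    "bed_2": ["trilobite_A"],            # window: 520-480 Ma
    "bed_3": ["foram_C"],                # window: 160-60 Ma
}
print(order_strata(beds))  # ['bed_2', 'bed_1', 'bed_3'] (oldest first)
```

Real biostratigraphy is far subtler, of course, but the same logic applies: the more index species a bed contains, the narrower its possible age window becomes.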
  10. Evidence of evolution. Unlike the theories and complex equipment used in molecular biology, anyone without expert knowledge can visit fossils in museums or in situ. They offer a prominent line of defence against religious fundamentalism, as their ubiquity makes them difficult to explain by alternative theories. The fact that species are never found in strata outside their era supports the scientific view of life's development rather than the accounts found in religious texts (the Old Testament, for example, erroneously states that birds were created before all other land animals).
To date, no DNA has been found that is more than about 800,000 years old. This means that many of the details of the history of life rely primarily on fossil evidence. It's therefore good to note that even in an age of high-tech science, the painstaking techniques of paleontology can shed light on biology in a way unobtainable by more recent additions to the scientific toolkit. Of course, the study is far from fool-proof: it is thought that only about ten percent of all species have ever come to light in fossil form, with the found examples heavily skewed in favour of shallow marine environments.

Nevertheless, paleontology is a discipline that constantly proves its immense value in expanding our knowledge of the past in a way no religious text could ever do. It may be easy to understand what fossils are, but they are assuredly worth their weight in gold: precious windows onto an unrecoverable past.

Monday, 13 August 2018

Life on Mars? How accumulated evidence slowly leads to scientific advances

Although the history of science is often presented as a series of eureka moments, with a single scientist's brainstorm paving the way for a paradigm-shifting theory, the truth is usually rather less dramatic. A good example is the formulation of plate tectonics, with the meteorologist Alfred Wegener's continental drift being rejected by the geological orthodoxy for over thirty years. It was only with the accumulation of data from the late 1950s onward that the mobility of Earth's crust slowly gained acceptance, thanks to the multiple strands of new evidence that supported it.

One topic that looks likely to increase in popularity amongst both public and biologists is the search for life on Mars. Last month's announcement of a lake deep beneath the southern polar ice cap is the latest piece of observational evidence that Mars might still have environments suitable for microbial life, long after the planet's biota-friendly heyday. However, the data hasn't always been so positive, having fluctuated in both directions over the past century or so. So what is the relationship between positive results and the level of research into life on Mars?

The planet's polar ice caps were first discovered in the late Seventeenth Century, which, combined with the Earth-like duration of the Martian day, implied the planet might be fairly similar to our own. This was followed a century later by observation of what appeared to be seasonal changes to surface features, leading to the understandable conclusion of Mars as a temperate, hospitable world covered with vegetation. Then another century on, an early use of spectroscopy erroneously described abundant water on Mars; although the mistake was later corrected, the near-contemporary reporting of non-existent Martian canals led to soaring public interest and intense speculation. The French astronomer Camille Flammarion helped popularise Mars as a potentially inhabited world, paving the way for H.G. Wells' War of the Worlds and Edgar Rice Burroughs' John Carter series.

As astronomical technology improved and the planet's true environment became known (low temperatures, thin atmosphere and no canals), Mars' popularity waned. By the time of Mariner 4's 1965 fly-by, the arid, cratered and radiation-smothered surface it revealed only served to reinforce the notion of a lifeless desert; the geologically inactive world was long past its prime and any life still existing there probably wouldn't be visible without a microscope.

Despite this disappointing turnabout, NASA somehow managed to gain the funding to incorporate four biological experiments on the two Viking landers that arrived on Mars in 1976. Three of the experiments gave negative results while the fourth was inconclusive, with most researchers hypothesising a geochemical rather than biological explanation for the outcome. After a decade and a half of continuous missions to Mars, this lack of positive results - accompanied by experimental cost overruns - probably contributed to a sixteen-year hiatus (excluding two Soviet attempts at missions to the Martian moons). Clearly, Mars' geology by itself was not enough to excite the interplanetary probe funding czars.

In the meantime, it was some distinctly Earth-bound research that reignited interest in Mars as a plausible source of life. The 1996 report that Martian meteorite ALH84001 contained features resembling fossilised (if extremely small) bacteria gained worldwide attention, even though the scientific consensus eventually rejected the claim. Analysis of three other meteorites originating from Mars showed that complex organic chemistry, lava flows and moving water were common features of the planet's past, although they offered no more than tantalising hints that microbial life may have flourished, possibly billions of years ago.

Back on Mars, NASA's 1997 Pathfinder lander delivered the Sojourner rover. Although it appeared to be little more than a very expensive toy, managing a total distance in its operational lifetime of just one hundred metres, the proof of concept led to much larger and more sophisticated vehicles culminating in today’s Curiosity rover.

The plethora of Mars missions over the past two decades has delivered immense amounts of data, including evidence that the planet used to have near-ideal conditions for microbial life - and still has a few types of environment that may be able to support minuscule extremophiles.

Together with research undertaken in Earth-bound simulators, the numerous Mars projects of the Twenty-first Century have to date swung the pendulum back in favour of a Martian biota. Here are a few prominent examples:

  • 2003 - atmospheric methane is discovered (the lack of active geology implying a biological rather than geochemical origin)
  • 2005 - atmospheric formaldehyde is detected (it could be a by-product of methane oxidation)
  • 2007 - silica-rich rocks, similar to hot spring deposits, are found
  • 2010 - giant sinkholes are found (suitable as radiation-proof habitats)
  • 2011 - flowing brines and gypsum deposits discovered
  • 2012 - lichen survived for a month in the Mars Simulation Laboratory
  • 2013 - proof of ancient freshwater lakes and complex organic molecules, along with a long-lost magnetic field
  • 2014 - large-scale seasonal variation in methane, greater than would be expected from a geochemical origin
  • 2015 - Earth-based research successfully incubates methane-producing bacteria under Mars-like conditions
  • 2018 - a brine lake some 20 kilometres across is found under the southern polar ice sheet

Although these facts accumulate into an impressive package in favour of Martian microbes, they should probably be treated as independent points, not as one combined argument. For as well as finding factors supporting microbial life, other research has produced opposing ones. For example, last year NASA found that a solar storm had temporarily doubled surface radiation levels, meaning that even dormant microbes would have to live over seven metres down in order to survive. We should also bear in mind that for some of each orbit, Mars veers outside our solar system's Goldilocks Zone and as such any native life would have its work cut out for it at aphelion.
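The Goldilocks Zone point can be checked with a quick back-of-the-envelope calculation. Mars' orbital elements below are real published values; the habitable-zone outer edge is one commonly cited conservative estimate (the so-called 'maximum greenhouse' limit), assumed here purely for illustration:

```python
# Mars' distance from the Sun varies between perihelion and aphelion.
a = 1.524   # semi-major axis, astronomical units (AU)
e = 0.0934  # orbital eccentricity

perihelion = a * (1 - e)  # closest approach to the Sun
aphelion = a * (1 + e)    # furthest point from the Sun

# One conservative estimate of the habitable zone's outer edge (assumed).
HZ_OUTER = 1.67  # AU

print(f"perihelion: {perihelion:.2f} AU")  # ~1.38 AU
print(f"aphelion:   {aphelion:.2f} AU")    # ~1.67 AU
```

Under this assumed boundary, Mars spends part of each orbit at or just beyond the zone's outer edge, consistent with the suggestion that any native life would struggle at aphelion.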

A fleet of orbiters, landers, rovers and even a robotic helicopter are planned for further exploration in the next decade, so clearly the search for life on Mars is still deemed a worthwhile effort. Indeed, five more missions are scheduled for the next three years alone. Whether any will provide definitive proof is the big question, but conversely, how much of the surface - and sub-surface - would need to be thoroughly searched before concluding that Mars has either never had microscopic life or that it has long since become extinct?

What is apparent from all this is that the quantity of Mars-based missions has fluctuated according to confidence in the hypothesis. In other words, the more that data supports the existence of suitable habitats for microbes, the greater the amount of research to find them. In a world of limited resources, even such profoundly interesting questions as extra-terrestrial life appear to gain funding based on the probability of near-future success. If the next generation of missions fails to find traces of even extinct life, my bet would be a rapid and severe curtailing of probes to the red planet.

There is a caricature of the stages that scientific hypotheses go through, which can ironically best be described using religious terminology: they start as heresy; proceed to acceptance; and are then carved into stone as orthodoxy. Of course, unlike with religions, the vast majority of practitioners accept the new working theory once the data has passed a certain probability threshold, even if it totally negates an earlier one. During the first stage - and as the evidence starts to be favourable - more researchers may join the bandwagon, hoping to be the first to achieve success.

In this particular case, the expense and sophistication of the technology prohibits entries from all except a few key players such as NASA and ESA. It might seem obvious that in expensive, high-tech fields, there has to be a correlation between hypothesis-supporting facts and the amount of research. But this suggests a stumbling block for out-of-the-box thinking, as revolutionary hypotheses fail to gain funding without at least some supporting evidence.

Does the cutting edge, then, at least in areas that require expensive experimental confirmation, start life as a chicken-and-egg situation? Until data providentially appears, is it often the case that the powers-that-be have little incentive to fund left-field projects? That certainly seems to have been true for meteorologist Alfred Wegener and his continental drift hypothesis, since it took several research streams to codify plate tectonics as the revolutionary solution.

Back to Martian microbes. Having now read in greater depth about seasonal methane, it appears that the periodicity could be due to temperature-related atmospheric changes. This only leaves the scale of variation as support for a biological rather than geochemical origin. Having said that, the joint ESA/Roscosmos ExoMars Trace Gas Orbiter may find a definitive answer as to its source in the next year or so, although even a negative result is unlikely to close the matter for some time to come. Surely this has got to be one of the great what-ifs of our time? Happy hunting, Mars mission teams!

Monday, 30 July 2018

Biophilic cities: why green is the new black

I've previously discussed the notion that children who spend more time outside in natural surroundings are more likely to have improved mental and physical health compared to their indoors, gadget-centred peers, but does the same hold true for adults as well? After all, there have been many claims that the likes of the fractal geometry of natural objects, the sensual stimulation, the random behaviour of animals, even feeling breezes or better air quality can have a positive or 'wellness' (horrific term though it is) effect.

It is pretty much a given that the larger the percentage of nature existing within conurbations, the greater the improvement to the local environment. This begins at the practical level, with vegetation mitigating extremes of heat while its roots help to reduce flooding. In addition, fauna and flora gain more room to live in, with a greater number of species able to survive than just the usual urban adaptees such as rats and pigeons. What about the less tangible benefits to humans, culminating in a better quality of life? Science isn't wishful thinking, so what is the evidence that more nature-filled urban environments improve life for all citizens, not just children?

Studies suggest that having window views of trees can increase concentration and wellbeing in the workplace, while for hospital patients there is a clear correlation between types of view and both the length of recovery periods and painkiller usage. Therefore it seems that even the appearance of close-at-hand nature can have an effect, without the necessity of immersion. Having said that, there are clear advantages to having a public green space, since it allows a wide range of activities such as flying kites, playing ball games, jogging and boot camps, or just having a picnic.

Our largely sedentary, over-caloried lives mean we need as much physical activity as we can get, but there is apparently something greater than mere exercise behind nature as a promoter of wellbeing. Research appears to show that spaces with trees and a hint of wilderness are far more beneficial than the unnatural and restricted geometries of manicured lawns and neatly maintained flower beds. It seems that we are still very much beholden to the call of the wild. If this is a fundamental component of our highly civilised lives, are urban planners aware of it, and do they incorporate such elements into our artificial environments?

The concept of integrating nature into our towns and cities certainly isn't a new one. As a child, I occasionally visited Letchworth Garden City, a town just north of London. As the name suggests, it was an early form of 'Green Belt' planning, created at the start of the Twentieth century and divided into sectors for residential, industrial and agricultural usage. In its first half century it tried to live up to its intention to be self-sufficient in food, water and power generation, but this later proved impractical. I don't recall it being anything special, but then its heyday as a mecca for the health conscious (at a time when the likes of exercise and vegetarianism were associated with far left-wing politics) has long since passed. As to whether the inhabitants have ever been mentally - or even physically - advantaged compared to the older conurbations elsewhere in the UK, I cannot find any evidence.

Across the Atlantic, the great American architect Frank Lloyd Wright conceived of something similar but on a far larger scale. His Broadacre City concept was first published in 1932, with the key idea that every family would live on an acre-sized plot. However, Wright's concept - apart from being economically prohibitive - relied on private cars (later updated to the 'aerotor', a form of personal helicopter) for most transportation; sidewalks were largely absent from his drawings and models. Incidentally, some US cities today have partially adopted the sidewalk-free model but without Wright's green-oriented features. For example, there are suburbs in oil-centric Houston that are only reachable by car; you have to drive even to reach shopping malls you can see from your own home, with high pedestrian mortality rates proving the dangers of attempting to walk anywhere. Back to Wright: like many of his schemes, his own predilections and aesthetic sensibilities seem to have influenced his design rather more than any evidence-based insight into social engineering.

In recent years the term 'biophilic cities' has been used to describe conurbations attempting to increase their ratio of nature to artifice, often due to a combination of public campaigning and far-sighted local governments. Although these schemes cover much wider ground than just human wellbeing (prominent issues being reduction in power usage and waste, greater recycling and ecological diversity, etc.), one of the side effects of these improvements is a better quality of life. Thirteen cities joined the Biophilic Cities project in 2013, but others are just as committed in the long term to offsetting the downsides of urban living. Here are three cities I have visited that are dedicated to improving their environment:

  1. Singapore. Despite the abundance of tower blocks, especially in its southern half, this city that is also a nation has a half-century history of planting vegetation in order to live up to the motto ‘Singapore - City in a Garden’. Alongside its large-scale adoption of high-tech, high-rise architecture, Singapore has preserved an equivalent area of green space and now ranks top of the Green View Index. Even the maximal artificiality of the main highways is tempered by continuous rows of tall, closely packed trees, while building regulations dictate replacement of ground-level vegetation lost to development. A new 280-metre tall office, retail and residential building, due for completion in 2021, is set to incorporate overtly green elements such as a rainforest plaza. It could be argued that it's easy for Singapore to undertake such green initiatives considering that much of the city didn't exist before the late Twentieth century and what did has been subject to wide-scale demolition. However, it seems that Singapore's Government has a long-term strategy to incorporate nature into the city, with the resulting improvements in the mental and physical wellbeing of its inhabitants.
  2. Toronto. Although not as ecologically renowned as Vancouver, the local government and University of Toronto are engaged in a comprehensive series of plans to improve the quality of life for both humans and the rest of nature. From the green roof bylaw and eco-friendly building subsidies to the Live Green Toronto Program, there is a set of strategies to aid the local environment and planet in general. It is already paying dividends, with a large reduction in air-pollution-related medical cases, while quality of life improvements are shown by the substantial bicycle-friendly infrastructure and increase in safe swimming days. There's still plenty to do in order to achieve their long-term goals, particularly around traffic-related issues, but the city and its inhabitants are clearly aiming high.
  3. Wellington. New Zealand's capital has wooded parks and tree-filled valleys that the council promotes as part of the city's quality of life. The recreated wetlands at Waitangi Park and the Zealandia (formerly Karori) predator-proof wildlife sanctuary are key components in the integration of large-scale nature into the urban environment. Indeed, the latter is proving so successful that rare native birds such as the kaka are being increasingly found in neighbourhood gardens. Both the city and regional councils are committed to improving the quality of life both for citizens and for the environment in general, from storm water filtering in Waitangi Park to the wind turbines on the hilltops of what may be the world's windiest city.

These cities are just the tip of the iceberg when it comes to conurbations around the world seeking to make amends for the appalling environmental and psychological consequences of cramming immense numbers of humans into a small region that cannot possibly supply all their needs. In some respects these biophilic cities appear too good to be true, as their schemes reduce pollution and greenhouse gas emissions, improve the local ecosystem, and at the same time appear to aid the physical and mental wellbeing of their inhabitants. Yet it shouldn't be surprising really; cities are a recent invention and before that a nomadic lifestyle embedded us in landscapes that were mostly devoid of human intervention. If we are to achieve any sort of comfortable equilibrium in these hectic times, then surely covering bare concrete with greenery is the key? You don't have to be a hippy tree hugger to appreciate what nature can bring to our lives.