Thursday, 27 September 2018

The anaesthetic of familiarity: how our upbringing can blind us to the obvious

In the restored Edwardian school classroom at Auckland's Museum of Transport and Technology (MOTAT) there is a notice on the wall stating 'Do not ask your teacher questions.' Fortunately, education in many nations now goes some way towards emphasising individual curiosity rather than mere obedience to authority. Of course, a fair number of politicians and corporate executives wish it weren't so, as an incurious mind is easier to sway than a questioning one. As my last post mentioned, the World Wide Web can be something of an ally for them, since the 'winner takes all' approach of a review-based system aids the slogans and rhetoric of those who wish to control who we vote for and what we buy.

Even the most liberal of nations and cultures face self-imposed hurdles when it comes to distinguishing the best solution from the one that is merely most familiar from our formative years. This post therefore looks at another side of the subjective thinking discussed earlier this month, namely a trap that Richard Dawkins has described as the "anaesthetic of familiarity". Essentially, this occurs when conventions become so accepted that they are seen as the only option rather than one of a range of choices. Or, as the British philosopher Susan Stebbing wrote in her 1939 book Thinking to Some Purpose: "One of the gravest difficulties encountered at the outset of the attempt to think effectively consists in the difficulty of recognizing what we know as distinguished from what we do not know but merely take for granted."

Again, this mindset is much loved by the manufacturing sector; in addition to such well-known ploys as planned obsolescence and staggered release cycles, there are worse examples, especially in everyday consumerism. We often hear how little nutritional value many highly processed foods contain, but consider what this has done for the vitamin and mineral supplement industry, whose annual worldwide sales now approach US$40 billion!

Citizens of developed nations today face very different key issues from those of our pre-industrial ancestors, not the least among them being a constant barrage of decision making. Thanks to the enormous variety of choices available in almost every aspect of our daily lives, we have to weigh everything from what we wear to what we eat. The deluge of predominantly useless information we receive in the era of the hashtag makes it harder to concentrate on problem solving, so the easiest way out is simply to follow the crowd.

Richard Dawkins' solution to these issues is to imagine yourself as an alien visitor and then observe the world as a curious outsider. This seems to me to be beyond the reach of many, for whom daily routine appears to be their only way to cope. If this sounds harsh, it comes from personal experience; I've met plenty of people who actively seek an ostrich-like head-in-the-sand approach to life to avoid the trials and tribulations - as well as the wonders - of this rapidly changing world.

Instead, I would suggest an easier option when it comes to some areas of STEM research: ensure that a fair proportion of researchers and other thought leaders are adult migrants from other nations. Then they will be able to apply an outside perspective, hopefully identifying givens that are too obvious to be spotted by those who have grown up with them.

New Zealand is a good example of this, with arguably its two best-known science communicators having been born overseas: Siouxsie Wiles and Michelle Dickinson, a.k.a. Nanogirl. Dr Wiles is a UK-trained microbiologist at the University of Auckland. She frequently appears on Radio New Zealand and undertakes television and social media work to promote science in general, as well as her own specialism of fighting bacterial infection.

Dr Dickinson is a materials engineering lecturer and nanomaterials researcher at the University of Auckland who studied in both the UK and USA. Her public outreach work includes books, school tours and both broadcast and social media. She has enough sci-comm kudos that last year, despite not having a background in astronomy, she interviewed Professor Neil deGrasse Tyson during the Auckland leg of his A Cosmic Perspective tour.

The work of the above examples is proof that newcomers can recognise critical needs that their home-grown equivalents overlook. What is interesting is that despite coming from English-speaking backgrounds - and therefore with limited cultural disparity from their adoptive New Zealand - there must have been enough that was different to convince Doctors Wiles and Dickinson of the need for a hands-on, media-savvy approach to science communication.

This is still far from the norm: many STEM professionals believe there is little point in promoting their work to the public except via print-based publications. Indeed, some famous science communicators such as Carl Sagan and Stephen Jay Gould were widely criticised during their lifetimes by the scientific establishment for what were deemed undue efforts at self-promotion and the associated debasement of science by combining it with show business.

As an aside, I have to say that as brilliant as some volumes of popular science are, they do tend to preach to the converted; how many non-science fans are likely to pick up a book on, say, string theory, just for a bit of light reading or self-improvement (the latter being a Victorian convention that appears to have largely fallen from favour)? Instead, the outreach work of the expat examples above is aimed at the widest possible audience without over-simplification or distortion of the principles being communicated.

This approach may not solve all issues about how to think outside the box - scientists may be so embedded within their culture as to not realise that there is a box - but surely by stepping outside the comfort zone we grew up in we may find problems that the local population hasn't noticed?

Critical thinking is key to the scientific enterprise but, it would appear, to little else in human culture. If we can find methods to avoid the anaesthetic of familiarity and acknowledge that what we deem normal can be far from optimal, then these should be promoted with gusto. If the post-modern creed is that all world views are equally valid and science is just another form of culture-biased story-telling, then now more than ever we need cognitive tools to break through the subjective barriers. If more STEM professionals are able to cross borders and work in unfamiliar locations, isn't there a chance they can recognise issues that fall under the local radar and so supply the new perspective we need if we are to fulfil our potential?

Wednesday, 12 September 2018

Seasons of the mind: how can we escape subjective thinking?

According to some people I've met, the first day of spring in the Southern Hemisphere arrived with the first day of September. Not coincidentally, there are also some, myself included, who think it has suddenly started to feel a bit warmer. Yet the official start date is the spring equinox, during the third week of September. So on the one hand the weather has been warming since the start of the month, but on the other, why should a planet follow neat calendrical conventions such as the first of the month? Just how accurate is the official definition?

There are many who like to reminisce about how much better the summer weather was back in their school holidays. The rose-tinted memories of childhood can seem idyllic, although I also recall summer days of non-stop rain (I did grow up in the UK, after all). Our personal experiences, particularly during our formative years, can promote an emotion-based response so deeply ingrained that we fail to consider it may be inaccurate. Subjectivity and wishful thinking are key to the human experience: how often do we remember the few hits and not the far more numerous misses? As science is practised by humans it is subject to the same lack of objectivity as anything else; only its built-in error-checking can steer practitioners onto a more rational course than in other disciplines.

What got me pondering the above was that on meeting someone for the first time a few months ago, almost his opening sentence was a claim that global warming isn't occurring and that instead we are on the verge of an ice age. I didn't have time for a discussion on the subject, so I filed that one for reply at a later date. Now seems like a good time to ponder what leads people to make assertions that are seemingly contrary to the evidence.

I admit to being biased on this particular issue, having last year undertaken research for a post on whether agriculture has postponed the next glaciation (note that this woolly - but not mammoth, ho-ho - terminology is one of my bugbears: we are already in an ice age, but currently in an interglacial stage). Satellite imagery taken over the past few decades shows clear evidence of large-scale reductions in global ice sheets. For example, the northern polar ice cap has been reduced by a third since 1980, with what remains only half its previous thickness. Even so, are three decades a long enough period from which to make accurate predictions? Isn't using a timescale comparable to a human lifespan just as bad as relying on personal experience?
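Taken together, those two figures imply a larger loss than either suggests on its own. A quick back-of-the-envelope check, treating the cap as a uniform slab (a deliberate simplification, using only the fractions quoted above):

```python
# Combine the two quoted fractions - a third of the area gone and
# half the thickness gone - into an overall volume change, modelling
# the ice cap as a uniform slab.
area_remaining = 1 - 1/3       # two-thirds of the 1980 area remains
thickness_remaining = 1/2      # half the 1980 thickness remains
volume_remaining = area_remaining * thickness_remaining
print(f"Volume remaining: {volume_remaining:.0%} of the 1980 figure")
```

In other words, roughly two-thirds of the cap's volume has gone, even though neither headline figure says so directly.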

The UK's Met Office has confirmed that 2018 was that nation's hottest summer since records began - which in this instance only goes back as far as 1910. In contrast, climate change sceptics use a slight growth in Antarctic sea ice (contrary to its steadily decreasing continental ice sheet) as evidence of climate equilibrium. Now I would argue that this growth is just a local drop in the global ocean, but I wonder if my ice age enthusiast cherry-picked this data to formulate his ideas? Even so, does he believe that all the photographs and videos of retreating glaciers have been faked by the twenty or so nations that have undertaken Earth observation space missions? I will find out at some point!

If we try to be as objective as possible, how can we confirm with complete certainty the difference between long term climate change and local, short term variability? In particular, where do you draw the line between the two? If we look at temporary but drastic variations over large areas during the past thousand years, there is a range of time scales to explore. The 15th to 18th centuries, predominantly the periods 1460-1550 and 1645-1715, contained climate variations now known as mini ice ages, although these may have been fairly restricted in geographic extent. Some briefer but severe, wide-scale swings can be traced to single events, such as the four years of cold summers following the Tambora eruption of 1815.

Given such variability over the past millennium, in itself a tiny fragment of geological time, how much certainty surrounds the current changes? The public have come to expect yes or no answers delivered with aplomb, yet some areas of science such as climate studies involve chaos mathematics, thus generating results based on levels of probability. What the public might consider vacillation, researchers consider the ultimate expression of scientific good practice. Could this lack of black-and-white certainty be why some media channels insist on providing a 'counterbalancing' viewpoint from non-expert sources, as ludicrous as this seems?
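The chaos mathematics mentioned above is easiest to appreciate with a toy model rather than a real climate simulation. In the logistic map (a standard textbook example of chaos, not anything climate-specific), two runs whose starting values differ by one part in a million soon become completely uncorrelated, which is precisely why such systems can only be forecast in terms of probabilities:

```python
# Toy illustration of sensitive dependence on initial conditions:
# the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
# A stand-in for chaotic dynamics in general, not a climate model.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # differs by one part in a million

# Early on the two runs agree closely; within a few dozen steps
# they have long since decorrelated.
print(f"step 5:  {a[5]:.6f} vs {b[5]:.6f}")
print(f"step 50: {a[50]:.6f} vs {b[50]:.6f}")
```

No amount of extra computing power fixes this; only better-constrained initial data and probabilistic ensembles do, which is why honest forecasts come with error bars.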

In-depth thinking about a subject relies upon compartmentalisation and reductionism. Otherwise, we would forever be bogged down in the details and never be able to form an overall picture. But this quantising of reality is not necessarily a good thing if it generates a false impression regarding cause and effect. By suffering from what Richard Dawkins calls the “tyranny of the discontinuous mind” we are prone to creating boundaries that just don't exist. In which case, could a line ever be found between short term local variation and global climate change? Having said that, I doubt many climate scientists would use this as an excuse to switch to weather forecasting instead. Oh dear: this is beginning to look like a 'does not compute' error!

In a sense of course we are exceptionally lucky to have developed science at all. We rely on language to define our ideas, so need a certain level of linguistic sophistication to achieve this focus; tribal cultures whose numbers consist of imprecise values beyond two are unlikely to achieve much headway in, for example, classifying the periodic table.

Unfortunately, our current obsession with generating information of every quality imaginable and uploading it to all available channels for the widest possible audience inevitably leads to a tooth-and-claw form of meme selection. The upshot of this bombardment of noise and trivia is that an enormous amount of time is required just to filter it, with the knock-on effect that minimal time is left for identifying the most useful or accurate content rather than simply the most disseminated.

Extremist politicians have long been adept at exploiting this weakness to expound polarising phraseology that initially sounds good but lacks depth; they achieve cut-through with the simplest and loudest of arguments, fulfilling the desire most people have to fit into a rigid social hierarchy - as seen in many other primate species. The problem is that, in a similar vein to centrist politicians who can see both sides of an argument but whose rational approach negates emotive rhetoric, scientists are often stuck with the unappealing options of either taking a stand when the outcome is not totally clear, or facing accusations of evasion. There is a current trend, particularly espoused by politicians, to disparage experts, but discovering how the universe works doesn't guarantee hard-and-fast answers supplied exactly when required and providing comfort blankets in a harsh world.

Where then does this leave critical thinking, let alone science? Another quote from Richard Dawkins is that "rigorous common sense is by no means obvious to much of the world". This pessimistic view of the human race is supported by many a news article but somewhat negated by the immense popularity of star science communicators, at least in a number of countries.

Both the methods and results of science need to find a space amongst the humorous kitten videos, conspiracy theorists and those who yearn for humanity to be the pinnacle and purpose of creation. If we can comprehend that our primary mode of thinking includes a subconscious baggage train of hopes, fears and distorted memories, we stand a better chance of seeing the world for how it really is and not how we wish it to be. Whether enough of us can dissipate that fog remains to be seen. Meanwhile, the ice keeps melting and the temperature rising, regardless of what you might hear...

Monday, 27 August 2018

Hammer and chisel: the top ten reasons why fossil hunting is so important

At a time when the constantly evolving world of consumer digital technology seems to echo the mega-budget, cutting-edge experiments of the LHC and LIGO, is there still room for such an old-fashioned, low-tech science as paleontology?

The answer is of course yes, and while non-experts might see little difference between its practice today and that of its Eighteenth and Nineteenth Century pioneers, contemporary paleontology does on occasion utilise MRI scanners among other sophisticated equipment. I've previously discussed the delights of fossil hunting as an easy way to involve children in science, yet the apparent simplicity of its core techniques masks the key role that paleontology still plays in evolutionary biology.

Since the days of Watson and Crick molecular biology has progressed in leaps and bounds, yet the contemporary proliferation of cheap DNA-testing kits and television shows devoted to gene-derived genealogy obscures just how tentatively some of their results should be accepted. The levels of accuracy quoted in non-specialist media are often far greater than what can actually be attained. For example, the data on British populations has so far failed to separate those with Danish Viking ancestry from descendants of earlier Anglo-Saxon immigration, leading to population estimates at odds with the archaeological evidence.


Here then is a list of ten reasons why fossil hunting will always be a relevant branch of science, able to supply information unobtainable from other scientific disciplines:
  1. Locations. Although genetic evidence can show the broad sweeps connecting extant (and occasionally, recently-extinct) species, the details of where animals, plants or fungi evolved, migrated to - and when - relies on fossil evidence.
  2. Absolute dating. While gene analysis can be used to estimate when the last common ancestor of contemporary species lived, the results are provisional at best for when certain key groups or features evolved. Thanks to radiometric dating at some fossiliferous locales, paleontologists are able to date fossil-bearing strata elsewhere that lack radioactive minerals, by matching the species they contain.
  3. Initial versus canonical. Today we think of land-living tetrapods (i.e. amphibians, reptiles, mammals and birds) as having a maximum of five digits per limb. Although these are reduced in many species – as per horse's hooves – five is considered canonical. However, fossil evidence shows that early terrestrial vertebrates had up to eight digits on some or all of their limbs. We know genetic mutation adds extra digits, but this doesn't help reconstruct the polydactyly of ancestral species; only fossils provide confirmation.
  4. Extinct life. Without finding their fossils, we wouldn't know of even such long-lasting and multifarious groups as the dinosaurs: how could we guess at the existence of a Parasaurolophus by looking at its closest extant cousins, such as penguins, pelicans or parrots? There are also many broken branches in the tree of life, with such large-scale dead-ends as the Precambrian Ediacaran biota. These lost lifeforms teach us something about the nature of evolution yet leave no genetic evidence.
  5. Individual history. Genomes show the cellular state of an organism, but thanks to fossilised tooth wear, body wounds and stomach contents (including gastroliths) we have important insights into day-to-day events in the life of ancient animals. This has led to fairly detailed biographies of some creatures, prominent examples being Sue the T-Rex and Al the Allosaurus, their remains being comprehensive enough to identify various pathologies.
  6. Paleoecology. Coprolites (fossilised faeces), along with the casts of burrows, help build a detailed environmental picture that zoology and molecular biology cannot provide. Sometimes the best source of vegetation data comes from coprolites containing plant matter, due to the differing circumstances of decomposition and mineralisation.
  7. External appearance. Thanks to the likes of scanning electron microscopes, fossils of naturally mummified organisms or mineralised skin can offer details that are unlikely to be discovered by any other method. A good example that has emerged in the past two decades is the colour of feathered dinosaurs, obtained from the shape of their melanosomes.
  8. Climate analysis. Geological investigation can provide ancient climate data, but fossil evidence, such as the giant insects of the Carboniferous period, confirms the hypotheses. After all, dragonflies with seventy-centimetre wingspans couldn't survive at today's level of atmospheric oxygen.
  9. Stratigraphy. Paleontology can help geologists trying to sequence an isolated section of folded stratigraphy that doesn't have radioactive mineralogy. By assessing the relative order of known fossil species, the laying down of the strata can be placed in the correct sequence.
  10. Evidence of evolution. Unlike the theories and complex equipment used in molecular biology, anyone without expert knowledge can visit fossils in museums or in situ. They offer a prominent resource as defence against religious fundamentalism, as their ubiquity makes them difficult to explain by alternative theories. The fact that species are never found in strata outside their era supports the scientific view of life's development rather than those found in religious texts (the Old Testament, for example, erroneously states that birds were created prior to all other land animals).
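The radiometric dating behind point 2 boils down to simple exponential decay. As an illustrative sketch only - a generic parent-daughter calculation that ignores real-world complications such as branching decay chains and any daughter product present at formation:

```python
import math

def radiometric_age(parent, daughter, half_life):
    """Age from remaining parent atoms and accumulated daughter atoms,
    assuming a closed system and no initial daughter product."""
    decay_constant = math.log(2) / half_life
    return math.log(1 + daughter / parent) / decay_constant

# Equal parent and daughter counts mean exactly one half-life has
# elapsed, whatever the isotope; here we use a 1.25-billion-year
# half-life (roughly that of potassium-40) as the example.
age = radiometric_age(parent=1000, daughter=1000, half_life=1.25e9)
print(f"{age:.3e} years")
```

In practice geochronologists cross-check several isotope systems and use isochron methods to handle initial daughter content, but the principle is the one above.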
To date, no DNA has been found over about 800,000 years old. This means that many of the details of the history of life rely primarily on fossil evidence. It's therefore good to note that even in an age of high-tech science, the painstaking techniques of paleontology can shed light on biology in a way unobtainable by more recent examples of the scientific toolkit. Of course, the study is far from fool-proof: it is thought that only about ten percent of all species have ever come to light in fossil form, with the found examples heavily skewed in favour of shallow marine environments.

Nevertheless, paleontology is a discipline that constantly proves its immense value in expanding our knowledge of the past in a way no religious text could ever do. It may be easy to understand what fossils are, but they are assuredly worth their weight in gold: precious windows onto an unrecoverable past.

Monday, 13 August 2018

Life on Mars? How accumulated evidence slowly leads to scientific advances

Although the history of science is often presented as a series of eureka moments, with a single scientist's brainstorm paving the way for a paradigm-shifting theory, the truth is usually rather less dramatic. A good example is the formulation of plate tectonics, with the meteorologist Alfred Wegener's continental drift being rejected by the geological orthodoxy for over thirty years. It was only with the accumulation of data from the late 1950s onward that the mobility of Earth's crust slowly gained acceptance, thanks to the multiple strands of new evidence that supported it.

One topic that looks likely to increase in popularity amongst both the public and biologists is the search for life on Mars. Last month's announcement of a lake deep beneath the southern polar ice cap is the latest addition to a growing body of evidence that the planet may still have environments capable of supporting microbial life, long after its biota-friendly heyday. However, the data hasn't always been so positive, having fluctuated in both directions over the past century or so. So what is the relationship between positive results and the level of research into life on Mars?

The planet's polar ice caps were first discovered in the late Seventeenth Century, which combined with the Earth-like duration of the Martian day implied the planet might be fairly similar to our own. This was followed a century later by observation of what appeared to be seasonal changes to surface features, leading to the understandable conclusion of Mars as a temperate, hospitable world covered with vegetation. Then another century on, an early use of spectroscopy erroneously described abundant water on Mars; although the mistake was later corrected, the near contemporary reporting of non-existent Martian canals led to soaring public interest and intense speculation. The French astronomer Camille Flammarion helped popularise Mars as a potentially inhabited world, paving the way for H.G. Wells' War of the Worlds and Edgar Rice Burroughs' John Carter series.

As astronomical technology improved and the planet's true environment became known (low temperatures, thin atmosphere and no canals), Mars' popularity waned. By the time of Mariner 4's 1965 fly-by, the arid, cratered and radiation-smothered surface it revealed only served to reinforce the notion of a lifeless desert; the geologically inactive world was long past its prime and any life still existing there probably wouldn't be visible without a microscope.

Despite this disappointing turnabout, NASA somehow managed to gain the funding to incorporate four biological experiments on the two Viking landers that arrived on Mars in 1976. Three of the experiments gave negative results while the fourth was inconclusive, most researchers hypothesising a geochemical rather than biological explanation for the outcome. After a decade and a half of continuous missions to Mars, this lack of positive results - accompanied by experimental cost overruns - probably contributed to a sixteen-year hiatus (excluding two Soviet attempts at missions to the Martian moons). Clearly, Mars' geology by itself was not enough to excite the interplanetary probe funding czars.

In the meantime, it was some distinctly Earth-bound research that reignited interest in Mars as a plausible abode of life. The 1996 report that Martian meteorite ALH84001 contained features resembling fossilised (if extremely small) bacteria gained worldwide attention, even though the eventual consensus repudiated this. Analysis of three other meteorites originating from Mars showed that complex organic chemistry, lava flows and moving water were common features of the planet's past, although they offered no more than tantalising hints that microbial life may have flourished, possibly billions of years ago.

Back on Mars, NASA's 1997 Pathfinder lander delivered the Sojourner rover. Although it appeared to be little more than a very expensive toy, managing a total distance in its operational lifetime of just one hundred metres, the proof of concept led to much larger and more sophisticated vehicles culminating in today’s Curiosity rover.

The plethora of Mars missions over the past two decades has delivered immense amounts of data, showing that the planet used to have near-ideal conditions for microbial life - and still has a few types of environment that may be able to support minuscule extremophiles.

Together with research undertaken in Earth-bound simulators, the numerous Mars projects of the Twenty-first Century have to date swung the pendulum back in favour of a Martian biota. Here are a few prominent examples:

  • 2003 - atmospheric methane is discovered (the lack of active geology implying a biological rather than geochemical origin)
  • 2005 - atmospheric formaldehyde is detected (it could be a by-product of methane oxidation)
  • 2007 - silica-rich rocks, similar to hot springs, are found
  • 2010 - giant sinkholes are found (suitable as radiation-proof habitats)
  • 2011 - flowing brines and gypsum deposits discovered
  • 2012 - lichen survived for a month in the Mars Simulation Laboratory
  • 2013 - proof of ancient freshwater lakes and complex organic molecules, along with a long-lost magnetic field
  • 2014 - large-scale seasonal variation in methane, greater than usual if of geochemical origin
  • 2015 - Earth-based research successfully incubates methane-producing bacteria under Mars-like conditions
  • 2018 - a brine lake some 20 kilometres across is found under the southern polar ice sheet

Although these facts accumulate into an impressive package in favour of Martian microbes, they should probably be treated as independent points, not as one combined argument. For as well as finding factors supporting microbial life, other research has produced opposing ones. For example, last year NASA found that a solar storm had temporarily doubled surface radiation levels, meaning that even dormant microbes would have to live over seven metres down in order to survive. We should also bear in mind that for some of each orbit, Mars veers outside our solar system's Goldilocks Zone and as such any native life would have its work cut out for it at aphelion.
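That orbital effect is easy to put numbers on. Using standard figures for Mars' orbit (my own assumption, not data from the post), sunlight at aphelion is roughly 30 per cent weaker than at perihelion:

```python
# Solar flux at Mars' orbital extremes, relative to the solar
# constant of ~1361 W/m^2 at 1 AU, via the inverse-square law.
# Orbital figures: semi-major axis 1.524 AU, eccentricity 0.0934.
SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU
a, e = 1.524, 0.0934

perihelion = a * (1 - e)   # ~1.38 AU
aphelion = a * (1 + e)     # ~1.67 AU

flux_p = SOLAR_CONSTANT / perihelion**2
flux_a = SOLAR_CONSTANT / aphelion**2
print(f"Perihelion: {flux_p:.0f} W/m^2, aphelion: {flux_a:.0f} W/m^2")
print(f"Seasonal swing: {flux_p / flux_a:.2f}x")
```

Earth's equivalent swing, with its much more circular orbit, is only around seven per cent, so any Martian natives really would face an annual famine of sunlight.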

A fleet of orbiters, landers, rovers and even a robotic helicopter are planned for further exploration in the next decade, so clearly the search for life on Mars is still deemed a worthwhile effort. Indeed, five more missions are scheduled for the next three years alone. Whether any will provide definitive proof is the big question, but conversely, how much of the surface - and sub-surface - would need to be thoroughly searched before concluding that Mars has either never had microscopic life or that it has long since become extinct?

What is apparent from all this is that the quantity of Mars-based missions has fluctuated according to confidence in the hypothesis. In other words, the more that data supports the existence of suitable habitats for microbes, the greater the amount of research to find them. In a world of limited resources, even such profoundly interesting questions as extra-terrestrial life appear to gain funding based on the probability of near-future success. If the next generation of missions fails to find traces of even extinct life, my bet would be a rapid and severe curtailing of probes to the red planet.

There is a caricature of the stages that scientific hypotheses go through, which can ironically best be described using religious terminology: they start as heresy; proceed to acceptance; and are then carved into stone as orthodoxy. Of course, unlike with religions, the vast majority of practitioners accept the new working theory once the data has passed a certain probability threshold, even if it totally negates an earlier one. During the first stage - and as the evidence starts to be favourable - more researchers may join the bandwagon, hoping to be the first to achieve success.

In this particular case, the expense and sophistication of the technology prohibits entries from all except a few key players such as NASA and ESA. It might seem obvious that in expensive, high-tech fields, there has to be a correlation between hypothesis-supporting facts and the amount of research. But this suggests a stumbling block for out-of-the-box thinking, as revolutionary hypotheses fail to gain funding without at least some supporting evidence.

Therefore does the cutting edge, at least in areas that require expensive experimental confirmation, start life as a chicken-and-egg situation? Until data providentially appears, is it often the case that the powers-that-be have little enticement to fund left-field projects? That certainly seems to have been true for Wegener and continental drift, since it took several independent research streams to codify plate tectonics as the revolutionary solution.

Back to Martian microbes. Having now read in greater depth about seasonal methane, it appears that the periodicity could be due to temperature-related atmospheric changes. This only leaves the scale of variation as support for a biological rather than geochemical origin. Having said that, the joint ESA/Roscosmos ExoMars Trace Gas Orbiter may find a definitive answer as to its source in the next year or so, although even a negative result is unlikely to close the matter for some time to come. Surely this has got to be one of the great what-ifs of our time? Happy hunting, Mars mission teams!

Monday, 30 July 2018

Biophilic cities: why green is the new black

I've previously discussed the notion that children who spend more time outside in natural surroundings are more likely to have improved mental and physical health compared to their indoors, gadget-centred peers, but does the same hold true for adults as well? After all, there have been many claims that the likes of the fractal geometry of natural objects, the sensual stimulation, the random behaviour of animals, even feeling breezes or better air quality can have a positive or 'wellness' (horrific term though it is) effect.

It is pretty much a given that the larger the percentage of nature existing within conurbations, the greater the improvement to the local environment. This begins at the practical level, with vegetation mitigating extremes of heat while its roots helps reduce flooding. In addition, fauna and flora gain more room to live in, with a greater number of species able to survive than just the usual urban adaptees such as rats and pigeons. What about the less tangible benefits to humans, culminating in a better quality of life? Science isn't wishful thinking, so what about the evidence for more nature-filled urban environments improving life for all its citizens, not just children?

Studies suggest that having window views of trees can increase concentration and wellbeing in the workplace, while for hospital patients there is a clear correlation between types of view and both the length of recovery periods and painkiller usage. Therefore it seems that even the appearance of close-at-hand nature can have an effect, without the necessity of immersion. Having said that, there are clear advantages to having a public green space, since it allows a wide range of activities such as flying kites, playing ball games, jogging and boot camps, or just having a picnic.

Our largely sedentary, calorie-laden lives demand as much physical activity as we can get, but there is apparently something more than exercise alone behind nature's role as a promoter of wellbeing. Research appears to show that spaces with trees and a hint of wilderness are far more beneficial than the unnatural, restricted geometries of manicured lawns and neatly maintained flower beds. It seems we are still very much beholden to the call of the wild. If this is a fundamental component of our highly civilised lives, are urban planners aware of it, and do they incorporate such elements into our artificial environments?

The concept of integrating nature into our towns and cities certainly isn't new. As a child, I occasionally visited Letchworth Garden City, a town just north of London. As the name suggests, it was an early form of 'Green Belt' planning, created at the start of the Twentieth century and divided into sectors for residential, industrial and agricultural use. In its first half-century it tried to live up to its intention of being self-sufficient in food, water and power generation, but this later proved impractical. I don't recall it being anything special, but then its heyday as a mecca for the health-conscious (at a time when the likes of exercise and vegetarianism were associated with far-left politics) has long since passed. As to whether its inhabitants have ever been mentally - or even physically - advantaged compared to those of older conurbations elsewhere in the UK, I can find no evidence.

Across the Atlantic, the great American architect Frank Lloyd Wright conceived of something similar but on a far larger scale. His Broadacre City concept was first published in 1932, with the key idea that every family would live on an acre-sized plot. However, Wright's concept - apart from being economically prohibitive - relied on private cars (later updated to the 'aerotor', a form of personal helicopter) for most transportation; sidewalks were largely absent from his drawings and models. Incidentally, some US cities today have partially adopted the sidewalk-free model but without Wright's green-oriented features. For example, there are suburbs in oil-centric Houston that are only reachable by car; you have to drive even to reach shopping malls visible from your own home, with high pedestrian mortality rates proving the dangers of attempting to walk anywhere. Back to Wright: as with many of his schemes, his own predilections and aesthetic sensibilities seem to have influenced the design rather more than any evidence-based insight into social engineering.

In recent years the term 'biophilic cities' has been used to describe conurbations attempting to increase their ratio of nature to artifice, often thanks to a combination of public campaigning and far-sighted local government. Although these schemes cover much wider ground than human wellbeing alone (prominent issues being reduced power usage and waste, greater recycling, ecological diversity, etc.), one side effect of the improvements is a better quality of life. Thirteen cities joined the Biophilic Cities project in 2013, but others are just as committed in the long term to offsetting the downsides of urban living. Here are three cities I have visited that are dedicated to improving their environment:

  1. Singapore. Despite the abundance of tower blocks, especially in its southern half, this city that is also a nation has a half-century history of planting vegetation in order to live up to the motto ‘Singapore - City in a Garden’. Alongside its large-scale adoption of high-tech, high-rise architecture, Singapore has preserved an equivalent area of green space and now ranks top of the Green View Index. Even the maximal artificiality of the main highways is tempered by continuous rows of tall, closely packed trees, while building regulations dictate replacement of ground-level vegetation lost to development. A new 280-metre-tall office, retail and residential building, due for completion in 2021, is set to incorporate overtly green elements such as a rainforest plaza. It could be argued that it is easy for Singapore to undertake such green initiatives, considering that much of the city didn't exist before the late Twentieth century and what did has been subject to wide-scale demolition. Nevertheless, Singapore's government clearly has a long-term strategy to incorporate nature into the city, with resulting improvements in the mental and physical wellbeing of its inhabitants.
  2. Toronto. Although not as ecologically renowned as Vancouver, the local government and the University of Toronto are engaged in a comprehensive series of plans to improve the quality of life for both humans and the rest of nature. From the green roof bylaw and eco-friendly building subsidies to the Live Green Toronto programme, there is a set of strategies to aid the local environment and the planet in general. These are already paying dividends in a large reduction in air pollution-related medical cases, while quality-of-life improvements are shown by the substantial bicycle-friendly infrastructure and an increase in safe swimming days. There's still plenty to do to achieve the city's long-term goals, particularly around traffic-related issues, but Toronto and its inhabitants are clearly aiming high.
  3. Wellington. New Zealand's capital has wooded parks and tree-filled valleys that the council promotes as part of the city's quality of life. The recreated wetlands at Waitangi Park and the Zealandia (formerly Karori) predator-proof wildlife sanctuary are key components in the integration of large-scale nature into the urban environment. Indeed, the latter is proving so successful that rare native birds such as the kākā are increasingly being found in neighbourhood gardens. Both the city and regional councils are committed to improving the quality of life both for citizens and for the environment in general, from storm-water filtering in Waitangi Park to the wind turbines on the hilltops of what may be the world's windiest city.

These cities are just the tip of the iceberg when it comes to conurbations around the world seeking to make amends for the appalling environmental and psychological consequences of cramming immense numbers of humans into a small region that cannot possibly supply all their needs. In some respects these biophilic cities appear too good to be true, as their schemes reduce pollution and greenhouse gas emissions, improve the local ecosystem, and at the same time appear to aid the physical and mental wellbeing of their inhabitants. Yet it shouldn't be surprising really; cities are a recent invention and before that a nomadic lifestyle embedded us in landscapes that were mostly devoid of human intervention. If we are to achieve any sort of comfortable equilibrium in these hectic times, then surely covering bare concrete with greenery is the key? You don't have to be a hippy tree hugger to appreciate what nature can bring to our lives.

Sunday, 15 July 2018

Minding the minuscule: the scale prejudice in everyday life

I was recently weeding a vegetable bed in the garden when, out of the corner of my eye, I noticed a centipede frantically heading for cover after I had inadvertently disturbed its hiding spot. In my experience, most gardeners are oblivious to the diminutive fauna and flora around them unless they are pests targeted for removal or obliteration. It's only when the likes of a biting or stinging organism - or even just a large and/or hairy yet harmless spider - comes into view that people consciously think about the miniature cornucopia of life around them.

Even then, we consider our needs rather greater than theirs: how many of us stop to consider the effect we are having when we dig up paving slabs and find a bustling ant colony underneath? In his 2004 essay Dolittle and Darwin, Richard Dawkins pondered what contemporary foible or -ism future generations will castigate us for. Something I consider worth looking at in this context is scale-ism, which might be defined as the failure to apply a suitable level of consideration to life outside of 'everyday' measurements.

I've previously discussed near-microscopic water-based life, but even larger fauna, visible without optical aids, are easy to overlook when humans live in a primarily artificial environment - as over half our species now does. Several ideas spring to mind as to why breaking this scale-based prejudice could be important:
  1. Unthinking destruction or pollution of the natural environment doesn't just cause problems for 'poster' species, predominantly cuddly mammals. The invertebrates that live on or around larger life-forms may be critical to these ecosystems or even further afield. Removal of one, seemingly inconsequential, species could allow another to proliferate at potentially great cost to humans (for example, as disease vectors or agricultural pests). Food webs don't begin at the chicken and egg level we are used to from pre-school picture books onwards.
  2. The recognition that size doesn't necessarily equate to importance is critical to preserving the environment, not just for nature's sake but for the future of humanity. Think of the power of the tiny water mould Phytophthora agathidicida, which is killing the largest residents of New Zealand's northern native forests, the ancient coniferous kauri Agathis australis. The conservation organisation Forest and Bird claims that kauri are the linchpin for seventeen other plant species in these forests: losing them would have a severe domino effect.
  3. Early detection of small-scale pests may help to prevent their spread but this requires vigilance from the wider public, not just specialists; failure to recognise that tiny organisms may be far more than a slight nuisance can be immensely costly. In recent years there have been two cases in New Zealand where the accidental import of unwanted insects had severe if temporary repercussions for the economy. In late 2017 three car carriers were denied entry to Auckland when they were found to contain the brown marmorated stink bug Halyomorpha halys. If they had not been detected, it is thought this insect would have caused NZ$4 billion in crop damage over the next twenty years. Two years earlier, the Queensland fruit fly Bactrocera tryoni was found in central Auckland. As a consequence, NZ$15 million was spent eradicating it, a small price to pay for the NZ$5 billion per annum it would have cost the horticulture industry had it spread.
Clearly, these critters are ignored at our peril! Although the previous New Zealand government introduced the Predator Free 2050 programme, conservation organisations claim that the lack of central funding and detailed planning makes the scheme wildly unrealistic (if anything, the official website suggests that local communities should organise volunteer groups and undertake most of the work themselves!) Even so, the scheme is intended to eradicate alien mammal species only, presumably on the grounds that pest invertebrates, despite their importance, are just too small to keep permanently excluded - the five introduced wasp species springing to mind at this point.

It isn't just smaller-scale animals that are important - and how many people have you met who think the word 'animal' means only a creature with a backbone, rather than including insects and other invertebrates? Minute and inconspicuous plants and fungi also need considering. As Auckland Botanic Gardens curator Bec Stanley is keen to point out, most of the public appear to suffer from plant blindness. Myrtle rust is a fungus that attacks native plants such as the iconic pōhutukawa or New Zealand Christmas tree, having most probably been carried on the wind to New Zealand from Australia. It isn't just New Zealand's Department of Conservation that is asking the public to watch out for it: the Ministry for Primary Industries also requests notification of its spread across the North Island, owing to the potential damage to commercial species such as eucalyptus. This is yet another botanical David-versus-Goliath struggle going on right under our oblivious noses.

Even without the economic impact, paying attention to the smaller elements of our environment is undoubtedly beneficial. Thinking more holistically and less parochially is often a good thing in science and technology; paradigm shifts are rarely achieved by being comfortable with the status quo. Going beyond the daily centimetre-to-metre range we are used to dealing with allows us to comprehend a little more of the cosmic perspective that Neil deGrasse Tyson and other science communicators endeavour to promote - surely no bad thing when it comes to lowering boundaries between cultures at a time of increasingly sectarian mindsets?

Understanding anything a little out of the humdrum can be interesting in and of itself. As Brian Cox's BBC documentary series Wonders of Life showed, a slight change of scale can lead to apparent miracles, such as the insects that can walk up glass walls or support hundreds of times their own weight and shrug off equally outsized falls. Who knows, preservation or research into some of our small-scale friends might lead to considerable benefits too, as with the recent discovery of the immensely strong silk produced by Darwin's bark spider Caerostris darwini. Expanding our horizons isn't difficult, it just requires the ability to look down now and then and see what else is going on in the world around us.