Showing posts with label Stephen Jay Gould. Show all posts

Thursday 28 May 2015

Presenting the universe: 3 landmark science documentary series

They say you carry tastes from your formative years with you for the rest of your life, so perhaps this explains why there are three science documentary television series that still have the power to enchant some decades after first viewing. Whilst there has been no shortage of good television science programming since - Planet Earth and the Walking with... series amongst them - there are three that remain the standard by which I judge all others:
  1. The Ascent of Man (1972) - an account of how humanity has evolved culturally and technologically via biological and man-made tools. Presented by mathematician and Renaissance man Jacob Bronowski.
  2. Cosmos (1980) - the history of astronomy and planetary exploration, interwoven with the origins of life. Presented by Carl Sagan (as if you didn't know).
  3. The Day the Universe Changed (1985) - a study of how scientific and technological breakthroughs in Western society generate paradigm shifts. Presented by the historian of science James Burke.

All three series have been proclaimed 'landmark' shows so it is interesting to compare their themes, viewpoints and production techniques, discovering just how similar they are in many ways. For a start, their excellent production values allowed for a wide range of international locations and historical recreations. They each have a charismatic presenter who admits to espousing a personal viewpoint, although it's quite easy to note that they get progressively more casual: if Jacob Bronowski has the appearance of a warm elder statesman then Carl Sagan is the father figure for a subsequent generation of scientists; James Burke's on-screen persona is more akin to the cheeky uncle, with a regular supply of puns, some good, some less so.

To some extent it is easy to see that the earliest series begat the second that in turn influenced the third. In fact, there is a direct link in that Carl Sagan hired several of the producers from The Ascent of Man for his own series, clearly seeing the earlier show as a template for Cosmos. What all three have is something extremely rare in other science documentaries: a passion for the arts that promotes a holistic interpretation of humanity's development; science does not exist in isolation. As such, the programmes are supported by superbly-illustrated tie-in books that extend the broadcast material from the latter two series whilst Bronowski's book is primarily a transcript of his semi-improvised monologue.

In addition to considering some of the standard examples of key developments in Western civilisation such as Ancient Greece and Galileo, the series include the occasional examination of Eastern cultures. The programmes also contain discussions of religions, both West and East. In fact, between them the series cover a vast amount of what has made the world the way it is. So not small potatoes, then!

The series themselves:

The Ascent of Man

To some extent, Jacob Bronowski was inspired by the earlier series Civilisation, which examined the history of Western arts. Both series were commissioned by David Attenborough, himself a natural sciences graduate who went on to present ground-breaking series in his own discipline as well as commissioning these landmark programmes. (As an aside, if there are any presenters around today who appear to embody the antithesis of C.P. Snow's 'two cultures' then Sir David is surely in the top ten.)

Bronowski's presentation is an astonishingly erudite (for all its improvisation) analysis of the development of our species and its technological society. Although primarily focused on the West, there is some consideration of other regions, from the advanced steel-making technology of medieval Japan to Meso-American astronomy or the relatively static culture of Easter Island. Time and again, the narrative predates the encumbrance of political correctness: that it was the West that almost solely generated our modern technological society - the 'rage for knowledge' for once outshining dogma and inertia.

Of course, it would be interesting to see how Bronowski might have written it today, in light of Jared Diamond's ground-breaking (in my humble opinion) Guns, Germs and Steel. Although he works hard to present science, the plastic arts, literature and myth as emerging from the same basic elements of our nature, it is clear that Bronowski considers the former to be much rarer - and therefore the more precious - discipline. Having said that, Bronowski makes a large number of Biblical references, primarily from the Old Testament. In light of the current issues with fundamentalism in the USA and elsewhere, it is doubtful that any science documentary today would so easily incorporate the breadth of religious allusions.

If there is a thesis underlying the series it is that, considering how natural selection has provided humanity with a unique combination of mental gifts, we should use them to exploit the opportunities thus presented. By having foresight and imagination, our species is the only one capable of great heights and - as he makes no pretence of hiding - terrible depths. As he considers the latter, Bronowski admits that we should remain humble as to the state of contemporary knowledge and technology, which five hundred years hence will no doubt appear childlike. In addition, he states that belief in absolute knowledge can lead to arrogance; if we aspire to be gods, it can only end in the likes of Auschwitz. But his final speeches contain the wonderful notion that the path to annihilation can be avoided if science is communicated to all of society with the same vigour and zest as given to the humanities.

Cosmos

I was already an astronomy and astronautics fan when I saw this series. Its first UK broadcast slot was somewhat later than my usual bedtime, so it seemed a treat to be allowed to stay up after the rest of the family had gone to bed. Like Star Wars a few years before, it appeared to me to be an audio-visual tour-de-force; not surprisingly, both the tie-in hardback and soundtrack album arrived on my birthday that year.

Nostalgia aside, another key reason for the series' success was the charisma of the presenter himself. Much has been written of Sagan's abilities as a self-publicist, and the programmes do suffer from rather too many staring-beatifically-into-the-distance shots (as to some extent replicated more recently by Brian Cox in his various Wonders Of... series). Of course, it must have taken considerable effort to get the series made in the first place, especially in gaining a budget of over $6 million. After all, another great science populariser, the evolutionary biologist Stephen Jay Gould, never managed to gain anything beyond the occasional one-off documentary.

What is most apparent is Sagan's deep commitment to presenting science to the widest possible audience without distorting the material through over-simplification. However, in retrospect it is also obvious that he was using ideas from several scientific disciplines, such as the Miller-Urey experiment, to bolster his opinions on the likelihood of extra-terrestrial life. To some extent his co-writers reined him in, the final episode being given over not to SETI but to a plea for environmental stewardship.

Whilst the series is primarily concerned with a global history of astronomy and astrophysics, supplemented with first-hand accounts of planetary exploration, Sagan like Bronowski is equally at home with other scientific disciplines. He discusses the evolution of intelligence and incorporates elements of the humanities with equal aplomb. Another key element is the discussion of the role superstition and dead ends have played in the hindrance or even advancement of scientific progress, from Pythagorean mysticism, via Kepler's conflation of planetary orbits with the five Platonic solids, to Percival Lowell's imaginary Martian canals. Although Sagan repeats his earlier debunking of astrology, UFO sightings and the like, he doesn't rule out the role of emotions in the advancement of science and technology, citing for example the rocket pioneer Robert Goddard's Mars-centred epiphany.

Perhaps the primary reason that the series - despite the obvious dating of some of the knowledge - is still so engaging and why Sagan's narration is so widely quoted, is that he was a prose poet par excellence. Even when discussing purely scientific issues, his tone was such that the information could be effortlessly absorbed whilst allowing the viewer to retain a sense of wonder. Of course, Sagan had ample assistance from his two co-writers Ann Druyan and Steven Soter, as clearly proven by their scripts for the Neil deGrasse Tyson-hosted remake Cosmos: A Spacetime Odyssey. Nonetheless, it is hard to think of another presenter who could have made the original series the success it was on so many levels.

The Day the Universe Changed

Although James Burke had already made a large-scale history of science and technology series called Connections in 1978, it contained a rather different take on some of the same material. By focussing on interactive webs, the earlier series was somewhat glib, in that some of the connections could probably be replaced by equally valid alternatives.

In contrast, The Day the Universe Changed uses a more conventional approach that clearly shares some of the same perspectives as the earlier programmes. Like The Ascent of Man and the Cosmos remake, mediaeval Islamic science is praised for its inquisitiveness as well as the preservation of Classical knowledge. Burke was clearly influenced by his predecessors, even subtitling the series 'A Personal View by James Burke'. Perhaps inevitably he covers some of the same material too, although it would be difficult to create a brief history without reference to Newton or Ancient Greece.

As with Bronowski, Burke integrates scientific advances within wider society, a notable example being the rediscovery of perspective and its profound effect on contemporary art. He also supports the notion that rather than a gradual series of changes, paradigm shifts are fundamental to major scientific breakthroughs. In effect, he claims that new versions of the truth - as understood by a scientific consensus - may rely on the abandonment of previous theories due to their irreconcilable differences. Having recently read Rachel Carson's 1951 The Sea Around Us I can offer some agreement: although Carson's geophysical analysis quietly screams in favour of plate tectonics, the contemporary lack of evidence led her to state the no doubt establishment mantra of the period concerning static land masses.

What Burke constantly emphasises even more than his predecessors is that time and place have a fundamental influence on the scientific enquiry of each period. Being immersed in the preconceived notions of their culture, scientists can find it as difficult as anyone else to gain an objective attitude. In actuality, it is all but impossible, leading to such farcical dead-ends as Piltdown Man, a hoax that lasted for decades because it fulfilled the jingoistic expectations of British scientists. Burke's definition of genius is someone who can escape the givens of their background and thus achieve mental insights that no amount of methodical plodding can equal. Well, perhaps, on occasion.

The series also goes further than its predecessors in defining religion as anti-scientific on two grounds: its demand for absolute obedience in the face of logic and evidence, with reference to Galileo; or the lack of interest in progress, as with the cyclical yet static Buddhist view, content for the universe to endlessly repeat itself. Burke also shows how scientific ideas can be perverted for political ends, as with social Darwinism. But then he goes on to note that as the world gets ever more complex, and changes at an ever faster rate, non-specialists are unable to test new theories in any degree and so are having to rely on authority just as much as before the Enlightenment. How ironic!

All in all, these common threads are to my mind among the most important elements of the three series:
  1. Science and the humanities rely on the same basic processes of the human brain and so are not all that different;
  2. Scientific thinking can be as creative an endeavour as the arts;
  3. Scientists don't live in a cultural vacuum but are part and parcel of their world and time;
  4. Religion is the most change-resistant of human activities and therefore rarely appears sympathetic to science's aims and goals.

As Carl Sagan put it, "we make our world significant by the courage of our questions and the depth of our answers." For me, these three series are significant for their appraisal of some of those courageous explorers who have given us the knowledge and tools we call science.


Thursday 26 March 2015

A roaring success? The Walking with Dinosaurs Arena Spectacular

Surely these days everyone loves dinosaurs? After all, the original Jurassic Park movie made over a billion US dollars worldwide, enough to generate a plethora of merchandise and three sequels. In a less fictional vein, the BBC's television series Walking with Dinosaurs broke viewing records - perhaps just as well, considering its equally record-breaking budget - and led to several TV spin-offs, including a 3D feature film aimed at very young children.

But it's rare for a television documentary (or should that be docudrama?) series to spawn a live show, which is exactly what happened in 2007. Walking with Dinosaurs: The Arena Spectacular has to date been seen by a worldwide audience of over eight million. Again, this is probably all to the good, considering the enormous expense involved in the production. So having seen the television series on DVD, my daughters were desperate to go to the live show here in Auckland. Due to the expense of the tickets I hummed and hawed but eventually bowed under pressure. This was nothing to do with my own interest in seeing the event, of course!

So was it worth it? The ninety-minute show followed the chronological order of the series, from the late Triassic to the Cretaceous-Tertiary boundary. My first impression wasn't particularly good, as the narrator Huxley (incidentally, I'm not sure what Thomas Henry Huxley would have made of the enterprise, considering he was against even opening the Natural History Museum to the general public) explained about dinosaur footprints whilst lights projected some very oversized examples of the same. I assume the scale was to allow visibility from the furthest rows, but even so it seemed a bit clumsy. In my book, there's a fine line between artistic licence and poor science communication.

However, things improved with the arrival of the first beasts. Although it looked as if it was immediately heading in a Disneyesque direction when several cute herbivorous Plateosaurus hatched from a nest of eggs, this was quickly quelled when one hatchling was gobbled up by a Liliensternus. It was excellent to see Nature in warts and all mode - or should that be a literal 'red in tooth and claw' - considering that the audience largely consisted of pre-teen children and their parents? Talking of which, in some cases the roaring monsters and dramatic lighting proved too much, with a girl sitting near me spending more time cradled under her father's armpit rather than looking at the show. I was in general surprised by the lack of anthropomorphising elements that the 3D movie was criticised for, a brave move considering the target audience. Perhaps the major concession to the junior spectators was the young T. rex, whose weak attempts at imitating its far more powerful parent induced laughter from the audience.

In addition to describing the behaviour of the dinosaurs - and one pterosaur (a decent-enough marionette hung in front of poorly projected background footage, although my younger daughter initially thought it was a giant bat) - Huxley also covered plate tectonics and the development of vegetation. At one point he even stuck his hand into a steaming pile of fresh herbivore poop to retrieve a dung beetle, leading to an explanation of food chains past and present. Both the inflatable growing ferns and a forest fire were particularly well done, as well as some simple yet charming butterflies made of what looked like coloured paper blown around by hidden fans. My children agreed that the only thing they didn't like were the skate platforms required to move the larger dinosaurs, although I found these less distracting than the marginally camouflaged operator legs in the smaller species. Interestingly, neither of my daughters asked how the larger species were controlled. I guess they've grown up in an age of electronic wonders and this was seen as just another example of impressive technology.

Walking with Dinosaurs: The Arena Spectacular

So what about the educational element of the show? Edutainment can be a difficult balance as well as an appalling word. In addition to the lavish praise that it deserved, the original television series was criticised for presenting speculation as fact. In particular, the large size of some of the species has been questioned. However, the arena event did acknowledge some of the developments since the series was first broadcast fifteen years ago, such as by adding feathers (or proto-feathers) to the mother Tyrannosaurus and even more so to her juvenile.

Judging by the appreciative audience, many of the younger crowd members were already familiar with a wide range of dinolore. For example, as each animal entered the arena I could hear children as young as four or five correctly shouting out its name. This created a pleasing contrast to many of the adult visitors to London's Natural History Museum, who I recall not only failed to differentiate a sauropod from a T. rex but assumed that every large skeleton they saw must be a dinosaur (for example, the giant sloth Megatherium in the Fossil Marine Reptiles gallery).

But just how much of an interest in the giant beasts of the Mesozoic is likely to lead to a more detailed understanding of the wider world of palaeontology as the audience members grow older? Unfortunately, at times it was difficult to make out the narrator's words over the combination of sound effects and intense music, which, whilst emotive and dramatic, had a tendency to drown out Huxley's description of the antediluvian scenes. Combined with the palpable excitement that most of the younger audience members were clearly experiencing, it's doubtful just how much anyone learned during the show. The associated website does contain some educational material, although it makes such basic mistakes as including the pterosaur Ornithocheirus in its list of dinosaurs.

You could suggest that dinosaurs have become just another part of the great consumerist machine, with any associated science a lucky by-product of flogging stuff. After all, dinosaur-related merchandise features prominently in the range at many museum gift shops, even those with a marginal connection to the fauna, as discussed unfavourably several decades ago by evolutionary palaeontologist Stephen Jay Gould. It could be argued that any attempt to introduce science-based knowledge to the general public is a good idea, but with the quality of special effects in this live-action show as well as in film and television, it may be difficult for children brought up on this material to separate fact from fiction. It is undoubtedly an exciting time for dinosaur discoveries, but science is more than just a series of facts: without the rigour and understanding, the material is subject to the same whims of fashion as the rest of popular culture. If science is to be promoted as the most objective methodology our species has for understanding such fascinating subjects as ancient megafauna, we need to ensure that audiences are given enough of the reasoning besides all the roaring.

Tuesday 28 October 2014

Sandy strandings: the role of contingency in the beach biosphere

At irregular intervals over the past fifteen years I've been visiting the east coast beaches of New Zealand's Northland between Warkworth and Paihia. Although it's frequently good territory for finding shallow marine fauna via rock pools or along the tideline, a recent visit was enhanced by exciting finds unique in my experience. I usually expect to see the desiccated remains of common species such as sand dollars, scallops, whelks and assorted sea snails, but coastal storms just prior to my arrival brought an added bonus. Two days of exploration along three beaches was rewarded with a plethora of live - but presumably disorientated - creatures such as common sea urchins (Evechinus chloroticus) and large hermit crabs (Pagurus novizealandiae), along with some recently-deceased 5- and 7-arm starfish. As you might imagine, several species of seabird, notably terns and gulls, were having a gastronomic time of it with all these easy pickings.

At the nearby Goat Island Marine Discovery Centre run by the University of Auckland I told our marine biologist guide about my two daughters' attempts to save some of the homeless hermit crabs from the gulls by offering suitable shells as new abodes. The biologist responded with a story of a visitor who had thrown live starfish back into the water after a mass stranding. Someone else commented that his actions wouldn't make a difference; our guide said that as he continued throwing them, the man replied "It made a difference to that one...and that one...and that one..."


Common sea urchin (Evechinus chloroticus)

Of course we cannot hope to make much of a difference with such good intentions: nature, after all, is essentially immune to human morality and empathy, with survival at a genetic level the only true sign of success. But do small-scale events whose aftermath I recently experienced - in this case a few days of stormy weather and the resultant strandings - have any long-term effects on the local ecosystem?

Apart from a mass marooning of the large barrel jellyfish Rhizostoma pulmo on a North Wales beach around thirty years ago, I hadn't experienced anything similar before. But then until three years ago I didn't live near the sea, so perhaps that's not surprising! There are fairly frequent news stories from around the world about mass whale or dolphin beachings put down to various causes, some man-made, such as military sonar. But as these events involve animals larger than humans they make it onto the news: for smaller creatures such as the crabs and urchins mentioned above, there are unlikely to be any widely-disseminated stories.


Australian southern sand star (Luidia australiae)

It may seem improbable that the balance between organisms could be profoundly altered by local events, but it should be remembered that a few minor outside influences over the course of less than a century can wipe out entire species. For example, although the story of how a single cat was responsible for the demise of the Stephens Island wren around the start of the Twentieth Century is an oversimplification of the events, there is evidence that current human activity is inadvertently causing regional change.

One well-known recent illustration is from the Sea of Cortez, where too much game fishing, especially of sharks, may have led to the proliferation of a new top predator, the rapidly spreading Humboldt squid. Estimates suggest that the current population in the region is over 20 million individuals (which suits the local squid-fishing industry just fine) - extraordinary considering none were known in the region before about 1950. Two-metre squid may not sound menacing compared to sharks, but the Humboldt squid is a highly intelligent pack hunter with a razor-sharp beak and toothed suckers on its tentacles, so diving amongst them is probably not for the faint-hearted.

The TV series Cosmos: A Spacetime Odyssey contained a good introduction to the five mass extinctions of the past 450 million years, but it isn't just these great dyings or even El Niño that can upset ecosystems; we may find out too late that relatively minor, local changes are able to trigger a chain reaction at a far wider level. The evolutionary biologist Stephen Jay Gould repeatedly emphasised the importance of historical contingency and the impact of unpredictable, ad-hoc events on natural history. The modern synthesis of evolutionary biology includes the notion that speciation can result from isolation of a population within an 'island'. This latter differs from the strictly geographical definition: a lake, or even an area within a lake, can be an island for some species. If, for example, local changes cause a gap in the ecosystem, then this gap might be filled by an isolated population with the 'fittest' characteristics, in the sense of a jigsaw piece that fits the relevant-shaped hole.


Hermit crab (Pagurus novizealandiae)

Back to the beach. American marine biologist Rachel Carson's 1951 award-winning classic The Sea Around Us contains an early discussion of the recycling of nutrients within the oceans, but we are now aware that the sea isn't remotely self-contained. My favourite example of an intricate web of land, sea and even aerial fauna and flora centres on the Palmyra Atoll in the Pacific Northern Line Islands. Various seabirds nest in the atoll's high trees, their nutrient-rich guano washing into the sea where it feeds plankton at the base of the offshore food chain. The plankton population feeds larger marine fauna, with certain fish and squid species in turn providing meals for the seabirds, thus completing the cycle. Such a tightly-knit sequence is likely to undergo major restructuring of population densities if just one of the players suffers a setback.

I appear to have followed Stephen Jay Gould's method of moving from the particular to the general and may be a little out of my depth (okay, call it a feeble attempt at a pun), but it certainly gives food for thought when local shallow marine populations appear to suffer after only a few days of mildly inclement weather. If there's a moral to any of this, it's this: if natural events can affect an ecosystem in unpredictable ways, what havoc could we be causing with our pesticide run-off, draining of water tables, high-energy sonar, over-fishing and general usage of the oceans as a rubbish dump? The details may require sophisticated mathematics, but the argument is plain for all to see.

Saturday 15 March 2014

Cutting remarks: investigating five famous science quotations

If hearing famous movie lines being misquoted seems annoying, then misquoted or misused science citations can be exasperating, silly or downright dangerous. To this end, I thought that I would examine five well-known science quotations to find the truth behind the soundbite. By delineating the accurate (as far as I'm aware) words in the wider context in which they were said/written down/overheard by someone down the hallway, I may be able to understand the intended meaning, and not the autopilot definition frequently used. Here goes:

1) God does not play dice (Albert Einstein)

Possibly Einstein's most famous line, it sounds like the sort of glib comment that could be used by religious fundamentalists to denigrate science in two opposing fashions: either Einstein is being facetious and therefore sacrilegious; or he supports an old-fashioned version of conventional Judeo-Christian belief in which God can be perceived in the everyday world. Talk about having your cake and eating it!

Einstein is actually supposed to have said: "It is hard to sneak a look at God's cards. But that he would choose to play dice with the world...is something that I cannot believe for a single moment." This gives us much more material to work with: it was actually a quote Einstein himself supplied to a biographer. Some years earlier he had communicated with physicist Max Born along similar lines: "Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the 'old one'. I, at any rate, am convinced that He does not throw dice."

So here is the context behind the quote: Einstein's well-known disbelief in the fundamental nature of quantum mechanics. As I've discussed in a previous post, Einstein's opinion of the most accurate scientific theory ever devised was completely out of step with the majority of his contemporaries - and physicists ever since. Of course we haven't yet got to the bottom of it; speaking as a non-scientist I find the Copenhagen Interpretation nonsense. But then, many physicists have said something along the lines of: if you think you understand quantum mechanics, you haven't understood it. Perhaps at heart, Einstein was stuck in a Nineteenth Century mindset, unable to conceive of fundamental limits to our knowledge or that probability lies at the heart of reality. He spent decades looking for a deeper, more obviously comfortable, cause behind quantum mechanics. As for his interest in the 'Old One', Einstein frequently denied belief in a Judeo-Christian deity and instead referred to himself as an agnostic, calling the question of the existence of any presence worthy of the name 'God' "the most difficult in the world". Now there's a quote worth repeating!

2) Science is a way of thinking much more than it is a body of knowledge (Carl Sagan)

As I've mentioned before, Bill Bryson's A Short History of Nearly Everything is chock full of the results of scientific investigation but rarely stops to consider the unique aspects that drive the scientific method, or even define the limits of that methodology. Sagan's full quote is: "Science is more than a body of knowledge. It is a way of thinking; a way of sceptically interrogating the universe with a fine understanding of human fallibility. If we are not able to ask sceptical questions, to interrogate those who tell us that something is true, to be sceptical of those in authority, then, we are up for grabs for the next charlatan (political or religious) who comes ambling along."

It is interesting because it states some obvious aspects of science that are rarely discussed, such as the subjective rather than objective nature of scientific practice. As human beings, scientists bring emotions, selective memory and personal preferences into their work. In addition, the socio-cultural baggage we carry is hardly ever discussed until a paradigm shift occurs (or just plain, old-fashioned time has passed) and we recognise the idiosyncrasies and prejudices embedded in the research. Despite being subject to our frailties and the zeitgeist, these limitations, once recognised, are part of the strength of the discipline: they allow us, at least eventually, to discover their effect on what was once considered the most dispassionate branch of learning.

Sagan's repeated use of the word sceptical is also of great significance. Behind the multitude of experimental, analytical and mathematical methods in the scientific toolkit, scepticism should be the universal constant. As well as aiding the recognition of the biases mentioned above, the sceptical approach allows parsimony to take precedence over authority. It may seem a touch idealistic, especially for graduate students having to kowtow to senior faculty when seeking research positions, but open-minded young turks are vital in overcoming the conservative old guard. Einstein's contempt for authority is well-known, as he made clear by delineating unthinking respect for it as the greatest enemy of truth. I haven't read Stephen Jay Gould's Rocks of Ages: Science and Religion in the Fullness of Life, but from what I understand of his ideas, the distinction concerning authority marks a clear boundary worthy of his Non-Overlapping Magisteria.

3) The mystery of the beginning of all things is insoluble by us; and I for one must be content to remain an agnostic (Charles Darwin)

From the original publication of On the Origin of Species in 1859 to the present day, one of the most prominent attacks on natural selection by devoutly religious critics has been the improbability of life starting without divine intervention. If we eventually find microbial life on Mars - or larger organisms on Titan, Europa or Enceladus - this may turn the tide against so easy a target, but one thing is for certain: Darwin did not attempt to detail the origin of life itself. Although he stated in a letter to a fellow scientist: "But if (and Oh! What a big if!) we could conceive in some warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity etc., present that a protein compound was chemically formed ready to undergo still more complex changes", there are no such broad assumptions in his public writings.

As it turns out, Darwin may have got some of the details correct, although the 'warm little pond' is more likely to have been a deep-sea volcanic vent. But we are still far from understanding the process by which inert chemicals started to make copies of themselves. It's been more than sixty years since Harold Urey and Stanley Miller at the University of Chicago produced amino acids simply by recreating the conditions then thought to have existed on the early Earth. Despite numerous variations on this classic experiment in subsequent decades, we are little closer to comprehending the origin of life. So it was appropriate that Darwin, who was not known for flights of fancy (he once quipped "My mind seems to have become a kind of machine for grinding general laws out of large collections of facts"), kept speculation out of his strictly evidence-based publications.

Just as Darwin has been (at times, deliberately) misquoted by religious fundamentalists determined to undermine modern biology, his most vociferous disciple today, Richard Dawkins, has also been selectively quoted to weaken the scientific arguments. For example, printing just "The essence of life is statistical improbability on a colossal scale", as opposed to the full passage from The Blind Watchmaker discussing cumulative natural selection, is a cheap literary device - and one that only backfires if the reader is astute enough to investigate the original source material.

4) Anybody who has been seriously engaged in scientific work of any kind realizes that over the entrance to the gates of the temple of science are written the words: 'Ye must have faith.' (Max Planck)

Thomas Henry Huxley (a.k.a. Darwin's Bulldog) once wrote that "Science is organized common sense where many a beautiful theory was killed by an ugly fact." But that was back in the Nineteenth Century, when classical physics ruled and scientists predicted a time in the near future when they would understand all the fundamentals of the universe. In these post-modern, quantum mechanical times, uncertainty (or rather, Uncertainty) is key, and common sense goes out of the window with the likes of entanglement.

Back to Planck. It seems fairly obvious that his quote tallies closely with the physics of the past century, in which highly defined speculation and advanced mathematics join forces to develop hypotheses into theories long before hard evidence can be gleaned from the experimental method. Some of the key players in quantum physics have even furthered Copernicus' preference for beautiful mathematics over observation and experiment. Consider the one-time Lucasian Professor of Mathematics Paul Dirac's partiality for the beauty of equations over experimental results, even though he considered humanity's progress in maths to be 'feeble'. The strangeness of the sub-atomic world could be seen as a vindication of these views; another of Planck's quotes is "One must be careful, when using the word, real."

Leaving aside advanced physics, there are examples in the other scientific disciplines that confirm Planck's view. In the historical sciences, you can never know the full story. For example, fossils can provide some idea of how and when a species diverged into two daughter species, but not necessarily the where and why (vis-à-vis ecological 'islands' in the wider sense). Not that this lack of precision should be taken as doubt about validity. As evolutionary biologist Stephen Jay Gould once said, a scientific fact is something "confirmed to such a degree that it would be perverse to withhold provisional assent." So what might appear primarily to apply to one segment of the scientific endeavour can be applied across all of science.

5) Space travel is utter bilge (Richard van der Riet Woolley, Astronomer Royal)

In 1956 the then-Astronomer Royal made a prediction that was thoroughly disproved five years later by Yuri Gagarin's historic Vostok 1 flight. The quote has been used ever since as an example of how blind obedience to authority is unwise. But Woolley's complete quote was considerably more ambiguous: "It's utter bilge. I don't think anybody will ever put up enough money to do such a thing...What good would it do us? If we spent the same amount of money on preparing first-class astronomical equipment we would learn much more about the universe...It is all rather rot." He went on to say: "It would cost as much as a major war just to put a man on the moon." In fact, the latter appears to be quite accurate, and despite the nostalgia now aimed at the Apollo era, the lack of any follow-up only reinforces the notion that the race to the moon was simply the ultimate example of Cold War competition. After all, only one trained geologist ever got there!

However, I'm not trying to defend the edited version of Woolley's inopportune statement, since he appears to have been an armchair naysayer for several decades prior to his most famous quote. Back in 1936, his review of Rockets Through Space: The Dawn of Interplanetary Travel by the first president of the British Interplanetary Society (BIS) was even more pessimistic: "The whole procedure [of shooting rockets into space]...presents difficulties of so fundamental a nature, that we are forced to dismiss the notion as essentially impracticable, in spite of the author's insistent appeal to put aside prejudice and to recollect the supposed impossibility of heavier-than-air flight before it was actually accomplished." Again, it might appear in hindsight that Woolley deserves scorn, were it not for the fact that nearly everyone with some knowledge of space and aeronautics was of a similar opinion, with the opposition consisting of a few 'cranks' and the like, such as BIS members.

The moral of this story is that it is all too easy to take a partial quote, or a statement out of context, and turn a sensible, realistic attitude (for its time and place) into an easy piece of fun. A recent tweet I saw was a plaintive request to read what Richard Dawkins actually says, rather than what his opponents claim he has said. In a worst-case scenario, quote-mining makes it possible to imply the very opposite of an author's intentions. Science may not be one hundred percent provable, but it's by far the best approach we have to finding out that wonderful thing we humans call 'the truth'.

Monday 27 January 2014

An index of possibilities: defining science at a personal level

"If a little knowledge is dangerous, where is the man who has so much as to be out of danger?" - T.H. Huxley

With a sense of revitalisation following the start of a new year - and since misconceived notions of the scientific method are legion - I thought I should put my cards on the table and delineate my personal ideas of what I believe science to be.

I suppose you could say it's a self-learning exercise as much as anything. Most people consider science the least comprehensible of all disciplines: removed from everyday experience, accessible only to a select few (a.k.a. an intellectual elite), and lacking the creativity that drives so many other aspects of our lives. But hopefully the incredible popularity of British physicist Brian Cox and other photogenic scientist-cum-science-communicators is more than a passing fad and will help in the long term to break down this damaging myth. Science is part and parcel of our existence and will only increase in importance as we try to resolve such vital issues as environmental degradation whilst still providing enough food and water for an ever-increasing population (fingers very much crossed on that one, folks!)

So here goes: my interpretation of the scientific method in ten bite-size, easy-to-swallow, chunks.
  1. A large amount of science is not difficult to comprehend
    Granted, theoretical high-energy physics is one of several areas of science difficult to describe meaningfully in a few short sound bites. But amidst the more abstruse volumes aimed at a popular readership there are some gems that break the concepts down to a level that retains the essential details without resorting to advanced mathematics. Evolutionary biologist Stephen Jay Gould noted that the fear of incompetence put many intelligent enthusiasts off learning science as a leisure activity, but given the size of the popular science sections in many bookstores - there are over 840,000 books in Amazon.com's science section - there is no longer an excuse for not dipping a toe. Leaving physics aside, there are plenty of areas of science that are easy to understand, especially the 'historical' disciplines such as palaeontology (more on that later).
  2. Science is not a collection of facts but a way of exploring reality
    This is still one of the most difficult things to convey. Bill Bryson's prize-winning best seller A Short History of Nearly Everything reminds me of the genre of boys' own bumper books of true facts that was still around when I was a child: Victorian-style progress with a capital 'P', and science just a compilation of theories and facts akin to, say, history. The reality is of course rather more complicated. The scientific method is a way of examining nature via testable questions that can be resolved to a high degree of certainty using simplified models, whether through practical experiments (repeatable and under 'laboratory conditions'), computer simulations, or mathematics.
  3. Science requires creativity, not just rigour
    The stereotype of scientists as rational, unemotional beings has been broken down over the past thirty years or so, but many non-scientists still have little idea of the creative thinking that can be involved in science, particularly in cutting-edge theorising. From Einstein's thought experiments such as what it would be like to ride alongside a beam of light to the development of string theory - which has little likelihood of experimental evidence in the near future - scientists need to utilise creative thought at least as much as data collation and hard mathematics.
  4. Scientists are only human
    Scientists are far from immune to conditioned paths of thought ingrained via their social and cultural background. Rather than all scientists being equally adept at developing particular hypotheses, they are subject to the same whims and sense of normality as everyone else. In addition, individual idiosyncrasies can hinder a career. I've discussed previously how Einstein (who famously said his contempt for authority was punished by fate making him an authority himself) refused to accept some aspects of quantum theory long after his contemporaries had.
    Scientists could be said then to follow the stereotype visible elsewhere, namely that young radicals frequently evolve into old conservatives.
  5. If there's no proof, is it still science?
    Thomas Henry Huxley (a.k.a. Darwin's Bulldog) once said that the 'deepest sin against the human mind is to believe things without evidence'. Yet scientific hypotheses are sometimes formed prior to any support from nature or real-world experimentation. Although Charles Darwin had plenty of evidence from artificial selection when he wrote On the Origin of Species, the fossil record of the time was extremely patchy and he had no knowledge of Mendelian inheritance. In addition, the most prominent physicists of his day were unaware of nuclear fusion, and so their theories of how stars shine implied a sun far too young for natural selection to be the primary mechanism of evolution. By sticking to his ideas in spite of these issues, did this make Darwin a poor scientist? Or is it feasible that many key advances require a leap of faith - a term unlikely to please Richard Dawkins - due to a lack of solid, physical evidence?
  6. Are there two schools of science?
    New Zealand physicist Ernest Rutherford once disparagingly remarked something along the lines of physics being the only real science, and the other so-called scientific disciplines mere stamp collecting. I prefer to think of science as comprising historical and non-historical disciplines that only occasionally overlap. For instance, cutting-edge technological applications of physics require repeatable and falsifiable experiments - hence the deemed failure of cold fusion - whilst the likes of meteorology, evolutionary biology and palaeontology deal with innumerable historical events and/or the complexities of chaos theory, and as such are unlikely to provide duplicate circumstances for testing, or even to be reducible to simplified models that can be accurately tested.
  7. An accepted theory is not necessarily final
    A theory doesn't have to be the absolute end of a quest. For example, Newton's law of universal gravitation had to wait over two centuries for Einstein's general theory of relativity to explain the mechanism behind the phenomenon. Although quantum mechanics is the most accurate theory ever developed (in terms of the match between theory and experimental results), its root cause is yet to be understood, with wildly varying interpretations offered instead. The converse problem is that a hypothesis may fit the facts, but without an explanatory mechanism scientists may reject it as untenable. A well-known instance of this scientific conservatism (albeit for good reasons) was Alfred Wegener's hypothesis of continental drift, which only achieved orthodoxy decades later, once plate tectonics was discovered.
  8. Scientific advance rarely proceeds by eureka moments
    Science is a collaborative effort. Few scientists work in a vacuum (except astronauts, of course!) Even the greatest of 'solo' theories such as universal gravitation was on the cards during Newton's lifetime, with contemporaries such as Edmond Halley working along similar lines. Unfortunately, our predilection for simple stories with identifiable heroes means that team leaders and thesis supervisors often receive the credit when many researchers have worked towards a goal. In addition, the priority rule is based on first publication, not when a scientist formulated the idea. Therefore many theories are named after scientists who may not have been the earliest discoverer or formulator. The work of unsung researchers is frequently neglected in favour of this simplified approach that glorifies the work of one pioneer at the expense of many others.
  9. Science is restricted by the necessity of using language to describe it
    Richard Dawkins has often railed against Plato's idealism (a.k.a. essentialism), using the phrase 'the tyranny of the discontinuous mind'. I recall a primary example of this as a child, whilst contemplating a plastic model kit I had of a Neanderthal. I wondered how the human race had evolved: specifically, how could parents of a predecessor hominid species give birth to a modern human, i.e. a child of a different species? Of course, such discontinuity is nonsense, but it is surprising how frequently our minds interpret the world in terms of neat boundaries. A large part of the problem is how to describe transitional states as the norm when our language is bound up with intrinsic categories. In addition, we rely on metaphor and analogy to describe aspects of the universe that do not conform to everyday experience, the nature of quantum probability being an obvious example. As with the previous point on our innate need for heroes, we are always constructing narratives, and this restricts our ability to understand nature at a fundamental level.
  10. Science does not include a moral dimension
    Science, like nature, is neither moral nor immoral and cannot provide a framework for human behaviour. Of course, this doesn't prevent scientists from being greedy or stupid, or even just naïve: witness British evolutionary biologist J.B.S. Haldane, who recommended the use of poison gas as a war weapon due to it being more humane than conventional weapons (in terms of the ratio of deaths to temporary incapacitation). This suggests that non-scientists should be involved in the decision-making process for the funding of some science projects, especially those with clear applications in mind. But in order for this to be tenable, the public needs to be considerably more scientifically literate than at present. Otherwise the appalling scare-mongering engendered by the likes of the British tabloid press - think genetically modified crops labelled as 'Frankenstein foods' - will only make matters far worse. GM crops themselves are a perfect example of why the Hollywood approach of clear-cut heroes and villains fails with most of science. Reality is rarely black or white but requires careful analysis of the myriad shades of grey.
In conclusion, it might be said that there are as many variants of science as there are human beings. Unlike many other disciplines, admitting mistakes and ignorance is a clear strength: as Darwin stated in The Descent of Man, 'Ignorance more frequently begets confidence than does knowledge.' Above all, there are aspects of science that are part and parcel of our everyday experience, and as such we shouldn't consider it as something saved for special occasions.

Tuesday 24 December 2013

The great outdoors: getting children back to nature

With Christmas just around the corner it seems like a good time to look at the benefits of persuading children to swap their hi-tech electronic gadgets for the wonders of the great outdoors. The recently-slated Toys 'R' Us television advert that promotes their plastic junk at the expense of a 'dull and boring' nature field trip only highlights a trend whereby, as the rural population decreases, natural phenomena such as animals, weather and good, clean soil are deemed solely of interest to farmers. Some years ago, a London acquaintance who teaches English at a senior school reported that during a woodland walk - to explore nature poetry rather than nature itself - several of her female teenage students cried due to getting mud on their shoes. Just how distanced are children becoming from the world beyond their front door?
A sense of scale: humans against California redwoods

The last few decades have seen a move away from the outdoor adventures that typified my childhood: catching butterflies; building woodland dens; even exploring a derelict house. Instead, sitting in front of computers, TVs and games consoles has become prevalent, sometimes all at once. Not that this has gone unnoticed, as discussed in Richard Louv's best-selling Last Child in the Woods: Saving Our Children From Nature-Deficit Disorder. Although the phenomenon is common across the developed world, some countries fare better than others. For example, recent reports suggest New Zealand children (feeling a bit smug at this point) spend rather more time outdoors than their Australian, American or British counterparts. However, I'm sure there's room for improvement just about everywhere. There are many reasons behind the stay-at-home trend in addition to the obvious delights of being cosily tucked up with digital devices, but I believe it is more important to explore the effects this is having on our children:
  1. The most obvious problem caused by a shortage of outdoor physical activity - which, after all, is free, compared to the indoor play centres often used for children's parties - is the lack of opportunity to develop coordination and motor skills beyond the mouse or joystick. Since we've seen a generation-on-generation increase in the calories, sugar and fat in our diets, there should clearly be a corresponding increase in the time spent burning them off. Obviously this hasn't happened, and various groups such as the International Association for the Study of Obesity have tracked the post-war growth in overweight children. If you haven't seen any of the resulting graphs, they make for troubling reading...
  2. But it isn't just physical health that is affected. As a species, we are still coming to terms with urban living and the psychological problems of existence in near-identical cuboids in residential estates frequently bereft of greenery. The World Health Organization's definition of health includes mental well-being, which supports the notion that regular outdoor play confers benefits on children. I don't mean just strenuous exercise: exploring the randomness of nature - from building sand castles to snowball fights - as well as the simple joys of experiencing weather at first hand, are also important. As if to confirm the problems that a lack of balance in indoor/outdoor activities can lead to, a work colleague recently informed me that his twenty-year-old son, a business degree student, was reduced to tears when he was unable to log on to his online gaming account for a few days. Oh, for an adequate sense of perspective!
  3. Does the changing emphasis from natural to man-made environments mean we are losing a vital part of our humanity? Or are we seeing a new form of evolution for our species? The differences between nature and artifice are profound, from the seeming haphazardness of the former (although only from our viewpoint) to the tidy convenience taken for granted in the latter. Even a basic understanding of where our food comes from might serve as an educational tool to engender empathy for a planet we are so rapidly despoiling. It is very easy for children to overlook the natural wonders that still exist in even the most densely populated of nations when they primarily associate the rural environment with the exotic, non-developed locales usually favoured by natural history documentary makers.

    Viewing nature at second hand is no substitute for - literally - getting your fingers dirty, whether it is planting flowers or foodstuffs, or simply scrabbling over muddy terrain. A 2010 survey conducted in the UK indicated that between one quarter and one half of British children lack basic knowledge concerning familiar native and introduced species such as horse chestnut trees and grey squirrels. Not that I'm convinced an appreciation of the facts would lead to more environmental awareness; after all, how many times has the 'closer to nature' sustainability of pre-industrial societies been shown to be a myth? But considering, for example, the enormous amount of bought food thrown away uneaten (perhaps over 40% in the USA), surely any understanding of the complex cycles within a far from limitless ecosystem may engender some change in attitude towards reduce, reuse and recycle? As evolutionary biologist Stephen Jay Gould once said, we will not fight to save what we do not love.
  4. Further to the last point, knowledge as a safety net might come in handy, should the need arise. There's an old adage that even the most 'civilised' of societies is only nine missed meals away from anarchy, as the citizens of New Orleans learnt all too well in the wake of Hurricane Katrina in 2005. Considering just how much food manufacturers rely on oil for everything from transport to packaging (did you know North Sea prawns are flown on a 12,000 mile round trip to be cleaned and de-shelled?) it doesn't just have to be a natural disaster to generate such chaos. In October 2011 a leak in the Maui gas pipeline here in New Zealand led for a few days to empty bread shelves nationwide, highlighting the fragility of our infrastructure.

    A 2008 UK report concluded that British food retailers would exhaust their stocks in just three days in the event of a Hurricane Katrina-scale emergency, suggesting that those who follow chef and forager Hugh Fearnley-Whittingstall or adventurer/survivalist Bear Grylls would fare best. I'm not suggesting children should be taught to distinguish edible from poisonous fungi, but considering the potential dangers of even cultivated food crops (did you know that potatoes turning green may be a sign of the poison solanine?), any knowledge of foraging and food preparation may prove useful as well as fun.
  5. Encouraging children to explore outside is as good a method as any to beget a new generation of biologists, ecologists and their ilk. Ironically, Toys 'R' Us list over 370 items in the science and discovery section of their online catalogue. Indeed, their advert includes several seconds' footage of a boy looking through the eyepiece of a small reflecting telescope labelled 'science' - although judging by the angle, the telescope is pointing at the ground! As I've explored previously, doing practical science seems a far better way to introduce young children to the discipline than mere passive viewing or reading. It can also demonstrate that - with several exceptions, such as high-energy physics - many of the basic structures of scientific procedure and knowledge are well within the grasp of non-scientists. (Perceptions are hard to shift: I recently heard a law graduate declare she wasn't sure she would be able to understand this blog, as science is of course 'very difficult'!)

    Each one of the above alone would be reason enough to encourage children to spend more time outside, but taken together they suggest there are likely to be severe repercussions across many aspects of society if the adults of tomorrow don't get enough fresh air today. It may sound like something out of a Boys' Own Journal from the era of the British Empire, but there's something to be said for the simpler pleasures in life. I know I'd rather go for a forest walk or rock pooling than play Grand Theft Auto 5 any day...

Wednesday 26 September 2012

Moulds, mildew and mushrooms: living cheek by jowl with fungi

There is a form of life that probably exists in every house, office and workplace on the planet (operating theatres and clinical laboratories largely excepted) that is so ubiquitous that it goes chiefly unnoticed. The organisms are stationary yet spread rapidly, are composed of numerous species - some of which include common foodstuffs - and are neither animal nor plant. In other words they belong to the third great kingdom of macroscopic life: fungi. But what are these poor relations of the other two groups, seen as both friend and foe?

Having moved last year from a one-hundred-and-thirty-year-old, centrally-heated and double-glazed terrace house in the UK to a single-glazed, largely unheated detached house less than a quarter that age in New Zealand, I've been able to conduct a comparative domestic mycology experiment. Without sounding too much like a mould-and-spores collector out of a P.G. Wodehouse story, the subject has proved interesting and reasonably conclusive: a family of four moving to a climate on average four degrees warmer but with twice the rainfall has not substantially changed the amount or placement of mould in the home; if anything, it has slightly decreased. But then the amount of bathing, laundry and pans on the hob hasn't changed either, so perhaps that's not too surprising. The more humid climate has been tempered by having more windows and doors to open, not to mention being able to dry more of the laundry outside. Mind you, one big plus of the move has been not having to use electric dehumidifiers or salt-crystal moisture traps, so a few degrees of warmth seems to be making a difference after all.

There is a wide range of dubious stories, old wives' tales and assorted urban myths regarding fungi, no doubt due to a lack of knowledge: after all, if you ask most people about the kingdom they will probably think of edible mushrooms followed by poisonous toadstools. Yet of a postulated 1.5 million species of fungi, only about 70,000 have so far been described. Fungi are fundamentally closer to animals than to plants, but since they live off dead organic matter (and some inorganic substances too), and thrive in darkness because, unlike plants, they do not photosynthesise, their reputation is more than a little sinister. The fact that they will grow on just about any damp surface - hence the kitchen and bathroom mould populations - reinforces their image as unwelcome visitors. So just how bad are they?

Firstly, fungi play a vital role in the nitrogen cycle, supplying nutrients to the roots of vegetation. The familiar fruiting bodies are, as Richard Dawkins describes them, pretty much the tip of the iceberg compared to the enormous network of fungal material under the soil. Even so, fungi are given short shrift in popular natural history and science books: for example, they warrant only five pages in Richard Fortey's Life: An Unauthorised Biography, whilst Bill Bryson's A Short History of Nearly Everything spends much of its four pages on the subject on the lack of knowledge about the number of species. Of my five Stephen Jay Gould volumes, totalling over two thousand pages, there are just a few short paragraphs. And at least one of my books even refers to fungi as a simple form of plant life! Yet we rely on fungi for so many of our staple foodstuffs; it's just that they are so well hidden we don't consider them if they're not labelled as mushrooms. But if you eat leavened bread, yoghurt, cheese or soy sauce, or drink beer or wine, fungi such as yeast will have been involved somewhere along the line. On another tack, fungi drive yet another nail into the coffin of human uniqueness, since both ants and termites cultivate them: so much for Man the Farmer.

At this point I could start listing their uses in health cures, from traditional Chinese medicine to penicillin, but my intention has been to look at fungi in the home. Anyone who has seen the fantastic BBC television series Planet Earth might recall the parasitic attack of the genus Cordyceps upon insects, and our much larger species is far from immune to attack either. Minor ailments include athlete's foot and ringworm, whilst more serious conditions such as candidemia, arising from the common Candida yeast, can be life-threatening. The spores are so small that there is no way to prevent them entering buildings, with commonly found species including Cladosporium, Aspergillus, and our old friend Penicillium.

Once they have a presence, moulds and mildew are almost impossible to eradicate. They are extremely resilient: the poison in Amanita species such as the death cap is not destroyed by heat, and an increasingly well-known example is the toxin of the cereal-infecting ergot, capable of surviving the bread-making process, even the baking. Indeed, ergot has seemingly become a major star of the fungal world, being used in pharmaceuticals at the same time as being nominated the culprit behind many a historical riddle, from the Salem witch trials to the abandonment of the Mary Celeste. Again, lack of knowledge of much of the fungal world means just about anything can be claimed, with only dubious evidence to support it.

Varieties of domestic mould
A rogue's gallery of household fungi

Although we are vulnerable to many forms of fungus, an at least equally wide range attacks our buildings. Whether the material is plaster, timber or fabric, moulds and mildew can rapidly spread across most surfaces containing even a hint of dampness, often smelt before they are seen. At the very least, occupants of a heavily infested property can suffer allergies, sinus trouble and breathing difficulties. As an asthmatic I should perhaps be more concerned, but other than keeping windows and doors open as much as possible there doesn't seem much that can be done to counter these diminutive foes. As it is, vinegar is a favourite weapon, particularly on shower curtains and the children's plastic bath toys. Even so, constant vigilance is the watchword, as can be seen by the assorted examples from around the house above. For any mycophobes wondering how large fungi can get indoors: I once worked on a feature film shot in a dilapidated Edwardian hotel in central London that was about to be demolished; its top floor, saturated with damp thanks to holes in the roof, sported fungal growths the size of dinner plates.

So whether you've played with puffballs or like to dine on truffles, remember: fungi are a fundamental element of our homes, our diet and, if we're unlucky, us too. Humble they may seem, but even in our age of advanced technology there's just no escape...

Monday 27 February 2012

Predators vs poisons: the ups and downs of biological control

Ever since Darwin, islands and island groups have been known as prominent natural laboratories of evolution. Their isolation leads to radiation of species from a single common ancestor, the finches and giant tortoises of the Galapagos Islands providing a classic example. But a small population restricted in range also means that many island species are extremely susceptible to external factors, rapid extinction being the ultimate result - as can be seen from the dodo onwards. Living as I do on an island (New Zealand counts within the terms of this discussion, as I will explain) has led me to explore what a foreign invasion can do to a local population.

Either through direct hunting or the actions of imported Polynesian dogs and rats, almost half the native vertebrate fauna was wiped out within a few centuries of humans arriving in New Zealand; so much for the myth of pre-technological tribes living in ecological harmony! But the deliberate introduction of a new species to prey on another is now a much-practised and scientifically-supported technique. One of the late Stephen Jay Gould's most moving essays concerned the plight of the Partula genus of snails on the Society Islands of Polynesia. The story starts with the introduction of edible Achatina snails to the islands as food, only for some to escape and become an agricultural pest. In 1977 the cannibal wolfsnail Euglandina was brought in as a method of biological control, the idea being that it would eat the crop munchers. Unfortunately, the latest wave of immigrant gastropods ignored the Achatina and went after the local species instead. The results were devastating: in little more than a decade, many species of Partula had become extinct in their native habitat.

(As an interesting aside, the hero of Gould's Partula vs. Euglandina story is gastropod biologist Henry Crampton, whose half-century of research into the genus is presumably no longer relevant in light of the decimation of so many species. Yet Crampton, born in 1875, worked in typical Victorian quantitative fashion and during a single field trip managed to collect 116,000 specimens from just one island, Moorea. I have no idea how many individual snails existed at the time, but removing such an enormous number from the breeding population in the name of scientific research can hardly have helped the genus. I wonder whether comparable numbers of organisms are still being collected by researchers today: somehow I doubt it!)

The Society Islands are not the only place where the deliberate introduction of Euglandina has led to the unintended devastation of indigenous snail species: Hawaii's native Achatinella and Bermuda's Poecilozonites have suffered a similar fate to Partula. Gould used the example of the Partula as a passionate plea (invoking 'genocide' and 'wholesale slaughter') to prevent further inept biological control programmes, but do these examples justify banning the method entirely?

The impetus for this post came from a recent visit to my local wetlands reserve, when my daughters played junior field biologists and netted small fish in order to examine them in a portable environment container (alright, a jam jar) - before of course returning them to the stream alive. The main fish species they caught was Gambusia, which originates from the Gulf of Mexico but was introduced to New Zealand in the 1930s as a predator of mosquito larvae. However, akin to Euglandina it has had a severe impact on many other fish species and is now rightly considered a pest. In fact, it's even illegal to keep them in a home aquarium, presumably just in case you accidentally aid their dispersal. Australia has also tried introducing Gambusia to control the mosquito population, but there is little data to show it works there either. The latter nation also provides a good illustration of environmental degradation via second- and third-hand problems originating from deliberate introduction. For example, the cane toad was imported to control several previously introduced beetle species but instead rapidly decimated native fauna, including the amphibians and reptiles further up the food chain that were poisoned by the toads' toxins.

Gambusia: the aggressive mosquito fish
Gambusia affinis: a big problem in a small fish

This isn't to say that there haven't been major successes with the technique. An early example concerns a small insect called the cottony cushion scale, which began to have a major impact on citrus farming in late Nineteenth Century California. It was brought under control by the introduction of several Australian fly and beetle species and without any obvious collateral damage, as the military might phrase it. But considering the extinction history of New Zealand since humans arrived, I've been amazed to discover just how many organisms have been deliberately introduced as part of biological control schemes, many in the past quarter century. For instance, twenty-one insect and mite species have been brought over to stem the unrestrained growth of weeds such as ragwort and gorse, although the rates of success have been extremely mixed (Old man's beard proving a complete failure, for example). As for controlling unwelcome fauna in New Zealand, a recent promising research programme involves the modification of parasites that could inhibit possum fertility. This is something of a necessity considering possums (first imported from Australia in the 1830s and now numbering around sixty million) are prominent bovine tuberculosis vectors.

Stephen Jay Gould was a well-known promoter of the importance of contingency within evolution: the idea that a re-run of any specific branch of life's history would most likely lead to a different outcome. So the question has to be asked: how can biologists test the effect of an outsider species on an ecosystem under laboratory conditions, when only time will show whether the outcome is as intended? No amount of research will show whether an unknown factor might, at an unspecified time during or after the eradication programme, have a negative impact. It could have been argued in the past that the relative cheapness of biological control compared to alternatives such as poisons or chemical sprays made it the preferable option. However, I imagine the initial costs, involving lengthy testing cycles, mean that it is no longer a cut-price alternative.

Considering the recent developments in genetic modification (GM), I wonder whether researchers have been looking into ways of minimising unforeseen dangers? For example, what about the possibility of tailoring the lifespan of the control organism? In other words, once the original invasive species has been eliminated, the predator would also rapidly die out (perhaps by something as simple as being unable to switch to an alternative food source, of which there are already many examples in nature). Or does that sound too much like the replicant-designing Dr Eldon Tyrell in Blade Runner?

One promising recent use of GM organisms as a biological control method has been part of the fight to eradicate disease-carrying (female) mosquitoes. Any female offspring of the genetically altered male mosquitoes are incapable of flight and thus are unable to infect humans or indeed reproduce. However, following extremely positive cage-based testing in Mexico, researchers appear to have got carried away with their achievements and before you could say 'peer review' they conducted assessments directly in the wild in Malaysia, where I assume there is little GM regulation or public consultation. Therefore test results from one location were extrapolated to another with a very different biota, without regard for knock-on effects such as what unwelcome species might come out of the woodwork to fill the gap in the ecosystem. When the stakes are so high, the sheer audacity of the scientists involved appears breathtaking. Like Dr Tyrell, we play god at our peril; let us hope we don't come to an equally sticky end at the hands of our creation...

Monday 30 January 2012

Sell-by date: are old science books still worth reading?

As an outsider to the world of science I've recently been struck by an apparent dichotomy that I don't think I've ever heard discussed, namely that if science is believed by non-practitioners to work on the basis of new theories replacing earlier ones, then are out-of-date popular science (as opposed to text) books a disservice, if not positive danger, to the field?

I recently read three science books written for a popular audience in succession, the contrast between them serving as the inspiration for this post. The most recently published was Susan Conner and Linda Kitchen's Science's Most Wanted: the top 10 book of outrageous innovators, deadly disasters, and shocking discoveries (2002). Yes, it sounds pretty tacky, but I hereby protest that I wanted to read it as much to find out about the authors and their intended audience as the subject material itself. Although only a decade old, the book is already out of date, in much the same way that a list of the top ten highest-grossing films would be. In this case the book lists different aspects of the scientific method and those involved, looking at issues ranging from collaborative couples (e.g. the Curies) to prominent examples of scientific fraud such as the Chinese fake feathered dinosaur fossil Archaeoraptor.

To some extent the book is a very poor example of the popular science genre, since I found quite a few easily verifiable factual errors. Even so, it proved to be an excellent illustration of how the transmission of knowledge can suffer in a rapidly-changing, pop-cultural society. Whilst the obsession with novelty and the associated transience of ideas may appear to fit the principle that a more recent scientific theory always replaces an earlier one, this is too restrictive a definition of science. The discipline doesn't hold with novelty for the sake of it, nor does an old theory that is largely superseded by a later one become worthless. A good example of the latter is the interrelationship between Newton's classical Law of Gravitation (first published in 1687) and Einstein's General Relativity (1916), with the former still used most of the time (for calculating space probe trajectories, for instance).
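As a back-of-the-envelope illustration of why Newton still earns his keep (the constants below are standard textbook values, nothing from the books under discussion), a few lines of Python compare the Newtonian gravitational acceleration at Earth's distance from the Sun with the rough dimensionless scale of the general-relativistic correction, GM/(rc²):

```python
# Newtonian gravity vs the scale of Einstein's correction, at 1 AU from the Sun.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the Sun, kg
C = 2.998e8        # speed of light, m/s
R = 1.496e11       # one astronomical unit, m

# Newton's inverse-square law: a = GM / r^2
newtonian_accel = G * M_SUN / R**2

# Order-of-magnitude size of the general-relativistic correction,
# expressed as the dimensionless ratio GM / (r c^2)
relativistic_fraction = G * M_SUN / (R * C**2)

print(f"Newtonian acceleration at 1 AU: {newtonian_accel:.2e} m/s^2")
print(f"Relative size of GR correction: {relativistic_fraction:.1e}")
```

The correction comes out at around one part in a hundred million, which is why mission planners can usually get away with Newton and reserve relativity for cases such as Mercury's perihelion or ultra-precise timing.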

The second of the three books discusses several different variants of scientific practice, a far cry from New Zealand-born physicist Ernest Rutherford's crude summary that "physics is the only real science. The rest are just stamp collecting." Stephen Jay Gould's first collection of essays, Ever Since Darwin (1977), contains his usual potpourri of scientific theories, observations and historical research. These range from simple corrections of 'facts' – e.g. Darwin was not the original naturalist on HMS Beagle – to why scientific heresy can serve important purposes (consider the much-snubbed Alfred Wegener, who promoted a precursor to plate tectonics long before the evidence was in) through to a warning of how literary flair can promote poor or even pseudo-science to an unwary public (in this instance, Immanuel Velikovsky's now largely forgotten attempts to link Biblical events to interplanetary catastrophes).

Interestingly enough, the latter element surfaced later in Gould's own career, when his 1989 exposition of the Middle Cambrian Burgess Shale fossils, Wonderful Life, was attacked by Richard Dawkins with the exclamation that he wished Gould could think as clearly as he could write! In this particular instance, the attack was part of a wider critique of Gould's theories of evolutionary mechanisms rather than material being superseded by new factual evidence. However, if I'm a typical member of the lay readership, the account of the weird and wonderful creatures largely outweighs the professional arguments. Wonderful Life is still a great read as descriptive natural history and I suppose serves as a reminder that however authoritative the writer, don't accept everything at face value. But then that's a good lesson in all subjects!

But back to Ever Since Darwin. I was surprised by just how much of the factual material had dated in fields as disparate as palaeontology and planetary exploration over the past thirty-five years. As an example, Essay 24 promotes the idea that the geophysical composition of a planetary body is solely reliant on the body's size, a hypothesis since firmly negated by space probe data. In contrast, it is the historical material that still shines as relevant and in the generic sense 'true'. I've mentioned before (link) that Bill Bryson's bestseller A Short History of Nearly Everything promotes the idea that science is a corpus of up-to-date knowledge, not a theoretical framework and methodology of experimental procedures. But by so short-changing science, Bryson's attitude could promote the idea that all old material is essentially worthless. Again, the love of novelty, now so ingrained in Western societies, can cause public confusion in the multi-layered discipline known as science.

Of course, this doesn't mean that everything once considered a classic still has great worth, any more than every single building over half a century old is worthy of a preservation order. But just possibly (depending on your level of post-modernism and/or pessimism) any science book that stands the test of time does so because it contains self-evident truths. The final book of the three is a perfect example of this: Charles Darwin's On the Origin of Species, in this case the first edition of 1859. The book shows that Darwin's genius lay in tying together apparently disparate precursors to formulate his theory; in other words, natural selection was already on the thought horizon (as shown by Alfred Russel Wallace's 1858 manuscript). In addition, the distance between publication and today gives us an interesting insight into the scientist as human being, with all the cultural and linguistic baggage we rarely notice in our contemporaries. In some ways Darwin was very much a man of his time, attempting to soften the non-moralistic side to his theory by subtly suggesting that new can equal better, i.e. a form of progressive evolution. For example, he describes extinct South American megafauna as 'anomalous monsters', yet our overly familiar modern horse only survived via Eurasian migration, dying out completely in its native Americas. We can readily assume that had the likes of Toxodon survived but not Equus, the horse would seem equally 'anomalous' today.

Next, Darwin had limited fossil evidence to support him, whilst Nineteenth Century physics negated natural selection by not allowing enough time for the theory to have effect. Of course, if the reader knows what has been discovered in the same field since, they can begin to get an idea of the author's thought processes and indeed world view, and just how comparatively little data he had to work with. For example, Darwin speculates about variations in the sterility of hybrids, whilst we now understand that most mules are sterile because of chromosomal mismatch. Yet this didn't prevent the majority of mid-Victorian biologists from accepting natural selection, an indication that science can be responsive to ideas with only circumstantial evidence; this is a very long way indeed from the notion of an assemblage of clear-cut facts laid out in logical succession.

I think it was the physicist and writer Alan Lightman who said: "Science is an ideal but the application of science is subject to the psychological complexities of the humans who practice it." Old science books may frequently be dated from a professional viewpoint but can still prove useful to the layman for at least the following reasons: understanding the personalities, mind-sets and modes of thought of earlier generations; observing how theories within a discipline have evolved as both external evidence and fashionable ideas change; and the realisation that science as a method of understanding the universe is utterly different from all other aspects of humanity. Of course, this is always supposing that the purple prose doesn’t obscure a multitude of scientific sins...

Sunday 17 January 2010

Shall I compare thee to a charming quark? When mitochondria meets metaphor

Many years ago whilst holidaying in Cyprus I experienced an event commonplace to our ancestors but increasingly rare to us light-polluted urbanites today. Sitting outside one evening, a spectacular glow appeared over a nearby hill, slowly gaining a floodlight intensity until the full moon rose, casting shadows and obscuring the Milky Way. Small wonder previous centuries have written so much about the beauty of the "starry realm"; but can poetry survive when, having discovered the secrets of the stars, we have ironically lost touch with them as a sensory experience? As the late Richard Feynman asked, "do I see less or more?" His answer, proving him a worthy successor to Emily Dickinson and Robert Frost, encapsulates the view that knowledge gained need not lessen the wonder: "stuck on this carousel my little eye can catch one million year old light..."

But then the night sky (and the natural world in general) is an easy poetic target compared to other aspects of science. Yet historical examples of British scientist-poets abound, from Charles Darwin's grandfather Erasmus, whose verse included copious footnotes explaining the ideas within, to chemist Humphry Davy, physicist James Clerk Maxwell, and more recently biologist Julian Huxley. You might ask who are today's equivalents - who writes paeans to messenger RNA or odes to nuclear fusion? There are poets who exchanged science for versifying (David Morley) and scientists who edit poetry (Jocelyn Bell Burnell), but few who simultaneously practise both sides of C.P. Snow's infamous The Two Cultures. Apart from several astronomy compilations (featuring verse largely by non-astronomers) there are hardly any recent science-orientated volumes aimed at adults except for James Muirden's The Cosmic Verses: A Rhyming History of the Universe. Informative as it is, Muirden's charming couplets hardly push the boundaries of poetry or science exposition.

One obvious (and therefore not necessarily correct) reason for the lack of contemporary science poetry is that the complexity of modern theories and terminology create a prohibitive first hurdle: the likes of phagocytosis and inhomogeneous magnetic fields hardly trip off the tongue. However, ecologist and 'lapsed physicist' Mario Petrucci, a rare example of a contemporary scientist with an actively-employed poetic gift, argues that science-inspired poetry shouldn't rely on technological name-dropping but look at the defining methodologies. He provides an exquisite example via a (prose) description of the physiological response to listening to verse, which he defines as the "subliminal scent of aroused communication".

Then again, modes of writing have changed dramatically over the past century, with the florid, highfalutin prose of the Victorians replaced by a detached, matter-of-fact style developed to avoid ambiguity. Thomas Henry Huxley (Julian's grandfather) was, like many of his contemporaries, capable of prose that to the modern ear is to all intents and purposes poetry: "...intellectually we stand on an islet in the midst of an illimitable ocean of inexplicability. Our business in every generation is to reclaim a little more land..." In contrast, today's technical papers achieve universal comprehension by austerity of language. This is of course the complete antithesis of poetry, wherein each reader brings their own personal history to enhance imagery and meaning.

At a practical level, does the constant 21st century babble of communications and background noise (not just aural) deprive would-be poets of time to reflect? This implies a somewhat rose-tinted view of earlier times, even though the virtual disappearance of a Classics-based education system has certainly divested us of the safety net of enduring metaphors. In addition, as scientists become ever more specialised in narrower fields (not to mention polymathy seemingly being frowned upon), is there a fear amongst practitioners and publishers alike that the profession has little worth versifying? Even the romantic image of the stargazer spending their nights in a chilly dome has seemingly been replaced by observation via computer screen.

Despite there probably being more books arguing the relationship between arts and sciences than there are volumes of science-themed poetry (from Mary Midgley versus Richard Dawkins to Stephen Jay Gould's attack on E.O. Wilson's definition of consilience), there is plenty for scientist-poets, or just writers with scientific knowledge, to write about. The late Nineteenth Century arrogance that the quest for knowledge was nearing its end has been superseded by the view that there may not even be any final answers to life, the universe and everything. Far from being a list of dry facts and equations, the methods of science demand creativity to achieve paradigm shifts, as anyone with an understanding of Einstein's thought experiments knows. Other natural philosophers have achieved major breakthroughs via aesthetic considerations, such as harmonic proportions for Johannes Kepler, symmetry for Clerk Maxwell, and patterns and linguistic analogies for Mendeleyev. As theoretical physicist Lee Smolin has stated, his discipline is based around an aesthetic mode of working, fashioning constructs that capture some essence of understanding about reality. Are theories such as loop quantum gravity that different from poetic metaphors? After all, even the subatomic particle we call a quark was named after the sound of ducks, and only later linked to the rhyme in Finnegans Wake.

But then there is the difficulty of finding a universal definition for poetry anyway. The title of Michael Guillen's Five Equations that Changed the World: The Power and Poetry of Mathematics suggests an aesthetic form on a par with verse. If we can accept a wider meaning then perhaps there is a solution as to where science poetry is still to be found: hidden in the mellifluous prose of popularisers. The poetic style of Carl Sagan and his successors can clearly be traced to Loren Eiseley, thence to the pre-war British polymath James Jeans, who in turn was not so far removed from T.H. Huxley at his most rhapsodical. In addition to his writing, Sagan was also capable of poetic gestures that clearly represent our multi-media age's continuation of Erasmus Darwin's verses. When Voyager 1 had passed the orbits of Neptune and Pluto, Sagan persuaded NASA to turn the probe's cameras back towards the Sun and make a family portrait of the Solar System, including our very own pale blue dot. Surely this is a superlative example of the amalgamation of science and poetry? And as to the future, the English author Eden Phillpotts once wrote: "The universe is full of magical things, patiently waiting for our wits to grow sharper."


Saturday 9 January 2010

Quis custodiet ipsos custodes? (Or who validates popular science books?)

Gandhi once said "learn as if you were to live forever", but for the non-scientist interested in gaining accurate scientific knowledge this can prove rather tricky. Several options are available in the UK, most with drawbacks: there are few 'casual' part-time adult science courses (including the Open University); the World Wide Web is useful but inhibits organised, cohesive learning and there's always the danger of being taken in by some complete twaddle; whilst television documentaries and periodicals rarely delve into enough detail. This only leaves the ever-expanding genre of popular science books, with the best examples often including the false starts and failed hypotheses that make science so interesting.

However, there is a problem: if the book includes mistakes then the general reader is unlikely to know any better. I'm not talking about the usual spelling typos but more serious flaws concerning incorrect facts or, worse still, errors of emphasis and misleading information. Admittedly the first category can be quite fun in a 'spot the mistake' sort of way: to have the particle physicists Brian Cox and Jeff Forshaw inform you that there were Muslims in the second century AD, as they do in Why does E=mc2? (and why should we care?), helps to make the authors a bit more human. After all, why should a physicist also have good historical knowledge? Then again, this is the sort of fact that is extremely easy to verify, so why wasn't it checked in the editing process? You expect Dan Brown's novels to be riddled with scientific errors, but are popular science book editors blind to non-science topics?

Since the above is an historical error many readers may be aware of the mistake, but the general public will often not be aware of inaccuracies relating to scientific facts and theories. Good examples of the latter can be found in Bill Bryson's A Short History of Nearly Everything, the bestselling popular science book in the UK in 2005. As a non-scientist Bryson admits that it's likely to be full of "inky embarrassments" and he's not wrong. For instance, he makes several references to the DNA base thymine but at one point calls it thiamine, which is actually vitamin B1. However, since Bryson is presenting themed chapters of facts (his vision of science rather than any explanation of methods) these are fairly minor issues and don't markedly detract from the substance of the book.

So far that might seem a bit nitpicky, but there are other works containing more fundamental flaws that give a wholly inaccurate description of a scientific technique. My favourite error of this sort can be found in the late Stephen Jay Gould's Questioning the Millennium and is a howler that continues to astonish me more than a decade after first reading. Gould correctly states that raw radiocarbon dates are expressed as years BP (Before Present) but then posits that this 'present' relates directly to the year of publication of the work containing that date. In other words, if you read a book published in AD 2010 that refers to the date 1010 BP, the latter year is equivalent to AD 1000; whereas for a book published in AD 2000, 1010 BP would equate to AD 990. It's astounding that Gould, who as a palaeontologist presumably had some understanding of other radiometric dating methods, could believe such a system would be workable. The 'present' in the term BP was fixed at AD 1950 decades before Gould's book was published, so it doubly astonishes that no-one questioned his definition. You have to ask whether his editors were so in awe that they were afraid to query his text, or whether his prominence gave him copy-editing control of his own material. A mistake of this sort in a discipline so close to Gould's area of expertise can only engender doubt as to the veracity of his other information.
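The fixed-epoch convention is simple enough to express in a couple of lines of Python; this is just an illustrative sketch of the AD 1950 baseline described above (the function name is my own, not anything from a standard library):

```python
# Radiocarbon dates are quoted as years BP ("Before Present"), where
# "present" is fixed by convention at AD 1950. It does not drift with a
# book's publication date, contrary to Gould's description.

BP_EPOCH = 1950  # the fixed reference year for all raw BP dates

def bp_to_calendar(years_bp):
    """Convert a raw BP date to a calendar year (AD)."""
    return BP_EPOCH - years_bp

# 1010 BP is always AD 940, whether quoted in a book published
# in 2000, 2010 or any other year.
print(bp_to_calendar(1010))  # 940
```

Under Gould's floating-present scheme, by contrast, the same printed date would shift meaning every year, which is precisely why it could never work.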

A more dangerous type of error is when the author misleads his readership through personal bias presented as fact. This is particularly important in books dealing with recent scientific developments as there will be few alternative sources for the public to glean the information from. In turn, this highlights the difference between professionals and their peer-reviewed papers and the popularisations available to the rest of us. There is an ever-increasing library of popular books discussing superstrings and M-theory but most make the same mistake of promoting this highly speculative branch of physics not just as the leading contender in the search for a unified field theory, but as the only option. Of course a hypothesis that cannot be experimentally verified is not exactly following a central tenet of science anyway. There has been discussion in recent years of a string theory Mafia so perhaps this is only a natural extension into print; nonetheless it is worrying to see a largely mathematical framework given so much premature attention. I suppose only time will tell...

It also appears that some publishers will accept material from senior but non-mainstream scientists on the basis of the scientist's stature, even if their hypotheses border on pseudoscience. The late Fred Hoyle was a good example of a prominent scientist with a penchant for quirky (some might say bizarre) ideas such as panspermia, who, although unfairly ignored by the Nobel Committee, seems to have had few problems getting his theories into print. Another example is Elaine Morgan, who over nearly four decades has written a string of volumes promoting the aquatic ape hypothesis despite the lack of supporting evidence in the ever-increasing fossil record.

But whereas Hoyle and Morgan's ideas have long been viewed as off the beaten track, there are more conventional figures whose popular accounts can be extremely misleading, particularly if they promote the writer's pet ideas over the accepted norm. Stephen Jay Gould himself frequently came in for criticism for overemphasising various evolutionary mechanisms at the expense of natural selection, yet his peers' viewpoints are never discussed in his popular writings. Another problem can be seen in Bryan Sykes's The Seven Daughters of Eve, which received enormous publicity on publication as it gratifies our desire to understand human origins. However, the book includes a jumbled combination of extreme speculation and pure fiction, tailored in such a way as to maximise interest at the expense of clarification. Some critics have argued that the reason behind Sykes's approach is to promote his laboratory's mitochondrial DNA test, capable of revealing which 'daughter' the customer is descended from. Scientists have to make a living like everyone else, but this commercially-driven example perhaps sums up the old adage that you should never believe everything you read. The Catch-22 of course is that unless you understand enough of the subject beforehand, how will you know if a popular science book contains errors?

A final example does indeed suggest that some science books aimed at a general audience prove to be just too complex for comprehensive editing by anyone other than the author. I am talking about Roger Penrose's The Road to Reality: A Complete Guide to the Laws of the Universe. At over one thousand pages this great tome is marketed with the sentence "No particular mathematical knowledge on the part of the reader is assumed", yet I wonder whether the cover blurb writer had their tongue firmly in their cheek? It is supposed to have taken Penrose eight years to write and from my occasional flick-throughs in bookshops I can see it might take me that long to read, never mind understand. I must confess all those equations haven't really tempted me yet, at least not until I have taken a couple of Maths degrees...