Saturday 9 January 2010

Quis custodiet ipsos custodes? (Or who validates popular science books?)

Gandhi once said "learn as if you were to live forever", but for the non-scientist interested in gaining accurate scientific knowledge this can prove rather tricky. Several options are available in the UK, most with drawbacks: 'casual' part-time adult science courses (the Open University included) are thin on the ground; the World Wide Web is useful but works against organised, cohesive learning, and there's always the danger of being taken in by complete twaddle; whilst television documentaries and periodicals rarely delve into enough detail. That leaves the ever-expanding genre of popular science books, the best examples of which often include the false starts and failed hypotheses that make science so interesting.

However, there is a problem: if a book includes mistakes, the general reader is unlikely to know any better. I'm not talking about the usual spelling typos but about more serious flaws: incorrect facts or, worse still, errors of emphasis and misleading information. Admittedly the first category can be quite fun in a 'spot the mistake' sort of way: to have the particle physicists Brian Cox and Jeff Forshaw inform you that there were Muslims in the second century AD, as they do in Why Does E=mc²? (And Why Should We Care?), helps to make the authors seem a bit more human. After all, why should a physicist also have good historical knowledge? Then again, this is the sort of fact that is extremely easy to verify, so why wasn't it caught in the editing process? You expect Dan Brown's novels to be riddled with scientific errors, but are popular science book editors blind to non-science topics?

Since the above is an historical error, many readers will spot the mistake; but the general public will often not be aware of inaccuracies relating to scientific facts and theories. Good examples of the latter can be found in Bill Bryson's A Short History of Nearly Everything, the bestselling popular science book in the UK in 2005. As a non-scientist, Bryson admits that it's likely to be full of "inky embarrassments", and he's not wrong. For instance, he makes several references to the DNA base thymine but at one point calls it thiamine, which is actually vitamin B1. However, since Bryson is presenting themed chapters of facts (his vision of science rather than any explanation of its methods), these are fairly minor issues and don't markedly detract from the substance of the book.

So far that might seem a bit nitpicky, but there are other works containing more fundamental flaws that give a wholly inaccurate description of a scientific technique. My favourite error of this sort can be found in the late Stephen Jay Gould's Questioning the Millennium and is a howler that continues to astonish me more than a decade after I first read it. Gould correctly states that raw radiocarbon dates are expressed as years BP (Before Present) but then posits that this 'present' relates directly to the year of publication of the work containing the date. In other words, if you read a book published in AD 2010 that refers to the date 1010 BP, the latter year is equivalent to AD 1000; whereas for a book published in AD 2000, 1010 BP would equate to AD 990. It's astounding that Gould, who as a palaeontologist presumably had some understanding of other radiometric dating methods, could believe such a system would be workable. The 'present' in the term BP was fixed at AD 1950 decades before Gould's book was published, so it is doubly astonishing that no-one questioned his definition. You have to wonder whether his editors were so in awe that they were afraid to query his text, or whether his prominence gave him copy-editing control over his own material. A mistake of this sort in a discipline so close to Gould's area of expertise can only engender doubt as to the veracity of his other information.
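For anyone unfamiliar with the convention, the conversion is simple arithmetic once you know the fixed datum. Here's a minimal sketch contrasting the two readings (the function names are my own, purely for illustration):

```python
# Radiocarbon 'Before Present' dates are counted back from a fixed
# datum of AD 1950, regardless of when the book quoting them appeared.
BP_DATUM = 1950

def bp_to_ad(bp_years: int) -> int:
    """Convert a raw radiocarbon BP date to an AD calendar year."""
    return BP_DATUM - bp_years

def gould_bp_to_ad(bp_years: int, publication_year: int) -> int:
    """Gould's (incorrect) reading: the 'present' is the publication year."""
    return publication_year - bp_years

print(bp_to_ad(1010))              # 940  - the same answer in any book
print(gould_bp_to_ad(1010, 2010))  # 1000 - drifts with the publication date
print(gould_bp_to_ad(1010, 2000))  # 990
```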

A more dangerous type of error is when an author misleads the readership through personal bias presented as fact. This is particularly important in books dealing with recent scientific developments, as there will be few alternative sources from which the public can glean the information. In turn, this highlights the difference between professionals with their peer-reviewed papers and the popularisations available to the rest of us. There is an ever-increasing library of popular books discussing superstrings and M-theory, but most make the same mistake of promoting this highly speculative branch of physics not just as the leading contender in the search for a unified field theory but as the only option. Of course, a hypothesis that cannot be experimentally verified is not exactly following a central tenet of science anyway. There has been talk in recent years of a 'string theory mafia', so perhaps this is only a natural extension into print; nonetheless it is worrying to see a largely mathematical framework given so much premature attention. I suppose only time will tell...

It also appears that some publishers will accept material from senior but non-mainstream scientists on the basis of the scientist's stature, even if their hypotheses border on pseudoscience. The late Fred Hoyle was a good example of a prominent scientist with a penchant for quirky (some might say bizarre) ideas such as panspermia, who, although unfairly ignored by the Nobel Committee, seems to have had few problems getting his theories into print. Another example is Elaine Morgan, who over nearly four decades has written a string of volumes promoting the aquatic ape hypothesis, despite the lack of supporting evidence in an ever-increasing fossil record.

But whereas Hoyle's and Morgan's ideas have long been viewed as off the beaten track, there are more conventional figures whose popular accounts can be extremely misleading, particularly if they promote the writer's pet ideas over the accepted norm. Stephen Jay Gould himself frequently came in for criticism for overemphasising various evolutionary mechanisms at the expense of natural selection, yet his peers' viewpoint is never discussed in his popular writings. Another problem can be seen in Bryan Sykes's The Seven Daughters of Eve, which received enormous publicity on publication because it gratifies our desire to understand human origins. However, the book includes a jumbled combination of extreme speculation and pure fiction, tailored in such a way as to maximise interest at the expense of clarity. Some critics have argued that the reason behind Sykes's approach is to promote his laboratory's mitochondrial DNA test, capable of revealing which 'daughter' the customer is descended from. Scientists have to make a living like everyone else, but this commercially driven example perhaps sums up the old adage that you should never believe everything you read. The Catch-22, of course, is that unless you understand enough of the subject beforehand, how will you know whether a popular science book contains errors?

A final example does indeed suggest that some science books aimed at a general audience prove just too complex for comprehensive editing by anyone other than the author. I am talking about Roger Penrose's The Road to Reality: A Complete Guide to the Laws of the Universe. At over one thousand pages, this great tome is marketed with the sentence "No particular mathematical knowledge on the part of the reader is assumed", and I can only wonder whether the cover blurb writer had their tongue firmly in their cheek. It is supposed to have taken Penrose eight years to write, and from my occasional flick-throughs in bookshops I can see it might take me that long to read, never mind understand. I must confess all those equations haven't really tempted me yet, at least not until I have taken a couple of maths degrees...

Sunday 3 January 2010

What's in a label? How words shape reality

With the start of a new year it seems appropriate to look at how our perception of the universe is created via language - after all, there's no position in space identifying an orbital starting point. We grow up with a notion of reality that is largely defined by convenience and by historical accidents embedded in our language and therefore our thought patterns (and vice versa). For at least the last six hundred years many societies have called our planet Earth, although of course Ocean would be more appropriate. Whilst this is just an obvious chauvinism for a land-based species, there are other terms that owe everything to history. We count in base ten, run zero longitude through the Greenwich Meridian and usually show the Earth from one perspective, despite there being no arrow in our galaxy stating 'this way up' (but then, had the Ancient Egyptians' view prevailed, Australia and New Zealand would be in the Northern Hemisphere).

So how far can we go with such constructs? Our calendar is an archaic, sub-optimal mish-mash, with the months September to December still inaccurately named seven through ten, a legacy of the old Roman year that began in March (July and August merely renamed the existing Quintilis and Sextilis). The changeover from the Julian to the Gregorian calendar varied from nation to nation, meaning that well-known events such as the birth of George Washington and the Bolshevik Revolution have several dates depending on the country defining that piece of history. As for the majority of humans agreeing that we are now in AD 2010, thanks to a fifteen-hundred-year-old miscalculation by Dionysius Exiguus our current year should really be at least AD 2014, if we accept that an historical figure called Jesus of Nazareth was born during the lifetime of Herod the Great, who died in 4 BC. It appears that even the fundamentals that guide us through life are at the very least subjective, and in many cases far from accurate.
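The drift between the two calendars is itself just arithmetic on their differing leap-year rules; a rough sketch (the helper below is my own illustration, not a full calendar converter):

```python
def julian_gregorian_offset(year: int) -> int:
    """Days by which the Julian calendar lags the Gregorian in the given
    year (the offset only changes at century years not divisible by 400)."""
    return year // 100 - year // 400 - 2

# The Bolshevik Revolution: 25 October 1917 (Julian) is 13 days behind
# the Gregorian calendar, hence the 7 November anniversary.
print(julian_gregorian_offset(1917))  # 13

# George Washington's birth: 11 February 1731 (Old Style) plus 11 days
# gives 22 February, and 1732 New Style, since the English civil year
# then began on 25 March rather than 1 January.
print(julian_gregorian_offset(1731))  # 11
```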

The philosopher of science Thomas Kuhn argued that all scientific research is a product of the culture of the scientists engaged in it. So whilst we might claim that Galileo was the first scientist in a strictly modern sense of the word, can there be a definitive boundary between the quasi-mystical thought processes of Copernicus and Kepler (and even Newton) and those of modern exponents typified by Einstein and Hawking? Whilst we would like to believe in a notion of pure objectivity, scientists are just as subjective as everyone else, and their theories are therefore built on assumptions directly related to history, both cultural and biological.

We use labels to comfort ourselves, even to boost our egos, via unconscious assumptions that look ever more ridiculous as we delve deeper into the mysteries of creation. For example, the past sixty-five million years are frequently dubbed 'the Age of Mammals'. Yet as Stephen Jay Gould was fond of pointing out, most of the world's biomass is microbial, and we macroscopic life forms are comparative newcomers, restricted to a far narrower range of environments than bacteria, protists and other small-scale organisms.

Despite such sense-expanding tools as infra-red telescopes and electron microscopes, we still process sensory input and use primarily audio-visual output to define scientific theories and methodology. We are in thrall to the languages we use to define our thoughts, both conversational language and mathematics. Although the lingua franca of science has varied over the centuries, all languages from Latin to English have one thing in common: they are used to tell us stories. At a basic level, the history of science is riddled with fables and apocrypha, from Newton being hit by an apple (and inventing the reflecting telescope) to Galileo dropping weights from the Leaning Tower of Pisa, or even Columbus believing the world was a sphere (he didn't: he thought it was pear-shaped!).

So if scientific history cannot be relied upon, what about the hypotheses and theories themselves? In the words of John Gribbin, we construct 'Just So' stories to create a comprehensible version of reality. Presumably this reliance on metaphor will only increase as our knowledge becomes further divorced from everyday experience while our technology fails to keep pace in confirming new theories; for example, it is far from likely that we will ever be able to view a superstring directly.

In addition, language doesn't just restrict our ideas: if a term has a scientific sense that differs from its vernacular meaning, problems frequently arise. A classic example is the quantum leap, which to most people means an enormous step forward but to physicists is an electron's minuscule jump between energy levels. However, even personal computer pioneer Sir Clive Sinclair used the term in its former sense for his 1984 Quantum Leap microcomputer (at least I assume he did, although QL owners may disagree...).
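For a sense of just how small, here is a back-of-the-envelope estimate using the Bohr model of hydrogen (the figures are textbook values; the function name is my own):

```python
# How big is a literal quantum leap? Bohr model: E_n = -13.6 eV / n^2,
# so even hydrogen's largest jump releases only a few electron-volts.
EV_IN_JOULES = 1.602e-19   # one electron-volt expressed in joules
RYDBERG_EV = 13.6          # hydrogen ground-state binding energy in eV

def transition_energy_ev(n_from: int, n_to: int) -> float:
    """Energy in eV released when an electron drops from level n_from to n_to."""
    return RYDBERG_EV * (1 / n_to**2 - 1 / n_from**2)

delta_e = transition_energy_ev(2, 1)
print(f"{delta_e:.1f} eV = {delta_e * EV_IN_JOULES:.2e} J")
# ~10.2 eV, roughly 1.6e-18 joules: hardly an enormous step forward.
```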

Speaking of which, perhaps when we finally build (or machines build for us) computers capable of true artificial intelligence, new ways of exploring the universe not tied down to conventional linguistic-based thought patterns may arise. Then again, since we will be the parents of these machines, this may not be feasible. As one of Terry Pratchett's characters stated: "I think perhaps the most important problem is that we are trying to understand the fundamental workings of the universe via a language devised for telling one another where the best fruit is." But all things considered, we haven't done that badly so far.
