Thursday 25 August 2011

Something sinister: the left-handedness of creation

I'm embarrassed to admit it, but the first home-grown science experiment I remember undertaking was to explore the validity of astrology. Inspired by Carl Sagan's book and television series Cosmos, I decided to see for myself if, after centuries of practice by millions of adherents, the whole thing really was a load of bunk. So for three months I checked the predictions for my star sign every weekday and was amazed at the result: I found them so vague and generalised that I could easily find something in my life each day to fit the prediction. A sort of positive result that negates the hypothesis, as it were. As a young adult I encountered people with a rather less sceptical frame of mind, and if anything their astrological information only reinforced my earlier results. As my birthday is on the 'cusp' between two star signs, I found that about half the astrologically-inclined viewed me as a typical sign A whilst the other half dubbed me a typical sign B. At this point, I think I can rest my case...

Of course, astrology is a very old discipline, so it's no wonder it's pretty easy to see the cracks. Over the past forty or so years there have been several generations of authors with a slightly more sophisticated approach, paying lip service to the scientific method. Although their methodology fails due to the discarding or shoehorning of data, this hasn't stopped the likes of L. Ron Hubbard from making a mint. With this in mind, I decided to generate a hypothesis of my own and subject it to a similar level of scrutiny as their material. Thus may I present my own idea for consideration: evidence suggests that our universe was created by an entity with a penchant for a particular direction, namely left-handed / anti-clockwise. Here are three selected cases to support the hypothesis, although I cannot claim them to have been chosen at random, for reasons that will soon become obvious.

The first argument: in the 1950s and 60s physicists found that the weak nuclear force or interaction, responsible for radioactivity, does not function symmetrically. Parity violation, to be technical about it, means for example that the near-massless particles called neutrinos created in beta decay always spin in a counter-clockwise (left-handed) sense relative to their direction of motion. Like many other fundamental parameters of our universe, no one has an explanation of why this is so: it just is.

The second argument: amino acids are usually described as the building blocks of proteins, and besides the set used to make life on Earth, further types are found in meteorites. It has been theorised that life was made possible by meteorites and comets delivering these chemicals to the primordial Earth, and that radiation encountered on the journey may have skewed them towards one mirror-image form. Whereas amino acids synthesised in laboratories contain approximately equal amounts of the mirror-image (i.e. left- and right-handed) forms, nearly all life is constructed from the left-handed, or L-amino acids.

The third argument: a new catalogue of observations using the latest generation of telescopes indicates that, from our viewpoint, most galaxies rotate counter-clockwise about their cores. Of course it's been a long time since humans believed the Earth to be the centre of the Universe, but even so, this is a disturbing observation. We now consider our planet just an insignificant component of the second-largest galaxy within a small group at one end of a supercluster. In which case, why is galactic rotation so far removed from random?

So how do these arguments stand up to scrutiny, both by themselves and collectively? Not very well, I'm afraid. Working backwards, the third argument shows the dangers of false pattern recognition: our innate ability to find patterns where none exist, or to distort variations into a more aesthetic whole. In this particular case, it appears that the enthusiasts who classified the galaxies' direction of rotation were mistaken. Put it down to another instance of the less-than-perfect powers of perception we humans are stuck with (thanks, natural selection!).

The second argument initially bears up somewhat better, except that I deliberately ignored all of the biological evidence against it. The best known example is probably DNA itself, which in its most common form is a right-handed - that is, clockwise - helix. Cherry-picking of this kind seems to be a fairly common problem in the history of science, with well-known cases involving famous scientists such as Alfred Wegener, whose continental drift hypothesis was a precursor of plate tectonics but who deliberately ignored unsupportive data.

The first argument stands by itself and as such cannot constitute a pattern (obviously). Therefore it is essentially worthless: you might as well support the left-handed notion by stating that the planets in our solar system orbit the sun in a counter-clockwise direction - which they do, unless you happen to live in the Southern Hemisphere!

Full moon viewed via a Skywatcher 130PM telescope
Once again, our ability to find patterns where none exist or, as with the rotation of galaxies, to misconstrue data leaves little doubt that our brains are naturally geared more towards the likes of astrology than astronomy. Pareidolia, the phenomenon of perceiving a pattern in a random context, is familiar to many via the Man in the Moon. However, there are varying degrees to this sort of perception; I confess I find it hard to see the figure myself (try it with the image above, incidentally taken through my 130mm reflector telescope earlier this year – see Cosmic Fugues for further information on genuine space-orientated pattern-making).

Of course, these skills have at times combined with innate aesthetics to aid the scientific enterprise, from the recognition and assembly of hominin fossil fragments from the Great Rift Valley to Mendeleev's element swapping within the periodic table. However, most of the time we need to be extremely wary if a pattern seems to appear just a little bit too easily. Having said that, there still seem to be plenty of authors who cobble together a modicum of research, combine it with a catchy hook and wangle some extremely lucrative book and television documentary deals. Now, where's a gullible publisher when you need one?

Monday 1 August 2011

Weather with you: thundersnow, hosepipe bans and climate punditry

I must confess to not having watched any of the current BBC series The Great British Weather, since (a) it looks rubbish; and (b) I spend enough time comparing the short-range forecast with the view outside my window as it is, in order to judge whether it will be a suitable night for astronomy. Since buying a telescope at the start of the year (see an earlier astronomy-related post for more details) I've become just a little bit obsessed, but then as an Englishman it's my inalienable right to fixate on the ever-changeable meteorology of these isles. If I think there is a chance of a cloud-free night I tend to check the forecast every few hours, which for the past two months or so has proved almost uniformly disappointing; as a matter of fact, the telescope has remained boxed up since early May.

There appears to be a grim pleasure for UK-based weather watchers in the fact that when a meteorology source states it is currently sunny and dry in your location, it may in fact be raining torrentially. We all realise forecasting relies on some understanding of a complex series of variables, but if they can't even get the 'nowcast' correct, what chance do the rest of us have?

So just how have the UK's mercurial weather patterns affected the science of meteorology and our attitude towards weather and climate? As far back as 1553 the English mathematician and inventor Leonard Digges included weather lore and descriptions of phenomena in his A General Prognostication. Since then, British scientists have been in the vanguard of meteorology. Isaac Newton's contemporary and rival Robert Hooke may have been the earliest scientist to keep meteorological records, as well as inventing several associated instruments. Vice-Admiral Robert FitzRoy, formerly captain of HMS Beagle (i.e. Darwin's ship), was appointed as the first Meteorological Statist to the Board of Trade in 1854, which in today's terms would make him the head of the Met Office; he is even reputed to have invented the term 'forecast'.

Modern science aside, as children we pick up a few snippets of the ancient folk learning once used to inculcate elementary weather knowledge. We all know a variation of "Red sky at night, shepherd's delight; red sky in the morning, shepherd's warning", the mere tip of the iceberg when it comes to pre-scientific observation and forecasting. But to me it looks as if all of us in ever-changeable Britain have enough of a vested interest in the weather (once it was for crop-growing, now just for whether it is a sunglasses or umbrella day – or both) to maintain our own personal weather database in our heads. Yet aren't our memories, and our lifespans in general, just too short to allow us a genuine understanding of meteorological patterns?

One trend that feels real to me is that those 'little April showers' I recall from childhood (if you remember the song from 'Bambi') are now a thing of the past, with April receiving less rainfall than June. But that is just a gut feeling: I have not researched it enough to find out whether there has been a genuine change over the past three decades. Unfortunately, a combination of poor memory and spurious pattern recognition means we tend to over-emphasise 'freak' events - from thundersnow to the day it poured down at so-and-so's June wedding - at the expense of genuine trends.

For example, my rose-tinted childhood memories of six largely rain-free weeks each summer school break centre around the 1976 drought, when my brother had to be rescued from the evil-smelling mud of a much-reduced reservoir and lost his shoes in the process. I also recall the August 1990 heat wave, as I was living at the time less than 20 km from Nailstone in Leicestershire, home of the then-record UK temperature of 37.1°C. In contrast, I slept through the Great Storm of 1987 with its 200+ km/h winds and don't recall the event at all! As for 2011, if I kept a diary it would probably go down as the 'Year I Didn't Stop Sneezing'. City pollution and strong continental winds have combined to fill the London air with pollen since late March, no doubt much to the delight of antihistamine manufacturers.

A Norfolk beach in a 21st-century summer: East Anglia, August 2008

Our popular media frequently run stories about the latest report on climate change, either supporting or opposing certain hypotheses, but rarely compare it to earlier reports or long-term records. Yet even a modicum of research shows that in the nineteenth century Britain experienced large variations in weather patterns. For example, the painter J.M.W. Turner's glorious palette was not all artistic licence, but was almost certainly influenced by the volcanic dust-augmented sunsets following the 1815 Tambora eruption. It wasn't just painting that was affected either, as the UK suffered poor harvests the following year, whilst in the eastern United States 1816 was known as 'Eighteen Hundred and Froze to Death'.

The influence of the subjective on the objective doesn't sound any different from most other human endeavours, except that the weather professionals - meteorologists, climatologists and the like - also have to hedge against uncertainty and bias in their work. Ensemble forecasting, in which a model is run many times from slightly different initial conditions and the results are combined to provide an average outcome, has been shown to be a more accurate method of prediction than any single run. In other words, it sounds like a form of scientific bet hedging!
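For the curious, here is a toy sketch of the idea in Python - my own illustration, not how the Met Office or any real forecasting centre actually does it. A deliberately chaotic 'model' (the Lorenz system beloved of chaos theory) is run once from a best-guess starting point, and then twenty more times from slightly perturbed starting points whose results are averaged. The model, the size of the perturbations and the quantity being 'forecast' are all arbitrary choices made purely for demonstration.

```python
import random

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz '63 system, a classic toy 'weather' model."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def forecast(x, y, z, steps=1500):
    """Run the toy model forward and return the final value of x (our 'forecast')."""
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x

# A single deterministic forecast from our best guess at the initial conditions...
best_guess = (1.0, 1.0, 1.0)
single = forecast(*best_guess)

# ...versus an ensemble: the same model run from slightly perturbed starting points.
ensemble = []
for _ in range(20):
    perturbed = tuple(v + random.gauss(0, 0.01) for v in best_guess)
    ensemble.append(forecast(*perturbed))

mean = sum(ensemble) / len(ensemble)
spread = max(ensemble) - min(ensemble)

print(f"single run: {single:.2f}")
print(f"ensemble mean: {mean:.2f}, spread: {spread:.2f}")
```

The useful part is not just the average but the spread: if the perturbed runs disagree wildly, the forecasters at least know the atmosphere is in one of its less predictable moods - something a single confident run can never tell you.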

Recent reports have shown that once-promising hypotheses built around single factors, such as sunspot cycles, cannot account for the primary causes of climate change, either now or in earlier epochs. It seems the simple answers we yearn for are the prerogative of Hollywood narrative, not geophysical reality. One bias that can seriously skew the data is the period covered by a report. It sounds elementary, but we are rarely told that a difference of even a single year in the start date can significantly affect the outcome as to whether, for example, temperature is increasing over time. Of course, scientists may deliberately only publish results for periods that support their hypotheses (hardly a unique trait, if you read Ben Goldacre). When this is combined with sometimes counter-intuitive predictions – such as that a gradual increase in global mean temperature could lead to cooler European winters – is it any wonder we non-professionals are left to build our level of belief in climate change via a muddle of personal experience, confusion and folk tales? The use of glib phrases such as 'we're due another glaciation right about now' doesn't really help either. I'm deeply interested in the subject of climate change and I think there is serious cause for concern, but the data is open to numerous interpretations.
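To see how much the start date matters, here is another toy sketch in the same vein. The 'temperature anomaly' figures below are entirely invented (loosely echoing the way an unusually warm first year can flatten an apparent trend), and the helper function is nothing more exotic than an ordinary least-squares slope.

```python
def linear_trend(years, values):
    """Ordinary least-squares slope of values against years (units per year)."""
    n = len(years)
    mean_y = sum(years) / n
    mean_v = sum(values) / n
    num = sum((y - mean_y) * (v - mean_v) for y, v in zip(years, values))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

# Entirely made-up 'annual temperature anomalies', with one unusually warm first year,
# chosen purely to illustrate the point about start dates.
years = list(range(1998, 2011))
temps = [0.75, 0.32, 0.30, 0.40, 0.42, 0.44, 0.41, 0.47, 0.43, 0.40, 0.38, 0.44, 0.50]

# The same series, fitted from two start dates just one year apart:
print(linear_trend(years, temps))          # from 1998: slightly negative
print(linear_trend(years[1:], temps[1:]))  # from 1999: clearly positive
```

Shift the window by a single year and an apparently flat or cooling record becomes a warming one - exactly the sort of thing a report's choice of period can quietly hide.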

So what are we left with? (Help: I think I'm turning into Jerry Springer!) For one thing, the term 'since records began' can be about as much use as a chocolate teapot. Each year we get more data (obviously) and so each year the baseline changes. Meteorology and climatology are innately complex anyway, but so far both scientists and our media have comprehensively failed to explain to the public just how little is known and how even very short-term trends are open to abrupt change (as with the notorious 'don't worry' forecast the night of the 1987 Great Storm). But then you have only to look out of the window and compare the view with the Met Office website to see we have a very long way to go indeed…