Tuesday 29 May 2012

How to be cyantific: connecting the laboratory to the artist's studio

Moving house - or more broadly speaking, hemispheres - last year was a good excuse for a spring clean on an epic scale. One of the items that didn't make the grade even as far as a charity shop was a framed painting I created several decades ago, a clumsy attempt to describe scientific imagery in acrylics. In front of a false colour radar map of the surface of Venus was the head and neck of a raptor dinosaur above a bowler-hatted figure straight out of René Magritte. You can judge the work for yourself below; I seem to remember the bemusement of the framer but as I said at the time, it wasn't meant to be to everyone's taste...

But if my daub was rather wide of the mark, just how successful have attempts been to represent the theory and practice of science in the plastic, non-linear, arts such as painting and sculpture? Whereas musical and mathematical ability seem to connect readily, and there has been some admirable science-influenced poetry, the visual arts are somewhat lacking by comparison. Much has been written about the Surrealists' use of psychoanalysis, but as this discipline is frequently described as a pseudoscience I've decided to cut through the issue by ignoring it and concentrating on the 'hard' sciences instead.

Combining science and art - or failing to
One of the most difficult issues to resolve (especially for those who accept C.P. Snow's theory of 'two cultures') is that whilst most science books for a general readership describe a linear progression or definitive advancement to the history of science, art has no such obvious arrow of change. After all, a century has passed since the early non-realist movements (Cubism, les Fauves, etc.) but there are plenty of contemporary artists who avoid abstraction. Granted, they are unlikely to win any of the art world's top prizes, but the progression of science and its child technology over the past three or so centuries clearly differentiates the discipline from the arts, both the sequential schools of the West and the 'traditional' aesthetics of other cultures.

Of course, it's usual to place the characters of scientists and artists about as far apart as any human behaviour can get, but like most stereotypes this one doesn't take much to prove wildly inaccurate. Anyone aware of Einstein's views ("Imagination is more important than knowledge") or his last unsuccessful decades spent on a unification theory that ignored quantum mechanics will understand that scientists can have as imaginative and colourful a personality as any artist. Indeed, the cutting edge of theoretical science, especially physics, may rely on insight and creativity as much as advanced mathematics, a far cry from the popular image of dull, plodding scientists following dry, repetitive processes.

Another aspect worth mentioning is that our species appears unique in the ability to create representations of the world that can be recognised as such by most if not all of our species. Despite Congo the chimpanzee gaining enough kudos in the 1950s for Picasso and Miró to buy his paintings, as well as more recent media interest in elephant art works, there is no evidence that under controlled experimental conditions non-human artists can produce obviously realistic images unaided. Then again, could it be that we are so biased in our recognition patterns that we do not identify what passes for realism in other species? Might it be possible that other animals interpret their work as representational when to us it resembles the energetic daubs of toddlers? (There are shades here of Douglas Adams's dolphins, who considered themselves more intelligent than humans because rather than build cities and fight wars, all they do is muck about in water having a good time...)

So where do we start? Firstly, what about unintentional, science-generated art? Over the past decade or so there has been a spate of large format, text-light, coffee table books consisting of images taken by space probes, telescopes and Earth resources satellites. A recent internet success consisted of time lapse photography of the Earth taken by crew aboard the International Space Station; clearly, no-one spent a hundred billion US dollars or so just to make a breath-taking video, but the by-products of the project are a clear example of how science can incidentally create aesthetic work. This isn't just a contemporary phenomenon either: the earliest examples I can think of are Leonardo da Vinci's dissection drawings; in addition to being possibly the most detailed such illustrations until today's non-invasive scanning techniques, they are also beautiful works of art in themselves. But then Leonardo's intentions appear to have been both to investigate the natural world for the sheer sake of learning and to improve his painting technique through knowledge of the underlying anatomy. I wonder if there are any contemporary artists who use MRI technology or similar as a technical aid for their draughtsmanship?

At the other end of the spectrum (groan), mathematician Marcus du Sautoy's 2010 BBC TV series The Beauty of Diagrams was an interesting discourse on how certain images created for a scientific purpose have become mainstream visual symbols. From Vitruvian Man, da Vinci's analysis of ideal human proportions, to the double helix diagram of DNA (incidentally first drawn by Odile Crick, an artist married to a scientist), these works integrate the transmission of information with a beautiful aesthetic. The latter example is particularly interesting in that the attempt to illustrate complex, minuscule structures in an easily understandable format has since become a mainstay of science diagrams, shorthand that is frequently interpreted by the non-specialist as a much closer representation of reality than the schematic it really is.

Physicist and writer John Gribbin has often stated that the cutting edge science of the past century, especially physics, has had to resort to allegory to describe situations at scales far removed from human sensual experience. This implies that an essential method by which science can be conveyed is via the written metaphor and visual symbolism. As we delve further into new phenomena, science may increasingly rely on art to describe ideas that cannot for the foreseeable future be glimpsed at first hand. But ironically this could have a deleterious effect on public understanding if the model is too successful, for then it becomes difficult to supplant with a more accurate theory. An obvious example is the architecture of the atom, with the familiar if highly inaccurate classical model of electrons orbiting the nucleus like a miniature solar system prevalent long after the development of quantum electrodynamics.

You might ask how difficult it would be to describe probabilities and world paths in conventional art media, yet Cubism was a style that attempted to combine different viewpoints of a subject into one composition. If this appears too simplistic, it may seem more convincing once you know that the physicist Niels Bohr was inspired by Cubist theories during the development of the Complementarity Principle on wave-particle duality. Cubism is of course only one of the more obvious visual tricks, but even the most photo-realistic painting requires techniques to convert three-dimensional reality (well, four, if you want to include time) into two dimensions. How often do we consider this conversion process in itself, which relies on a series of visual formulae to produce the desired result? It may not be science, but the production of most art isn't a haphazard or random series of actions.

It's easy to suggest that a fundamental difference between science and the plastic arts is that the former is ideally built on a combination of method and results whilst the latter is firmly biased towards the works alone. An exception can be seen in abstract expressionism, a.k.a. action painting: at art college we were taught that to practitioners of this school the moment of creation was at least as important as the final result. Indeed, Jackson Pollock was filmed painting as early as 1950, with numerous other artists of various movements following suit soon after. In general though, the art world runs on the rich individuals and corporations who buy the works, not the theories of critics.

And what of art theory? Most of it isn't relevant here, but one of the fundamentals of composition is the harmony and rhythm generated by the use of mathematical ratios and sequences. The Golden section and Fibonacci series are frequently found in organic structures, so in a sense their use is a confirmation of that old adage that the purpose of art is to hold a mirror up to nature. If that sounds trite, why not examine works by contemporary artists inspired by scientific theories or methodologies? That's coming in the next post...
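
(For the numerically inclined, here's a quick sketch - purely my own illustration, in Python - of why those two keep company: the ratios of successive Fibonacci numbers home in on the golden section, phi ≈ 1.618.)

```python
# A minimal sketch: successive Fibonacci ratios converge on the golden ratio.
fib = [1, 1]
for _ in range(15):
    fib.append(fib[-1] + fib[-2])

# Print a handful of consecutive ratios alongside the exact golden ratio.
for a, b in zip(fib[4:9], fib[5:10]):
    print(f"{b}/{a} = {b / a:.6f}")

phi = (1 + 5 ** 0.5) / 2
print(f"golden ratio = {phi:.6f}")
```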

Sunday 1 April 2012

A very special relationship: NASA, BIS and the race to the moon

More years back than I care to remember I met a British satellite engineer who was part of a team investigating a loose component rattling around its latest project...which unfortunately was already in Earth orbit. By rolling the satellite via its attitude thrusters they hoped to discover the nature of the problematic item, which I glibly suggested might have been an absent-minded engineer's lunchbox. I don't believe my idea was followed up and as it was, I never did find out the outcome. Answers on a postcard, please!

The relevance of this anecdote is that as discussed in an earlier post on boffins, it's often been said that Britain stopped technologically trailblazing some decades back. Now, thanks to the Freedom of Information Act, newly-released material suggests the pipe-smoking 'backroom boys' might have played a more pivotal role in astronautics than has been generally made public. Some aviation experts consider the fabled TSR2 strike aircraft (envisioned in 1956 and cancelled a decade later) as the last project where Britain took the lead, but the most recently released FoI records offer tantalising evidence otherwise.

I realise this idea requires concrete evidence, but we have to remember that despite tiny budgets by American standards, Britain is the original home of numerous technological advances, from the Hawker Harrier 'jump' jet to the hovercraft. And never forget that the USA has still not developed a supersonic airliner in the forty-plus years since Concorde first flew. One reason the UK has apparently failed to keep up could be that transatlantic politics have overridden the applied science. For example, the satellite engineer mentioned above also worked on the 1980s fiasco known as Project Zircon, a British military satellite that was cancelled allegedly due to skyrocketing costs (there's sort of a jest in there, if you look hard enough). But what if an additional, if not the real primary, reason was pressure from the US Government? There have been hints over the years that the European Launcher Development Organisation, a predecessor of the European Space Agency, was forced to cancel its remote-controlled space tug project because NASA (and therefore the White House) deemed it too advanced and therefore a potential competitor. So if post-war British technology has been deemed a commercial or security risk to the USA, might the latter have applied pressure to cancel such projects or even take over the research, lock, stock and blueprint?

This might sound far-fetched, but the memoirs of many a former British security officer have mentioned that the 'special relationship' between the two nations has led the UK to kowtow to the USA on numerous occasions. This ranges from automatically offering new military-biased technology such as signals intelligence software to the US, through to diverting national security listening resources to US-specified targets at the drop of a hat. So might it be possible that political pressure, rather than rising costs and technological failures, has caused the cancellation of avant-garde projects, or even that the US has unfairly appropriated British high-tech wizardry?

The main thrust of this post (pun on its way) concerns the Apollo/Saturn spacecraft and rocket system (geddit now?) and how the US apparently single-handedly managed to achieve a moon landing less than a decade after the start of manned spaceflight. After all, if you consider that the Saturn V was a completely reliable, purpose-built civilian launch vehicle, unlike earlier manned spacecraft which had relied on adapted ballistic missiles, and in addition was far larger and more powerful than any previous American rocket, it seems incredible how quickly the project came together. Also, one of the chief designers was Wernher von Braun, an idealistic dreamer whose primary life-long interest appears to have been a manned mission to Mars and who a decade before Apollo had been developing plans for 160-foot long rocket ships carrying crews of twenty astronauts! Even the doyen of technology prophets Arthur C. Clarke was sceptical that NASA could achieve President Kennedy's goal for a manned moon landing before 1970.

In which case, I hear you ask, how did Project Apollo succeed so magnificently, especially when the N1, the USSR's equivalent, pretty much failed to escape the launchpad? It wasn't with the help of alien technology, that's for sure. At this point it is worth going back into Clarke's past. In 1937 the Technical Committee of the British Interplanetary Society (BIS), of which Clarke was twice chairman, began a study for a manned moon landing mission. The launch vehicle was modest compared to the Saturn V and the N1, utilising tiers of several thousand small solid-fuel rockets, each step being akin to the stages of later real-life launch vehicles. Then in 1949, knowledge of the German V-2 rockets (in which Wernher von Braun had played a key role) led the BIS team to switch to liquid-fuelled engines.

But if the rocket seems highly impractical to modern eyes*, the manned component of the BIS scheme was remarkable for its similarity to NASA hardware, being a combination of the Apollo CSM and LM craft. Many of its features are fundamentally identical to the real thing, from carbon dioxide scrubbers to landing parachutes. Even the EVA suits bear a striking similarity to the NASA design, albeit using less advanced materials. The only big difference I can see is the lack of an onboard computer in the BIS design: hardly surprising, considering the first programmable electronic computer, the room-sized Colossus at Bletchley Park, didn't become operational until 1944 (beat that, ENIAC!) I assume the poor navigator would have been stuck with a slide rule instead, provision having been made in the ship's larder for coffee to keep them awake.

*Since then, real launch vehicles have used the modular approach, including the private company OTRAG in the 1970s and '80s and even the Saturn V's predecessors, Saturn 1 and 1B, which used a cluster of eight boosters around the core of the first stage.

But the moon landing project wasn't totally restricted to paper: several instruments were actually built, including an inertial altimeter and a coelostat that was demonstrated at the Science Museum in London. The competence of the Technical Committee members shouldn't be underestimated, as in addition to Arthur C. Clarke they included A.V. Cleaver (another sometime BIS chairman) and R.A. Smith, both of whom later worked on British military rocket and missile projects.

British Interplanetary Society moon lander
The British boffin's ultimate pipe dream

It might not appear convincing that these British speculations could have been converted into NASA blueprints, but a combination of carrot and stick during the dark, paranoid days of the Cold War might have been enough to silence the BIS team's complaints at the appropriation of their work. After all, the project generated a lot of attention even before the Second World War, with coverage in Time Magazine and a visit from a presumed Nazi agent in 1939.

What's more, by the early 1950s Clarke was communicating with now US-based ex-V-2 rocketeers von Braun and Hermann Oberth, whilst R.A. Smith's son later worked for NASA on the Apollo programme! There is even an intriguing suggestion that the very idea of launching early satellites on adapted military missiles (a technique utilised by both the USA and USSR) was promoted in the former country by Alexander Satin, then chief engineer of the Air Branch of the Office of Naval Research, US Navy, after he witnessed a satellite project at the 1951 Second Astronautical Congress in London. And of course, that project's team included Clarke and Cleaver; the space community in those days must have been rather on the small side.

Despite the organisation's name, there have been many American BIS members over the decades, including senior NASA figures such as Dr. Kurt Debus, Director of the John F. Kennedy Space Center during the 1960s, and Gerald Griffin, a Lead Flight Director during the Apollo programme. NASA's primary contractors for Apollo were equally staffed with BIS members, including Grumman's project manager for the Lunar Module (LM), Joseph Gavin Jr. I'm not suggesting that every blivet and gubbins (to use Clarkian terms) on the BIS lunar ship was directly translated into NASA hardware, but the speed with which Project Apollo succeeded, especially compared to the USSR's failure despite its initial head start, smacks of outside assistance. As an example of how rapidly NASA contractors appear to have cobbled together their designs, Thomas Kelly, Grumman's LM Chief Design Engineer, admitted he was one of only two employees working on LM designs in the several years leading up to the contract NASA awarded in 1962.

In addition to the BIS material, there are X-Files style hints that the British Government was making strides of a more nuts-and-bolts nature with its own lunar landing programme. In 1959 the UK's rocket launch site in Woomera, Australia, appears to have begun construction of a launch pad capable of handling the two- and three-stage man-rated rockets then under development by various British aerospace consortiums, the most prominent of which included winged orbiters akin to more recent NASA lifting body designs. (Incidentally, five UK companies at the time were involved in spacesuit development, with the final Apollo EVA suit owing a lot to the undergarment cooling system developed in the UK.)

Just to put a spanner in the works, one negative piece of evidence for my technology censorship hypothesis is that NASA clearly took no notice of the BIS crew menu. Even after Apollo 11 large strides in technology continued to be made, but the work of the food technologists was not amongst them: all Apollo astronauts lost weight and suffered electrolyte imbalance, which clearly would not have happened if they had stuck to the wholesome fare - ham and cheese sandwiches, porridge, and the like - envisioned by the British boffins. It's a shame that their health temporarily suffered, but at least Neil Armstrong and co. could take music cassettes of everyone from Dvorak to the Beatles on their journeys; imagine being stuck in a small cabin with scratchy recordings of Flanagan and Allen or Vera Lynn...

Monday 27 February 2012

Predators vs poisons: the ups and downs of biological control

Ever since Darwin, islands and island groups have been known as prominent natural laboratories of evolution. Their isolation leads to radiation of species from a single common ancestor, the finches and giant tortoises of the Galapagos Islands providing a classic example. But a small population restricted in range also means that many island species are extremely susceptible to external factors, rapid extinction being the ultimate result - as can be seen from the dodo onwards. Living as I do on an island (New Zealand counts within the terms of this discussion, as I will explain) has led me to explore what a foreign invasion can do to a local population.

Either through direct hunting or the actions of imported Polynesian dogs and rats, almost half the native vertebrate fauna was wiped out within a few centuries of humans arriving in New Zealand; so much for the myth of pre-technological tribes living in ecological harmony! But the deliberate introduction of a new species to prey on another is now a much-practised and scientifically-supported technique. One of the late Stephen Jay Gould's most moving essays concerned the plight of the Partula genus of snails on the Society Islands of Polynesia. The story starts with the introduction of edible Achatina snails to the islands as food, only for some to escape and become an agricultural pest. In 1977 the predatory wolfsnail Euglandina was brought in as a method of biological control, the idea being that it would eat the crop munchers. Unfortunately, the latest wave of immigrant gastropods ignored the Achatina and went after the local species instead. The results were devastating: in little more than a decade, many species of Partula had become extinct in their native habitat.

(As an interesting aside, the hero of Gould's Partula vs. Euglandina story is gastropod biologist Henry Crampton, whose half century of research into the genus is presumably no longer relevant in light of the decimation of so many species. Yet Crampton, born in 1875, worked in typical Victorian quantitative fashion and during a single field trip managed to collect 116,000 specimens from just one island, Moorea. I have no idea how many individual snails existed at the time, but removing such an enormous number from the breeding population in the name of scientific research seems unlikely to have done the genus any favours. I wonder whether comparable numbers of organisms are still being collected by researchers today: somehow I doubt it!)

The Society Islands are not the only place where the deliberate introduction of Euglandina has led to the unintended devastation of indigenous snail species: Hawaii's native Achatinella and Bermuda's Poecilozonites have suffered a similar fate to Partula. Gould used the example of the Partula as a passionate plea (invoking 'genocide' and 'wholesale slaughter') to prevent further inept biological control programmes, but do these examples justify banning the method outright?

The impetus for this post came from a recent visit to my local wetlands reserve, when my daughters played junior field biologists and netted small fish in order to examine them in a portable environment container (alright, a jam jar) - before of course returning them to the stream alive. The main fish species they caught was Gambusia, which originates from the Gulf of Mexico but was introduced to New Zealand in the 1930s as a predator of mosquito larvae. However, akin to Euglandina it has had a severe impact on many other fish species and is now rightly considered a pest. In fact, it's even illegal to keep them in a home aquarium, presumably just in case you accidentally aid their dispersion. Australia has also tried introducing Gambusia to control its mosquito population, but there is little data to show it works there either. That nation also provides a good illustration of environmental degradation via the second- and third-hand problems arising from deliberate introduction: the cane toad, for example, was imported to control several previously introduced beetle species but instead rapidly decimated native fauna, largely by poisoning the reptiles, marsupials and other predators further up the food chain that tried to eat it.

Gambusia: the aggressive mosquito fish
Gambusia affinis: a big problem in a small fish

This isn't to say that there haven't been major successes with the technique. An early example concerns a small insect called the cottony cushion scale, which began to have a major impact on citrus farming in late Nineteenth Century California. It was brought under control by the introduction of several Australian fly and beetle species and without any obvious collateral damage, as the military might phrase it. But considering the extinction history of New Zealand since humans arrived, I've been amazed to discover just how many organisms have been deliberately introduced as part of biological control schemes, many in the past quarter century. For instance, twenty-one insect and mite species have been brought over to stem the unrestrained growth of weeds such as ragwort and gorse, although the rates of success have been extremely mixed (Old man's beard proving a complete failure, for example). As for controlling unwelcome fauna in New Zealand, a recent promising research programme involves the modification of parasites that could inhibit possum fertility. This is something of a necessity considering possums (first imported from Australia in the 1830s and now numbering around sixty million) are prominent bovine tuberculosis vectors.

Stephen Jay Gould was a well-known promoter of the importance of contingency within evolution, and of how a re-run of any specific branch of life's history would lead to a different outcome. So the question has to be asked: how can biologists test the effect of an outsider species on an ecosystem under laboratory conditions, when only time will show whether the outcome is as intended? No amount of research will show whether an unknown factor might, at an unspecified time during or after the eradication programme, have a negative impact. It could have been argued in the past that the relative cheapness of biological control compared to alternatives such as poisons or chemical sprays made it the preferable option. However, I imagine the initial costs, involving lengthy testing cycles, mean that it is no longer a cut-price alternative.

Considering the recent developments in genetic modification (GM), I wonder whether researchers have been looking into ways of minimising unforeseen dangers? For example, what about the possibility of tailoring the lifespan of the control organism? In other words, once the original invasive species has been eliminated, the predator would also rapidly die out (perhaps by something as simple as being unable to switch to an alternative food source, of which there are already many examples in nature). Or does that sound too much like the replicant-designing Dr Eldon Tyrell in Blade Runner?

One promising recent use of GM organisms as a biological control method has been part of the fight to eradicate disease-carrying (female) mosquitoes. Any female offspring of the genetically altered male mosquitoes are incapable of flight and thus are unable to infect humans or indeed reproduce. However, following extremely positive cage-based testing in Mexico, researchers appear to have got carried away with their achievements and before you could say 'peer review' they conducted assessments directly in the wild in Malaysia, where I assume there is little GM regulation or public consultation. Therefore test results from one location were extrapolated to another with a very different biota, without regard for knock-on effects such as what unwelcome species might come out of the woodwork to fill the gap in the ecosystem. When the stakes are so high, the sheer audacity of the scientists involved appears breathtaking. Like Dr Tyrell, we play god at our peril; let us hope we don't come to an equally sticky end at the hands of our creation...

Monday 30 January 2012

Sell-by date: are old science books still worth reading?

As an outsider to the world of science I've recently been struck by an apparent dichotomy that I don't think I've ever heard discussed: if science is believed by non-practitioners to work on the basis of new theories replacing earlier ones, are out-of-date popular science books (as opposed to textbooks) a disservice, if not a positive danger, to the field?

I recently read three science books written for a popular audience in succession, the contrast between them serving as the inspiration for this post. The most recently published was Susan Conner and Linda Kitchen's Science's Most Wanted: the top 10 book of outrageous innovators, deadly disasters, and shocking discoveries (2002). Yes, it sounds pretty tacky, but I hereby protest that I wanted to read it as much to find out about the authors and their intended audience as the subject material itself. Although only a decade old the book is already out of date, in a similar way that a list of top ten grossing films would be. In this case the book lists different aspects of the scientific method and those involved, looking at issues ranging from collaborative couples (e.g. the Curies) to prominent examples of scientific fraud such as the Chinese fake feathered dinosaur fossil Archaeoraptor.

To some extent the book is a very poor example of the popular science genre, since I found quite a few incorrect but easily verifiable facts. Even so, it proved to be an excellent illustration of how the transmission of knowledge can suffer in a rapidly-changing, pop-cultural society. Whilst the obsession with novelty and the associated transience of ideas may appear to fit in somewhat with the principle that a more recent scientific theory always replaces an earlier one, this is too restrictive a definition of science. The discipline doesn't hold with novelty for the sake of it, nor does an old theory that is largely superseded by a later one prove worthless. A good example of the latter is the interrelationship between Newton's classical law of gravitation (first published in 1687) and Einstein's General Relativity (1916), with the former still used most of the time (calculating space probe trajectories, and so on).
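
To see why the older theory keeps earning its keep, here's a minimal sketch - my own illustration in Python, nothing to do with either book - of the Newtonian arithmetic for a probe at Earth's distance from the Sun; the general relativistic correction to this figure amounts to only around one part in a hundred million.

```python
# Newton's inverse-square law of gravitation: a = G * M / r^2
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # mass of the Sun, kg
r = 1.496e11         # one astronomical unit, m

a = G * M_sun / r**2
print(f"Newtonian solar gravity at 1 AU: {a:.2e} m/s^2")   # roughly 5.9e-3 m/s^2
```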

The second of the three books discusses many different varieties of scientific practice, a far cry from New Zealand-born physicist Ernest Rutherford's crude summary that "physics is the only real science. The rest are just stamp collecting." Stephen Jay Gould's first collection of essays, Ever Since Darwin (1977), contains his usual potpourri of scientific theories, observations and historical research. These range from simple corrections of 'facts' – e.g. Darwin was not the original naturalist on HMS Beagle – to why scientific heresy can serve important purposes (consider the much-snubbed Alfred Wegener, who promoted a precursor to plate tectonics long before the evidence was in), through to a warning of how literary flair can promote poor or even pseudo-science to an unwary public (in this instance, Immanuel Velikovsky's now largely forgotten attempts to link Biblical events to interplanetary catastrophes).

Interestingly enough, the latter element surfaced later in Gould's own career, when his 1989 exposition of the Middle Cambrian Burgess Shale fossils, Wonderful Life, was attacked by Richard Dawkins with the exclamation that he wished Gould could think as clearly as he could write! In this particular instance, the attack was part of a wider critique of Gould's theories of evolutionary mechanisms rather than of material being superseded by new factual evidence. However, if I'm a typical member of the lay readership, the account of the weird and wonderful creatures largely outweighs the professional arguments. Wonderful Life is still a great read as descriptive natural history and I suppose serves as a reminder that however authoritative the writer, you shouldn't accept everything at face value. But then that's a good lesson in all subjects!

But back to Ever Since Darwin. I was surprised by just how much of the factual material had dated in fields as disparate as palaeontology and planetary exploration over the past thirty-five years. As an example, Essay 24 promotes the idea that the geophysical composition of a planetary body is solely reliant on the body's size, a hypothesis since firmly negated by space probe data. In contrast, it is the historical material that still shines as relevant and in the generic sense 'true'. I've mentioned before (link) that Bill Bryson's bestseller A Short History of Nearly Everything promotes the idea that science is a corpus of up-to-date knowledge, not a theoretical framework and methodology of experimental procedures. But by so short-changing science, Bryson's attitude could promote the idea that all old material is essentially worthless. Again, the love of novelty, now so ingrained in Western societies, can cause public confusion in the multi-layered discipline known as science.

Of course, this doesn't mean that everything once considered a classic still has great worth, any more than every single building over half a century old is worthy of a preservation order. But just possibly (depending on your level of post-modernism and/or pessimism) any science book that stands the test of time does so because it contains self-evident truths. The final book of the three is a perfect example of this: Charles Darwin's On the Origin of Species, in this case the first edition of 1859. The book shows that Darwin's genius lay in tying together apparently disparate precursors to formulate his theory; in other words, natural selection was already on the thought horizon (as proven by Alfred Russel Wallace's 1858 manuscript). In addition, the distance between publication and today gives us an interesting insight into the scientist as human being, with all the cultural and linguistic baggage we rarely notice in our contemporaries. In some ways Darwin was very much a man of his time, attempting to soften the non-moralistic side of his theory by subtly suggesting that new can equal better, i.e. a form of progressive evolution. For example, he describes extinct South American megafauna as 'anomalous monsters', yet our overly familiar modern horse only survived via Eurasian migration, having died out completely in its native Americas. We can readily assume that had the likes of Toxodon survived but not Equus, the horse would seem equally 'anomalous' today.

Next, Darwin had limited fossil evidence to support him, whilst Nineteenth Century physics appeared to rule out natural selection by not allowing enough time for it to have taken effect. Of course, if readers know what has been discovered in the same fields since, they can begin to get an idea of the author's thought processes and indeed world view, and just how comparatively little data he had to work with. For example, Darwin could only remark on the variation in the sterility of hybrids, whereas we now understand, for instance, that most mules are sterile because of a chromosomal mismatch. Yet this didn't prevent the majority of mid-Victorian biologists from accepting natural selection, an indication that science can be responsive to ideas supported only by circumstantial evidence; this is a very long way indeed from the notion of an assemblage of clear-cut facts laid out in logical succession.

I think it was the physicist and writer Alan Lightman who said: "Science is an ideal but the application of science is subject to the psychological complexities of the humans who practice it." Old science books may frequently be dated from a professional viewpoint but can still prove useful to the layman for at least the following reasons: understanding the personalities, mind-sets and modes of thought of earlier generations; observing how theories within a discipline have evolved as both external evidence and fashionable ideas change; and the realisation that science as a method of understanding the universe is utterly different from all other aspects of humanity. Of course, this is always supposing that the purple prose doesn’t obscure a multitude of scientific sins...

Thursday 1 December 2011

Questioning habits: monastic science in the medieval period

It's not usual for a single book to inspire me to write a post, but on seeing a double page spread in Australian science writer Surendra Verma's The Little Book of Scientific Principles, Theories and Things I knew I had to investigate further. Published in 2006, this small book does just what it says in the title, being a concise chronological history of science from Ancient Greece to the present. So far, so good, except that after a fair few BC and early first millennium AD entries, I found that the article for AD150 was followed by one dated AD1202! Having double-checked there weren't any pages missing, I realised that the author had followed the all-too-common principle of 'here's the Dark Ages: nothing to see here; better move along quickly'. Therefore I thought it might be time to look into exactly what, if anything, was happening science-wise during this thousand-year gap, and why there appeared to be a sudden growth in scientific thought at the start of the thirteenth century.

Although much is known of the contemporary Muslim practitioners of natural philosophy such as Alhazen and Avicenna, I want to concentrate on Europe, as the era seems to contrast so profoundly with the later periods of scientific growth in the West known as the Renaissance and Enlightenment. Although historians have recently reappraised the Dark Ages, rebranding them 'early medieval', it's fairly obvious that post-Roman Britain and mainland Europe rapidly fell behind the scientific and technological advances of Middle- and Far-Eastern cultures. An obvious example is the Crab supernova of AD1054, which, despite being recorded in non-Western literature (hardly surprising, since for some weeks it was four times the brightness of Venus), has not been positively identified in any contemporary European chronicle. Is it feasible that no-one was observing the night sky over Europe, or was the 'guest star' simply too frightening to fit into their world picture?

The Catholic Church is the usual suspect for the lack of interest in scientific thought, but if anything the problem seems to have taken root several centuries earlier. Although there were Ancient Greek philosophers such as Democritus and Empedocles whom we might consider experimenters, early Christianity adopted much of the mysticism and philosophy of thinkers such as Pythagoras and Plato. The culture of the early medieval period was therefore ingrained with notions of archetypes and ideals: with a pre-arranged place for everything within a stultifying hierarchy, there was no need to seek deeper understanding of the physical world. What little astronomical observation there was served predominantly timekeeping and calendric purposes, such as finding the date of Easter, whilst being completely intertwined with astrology. Any attempt to understand developments in natural philosophy of the period must therefore take into account various facets of human thought that are today considered completely separate from the scientific method.

However, this isn't to say that the era was completely devoid of intellectual curiosity. The eighth century English mathematician Alcuin of York (a deacon with decidedly monastic habits) could be said to have discussed ideas in the proto-scientific mould, and in addition developed a teaching system intended to propagate rational thought. The pan-European interest in the methodologies we would recognise as key to science, such as detailed observation and careful experimentation, is usually traced to the translation of long-forgotten Ancient Greek texts from Arabic into Latin by such figures as the twelfth century Italian scholar Gerard of Cremona. Although Gerard wrote mathematical treatises and edited astronomical tables (no doubt at least in part for astrological use), the rapid dissemination of Ptolemy and other classical giants led to a chain reaction that should not be underestimated.

An early pioneer of the empirical process was Gerard's English near-contemporary and Bishop of Lincoln Robert Grosseteste, whilst the thirteenth century produced such luminaries as Dominican friar Albertus Magnus in Germany and the English Franciscan friar Roger Bacon, followed in the fourteenth century by fellow Franciscan William of Ockham, and so on. The fact that the translations of ancient texts made a rapid journey around Europe shows that Rome was not opposed to new ideas, although the arrest of Bacon in later life, possibly for writing unauthorised material, suggests that thought censorship was still very much the order of the day.

As can be noted, most of these men were either monks or senior clergy. The obvious point here is that nearly all of secular society was illiterate, which combined with the cost of books in the age before printing meant that only those within the Church had access to a wider world. I assume that this is an irony not lost on those who consider Western religion as antithetical to intellectual novelty (eat your heart out, Richard Dawkins!) Counter to this stereotype, there does seem to have been a form of academic competition between monastic orders, in addition to which chemical and biological experimentation was conducted in fields ranging from the production of manuscript pigments to herbal medicine.

Binham Priory, Norfolk, England
The eleventh century equivalent of a scientific laboratory: the remains of Binham Priory in Norfolk, UK

Of course by the eleventh and twelfth centuries the notion of formally inculcating knowledge, including elements of natural philosophy, was dramatically enhanced via the first universities. Starting in Italy, the new foundations removed the monopoly of the monastic and cathedral schools, thus setting into motion, if somewhat hesitantly, the eventual separation of scientific learning from a religious environment (and of course, Church decree).

So how far can it be argued that, from a scientific viewpoint, the European Dark Ages weren't really that dark after all? Compared to the glories of what was to follow, and to a lesser extent the tantalising fragments we know of Ancient Greek thought, the period was certainly a bit grey. But there were definitely a few candles scattered around Europe, whilst such hoary old clichés as everyone believing the Earth to be flat should long since have been consigned to the dustbin of history, Monty Python notwithstanding. So if you are planning to write a history of science, why not undertake a bit of original research and find out what was happening during that much-maligned millennium? The truth, as always, is much more interesting than fiction.

Monday 26 September 2011

Full steam ahead: is there a future in revisiting obsolete science and technology?

Several weeks ago I was looking towards Greenwich in south-east London when I spotted an airship. A small one to be sure, but nevertheless a reminder of the time when Britain not only had a large manufacturing industry but in some sectors was even in the vanguard of technological development. The blimp in question was probably the 39 metre-long Goodyear Spirit of Safety II, which although nominally an American craft was assembled at RAF Cardington in Bedfordshire. I visited this site about 20 years ago and managed to go inside one of its two enormous air sheds, once home to such giants of the skies as the 237 metre-long R101. Sadly, these days the hangars are mostly used for filming and rock band rehearsals, and recently a housing estate was built inside the base perimeter. However, it's not all a case of rust and nostalgia, as Hybrid Air Vehicles Ltd are making use of Cardington in a joint project with the aeronautical heavyweight Northrop Grumman to build three unmanned hybrid airships. The 76 metre-long Long Endurance Multi-Intelligence Vehicle or LEMV - a classic boffin-flavoured acronym, hurrah - is being developed for a US military surveillance role. The company's future plans include eco-tourist airships, so are we seeing the glimmer of an airship renaissance?

On the whole this seems rather unlikely. In the 1980s Cardington was home to Hybrid Air Vehicles' predecessor Airship Industries, one of whose Skyship 500s appeared in the James Bond film A View to a Kill (the same design as seen in my circa 1984 photograph below). Unfortunately the innovations in materials and engines weren't enough to save the company from liquidation.

An air display at RAF Henlow, Bedfordshire - late 1970s
Although Hybrid Air Vehicles has grandiose plans for vehicles up to twice the LEMV's length, it's doubtful there will be a near-future resurgence in long-haul civilian airships. After all, even during their interwar heyday a transatlantic ticket on the likes of the Hindenburg cost more than double that of an ocean liner. Therefore, military usage and cargo delivery to aircraft-unfriendly terrain are a far safer bet from an economic viewpoint, despite the obvious advantages of aerial craft less reliant on fossil fuels. Indeed, there are even schemes afoot in several countries to develop solar-powered cargo airships.

Another UK-based proposal that seeks to put new life into old technology sadly appears to have rather less chance of success. The Class 5AT (Advanced Technology) Steam Locomotive Project plans to develop a steam engine capable of matching current main line high-speed stock. After ten years' effort, the team have put together a very detailed study for a 180 km/h locomotive, but as you might expect there hasn't exactly been a rush of investors. The typical short-term mentality of contemporary politicians and shareholder-responsive industry means few appear willing to support the initial start up costs, especially when Britain's current rail network operates so wonderfully (hint: that's called irony). If you think any of this sounds familiar, check out the post on boffins and their pipe dreams, where the science and technology were frequently superlative and the economics frankly embarrassing.

Then again, although a resurgence in motive steam might appear to have little relevance outside of alternative history novels, it's worth remembering that it was only when James Watt began repairing a working model of Thomas Newcomen's atmospheric pumping engine in 1763 - a design by then half a century old - that the development of the modern steam engine truly got under way.

The steam car has even less chance of a reawakening, although there appear to be good engineering reasons behind this, namely difficulty coping with the constantly-changing speeds required in urban driving. As it is, steam on the road seems to have mostly novelty value these days. A good example is the British Steam Car, winner in 2009 of the Guinness World Land Speed Record for a steam-powered car. It may have a dull name, but with a Batmobile aesthetic and a top speed of 225 km/h, the world's fastest kettle has certainly proved the point that steam needn't mean slow.

Somewhat less romantic and rather more pragmatic, NASA has returned to tried and tested capsule technology for their space shuttle replacement, Orion. The "Apollo on steroids" design is now accompanied by the Space Launch System or SLS (another uninspired moniker), which refers to a rocket slightly taller than the Saturn V that will have second stage engines developed from those used on this famous forebear - which incidentally last flew in 1973.

But reappraising old science isn't restricted to high technology, as can be seen by the resurgence of biotherapeutic methods in the past few decades. Most people have heard of the fish pedicure fad but the rather more important use of disinfected maggots to clean flesh wounds has received NHS support following some years of trials in the USA. A 2007 preliminary assessment even showed success using maggot therapy to treat wounds infected with the 'superbug' MRSA. Yet the technique is known from Renaissance Europe, Mesoamerican and Australian Aboriginal cultures: sometimes low-tech really could be the way forward.

Possibly that's the key to whether these revitalisations are likely to succeed: if the start-up costs are relatively low then there's a good chance of adoption. Otherwise, the Western obsession with the now makes it all too easy to dismiss these projects as idealistic dreams by out-of-touch eccentrics. Not that new technologies have always followed the rational approach when initially developed anyway, since historical backstories have probably been as much a driving force as objective analysis. I guess we're back again to disproving that old Victorian notion of continuous upward progression, but then as the philosophically-minded would say, we do live in postmodern times.

Thursday 25 August 2011

Something sinister: the left handedness of creation

I'm embarrassed to admit it but the first home-grown science experiment I remember undertaking was to explore the validity of astrology. Inspired by the Carl Sagan book and television series Cosmos, I decided to see for myself if, after centuries of practice by millions of adherents, the whole thing really was a load of bunk. So for three months I checked the predictions for my star sign every weekday and was amazed at the result: I found them so vague and generalised that I could easily find something in my life each day to fit the prediction. A sort of positive result that negates the hypothesis, as it were. As a young adult I encountered people with a rather less sceptical frame of mind, and if anything their astrological information only reinforced my earlier results. As my birthday is on the 'cusp' between two star signs, I found that about half the astrologically-inclined viewed me as a typical sign A whilst the other half dubbed me a typical sign B. At this point, I think I can rest my case...

Of course, astrology is a very old discipline so it's no wonder it's pretty easy to see the cracks. Over the past forty or so years there have been several generations of authors with a slightly more sophisticated approach, paying superficial lip service to the scientific method. Although their methodology fails due to the discarding or shoehorning of data, this hasn't stopped the likes of L. Ron Hubbard from making a mint. To this end, I decided to generate a hypothesis of my own and test it to a similar level of scrutiny as their material. Thus may I present my own idea for consideration: evidence suggests that our universe was created by an entity with a penchant for a particular direction, namely left-handed / anti-clockwise. Here are three selected cases to support the hypothesis, although I cannot claim them to have been chosen at random, for reasons that will soon become obvious.

The first argument: in the 1950s and 60s physicists found that the weak nuclear force or interaction, responsible for radioactivity, does not function symmetrically. Parity violation, to be technical about it, means for example that the almost massless particles called neutrinos are only ever observed to be 'left-handed', spinning anticlockwise relative to their direction of travel. Like many other fundamental parameters of our universe, no-one has an explanation of why this is so: it just is.

The second argument: amino acids are usually described as the building blocks of proteins, but besides those used to make life on Earth, further varieties are found in meteorites. It has been theorised that life was made possible by meteorites and comets delivering these chemicals to the primordial Earth, and that radiation encountered on the journey may have affected them. Whereas amino acids synthesised in laboratories contain approximately equal amounts of the mirror-image (i.e. left- and right-handed) forms, nearly all life is constructed from the left-handed, or L-amino acids.

The third argument: a new catalogue of observations using the latest generation of telescopes indicates that, from our viewpoint, most galaxies rotate anticlockwise about their cores. Of course it's been a long time since humans believed the Earth to be the centre of the Universe, but even so, this is a disturbing observation. We now consider our planet just an insignificant component of the second-largest galaxy within a small group at one end of a supercluster. In which case, why would galactic rotation be so far removed from random?

So how do these arguments stand up to scrutiny, both by themselves and collectively? Not very well, I'm afraid. Working backwards, the third argument shows the dangers of false pattern recognition: our innate ability to find patterns where none exist or to distort variations into a more aesthetic whole. In this particular case, it appears that the enthusiasts who classified the galaxies' direction of rotation were mistaken. Put it down to another instance of the less than perfect powers of perception we humans are stuck with (thanks, natural selection!)
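
To show how little bias it takes, here's a toy simulation - entirely my own invention, with made-up error rates - in which galaxies genuinely rotate half-and-half each way, yet a slightly lopsided rate of human misclassification conjures up an apparently significant excess of anticlockwise rotators.

```python
# Toy simulation: a 50/50 split plus a small asymmetric labelling error
# produces a spurious 'preferred' rotation direction in the reported data.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
true_anticlockwise = rng.random(n) < 0.5      # reality: no preferred direction

# Suppose classifiers mislabel anticlockwise galaxies 2% of the time,
# but clockwise ones 5% of the time (a small, asymmetric human error).
flip_prob = np.where(true_anticlockwise, 0.02, 0.05)
flipped = rng.random(n) < flip_prob
reported_anticlockwise = true_anticlockwise ^ flipped   # XOR applies the mislabelling

print(f"True anticlockwise fraction:     {true_anticlockwise.mean():.3f}")
print(f"Reported anticlockwise fraction: {reported_anticlockwise.mean():.3f}")
```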

The second argument initially bears up somewhat better, except that I deliberately ignored all of the biological evidence against it. The best known example is probably DNA itself, which is primarily a right-handed (clockwise) helix. This seems to be a fairly common problem in the history of science, with well-known cases involving famous scientists such as Alfred Wegener, whose continental drift hypothesis was a precursor of plate tectonics but who deliberately ignored unsupportive data.

The first argument stands by itself and as such cannot constitute a pattern (obviously). Therefore it is essentially worthless: you might as well support the left-handed notion by stating that the planets in our solar system orbit the sun in a counter clockwise direction - which they do, unless you happen to live in the Southern Hemisphere!

Full moon viewed via a Skywatcher 130PM telescope
Once again, our ability to find patterns where none exist, or as with the rotation of galaxies, to misconstrue data, leaves little doubt that our brains are naturally geared more towards the likes of astrology than astronomy. Pareidolia, the phenomenon of perceiving a pattern in a random context, is familiar to many via the man in the moon. However, there are varying degrees to this sort of perception; I confess I find it hard to see the figure myself (try it with the image above, incidentally taken through my 130mm reflector telescope earlier this year – see Cosmic Fugues for further information on genuine space-orientated pattern-making).

Of course, these skills have at times combined with innate aesthetics to aid the scientific enterprise, from the recognition and assembly of Hominin fossil fragments from the Great Rift Valley to Mendeleev's element swapping within the periodic table. However, most of the time we need to be extremely wary if a pattern seems to appear just a little bit too easily. Having said that, there still seem to be plenty of authors who cobble together a modicum of research, combine it with a catchy hook and wangle some extremely lucrative book and television documentary deals. Now, where’s a gullible publisher when you need one?

Monday 1 August 2011

Weather with you: thundersnow, hosepipe bans and climate punditry

I must confess to not having watched any of the current BBC series The Great British Weather, since (a) it looks rubbish; and (b) I spend enough time comparing the short-range forecast with the view outside my window as it is, in order to judge whether it will be a suitable night for astronomy. Since buying a telescope at the start of the year (see an earlier astronomy-related post for more details) I've become just a little bit obsessed, but then as an Englishman it's my inalienable right to fixate on the ever-changeable meteorology of these isles. If I think that there is a chance of a cloud-free night I tend to check the forecast every few hours, which for the past two months or so has proved almost uniformly disappointing; as a matter of fact, the telescope has remained boxed up since early May.

There is a grim pleasure for UK-based weather watchers in noting that when a meteorological source states it is currently sunny and dry in your location, it may in fact be raining torrentially. We all realise that forecasting relies on understanding a complex series of variables, but if they can't even get the 'nowcast' correct, what chance do the rest of us have?

So just how have the UK's mercurial weather patterns affected the science of meteorology and our attitude towards weather and climate? As far back as 1553 the English mathematician and inventor Leonard Digges included weather lore and descriptions of phenomena in his A General Prognostication. Since then, British scientists have been in the vanguard of meteorology. Isaac Newton's contemporary and rival Robert Hooke may have been the earliest scientist to keep systematic meteorological records, as well as inventing several associated instruments. Vice-Admiral Robert FitzRoy, formerly captain of HMS Beagle (i.e. Darwin's ship), was appointed the first Meteorological Statist to the Board of Trade in 1854, which in today's terms would make him the head of the Met Office; he is even reputed to have coined the term 'forecast'.

Modern science aside, as children we pick up a few snippets of the ancient folk learning once used to inculcate elementary weather knowledge. We all know a variation of "Red sky at night, shepherd's delight; red sky in the morning, shepherd's warning", the mere tip of the iceberg when it comes to pre-scientific observation and forecasting. But to me it looks as if all of us in ever-changeable Britain have enough vested interest in the weather (once it was for crop-growing, now just for whether it is a sunglasses or umbrella day – or both) to maintain our own personal weather database in our heads. Yet aren't our memories and lifespans just too short to allow us a genuine understanding of meteorological patterns?

One trend that feels real to me is that those 'little April showers' I recall from childhood (if you remember the song from 'Bambi') are now a thing of the past, with April receiving less rainfall than June. But it is just that - a feeling: I have not researched whether there has been a genuine change over the past three decades. Unfortunately, a combination of poor memory and spurious pattern recognition means we tend to over-emphasise 'freak' events - from thundersnow to the day it poured down at so-and-so's June wedding - at the expense of genuine trends.

For example, my rose-tinted childhood memories of six largely rain-free weeks each summer school break centre around the 1976 drought, when my brother had to be rescued from the evil-smelling mud of a much-reduced reservoir and lost his shoes in the process. I also recall the August 1990 heat wave as I was at the time living less than 20 km from Nailstone in Leicestershire, home of the then record UK temperature of 37.1°C. In contrast, I slept through the Great Storm of 1987 with its 200+km/h winds and don’t recall the event at all! As for 2011, if I kept a diary it would probably go down as the 'Year I Didn't Stop Sneezing'. City pollution and strong continental winds have combined to fill the London air with pollen since late March, no doubt much to the delight of antihistamine manufacturers.

An East Anglian beach, August 2008


Our popular media frequently run stories about the latest report on climate change, either supporting or opposing certain hypotheses, but rarely compare it to earlier reports or long-term records. Yet even a modicum of research shows that in the Nineteenth Century Britain experienced a large variation in weather patterns. For example, the painter J.M.W. Turner's glorious palette was not all artistic licence, but almost certainly influenced by the volcanic dust-augmented sunsets following the 1815 Tambora eruption. It wasn't just painting that was affected either, as the UK suffered poor harvests the following year whilst in the eastern United States 1816 was known as 'Eighteen Hundred and Froze to Death'.

The influence of the subjective on the objective doesn't sound any different from most other human endeavours, except that weather professionals - meteorologists, climatologists and the like - must also contend with bias in their work. Ensemble forecasting, in which a model is run repeatedly from slightly different initial conditions and the results combined into an averaged outcome, has been shown to be a more accurate method of prediction. In other words, it is a form of scientific bet-hedging!
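
To make the idea a little more concrete, here's a minimal Python sketch of the principle (using a toy chaotic model rather than anything the Met Office actually runs - the model, names and numbers are all my own invention): the same model is run many times from very slightly perturbed starting conditions and the results are pooled.

```python
import random

def toy_model(x, steps=20, r=3.9):
    """Iterate the logistic map - a stand-in for a chaotic forecast model."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

def ensemble_forecast(x0, members=50, noise=1e-3):
    """Run many forecasts from slightly perturbed initial conditions
    and return the ensemble mean and spread."""
    runs = [toy_model(x0 + random.uniform(-noise, noise))
            for _ in range(members)]
    mean = sum(runs) / len(runs)
    spread = max(runs) - min(runs)
    return mean, spread

if __name__ == "__main__":
    mean, spread = ensemble_forecast(0.3)
    print(f"Single run:    {toy_model(0.3):.3f}")
    print(f"Ensemble mean: {mean:.3f} (spread {spread:.3f})")
```

Usefully, the spread between the ensemble members also gives a measure of confidence: when the runs diverge wildly, the averaged forecast should be taken with a large pinch of salt.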

Recent reports have shown that once-promising hypotheses involving single factors such as sunspot cycles cannot account for the primary causes of climate change, either now or in earlier epochs. It seems the simple answers we yearn for are the prerogative of Hollywood narrative, not geophysical reality. One bias that can seriously skew data is the period covered by a report. It sounds elementary, but we are rarely told that a difference of even a single year in the start date can significantly change the outcome as to whether, for example, temperature is increasing over time. Of course, scientists may deliberately publish results only for periods that support their hypotheses (hardly a unique trait, if you read Ben Goldacre). When this is combined with sometimes counter-intuitive predictions – such as that a gradual increase in global mean temperature could lead to cooler European winters – is it any wonder we non-professionals are left to build our level of belief in climate change from a muddle of personal experience, confusion and folk tales? The use of glib phrases such as 'we're due another glaciation right about now' doesn't really help either. I'm deeply interested in the subject of climate change and I think there is serious cause for concern, but the data is open to numerous interpretations.
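
To show just how much the choice of start date can matter, here's a small Python sketch using entirely invented annual temperatures (emphatically not real records): shifting the start by a single year is enough to flip the sign of the fitted trend.

```python
import numpy as np

# Invented annual mean temperatures (°C), purely for illustration -
# a single hot outlier at the start is enough to flip the fitted trend.
years = np.arange(2000, 2011)
temps = np.array([12.3, 10.9, 11.0, 11.1, 11.0, 11.2,
                  11.1, 11.3, 11.2, 11.4, 11.3])

for start in (2000, 2001):
    mask = years >= start
    slope = np.polyfit(years[mask], temps[mask], 1)[0]  # °C per year
    print(f"Trend from {start}: {slope * 10:+.2f} °C per decade")
```

Run it and the 'trend' from 2000 comes out as a gentle cooling, while the trend from 2001 is a marked warming - same data, one year's difference.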

So what are we left with? (Help: I think I'm turning into Jerry Springer!) For one thing, the term 'since records began' can be about as much use as a chocolate teapot. Each year we get more data (obviously) and so each year the baseline changes. Meteorology and climatology are innately complex anyway, but so far both scientists and our media have comprehensively failed to explain to the public just how little is known and how even very short-term trends are open to abrupt change (as with the notorious 'don't worry' forecast the night of the 1987 Great Storm). But then you have only to look out of the window and compare it to the Met Office website to see we have a very long way to go indeed…

Saturday 25 June 2011

Amazed rats and super squirrels: urban animal adaptations

If I were the gambling sort I might be tempted to bet that most of the larger fauna in my neighbourhood is, like much of London's, restricted to very few species: namely feral pigeons, rats, mice and foxes. The most interesting visitor to my garden is, judging by the size, a female common toad - the wondrously named Bufo bufo - which makes an appearance every couple of years to feast on snails and leave a shell midden behind.

After spotting a small flock of Indian ring-necked parakeets in our local park, I decided to look at the adaptations wildlife has undergone whilst living in an urban environment. After intermittently researching this topic over a month or so, I was surprised to find the BBC Science News website posting an article along similar lines. Synchronicity? I decided to plough ahead, since the subject is too interesting to abandon and I've got my very own experimental data as well, although it's hardly 'laboratory conditions' material.

Your friendly neighbourhood Bufo bufo
It's easy to see why animals are attracted to cities: the ever-present food scraps; the warmer microclimate; and of course plenty of places to use for shelter (my nickname for railway embankments is 'rodent condominiums'). Even the mortar in walls seems to offer smaller birds a mineral supplement (calcium carbonate) and/or mini-gastroliths (a.k.a. stomach grit), judging by the way they peck at it. Then there are also plentiful sources of fresh water, which in my neighbourhood range from birdbaths and guttering to streams and reservoirs. Who can blame animals for coming in from the cold? In the case of the London fox they have been arriving since the 1930s, whilst rodents were probably rubbing their paws together in glee as the first cities were being built many millennia ago in the Fertile Crescent.

There seem to be several obvious behavioural changes that result from urban adaptation, particularly when it comes to judging humans. I have found an astonishing lack of wariness in mice, squirrels and foxes, even in daylight, although rats are usually more circumspect. There is an increasing number of stories concerning foxes biting sleeping humans, including adults, even during the day. I was informed by a Clapham resident of how, having chased a noisy fox down the street at night, he was then followed back to his house by the animal, which only stopped at the garden gate. Clearly there is some understanding of territorial boundaries here, too. This is supported by the behaviour of foxes in my area, which will happily chase cats in the local allotments even during the day, but once the cat emerges onto the street, the fox doesn't follow. Perhaps they have some understanding of the connection between cats and humans?

City fauna has become more opportunist, prepared to scavenge meals from the enormous range of foodstuffs available in an urban environment, which around my area seems mostly to consist of fried chicken carcasses, usually still in the box. Even birds of prey such as the Red Kite (no small fry, with up to a one-and-three-quarter-metre wingspan) have recently been seen taking food off unwary children. This follows a period of finding food deliberately left out for them, so that an association forms between people and food. This, then, is a two-way connection, with humans helping to generate changes in urban fauna by their own actions. Less time spent foraging means urban animals expend less physical energy, so there may be a feedback loop at work here: if surplus energy can aid higher cognition, discrimination of humans and the urban environment improves, and thus even less time is required to source food. A facile conclusion perhaps, but read on for a possible real-life example.

My own experiments on grey squirrels took place about ten years ago, probably at least partially inspired by a television lager advertisement. It started when I found that my bird feeder was being misappropriated by a couple of squirrels. My first idea was to add radial spikes around the bird feeder using garden canes, but the squirrels were more nimble than I had thought, so after adding more and more spikes to create an object reminiscent of the Spanish Inquisition, I had to change tack. I next suspended the bird feeder on the end of a long rod that was too thin for the squirrels to climb along, but they managed to dislodge it at the wall end, causing it to drop to the ground for easy consumption. Rounds one and two to the pesky Sciurus carolinensis. My final design was a combination of spikes on the approach to the rod, the rod itself, and then the feeder suspended from a long wire at the end of the rod. I went off to work with an air of smug satisfaction that no mere rodent was going to get the better of me, only to find on my return that somehow the squirrels had leapt onto the rod and eaten through the wire!

One point to consider is that the bird food itself was in a transparent perspex tube, which is totally unlike any natural material. So when it comes down to it, are some animals, at least mammals and birds, over-endowed with grey matter for their usual environment, only utilising more of their potential when faced with artificial materials? Or do the challenges and rewards of being an urban sophisticate cause an increase in neurological activity, or even actual physiological change? The latter gets my vote, if only because of the evidence that supports this in human development. After all, the archaeological record suggests that modern humans and our ancestral/cousin species experienced an incredibly slow rate of technological development, with rapid increases only coming after disastrous setbacks such as the population bottleneck around 70,000 years ago, probably following a decade-long volcanic winter.

Experiments using rats in mazes over the past eighty years seem to agree with this thesis. However, there are clearly limits to animals' ability to learn new cognitive skills if they don't have time for repeated interactions, which may explain why most young foxes' first encounter with vehicular traffic is also their last. As for the BBC Science News report I mentioned earlier, research shows that birds with comparatively larger brain to body size ratios are those found to thrive in an urban environment. So it isn't all nature red in tooth and claw after all, but at least on occasion a case of brain over brawn for the city slickers.

Finally, I ought to mention a series of scare stories over the past year about another urban coloniser that seems to be returning after half a century's absence, namely the Cimicidae family of bloodsucking insects. With many of us using weaker laundry detergents at lower water temperatures, some researchers are predicting an imminent global pandemic of these unpleasant critters. So please be careful at night, and don't let the bed bugs bite!

Saturday 28 May 2011

Amazing animalcules: or how to create a jungle in a coffee jar

With frequently cloudy night skies preventing astrophotography of Saturn (even a few clouds are enough to ruin seeing, since they reflect the light pollution over London) I decided to head in the other direction, so to speak, and investigate the world of the very small. Last year my daughters and I had mixed success raising a batch of tadpole shrimp, a.k.a. Triops longicaudatus. Having seen the creatures lay their eggs in the adult tank, I kept some of the substrate in case we wanted to try round two this year. So, with some warmer weather recently, I thought last week would be a good time for Triops Trek: the Next Generation.

Some enthusiasts - I can't really call them owners/keepers for such short-lived 'pets' - sift the half-millimetre diameter triops eggs from their tank substrate as if gold panning, but with the coral sand I used that frankly looked far too much like hard work. Therefore I just added about a 5mm deep layer of last year's substrate into a hatching tank of deionised water and hoped for the best. And...

...Success! Out of the thirty-five or so that hatched, about half are still alive a week later, which surprised the hell out of me. The only problem is that the main tank is really only big enough for five or six adults. That is, if they survive the transition and don't fall prey to problems with osmotic pressure, pH balance, the nitrification cycle, etc., ad nauseam.

Meanwhile, a bit of research later, I discovered that each adult female (and most are) T. longicaudatus lays between 60 and 200 eggs per clutch. With up to one clutch a day, that's potentially an enormous number of eggs in my substrate. Looking at the nursery tank today I could see about sixty unhatched eggs stuck to the sides just above the water line, the latter having dropped slightly due to evaporation. All I have to do now is find a way of scraping them out...
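
Out of curiosity, a quick back-of-the-envelope sum (with the number of females and the laying period entirely assumed on my part - I didn't count last year's adults) gives an idea of the scale of the problem:

```python
# Rough, illustrative estimate of how many eggs could be lurking in the substrate.
females = 15                     # assumed number of laying females last year
laying_days = 30                 # assumed length of the adult laying period
eggs_low, eggs_high = 60, 200    # eggs per clutch, at roughly one clutch a day

low = females * laying_days * eggs_low
high = females * laying_days * eggs_high
print(f"Somewhere between {low:,} and {high:,} eggs")
```

Even with those made-up figures, the answer runs well into the tens of thousands - rather more than a tank for five or six adults can cope with.
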
Triops longicaudatus A.K.A. tadpole shrimps
Back to the current batch. The first problem was what to feed the nauplii (hatchlings, for the uninitiated), as for the first few days they are too small to manage the shrimp food left over from last year's kit and I certainly wasn't going to bother buying anything. Luckily, last year I had found grow-your-own-infusoria instructions and so had collected dried leaves from the local park during winter. So here's my recipe for happy hatchlings: collect some dead leaves, the more spore-covered the better; tear them into small pieces; soak them in rain or mineral water for three or four hours in a clean jar (e.g. a coffee jar); tip out the water and dry the leaves; then add them back to the jar with fresh rain or mineral water and leave for three to four days. Voila - infusoria in abundance!

For those like me not in possession of a microscope, the best way to observe your new ecosystem (a slight Dr Frankenstein moment) is at night. Place the jar against a dark background, turn off all the lights and view the contents with a torch and a magnifying glass of at least three times magnification. You'll be amazed at all the activity, especially the spiralling dance of the bdelloid rotifers. These half-millimetre creatures are extremely common but at this size it's perhaps not surprising that I've never noticed them before. There are hundreds of species, all of which seem to be asexual (or entirely female, depending on your viewpoint). But even these are just the tip of the diversity iceberg that is the world of the neo-microscopic. NASA has been experimenting on other similar-sized denizens, tardigrades, which can survive exposure to the vacuum, extreme temperatures and radiation of space. Otherwise known as water bears (despite their eight legs), tardigrades look more like something off The Muppet Show than Doctor Who, but research has shown they can survive hundreds of times the lethal X-ray dose for humans, so perhaps long-duration spaceflights in the future will in some way benefit from the current endurance-testing of these remarkable little animals.

Back to the home-grown micro-jungle. Having reared a jarful of infusoria, I happily injected a few siphons' worth into the triops hatching tank. And then I felt a bit uneasy. I had heard that some freshwater aquarium owners breed triops just as food for their fish - perish the thought. And yet here I was, happily throwing the seals into the shark tank, as it were. Last year I had allowed a fairy shrimp and a clam shrimp to go to their doom, along with countless daphnia (water fleas). So why was I worried now? Is there a threshold above which I consider a species should not become food (triops, obviously), whilst those below it (clearly daphnia, and presumably bdelloid rotifers) can be eaten without qualms?

As a Westerner, I haven't grown up with Buddhist or other Eastern notions concerning animal welfare, ranging from veganism to reincarnation (although the latter clearly has self-interest at its core). Morals and empathy have a place in science too, and I consider pharmaceutical experimentation on animals a necessary evil not to be thought about too often. But with the home-grown infusoria, was it size-based vulnerability or just cuteness that made me uneasy about how readily I had bred one animal as lunch for another? I suppose it's easy to argue that daphnia carry the stigma of the name 'flea' with all its connotations, but the triops kit literature takes an interestingly dismissive approach to the associated fauna: it states that they won't live long (compared to triops, that is), but fails to mention that a primary reason for this is that the triops will hoover up the smaller species in next to no time.

Perhaps it was nothing more than the graceful, balletic movements of the rotifers that gave me pangs of guilt about serving them at the Café de Triops, but next time you pass a small puddle of dirty rainwater why not spare a moment's thought for the astonishing animalcules that live, largely unobserved, all around us? It really is a jungle out there!

Friday 1 April 2011

Moonage daydreams: lunacy, conspiracy and the Apollo moon landings

It's astonishing to think that in less than two weeks' time it will be half a century since Yuri Gagarin slipped the surly bonds of Earth in Vostok 1. Although a generation has grown up since the end of the Cold War, any study of early astronautics cannot exclude a major dollop of politics. This is particularly true of the Apollo moon landing programme and President Kennedy's commitment to achieve this goal by 1970. Now that the missions are as much a part of history as a fading memory, a small but significant number of theorists doubt their veracity. But are they just the same crackpots/misguided types (delete as required) who claim to have been abducted by aliens, or is there anything more concrete to go on?

A wide range of conspiracy stories has been circulating since the (now late) Bill Kaysing, a former rocket engine company employee, self-published his 1974 opus We Never Went to the Moon: America's Thirty Billion Dollar Swindle. Of course conspiracy was very much in the American psyche during that period: the Watergate break-in had occurred six months prior to the final moon landing mission in December 1972, whilst President Nixon's resignation followed the release of the crucial audio tape evidence in August 1974. In a sense, the world was ready for Kaysing's theories, but can an impartial assessment show how accurate they are? Much of his thesis can be dismissed with a little application of the scientific method: the alleged problems with photographs and movie footage, such as disappearing cross-hairs or incorrect shadows and lighting, are easy to resolve. In another vein, attributing the waving of the US flag on the lunar surface to wind in an Earth-based moon simulator is just foolish. Why would such amateur mistakes occur if an elaborate cover-up were in place?

However, new evidence recently made public from former Soviet archives hints that the conspiracy theorists may be on to something after all. Telemetry tapes from the USSR's land- and ship-based deep space network suggest that there was an additional signal hidden, via frequency division multiplexing, underneath transmissions to the Apollo craft. This implies that what actually went to the moon were pairs of empty spacecraft: a robot version of the lander (or LM); and a command module (CSM) with an automated radio system. This latter set-up would isolate the hidden transmissions received from Earthbound astronauts and beam them back to fool the world into thinking the spacecraft was manned. The crew themselves would divide their time between Apollo mock-ups built inside a weightless training aircraft or 'vomit comet' (ironically also the technique used in the 1995 film Apollo 13) and a recreation of the lunar surface in the infamous Area 51 complex in Nevada. Of course the associated activities of sending robot sample-return missions to bring back massive quantities of moon rock (the same method used by the Soviet Luna missions from 1970 onwards) would presumably have eaten so deeply into NASA's budget as to be responsible for the cancellation of the last three moon-landing missions (or fake missions, as perhaps we should refer to them).

The obvious question is why go to such lengths when the programme's fantastic achievements – the rockets, spacecraft, and their entire cutting-edge infrastructure - had already been built? Again, the USSR can add something to the picture. Fully six months before the Apollo 11 flight, the Soviet Union officially announced it was pulling out of the moon race and would not even attempt a manned flight to the moon. Then the month after Apollo 11's splashdown, the Soviets launched Zond 7, an unmanned variant of their Soyuz craft (a design still in use today to ferry crew to the International Space Station), on a circumlunar trajectory. What is interesting is that the craft carried 'special radiation protection'. Had they found a fundamental obstruction to a manned lunar landing mission? Less than one month prior to Apollo 11, when you would have thought NASA would have been completely focussed on that mission (and bearing in mind the massive amount of unpaid overtime required to maintain schedules), the US launched a pigtail monkey called Bonny into orbit aboard Biosat 3. This almost unknown mission was terminated more than twenty days early, with Bonny dying eight hours after landing. What was so urgent it needed testing at this crucial time? In a word: radiation.

The Van Allen Belt consists of two tori (basically, doughnuts) of high-energy charged particles trapped by the Earth's magnetic field. After its existence was confirmed by the USA's first satellite, Explorer 1, continuous observation proved that the radiation intensity varies over time as well as space. Unfortunately, 1969-1970 was a peak period in the cycle, in addition to which it was accidentally augmented by artificially-induced radiation. In 1962 the USA detonated a 1.4 megaton atomic weapon at an altitude of 400 kilometres. Although by no means the largest bomb used during four years of high-altitude testing, Operation Starfish Prime generated far more radiation than any similar US or USSR experiment, quickly crippling a number of satellites, including some belonging to the Soviets.

The theory holds that this additional radiation belt would have had a profound effect on manned spacecraft travelling beyond low Earth orbit. An additional whammy would be the danger of deep-space radiation once away from the protection of the geomagnetic field. The BBC's 2004 docudrama series Space Odyssey: Voyage to the Planets showed this quite nicely when the interplanetary Pegasus mission lost its doctor to cosmic radiation. There is also speculation that the impact of cosmic rays on the lunar surface generates a spray of secondary particles that would prove hazardous to astronauts. Although it's not clear if the Russians were sending animals into space during the late 1960s as per the Biosat series, Bill Kaysing claimed he had been given access to a Soviet study that recommended blanketing lunar surface astronauts in over a metre of lead!

The Apollo missions of course utilised what was then cutting-edge technology, but even so the payload capacity of the Saturn V rocket did not allow for spacecraft with anything but the lightest of construction techniques. Indeed, the Apollo lunar module had outer coverings of Mylar-aluminium alloy – a substance that appears to be a high-tech version of baking foil. In this instance it seems rather apt, in the sense that it may well have led to self-basting astronauts, had they actually been on board. In all seriousness, the heaviest of the fuelled-up CSM-LM configurations was around 40 tonnes (for Apollo 17), only five tonnes short of the maximum lunar transfer trajectory capacity. Since it took a 111-metre tall Saturn V to launch these craft, it is clear that lead shielding wasn't really an option.

Some conspiracy theorists have argued that Stanley Kubrick, coming directly from four years of making 2001: A Space Odyssey, was involved in the hoax filming, but this seems rather ridiculous (although another irony is that 2010: Odyssey Two director Peter Hyams had earlier made the Mars mission conspiracy film Capricorn One, the film's hardware consisting of Apollo craft...) A far more plausible candidate to my mind is Gene Roddenberry, the originator of Star Trek. The Apollo 8 circumlunar flight over Christmas 1968 (including a reading from Genesis, no less), the 'happy' (from a ratings point of view) accident of Apollo 13, even the use of America's first rocket-launched astronaut Alan Shepard as commander of Apollo 14, all hint back to the homely yet patriotic heroics of Kirk and co. As for the photographic effects crew, my money would be on one of 2001's effects supervisors, namely the engineering genius Douglas Trumbull. Today even amateurs like myself can attempt to replicate their brilliant work: here's my take on Armstrong and Aldrin, done many moons ago, courtesy of Messrs Airfix and Photoshop (shame you can't see the cross-hairs at this size):

Apollo lunar lander
As for how all those involved have managed to maintain silence over the decades, Neil Armstrong's publicity shyness is about the only example I can think of that bolsters the argument. Except there is also the curious case of Britain's own "pretty far out" David Bowie, who somehow seems to have been in the know. It sounds bizarre, but if you examine his oeuvre from Space Oddity onwards ("your circuit's dead, there's something wrong") to the film The Man Who Fell to Earth (complete with a cameo from Apollo 13 commander James Lovell as himself) you begin to find a subliminal thematic thread. For me, these culminate in the 1971 song Moonage Daydream, with the deeply conspiratorial lyrics "Keep your mouth shut, you're squawking like a pink monkey bird...Don't fake it baby, lay the real thing on me..."

Couldn't have put it any better myself!

Friday 18 March 2011

Animal farm: agricultural revolutions happening in your own garden

Various forms of symbiosis - the mutual interactions between species - have long been recognised, not least the hundreds of microorganisms that co-exist within and upon us Homo sapiens. But going beyond mere symbiosis, there appear to be examples of interactions between species that are nothing less than astonishing. Following a recent spate of television documentaries on the Neolithic period, the time when humans started to farm first animals and then crops, it seemed a good excuse to look at examples of other animals that also farm. Although mostly restricted to arable farmers (technically speaking, fungi culturists) there is also one fascinating case of pastoralism.

The best-known examples are probably insects, with many species of leaf-cutter ant and termite known to farm strains of fungi as a food source. It has been assumed (although I'm not sure on what basis, since farming activity would presumably be invisible in the fossil record) that these insects developed their sophisticated social structures, including caste systems, prior to the adoption of farming. This is the direct reverse of the earliest human farmers, for whom the first cities of the Near East, for example, arose after livestock domestication. It's difficult to see how insects started the process, and it raises the interesting question of whether farming offers any superiority over non-farmers of similar genera. After all, in human cultures it appears that early farmers had to work far harder for their daily bread than the gatherer-hunters who preceded them, the latter being a way of life that continues in isolated pockets even to this day. So it may not be an improvement on non-farming lifestyles - just different. Another nail in the coffin for any followers of the Victorian notion of progress…

Staying with insects, a diverse group of over three thousand beetle species cultivates the ambrosia fungus for food, in a relationship thought to stretch back tens of millions of years. Unlike ants and termites, these beetle species do not all live in large, strictly-organised colonies. Heading for wetter environments, marsh snails have also been found to cultivate a fungus that is 'sown' from spores embedded in their own excrement! Then in the water itself, some species of damselfish farm algae on the remnants of coral they have themselves killed, a process that bears a striking resemblance to Amazonian deforestation for cattle ranching. Unfortunately, the human fishing of damselfish predators has had the effect of boosting the population of fishy farmers and has thus only increased the rate of coral loss.

Finally, the pastoralist in the pack: our everyday common or garden ant. In a bizarre simulacrum of dairy farming, some ant species control, supervise and 'milk' aphids. Had the species involved been more cuddly (i.e. one of us mammals) then it might have seemed all the more astonishing – a real-life antidote to Beatrix Potter-esque anthropomorphism. As it is, these genuine animal farmers, with individual brains weighing a few thousandths of a gram, will drug aphids, protect them from predators and bad weather, and even use biochemicals to affect their growth patterns. And all in return for the honeydew they extract from the aphids.

You may have noticed the use of very human terms in these descriptions: domestication; caste systems; protection, etc. We are only just beginning to understand the behavioural diversity to be found amongst other species, only to find we are continuously removing yet more of the barriers that supposedly differentiate us from the rest of the biosphere. It is tempting to suggest this last example of animal farmers includes a form of slavery, with drug-controlled drones and just a whiff of Brave New World. If these examples of non-human farmers were found on another planet, would we possibly consider them to be a sign, incredibly alien to be sure, of intelligence? Clearly, the brain size of the individuals involved doesn't count for much, but a colony of 40,000 ants is said to have the collective number of brain cells of one human. If the ants were able to store information in chemical signatures, something akin to a library, then wouldn't this be a form of hive mind? Speculative nonsense of course, but does anyone remember the 1970s film Phase IV?

It’s difficult to be anything other than dumbfounded as we learn more about animal behaviour, especially at what seems to be a programmed/non-conscious level. If the permutations are like this on Earth, the possibilities on other worlds are seemingly limitless. Again, this questions whether we could even recognise whether another species is intelligent or not. Perhaps Douglas Adams put it best: "Man has always assumed that he was more intelligent than dolphins because he had achieved so much...the wheel, New York, wars and so on...while all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man...for precisely the same reason."

Enough said!

Tuesday 1 March 2011

Let us think for you; or how I learnt to stop worrying and just believe the hype

I was recently watching my cousin's sister-in-law (please keep up) on a BBC TV documentary, in which various Victorian super-cures were shown to be little more than purgatives thanks to ingredients such as rhubarb, liquorice, soap and syrup. Whilst we frequently scorn such olden days quackery, the popularity of Ben Goldacre's Bad Science and (Patrick) HolfordWatch show that times haven't really changed all that much. Bombarded as we are from the egg with immense amounts of consumerist 'information', it is maddening if unsurprising that we buy the dream with critical faculties switched firmly off.

As Goldacre points out, George Orwell noted that the true genius in advertising is to sell you both the solution and the problem. Since the above sites both detail some of the rather more bizarre pharmaceuticals on the market, I'll recommend you visit them for further information. The material dealing with a council allowing a trial of fish oil pills to boost school exam results is priceless.

Yet this area is just one of several related to the solution/problem model, namely that there is a consumer product for every issue: "Want a smart child? Just buy a Mozart CD!" The Mozart Effect may finally be heading for the debunked heap, but it's small fry compared to the notion that pill-popping is often the most effective and rapid remedy. The number of health supplements now available (carefully niche-marketed, of course) is astonishing, as is the appeal for us to treat ourselves like professional athletes, thanks to the increasing obsession with hydration, hypertonic drinks and 'wellness' in general.

The past two decades have seen a sad litany of scandals involving food and pharmacology, from salmonella in eggs to the MMR vaccine and autism. With the UK press only too willing to whip up a scandal without thorough prior investigation of the evidence (for the most part presumably for the sake of sales, rather than any anti-scientific leanings per se), the public has heard wolf cried so many times that it's enough to make you turn your back on anything that looks vaguely scientific. I don't know enough about the avian flu and swine flu hyperbole to comment in detail, but there too the media reporting of Government planning has implied elements verging on the farcical.

So what have we learnt so far? Firstly, it's far easier to push a one-size-fits-all cure than to individually assess people's physical and mental health problems as if they were, well, individuals. Most of us rely on the media for our explanations of health and food science issues, and these reports tend to appeal to the emotions and intuition rather more than we might find in the primary reports, AKA the 'sterilised pages of scientific literature', as palaeontologist Richard Fortey refers to it.

Not that most of us would have the time to plough through and decode the latter anyway, which brings me to a second issue: there is now so much freedom of choice, and such an emphasis on rapid pacing to match our speed of communications, that 'noise' (not just aural) increasingly blocks critical thinking. Twenty years ago, people could define their day as having a work part and a leisure part, but now the two are blurred, if not superseded, thanks to a wide variety of recent technological innovations. Obviously we can work longer hours (i.e. from home or in transit) via mobile computing and Wi-Fi, but there's also online networking, blogging, email and webcam, online shopping, even printing our own photos and ploughing through endless television channels 'live' or on demand. It's a nice thought that when electronic personal assistants can be tailored to our personality profiles (like an uber-Amazon personalised homepage), we will no longer be slaves to the labour-saving devices we clutter our lives with. But even then, will the trivia of consumerist culture remain a primary component of our lives?

If all this sounds a bit Luddite, or just plain anti-capitalist, then why not ask yourself whether you feel technically savvy and cool thanks to owning a range of up-to-the-minute high-tech consumer items? Do you even have a nutritionist or a lifestyle coach? Consider whether it is possible that you are losing common sense, handing over large chunks of analytical thought to others so as to gain a little quality time in a hectic world. It's up to us to reclaim our critical thought processes before we evolve into H.G. Wells's passive, leisure-obsessed Eloi. Otherwise the future's bright, the future's hyper-realistic 3D with added gubbins! Now where's my isotonic rehydration fluid?

Tuesday 1 February 2011

Cosmic fugues: the myriad connections between music and astronomy

Although there has been a surfeit of the damp dishrag that typifies British weather hanging over our night-time skies recently, there have also been a few clear, crisp evenings allowing some fine views of Jupiter, even from my light-polluted suburban London garden. Having recently upgraded my stargazing equipment from a pair of ancient yet serviceable binoculars to a modest reflecting telescope (courtesy of an unexpected tax rebate), I thought this might be a good opportunity to sketch a few observations (pun intended) on the connections between astronomy and music. I was partly inspired by the BBC's Stargazing Live programmes earlier this month, whose co-host was the increasingly ubiquitous physicist and ex-keyboard player Brian Cox. Admittedly, Professor Cox is more space-orientated in his broadcasting than in his professional work, but it does seem that astronomy has produced plenty of musically-attuned scientists, with traffic in the opposite direction supplying plenty of musicians with astronomical interests.

Much has been written about the semi-mystical search for cosmic harmonies that motivated the research of both Kepler and Newton, so the phenomenon, if I can call it that, is hardly new. Connections between music and mathematics have long been formally recognised, from harmonic progression to the idea that both subjects rely on similar cognitive processes. And of course, many aspects of astronomy rest on mathematical underpinnings.

The correlation is not a recent one: in the Eighteenth Century the composer William Herschel was inspired to switch to a career in astronomy after developing an interest in the mathematical aspects of musical composition. Today his symphonies are largely forgotten in favour of his key role in astronomy, including his discovery of the planet Uranus and his long collaboration with his sister Caroline. There is at least anecdotal evidence, such as that provided by the musical Bachs and the mathematical Bernoullis, for some degree of direct genetic inheritability of ability in both disciplines. So perhaps utilisation of the same areas of the brain may play a key role in the association between the two seemingly disparate fields. I feel much more research could be undertaken in this area.

Although increasing urbanisation (and therefore light pollution) may lead most people to consider stargazing about as dynamic and interesting as fly fishing, the wonder of the night sky can offer a poetic experience free to all. This suggests an obvious aesthetic motivation or sensibility that links the discipline directly to music. But if this seems pretty facile, at a slightly more involved level I would like to consider the geometry, timing and mathematical relationships that are found in astronomy and which have their own aesthetic charm. There are projects currently in progress that cover many aspects of this, working from both sides. On the music-led approach, music professors at Yale, Princeton and Florida State University are attempting to reduce musical structure to geometries that seemingly echo the Pythagorean tradition. From the astronomy angle, Stargazing Live featured a scientist converting astrophysical phenomena into audible signals, even though the results couldn't be classed as music in any traditional aesthetic sense.
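
Incidentally, that sort of data-to-sound conversion (sonification) is easy enough to play with at home. Here's a minimal, purely illustrative Python sketch - not the method used on Stargazing Live, and with made-up 'brightness' readings rather than real observations - that maps a short data series onto audible pitches and writes the result to a WAV file.

```python
import math
import struct
import wave

RATE = 44100  # samples per second

def tone(freq_hz, seconds=0.3, volume=0.4):
    """Generate raw 16-bit mono samples for a single sine-wave tone."""
    n = int(RATE * seconds)
    return b"".join(
        struct.pack("<h", int(volume * 32767 *
                              math.sin(2 * math.pi * freq_hz * i / RATE)))
        for i in range(n)
    )

# Invented 'brightness' readings, rescaled onto a 200-800 Hz pitch range.
readings = [0.1, 0.4, 0.35, 0.9, 0.6, 0.2, 0.75]
lo, hi = min(readings), max(readings)

with wave.open("sonified.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)      # 16-bit samples
    wav.setframerate(RATE)
    for r in readings:
        pitch = 200 + (r - lo) / (hi - lo) * 600
        wav.writeframes(tone(pitch))
```

Play the resulting file and you have a very rough aural graph of the data - hardly music in any traditional sense, but you get the idea.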

It has to be said that there is little in the way of prominent musical works that utilise astronomical methodology or facts in the way that Diane Ackerman's wonderful volume of poetry The Planets: A Cosmic Pastoral does. Contemporary astronomy-inclined musicians include Queen guitarist Brian May, who admittedly originally trained as an astronomer and finally completed his PhD on the Zodiacal Light in 2008, and sometime Blur bassist Alex James, he of Beagle 2 call sign fame. Yet neither has produced an astronomically-based piece that can compete with that most obvious example of space-related music, Holst's The Planets, which was inspired by purely astrological rather than astronomical themes. My own favourite of the genre is Vangelis' 1976 album Albedo 0.39, which culminates in the title track's geophysical description of Earth. Whether the Open University astronomy degree taken by Myleene Klass will inspire her to an astronomy-orientated meisterwerk is...err...possibly somewhat doubtful...