Sunday 30 December 2012

Software Samaritans: in praise of science-orientated freeware

In the midst of the gift-giving season it seems an appropriate time to look at a source of presents that keeps on giving, A.K.A. the World Wide Web. In addition to all the scientific information that can be gleaned with comparatively little effort, there is also an immense amount of fantastic freeware available to non-professionals. I have found that this freeware can be broken down into three distinct types of application:
  1. Simulated experiments such as microscope simulators or virtual chemistry laboratories
  2. Distributed computing projects, which are applications that do not require any user effort other than downloading and installation
  3. Applications designed to actively aid amateur science practice, such as planetariums
I have to admit to not having any experience with the first category, but examples such as the molecular biology application Gene Designer 2.0, The Virtual Microscope and Virtual (chemistry) Labs - all suitable for school and university students - are astonishing in their ability to extend conventional textbook and lecture-based learning. All I can say is - I wish I had access to such software when I was at school!

I have a bit more experience with distributed computing projects, having been a volunteer on SETI@home back in its first year (1999-2000). Only the second large-scale project of this type, SETI@home has the grandiose aim of discovering radio signals broadcast by alien civilisations. All the user has to do is download and install the application, which then runs whenever the computer is idle, like a glorified screensaver. In this particular case, the SETI@home signal-processing software searches for extra-terrestrial transmissions as little as one-tenth the strength detectable by earlier surveys, using data collected by the giant Arecibo radio telescope. The application has proved remarkably successful, having been downloaded to over 3 million personal computers.
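For the technically curious, the core search idea is surprisingly approachable: look for narrowband spikes standing above the noise floor of a power spectrum. Here's a toy sketch in Python of that one step (my own simplification for illustration, emphatically not the project's actual code - the real client also hunts for chirped signals, pulses and much more):

```python
# Toy illustration of a narrowband signal search (not SETI@home's actual
# code): compute the power spectrum of a chunk of data and flag any
# frequency bins that stand well above the median background noise.
import numpy as np

def find_candidate_signals(samples, sample_rate, threshold=10.0):
    """Return frequencies (Hz) whose power exceeds `threshold` times
    the median background power."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[spectrum > threshold * np.median(spectrum)]

# Demo: one second of white noise with a faint 1 kHz tone buried in it.
rng = np.random.default_rng(1)
t = np.arange(0, 1.0, 1.0 / 8192)
data = rng.normal(size=t.size) + 0.2 * np.sin(2 * np.pi * 1000 * t)
print(find_candidate_signals(data, sample_rate=8192))  # flags the ~1000 Hz bin
```

The tone is far too weak to see in the raw samples, yet its power piles up in a single frequency bin while the noise spreads across thousands of them - which is exactly why this style of search rewards long integration times and lots of idle computers.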

But if this project is a bit blue sky for you, there are plenty of others with more down-to-earth objectives. For example, Folding@home and Rosetta@home are fantastic opportunities for all of us non-professionals to help molecular biologists studying protein folding in order to develop treatments for conditions such as HIV/AIDS, Alzheimer's and Huntington's disease. So far, the projects have generated over a hundred research papers, but the complexity of the subject means there's plenty of room for additional computers to get involved for many years to come.

The third class of software supplies the user with the same sort of functionality as commercially available applications, but in many cases surpasses them in both capability and quantity of data. These tend to cluster into a few themes, suitable for amateurs of varying ability and commitment.

One popular category is planetarium applications such as Stellarium, which has plenty of features for city-bound (i.e. restricted vision) enthusiasts such as myself. It even includes a night vision mode, red-tinted so as to keep the observer's eyes adjusted to the darkness. Unfortunately my telescope camera software doesn't have an equivalent, and as I cannot reduce the laptop screen brightness until after I've achieved focus, I'm left stumbling and squinting until my eyes readjust. Stellarium seems reasonably accurate with regard to stars and planets, but I've never managed to check whether the satellite trajectories conform to reality.

For anyone lucky enough to live in a non-light-polluted environment there are more sophisticated free applications, such as Cartes du Ciel (SkyChart), which allows you to create printable charts as well as remotely control telescope drives. If you are really an expert at the telescope then C2A (Computer Aided Astronomy) is the bee's knees in planetarium software, even able to simulate natural light pollution across the lunar cycle and allowing you to create your own object catalogues!

As an aside, what gets me about these applications is how they calculate the positions of celestial objects from any location on Earth, at any time, in any direction, and at varied focal lengths. After all, there is a well-known issue with calculating the gravitational interactions of more than two celestial objects, known as the n-body problem. So how do the more sophisticated planetarium applications work out positioning for small objects such as asteroids? I used to have enough issues writing basic gravity and momentum effects in ActionScript when building games in Adobe Flash! All I can say is that these programmers appear to be mathematical geniuses compared to someone of my limited ability.
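From what I've read, the answer for an asteroid is less the full n-body problem and more Kepler plus corrections: take the object's published orbital elements, solve Kepler's equation iteratively, and layer perturbation theory and coordinate transforms on top. Here is a bare-bones sketch in Python (my own illustrative code with made-up function names - real planetarium software is vastly more sophisticated):

```python
# Toy sketch: a body's position in its orbital plane from Keplerian
# elements, found by solving Kepler's equation M = E - e*sin(E) with
# Newton's method. Real planetarium code adds perturbations, the 3D
# orientation angles of the orbit and observer-centred transforms.
import math

def orbital_position(a, e, mean_anomaly):
    """Return (x, y) in the orbital plane, in the same units as the
    semi-major axis a, for eccentricity e and mean anomaly in radians."""
    E = mean_anomaly                       # initial guess
    for _ in range(20):                    # Newton iterations converge fast
        E -= (E - e * math.sin(E) - mean_anomaly) / (1.0 - e * math.cos(E))
    x = a * (math.cos(E) - e)              # Sun sits at the focus/origin
    y = a * math.sqrt(1.0 - e * e) * math.sin(E)
    return x, y

# Demo: a Ceres-like orbit (a = 2.77 AU, e = 0.076), a quarter-period in.
print(orbital_position(2.77, 0.076, math.pi / 2))
```

Nothing chaotic or n-body about it for a single date: it's the accumulated perturbations over decades that force the professionals into the genuinely hard mathematics.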

Processing astrophotography images

Generating Jupiter: from raw planetary camera frame to final processed image

Back to the astronomy freeware. Once I've aligned my telescope courtesy of Stellarium and recorded either video or a sequence of stills using the QHY5v planetary camera (wonder if they'll give me any freebies for plugging their hardware?) I need to intensively process the raw material to bring out the details. For this image processing I use another free application called RegiStax, which again astonishes me with the genius of the programmers, not to mention their generosity. Having been a regular user of some extremely complex (and expensive) commercial image editing applications since the late 1990s, I undertook a little research into how such software actually works. All I can say is that unless you are interested in Perlin noise functions (procedural gradient noise), stochastic patterns, Gaussian distribution and Smallest Univalue Segment Assimilating Nucleus (SUSAN) algorithms - nice! - you might just want to accept that these applications are built by programmers who, as with the planetarium software builders mentioned above, have advanced mathematics skills beyond the comprehension of most of us.
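Out of curiosity I tried to reduce the stacking stage to its bare bones. The sketch below is my own toy version in Python, not RegiStax's actual algorithm (which adds frame quality ranking, multi-point alignment and wavelet sharpening): align each frame to a reference via cross-correlation, average the pile to beat down the noise, then apply a crude unsharp mask.

```python
# Toy align-stack-sharpen pipeline for planetary frames (illustration
# only; expects a list of equal-sized 2-D greyscale arrays).
import numpy as np
from scipy.ndimage import uniform_filter

def align_and_stack(frames):
    """Shift every frame onto the first via FFT cross-correlation,
    then average the pile to reduce random noise."""
    ref = frames[0].astype(float)
    stacked = np.zeros_like(ref)
    for frame in frames:
        f = frame.astype(float)
        # The peak of the cross-correlation gives the (dy, dx) offset.
        corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        stacked += np.roll(f, (dy, dx), axis=(0, 1))
    return stacked / len(frames)

def unsharp_mask(image, amount=1.5, size=5):
    """Crude stand-in for wavelet sharpening: boost the difference
    between the image and a blurred copy of itself."""
    blurred = uniform_filter(image, size=size)
    return image + amount * (image - blurred)
```

Even this toy version captures why stacking works: averaging n aligned frames cuts the random noise by roughly the square root of n, which is exactly why planetary imagers shoot video rather than single stills.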

So in case you weren't aware, the World Wide Web provides far more to the amateur scientist or student than just a virtual encyclopaedia: thanks to the freeware Samaritans you can now do everything from finding the position of millions of astronomical objects to examining electron microscope images of lunar dust. It’s like having Christmas every day of the year!

Sunday 25 November 2012

Dark skies vs. light pollution: trying to keep in touch with the cosmos

A few minutes after witnessing the recent solar eclipse - which reached an 87% maximum here in Auckland, New Zealand - I was astonished to overhear the account director of an international advertising agency disparagingly state that all the people he had just seen on the streets staring at the sky would probably have been eaten by dinosaurs a few thousand years ago. I was so shocked (and this from the representative of an agency that claims to appeal to the heart as much as to the head) that I couldn't even bring myself to ask if he was a creationist, considering his evolutionary timescale differed by approximately sixty-five million years from the scientifically accepted one. As much as he gave the impression of being a follower of the Sarah Palin school of history, what really got to me was his lack of wonder: have many first-worlders become so surrounded by electronic gizmos that they are immune to the marvels of nature?

One of the great natural sights anyone can enjoy is the night sky, but with more than 50% of the human race now living in conurbations we are rapidly cutting ourselves off from the view that helped inspire our earliest mythologies. Could an argument be made that as our ability to observe the rest of creation declines, so does our capacity for awe? Although my home city of Auckland is less light-polluted than my last place of residence, London, a brief visit to rural Queensland, Australia earlier in the year reminded me just how much we city dwellers are missing: for example, Mars really is an angry 'Bringer of War' red, whilst the Milky Way does seem like a river cutting through the sky. I also recall that once, during a holiday in Cyprus, I saw an extremely bright, eerie glow radiating from behind hills near our rural villa, only for the full moon to eventually rise as the source of the light.

Although Auckland isn't bad by the standards of some cities - it's dark enough in my back garden for even a half-full (first quarter) moon to cast strong shadows - the view directly west, currently home to interesting sights such as Mars, Mercury and Venus, is pretty much ruined by the stadium lights used in the local shopping centre car park, which remain on until very late. In addition, there is enough general light pollution from buildings, and to a lesser extent street lamps, that even a modicum of cloud reflects a diffuse glow and severely impacts astronomical 'seeing'.

Crater Copernicus via a Skywatcher 130P telescope
The crater Copernicus, as seen from Auckland via a Skywatcher 130P reflector using a QHY5V camera.

With numerous forms of contamination now known to be causing environmental degradation, it's hard to see where support can be garnered for tackling this most poetic form of man-made pollution. After all, astronomers are hardly an endangered species, and with professionals able to use the likes of the Very Large Telescope in Chile and plans afoot for the European Extremely Large Telescope to be operational by 2022, things are looking up for the discipline (an old astronomical pun, if you weren't aware). But as for us city-bound amateurs, we're stuck with poor viewing conditions thanks to all the artificial lighting, never mind the turbulence caused by heat radiating from asphalt and the like.

Research suggests that the USA alone loses billions of dollars per year lighting commercial and corporate premises at night. So why are shops lit up outside of opening hours: to advertise the company logo and wares to any passing punter, just on the off-chance it triggers a bell in the consumer's head? And what about office buildings? Since movement detectors have been installed in most office tower blocks I've worked in, why do companies still feel the need to have dozens of floors lit up like a Christmas tree? The USA currently imports over 20% of its energy, so wouldn't it make sense for the largest consumers to cut down on usage rather than become increasingly beholden to other nations? The lifespan of most fossil fuel deposits is now understood and makes for grim reading, especially with regards to oil. European energy reserves, for example, are known to be extremely low, so how can non-practical nocturnal lighting be anything other than ridiculous?

And then there are street lights, which in most designs seem to radiate light in all directions. There are models that aim their light where it's needed, i.e. downwards, but the vast majority scatter their beam everywhere. I'm assuming that any street light that channels its light downwards in a tighter beam can use lower-power bulbs than other styles, but even with the obvious eventual power savings I can't see much chance of upgrades en masse; there are an estimated 35 million street lamps in the USA alone, so conversion wouldn't be an overnight process. What about tasking local authorities with switching to more efficient models as and when individual lamps require replacement? You would have thought any opportunity to save energy would be a basic tenet of legislation by now. Or is there a naïve belief that science will come up with a miracle solution in our darkest hour? Personally I'm not sure that nuclear fusion is going to be ready any time soon!
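As a purely back-of-the-envelope illustration (the per-lamp saving and the annual hours of darkness below are my own guessed figures, not official statistics), even a modest saving scales up dramatically:

$$35{,}000{,}000 \text{ lamps} \times 50\,\mathrm{W} \times 4{,}000\,\mathrm{h/year} = 7\,\mathrm{TWh\ per\ year},$$

which is roughly the annual output of a one-gigawatt power station, from bulb swaps alone.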

So apart from annoying amateur astronomers, there are several strong arguments in favour of reducing nocturnal light pollution. A biological rather than economic one has been suggested by several studies investigating the effects that strong nocturnal light levels may have on human health, such as reduced melatonin levels. In addition, various types of wildlife from hatching amphibians to migrating birds are affected by artificial night lighting, and as we are becoming increasingly aware, one small change in an ecosystem can rapidly cause a chain reaction up the food pyramid. As if these problems weren't enough, another issue that seems to have garnered minimal media attention is that artificial lighting at night may break down the nitrate radical NO3, which would otherwise help to neutralise other, smog-contributing nitrogen oxides. All in all, there seem to be very few areas of concern to humanity that are not affected by nocturnal light pollution. By comparison, the inconvenience to us amateur astronomers seems like small fry!

However, it is not entirely doom and gloom. The International Dark-Sky Association (IDA) was formed in 1988 to fight light pollution at a grass-roots level and has put together information packs as well as organising the International Dark Sky Places programme. There are to date nearly twenty parks and reserves around the world that have qualified for this status, the majority to be found in Canada and the USA. The largest, however, is the Aoraki Mackenzie International Dark Sky Reserve on South Island, New Zealand, so I intend to get down there at some point in the next few years...

Another campaign that relies upon public participation is the GLOBE at Night programme, which has collated nocturnal light pollution levels using data supplied by volunteers from 115 countries. It has a family-friendly website with items in up to 14 languages, so for any parents looking to involve their children in an important global experiment, this is the place to go. It even includes instructions on making that essential tool for all night-time observations, a red light, so that you can view documentation without ruining your night vision sensitivity. Incidentally, I know the problems of ruined night vision all too well, since although the superb planetarium freeware I use has a night mode, my telescope camera software does not; I suppose I'll just have to find somewhere that sells sheets of red gel to tape over the laptop screen.

It would appear that the ever-increasing difficulty of viewing at first hand the stars, planets, nebulae and everything else that makes up the observable universe is just the tip of the iceberg when it comes to the problems caused by too much artificial lighting at night. But thanks to the likes of the IDA and GLOBE at Night programmes there is now an opportunity for anyone to get involved, promoting conservation of energy and our fragile ecosystem whilst preserving something of the wonders that previous generations took for granted. As the physicist Brian Greene, author of The Elegant Universe (that's the book with all the tricky stuff about Calabi-Yau spaces), puts it: "I have long thought that anyone who does not...gaze up and see the wonder and glory of a dark night sky filled with countless stars loses a sense of their fundamental connectedness to the universe. And as the astounding vastness of the universe becomes obscured, there is a throwback to a vision of a universe that essentially amounts to earth, or one's country, or state or city. Perspective becomes myopic. But a clear night sky...allows anyone to soar in mind and imagination to the farthest reaches of an enormous universe in which we are but a speck. And there is nothing more exhilarating and humbling than that."

If that's not a call to action, I don’t know what is: come on Brian Cox, please get the ball rolling!

Monday 29 October 2012

From geek to guru: can professional scientists be successful expositors (and maintain careers in both fields)?

The recent BBC TV series Orbit: Earth's Extraordinary Journey gave me food for thought: although presenter Helen Czerski is a professional physicist, she was burdened with a co-presenter who has no formal connection with science, namely Kate Humble. You have to ask: why was Humble needed at all? I'll grant that there could have been a logistics issue, specifically getting all the locations filmed in the right season within one year, but if that was the case why not use another scientist, perhaps from a different discipline? Were the producers afraid a brace of scientists would put the public off the series?

The old days of senior figures pontificating as if in a university lecture theatre are long gone, with blackboard diagrams and scruffy hair replaced by presenters keen to prove their non-geek status via participation in what essentially amounts to danger sports, all in the name of illustrating examples. Okay, so the old style could be very dry and was hardly likely to inspire the non-converted, but did Orbit really need a non-scientist when Helen Czerski (who is hardly new to television presenting) can deliver to camera whilst skydiving? In addition, there are some female presenters, a prominent British example being Alice Roberts, who have been allowed to present several excellent series solo, albeit ones involving science and humanities crossovers (and why not?).

But going back to Kate Humble, some TV presenters seem to cover such a range of subject matter that it makes you wonder if they are just hired faces with no real interest in (and/or knowledge of) what they are espousing: “just read the cue cards convincingly, please!” Richard Hammond - presenter of the light entertainment show Top Gear and the (literally) explosive Brainiac: Science Abuse - has flirted with more in-depth material in Richard Hammond's Journey To The Centre Of The Planet, Richard Hammond's Journey To The Bottom Of The Ocean and Richard Hammond's Invisible Worlds. Note the inclusion of his name in the titles – just in case you weren't aware who he is. Indeed, his Top Gear co-presenter James May seems to be genre-hopping in a similar vein, including James May's Big Ideas, James May's Things You Need to Know, James May on the Moon and James May at the Edge of Space amongst others, again providing a hint as to who is fronting the programmes. Could it be that public opinion of scientists is poor enough - part geek, part Dr Strangelove - to force producers to employ non-scientist presenters with a well-established TV image, even if that image largely consists of racing cars?

Popular science books from Cosmos to A Brief History of Time

Having said that, science professionals aren't infallible communicators: Sir David Attenborough, a natural sciences graduate and fossil collector since childhood, made an astonishing howler in his otherwise excellent BBC documentary First Life. During an episode that ironically included Richard 'Mr Trilobite' Fortey himself, Sir David described these organisms as being so named due to their head/body/tail configuration. In fact, the group's name stems somewhat obviously from their three lobes: the central (axial) lobe and the two lateral lobes of their body plan. It was an astounding slip-up and made me wonder whether anyone on these series ever double-checks the factual content, just to make sure it wasn't copied off the back of a cereal packet.

Another possible reason for using non-science presenters is that in order to make a programme memorable, producers aim to differentiate their expositors as much as possible. I've already discussed the merits of two of the world's best known scientists, Stephen Hawking and Richard Dawkins, and the unique attributes they bring to their programmes, even if in Dawkins' case this revolves around his attitude to anyone who has an interest in any form of unproven belief. I wonder if he extends his disapprobation to string theorists?

What is interesting is that whereas the previous generation of popular science expositors achieved fame through their theories and eventually bestselling popularisations, the current crop, of whom Helen Czerski is an example, have become well known directly through television appearances. That's not to say that the majority of people who have heard of Stephen Hawking and Richard Dawkins have read The Selfish Gene or A Brief History of Time. After all, the former was first published in 1976 and achieved renown in academic circles long before the public knew of Dawkins. Some estimates suggest as few as 1% of the ten million or so buyers of the latter have actually read it in its entirety, and in fact there has been something of a small industry in readers' companions, not to mention Hawking's own A Briefer History of Time, intended to convey in easier-to-digest form some of the more difficult elements of the original book. In addition, the US newspaper Investors Business Daily published an article in 2009 implying they thought Hawking was an American! So can you define fame solely as being able to identify a face with a name?

In the case of Richard Dawkins it could be argued that he has a remit as a professional science communicator, or at least had from 1995 to 2008, due to his position during this time as the first Simonyi Professor for the Public Understanding of Science. What about other scientists who have achieved some degree of recognition outside of their fields of study thanks to effective science communication? Theoretical physicist Michio Kaku has appeared in over fifty documentaries and counting and has written several bestselling popular science books, whilst if you want a sound bite on dinosaurs Dale Russell is your palaeontologist. But it's difficult to think of any one scientist capable of inspiring the public as much as Carl Sagan post-Cosmos. Sagan though was the antithesis of the shy and retiring scientist stereotype and faced peer accusations of deliberately cultivating fame (and of course, fortune) to the extent of jumping on scientific bandwagons solely in order to gain popularity. As a result, at the height of his popularity and with a Pulitzer Prize-winning book behind him, Sagan failed to gain entry to the US National Academy of Sciences. It could be argued that no-one has taken his place because they don't want their scientific achievements belittled or ignored by the senior science establishment: much better to claim to be a scientist with a sideline in presenting than a communicator with a science background. So in this celebrity-obsessed age, is it better to be a scientific shrinking violet?

At this point you might have noticed that I've missed out Brian Cox (or Professor Brian Cox as it states on the cover of his books, just in case you thought he was an ex-keyboard player who had somehow managed to wangle his way into CERN). If anyone could wish to be Sagan's heir - and Cox admits to Sagan as a key inspiration - then surely he is that scientist. With a recent guest appearance as himself on Doctor Who and an action hero-like credibility, his TV series having featured him flying in a vintage supersonic Lightning jet and quad biking across the desert, Cox is an informal, seemingly non-authoritative version of Sagan. A key question is whether he will become an egotistical prima donna and find himself divorced from the Large Hadron Collider in return for lucrative TV and tie-in book deals.

Of course, you can't have science without communication. After all, what's the opposite of popular science: unpopular science? The alternative to professionals enthusing about their subject is to have a mouth-for-hire, however well presented, delineating material they neither understand nor care about. And considering the power that non-thinking celebrities appear to wield, it's vital that science gets the best communicators it can, recruited from within its own ranks. The consequences of the alternative can clearly be seen in last year's celebrity suggestion that oceans are salty due to whale sperm. Aargh!

Wednesday 26 September 2012

Moulds, mildew and mushrooms: living cheek by jowl with fungi

There is a form of life that probably exists in every house, office and workplace on the planet (operating theatres and clinical laboratories largely excepted) that is so ubiquitous that it goes chiefly unnoticed. The organisms are stationary yet spread rapidly, are composed of numerous species - some of which include common foodstuffs - and are neither animal nor plant. In other words they belong to the third great kingdom of macroscopic life: fungi. But what are these poor relations of the other two groups, seen as both friend and foe?

Having moved last year from a one-hundred-and-thirty-year-old, centrally heated and double-glazed terrace house in the UK to a single-glazed, largely unheated detached house less than a quarter that age in New Zealand, I've been able to conduct a comparative domestic mycology experiment. Without sounding too much like a mould-and-spores collector out of a P.G. Wodehouse story, the subject has proved interesting and reasonably conclusive: a family of four moving to an annual climate on average four degrees warmer but with twice the rainfall has not substantially changed the amount or placement of mould in the home; if anything, it has slightly decreased. But then the amount of bathing, laundry and pans on the hob hasn't changed, so perhaps it's not too surprising. The more humid climate has been tempered by having more windows and doors to open, not to mention being able to dry more of the laundry outside. Mind you, one big plus of the move has been not having to use electric dehumidifiers or salt crystal moisture traps, so a few degrees of warmth seems to be making a difference after all.

There appears to be a wide range of dubious stories, old wives' tales and assorted urban myths regarding fungi, no doubt due to the lack of knowledge: after all, if you ask most people about the kingdom they will probably think of edible mushrooms followed by poisonous toadstools. Yet of the postulated 1.5 million species of fungi, only about 70,000 have so far been described. They are fundamentally closer to animals than they are to plants, but as they live off dead organic matter (and some inorganic substances too) and thrive in darkness - unlike plants, they do not photosynthesise - their reputation is more than a little sinister. The fact that they will grow on just about any damp surface, hence the kitchen and bathroom mould populations, reinforces the opinion of them as unwelcome visitors. So just how bad are they?

Firstly, fungi play a vital role in the nitrogen cycle, supplying nutrients to the roots of vegetation. The familiar fruiting bodies are, as Richard Dawkins describes them, pretty much the tip of the iceberg compared to the enormous network of fungal material under the soil. Even so, they are given short shrift in popular natural history and science books: for example, they only warrant five pages in Richard Fortey's Life: An Unauthorised Biography, whilst Bill Bryson's A Short History of Nearly Everything spends much of its four pages on the subject concerned with the lack of knowledge about the number of species. Of my five Stephen Jay Gould volumes totalling over two thousand pages, there are just a few short paragraphs. And at least one of my books even refers to fungi as a simple form of plant life! Yet we rely on fungi for so many of our staple foodstuffs; it's just that they are so well hidden we don't consider them if they're not labelled as mushrooms. But if you eat leavened bread, mould-ripened cheese or soy sauce, or drink beer or wine, fungi such as yeast will have been involved somewhere along the line. On another tack, fungi are party to yet another nail in the coffin of human uniqueness, since both ants and termites cultivate fungi: so much for Man the Farmer.

At this point I could start listing their uses in health cures, from traditional Chinese medicine to penicillin, but my intention has been to look at fungi in the home. Anyone who has seen the fantastic BBC television series Planet Earth might recall the parasitical attack of the genus Cordyceps upon insects, but our much larger species is far from immune to attack. Minor ailments include athlete's foot and ringworm, whilst more serious conditions such as candidemia, arising from the common Candida yeast, can be life-threatening. The spores are so small that there is no way to prevent them entering buildings, with commonly found species including Cladosporium, Aspergillus, and our old friend Penicillium.

Once they have a presence, moulds and mildew are almost impossible to eradicate. They are extremely resilient: the poison in Amanita species such as the death cap, for instance, is not destroyed by heat. An increasingly well-known example is the toxin of the cereal-infecting ergot, capable of surviving the bread-making process, even the baking. Indeed, ergot has seemingly become a major star of the fungi world, being used in pharmaceuticals at the same time as being nominated as the culprit behind many an historic riddle, from the Salem witch trials to the abandonment of the Mary Celeste. Again, lack of knowledge of much of the fungal world means just about anything can be claimed with only dubious evidence to support it.

Varieties of domestic mould
A rogue's gallery of household fungi

Although we are vulnerable to many forms of fungus, an at least equally wide range attacks our buildings. Whether the material is plaster, timber or fabric, moulds and mildew can rapidly spread across most surfaces containing even a hint of dampness, often smelt before they are seen. At the very least, occupants of a heavily infested property can suffer allergies as well as sinus and breathing problems. As an asthmatic I should perhaps be more concerned, but other than keeping windows and doors open as much as possible there doesn't seem to be much that can be done to counter these diminutive foes. As it is, vinegar is a favourite weapon, particularly on shower curtains and the children's plastic bath toys. Even so, constant vigilance is the watchword, as can be seen from the assorted examples from around the house above. For any mycophobes wondering how large fungi can get indoors, I once worked on a feature film shot in a dilapidated Edwardian hotel in central London that was about to be demolished; its top floor, saturated with damp thanks to holes in the roof, sported fungal growths the size of dinner plates.

So whether you've played with puffballs or like to dine on truffles, remember there's no escape: fungi are a fundamental element of our homes, our diet, and if we're unlucky, us too. Seemingly humble they may be, but even in our age of advanced technology, there's just no escape...

Monday 27 August 2012

Ancestral claims: why has there been comparatively little research into human origins?

It has been said that we live in a golden age of dinosaur discoveries: from Liaoning Province in China to the Dakota Badlands, new species are being named on an almost monthly basis. But if there is a plethora of dinosaur palaeontologists, why have there seemingly been so few scientists studying the origin of Homo sapiens? Surely deciphering the ancestry of mankind is one of the great challenges?

The image of hominins has certainly evolved over the past thirty years, with even the name changing in scientific circles (from the broader term hominid), although as the title of the 2003 BBC series Walking With Cavemen showed, popular perception has been slow to adopt new research. As a child, I had an early 1970s plastic model kit of a Neanderthal Man. I seem to recall it bore more than a passing resemblance to the Morlocks from the 1960 film adaptation of H.G. Wells's The Time Machine, a far cry from the individuals portrayed in Walking With Cavemen and other, more recent, series. Yet this idea of a shambling, zombie-like creature is still to some extent prevalent. Why should this be, when there is now evidence for Neanderthal ritual and art? Are we simply afraid of finding yet more nails in the coffin of human uniqueness (apologies for the rusty metaphor)?

There are still clear elements of taboo to the subject: the humbling notion of humans being but a 'monkey shaved' was also felt by early evolutionists, with even natural selection co-founder Alfred Russel Wallace believing humanity the product of divine fiat. Perhaps a sense of embarrassment (try watching zoo visitors as they observe apes) combined with Western religious thought has prevented the discipline becoming popular in the way the love of all things dinosaur has skyrocketed since the 1970s.

Then again, it still seems that people misunderstand evolution via natural selection, picturing it as a ladder of progress rather than a branching bush. The 2002 discovery of yet another new hominin species, Sahelanthropus tchadensis, led the Christian Science Monitor to reach for that hoary old misnomer, the 'missing link'. This is despite three decades of popularising by the likes of Dawkins, Fortey, Gould, et al., to dispel the notion. You only have to read archaeologist (note: not palaeontologist) Mark Roberts' account of the seemingly shoestring Homo heidelbergensis excavations at Boxgrove in England to realise that hominin research has been attracting about one per cent of the news coverage (and a zillionth of the funds) directed towards cutting-edge particle physics.

A primary cause of the dearth of public knowledge is the sheer lack of direct fossil evidence. Although Neanderthal remains were the first actually recognised as belonging to a human ancestor, it took several decades after the initial 1829 discovery before the identification was scientifically confirmed. Into the Twentieth Century the lack of finds allowed such embarrassments as the poor-quality Piltdown fake to be taken at face value. It is easy to see at least one key reason why this should be: human ancestry carries so much emotional baggage that it took over forty years before British scientists saw the obvious, instead of following the patriotism and jingoism inspired by the finds.

As it is, the history of hominin palaeontology has been riddled with contention, serendipity, unfortunate accidents and amateur bungling. If anyone wants to disprove the myth of science as a sterile, laboratory-conditioned activity, this sphere provides key evidence par excellence (good to get a rhythm going). From Eugene Dubois hiding his Java Man (Homo erectus) remains for several decades early in the Twentieth Century to the disappearance of Peking Man (also Homo erectus) fossils during the Second World War - not to mention the grinding up of yet more erectus bones for Chinese traditional medicine - the fate of finds is enough to make a dedicated specialist weep.

In addition, the fact that humans and their ancestors primarily evolved in what are today remote African locations with limited infrastructure can only exacerbate the situation. The work can be tedious and physically arduous, with rewards few and far between. Yet fossil remains are the backbone of the discipline (almost a pun there, if you really look for it). After all, an increase in the number of finds can also lead to a paradigm shift in understanding: in the last few years it has become possible to undermine the opinion given in the BBC documentary The Making of Walking with Dinosaurs, first broadcast back in 2000, that we would never know the colour of any dinosaur, courtesy of feathered Chinese theropod fossils (try saying that three times fast).

However, the last few decades have seen an improvement in the number of finds as funding has been allocated and professional enthusiasm increased. The problem has been that rather than solidifying the story of our ancestral line, the number of species has multiplied without aiding the overall picture; there are still plenty of dashed lines on the human family tree. This indeterminacy has meant that a consensus is hard to find. If you examine any two charts of human ancestry, the chances are that they won’t agree. In the face of limited evidence it seems relatively easy for palaeoanthropologists to promote their own theories as to which species are our direct ancestors. Human nature being what it is, the favoured species usually happen to be those discovered by the said promoter. Such behaviour led to a thirty-year rift between two of the key players, Richard Leakey and Donald Johanson, partially over the number of branches on the direct ancestral tree. If anyone thinks the days of feuding scientists are long past (consider for example the Nineteenth Century American dinosaur pioneers Cope and Marsh), this quarrel ought to set the record straight.

One area of research that has undoubtedly given a boost to the understanding of human origins is the ability to retrieve and read ancient DNA. That’s not to say that it has yet produced much in the way of definitive evidence, but it undoubtedly widens the knowledge that can be gained from a paucity of finds. A recent report suggested that Homo sapiens and Neanderthals did not after all interbreed but share a similar genome via common ancestry. This is a reversal of a previous report that in turn countered earlier genetic evidence...and so on.

The relatively recent demise of the Neanderthals has provoked some interesting theories that show how science can reflect the concerns of contemporary society, namely that the violent aspect of our species may have been directly responsible. There is currently no firm evidence for deliberate genocide, with other likely culprits ranging from an inability to adjust to climate change to a less flexible neural architecture (specifically, missing out on the 'Great Leap Forward' via imaginative cogitation). Recent texts have attempted to downplay innate human aggression, but writers closer in time to the world wars and to the heyday of Freudianism, especially Australian anthropologist Raymond Dart and American author Robert Ardrey, had a major influence on the subject with their promotion of the 'killer ape' theory. From 1960 onwards the first serious, sustained research on wild chimpanzees, by Jane Goodall, inadvertently reinforced the notion of mankind as a predominantly violent species. Given such notions, it is perhaps little wonder that funding has been lacking.

The new century has so far seen something of an improvement, with a large increase in the number of popular books and television programmes reflecting, and in turn further developing, public interest. The controversy surrounding the nature of the Homo floresiensis finds of 2003 has proved fortuitous, with general news media at long last paying serious attention. The ball may have been started rolling by the Chalcolithic ice mummy Ötzi, discovered in the Alps in 1991. A young upstart at a mere 5,300 years old, Ötzi is so incredibly well preserved - the man, his clothing and his tools - that he has helped bridge the gap in how we relate to our prehistoric ancestors.

So times they are a-changing. The Ancient Human Occupation of Britain project is a sustained, well-funded effort to examine the past 700,000 years of evidence in the United Kingdom using a plethora of cross-discipline techniques in addition to conventional archaeology and palaeontology. The use of advanced dating methods such as electron spin resonance and the ability to analyse ancient DNA suggest that even without new finds, hominin research in the near future will generate some surprises. All I can say is that it's about time, too!

Monday 30 July 2012

Buy Jupiter: the commercialisation of outer space

I recently saw a billboard for the Samsung Galaxy SIII advertising a competition to win a "trip to space", in the form of a suborbital hop aboard a Virgin Galactic SpaceShipTwo. This phrase strikes me as highly interesting: a trip to space, not into space, as if the destination was just another beach holiday resort. The accompanying website uses the same wording, so clearly the choice of words wasn't caused by space issues (that's space for the text, not space as in outer). Despite fewer than a dozen space tourists to date, is space travel now considered routine and the rest of the universe ripe for commercial gain, as per the Pan Am shuttle and Hilton space station in 2001: A Space Odyssey? Or is this all somewhat premature, with the hype firmly ahead of the reality? After all, the first fee-paying space tourist, Dennis Tito, launched only eleven years ago in 2001.

Vodafone is only the second company after Guinness Breweries to offer space travel prizes, although fiction was way ahead of the game: in Arthur C. Clarke's 1952 children's novel Islands in the Sky the hero manages a trip into low Earth orbit thanks to a competition loophole. However, the next decade could prove the turning point. Virgin Galactic already have over 500 ticket-holders whilst SpaceX, developer of the first commercial orbital craft - the unmanned Dragon cargo ship - plan to build a manned version that could reduce orbital seat costs by about 60%.

If anything, NASA is encouraging such projects via its Commercial Orbital Transportation Services (COTS) programme, including the aim of using for-profit services for the regular supply of cargo and crew to the International Space Station (ISS). The intention is presumably for NASA to concentrate on research and development rather than routine operations, but strong opposition to such commercialisation comes from an unusual direction: former NASA astronauts including Apollo pioneers Neil Armstrong and Eugene Cernan deem the COTS programme a threat to US astronautic supremacy. This seems to be more an issue of patriotism and politics than a consideration of technological or scientific importance. With China set to overtake the USA in scientific output next year and talk of a three-crew temporary Chinese space station within four years, the Eclipse of the West has already spread beyond the atmosphere. Then again, weren't pre-Shuttle era NASA projects, like their Soviet counterparts, primarily driven by politics, prestige, and military ambitions, with technological advances a necessary by-product and science very much of secondary importance?

Commerce in space could probably be said to have begun with the first communications satellite, Telstar 1, in 1962. The big change for this decade is the ability to launch ordinary people rather than trained specialists into space, although as I have mentioned before, the tourist jaunts planned by Virgin Galactic hardly go where no-one has gone before. The fundamental difference is that such trips are deemed relatively safe undertakings, even if the ticket costs are several orders of magnitude greater than any terrestrial holiday. A trip on board SpaceShipTwo is currently priced at US$200,000, whilst a visit to the International Space Station will set you back one hundred times that amount. This is clearly somewhat closer to the luxury flying boats of the pre-jet era than any modern package tour.

What is almost certain is that despite Virgin Galactic's assessment of the risk as being akin to that of 1920s airliners, very few people know enough of aviation history's safety record to make this statistic meaningful. After all, two of the five Space Shuttle orbiters were lost - and five is also the number of craft intended for the SpaceShipTwo fleet. Although Virgin Galactic plays the simplicity card for their design - i.e. the fewer the components, the less the chance of something going wrong - it should be remembered that the Columbia and Challenger shuttles were lost due to previously known and identified problems with the external fuel tank and solid rocket boosters respectively. In other words, when there is a known technical issue but the risk is considered justifiable, human error enters the equation.

In addition, human error isn't restricted to the engineers and pilots: problems could range from passenger illness (about half of all astronauts get spacesick, suffering headaches and nausea for up to several days after launch) to disruptive behaviour of the sort I have witnessed on airliners. Whether the loss of business tycoons or celebrities would bring more attention to the dangers of space travel remains to be seen. Unfortunately, the increase in the number and type of spacecraft means it is almost certainly a case of when, not if.

Planet Saturn via a Skywatcher telescope

Location location location (via my Skywatcher 130PM)

But if fifteen minutes of freefall seems a sublime experience, there are also some ridiculous space-orientated ventures, judging by the ludicrous claims found on certain websites. Although the 1967 Outer Space Treaty does not allow land on other bodies to be owned by a nation state, companies such as Lunar Embassy have sold plots on the Moon to over 3 million customers. It is also possible to buy acres on Mars and Venus, even if the chance of doing anything with them is somewhat limited. I assume most customers treat their land rights as a novelty item, about as useful as, say, a pet rock, but with some companies issuing mineral rights deeds for regions of other planets, could this have serious implications in the future? Right now it might seem like a joke, but as the Earth's resources dwindle and fossil fuels run low, could private companies race to exploit extra-terrestrial resources such as lunar helium-3?

Various cranks/forward thinkers (delete as appropriate) have applied to buy other planets since at least the 1930s, but with COTS supporting private aerospace initiatives such as unmanned lunar landers there is at least the potential for legal wrangling over mining rights throughout the solar system. The US-based company Planetary Resources has announced its intention to launch robot mining expeditions to some of the 1,500 or so near-Earth asteroids, ventures that are the technological equivalent of a lunar return mission.

But if there are enough chunks of space rock to go round, what about the unique resources that could rapidly become as crowded as low Earth orbit? For example, the Earth-Moon system's five Lagrange points are positions of gravitational equilibrium useful for scientific missions, whilst geosynchronous orbit is vital for commercial communication satellites. So far, national governments have treated outer space like Antarctica, but theoretically a private company could cause trouble if the law fails to keep up with the technology, in much the same way that the internet has been a happy harbour for media pirates.

Stephen Hawking once said "To confine our attention to terrestrial matters would be to limit the human spirit". Then again, no-one should run before they can walk, never mind fly. We've got a long way to go before we reach the giddy heights of wheel-shaped Hiltons, but as resources dwindle and our population soars, at some point it will presumably become a necessity to undertake commercial space ventures, rather than just move Monte Carlo into orbit. Now, where's the best investment going to be: an acre of Mars or two on the Moon?

Monday 25 June 2012

Ultramarine and ultraviolet: scientific theories and technological techniques in contemporary art

If one of your first thoughts when considering science is of a scruffy-headed physicist chalking equations on a blackboard - interactive whiteboards somehow being not quite the same - then it's easy to see how the subject might offer limited appeal to artists. So is it possible in our visually sophisticated society to create satisfying works of art that utilise elements of scientific thought processes, theories or techniques?

It's difficult to define what constitutes contemporary art, since the majority of people seemingly find it difficult to relate to installations, video art or ready-mades, never mind more traditional media. On the other hand, it can be argued that scientists might have a sense of aesthetics that differs profoundly from the mainstream. A well-known example of this was electromagnetism pioneer James Clerk Maxwell's addition of a term to an equation in order to achieve an aesthetic balance, prior to working out the actual meaning of the term. Novelist and physicist Alan Lightman promotes the notion that scientists have a different perspective on aesthetics, from the familiar consideration of particle symmetries to more abstruse mathematical harmonies. He describes Steven Weinberg's 1967 paper on the weak nuclear interaction in these terms: "to a physicist, (this) Lagrangian…is a work of art." To someone of very limited mathematical ability like me it might as well be written in ancient cuneiform, but you can judge for yourself below:
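For the brave, here is the Lagrangian in question, transcribed as faithfully as I can manage from Weinberg's 1967 paper A Model of Leptons (so treat any slips as my transcription errors rather than his): $\vec{A}_\mu$ and $B_\mu$ are the gauge fields, $L$ and $R$ the left- and right-handed lepton fields, and $\varphi$ the scalar doublet that gives everything its mass.

$$\begin{aligned}
\mathcal{L} ={}& -\tfrac{1}{4}\bigl(\partial_\mu \vec{A}_\nu - \partial_\nu \vec{A}_\mu + g\,\vec{A}_\mu \times \vec{A}_\nu\bigr)^2
 - \tfrac{1}{4}\bigl(\partial_\mu B_\nu - \partial_\nu B_\mu\bigr)^2 \\
& - \bar{R}\,\gamma^\mu\bigl(\partial_\mu - i g' B_\mu\bigr)R
 - \bar{L}\,\gamma^\mu\bigl(\partial_\mu - i g\,\vec{t}\cdot\vec{A}_\mu - \tfrac{1}{2} i g' B_\mu\bigr)L \\
& - \tfrac{1}{2}\bigl|\partial_\mu\varphi - i g\,\vec{A}_\mu\cdot\vec{t}\,\varphi + \tfrac{1}{2} i g' B_\mu\varphi\bigr|^2
 - G_e\bigl(\bar{L}\varphi R + \bar{R}\varphi^\dagger L\bigr)
 - M_1^2\,\varphi^\dagger\varphi + h\bigl(\varphi^\dagger\varphi\bigr)^2
\end{aligned}$$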


But then aren't all aesthetic judgements subjective? One familiar chain of urban myths concerns art galleries that have suffered the embarrassment of finding their installations thrown out by over-zealous cleaners who were unaware the material was art. This leads to the interesting point that although much contemporary art is roundly ignored outside the cognoscenti, new technology and the social changes engendered by it, especially mobile communications and the World Wide Web, have been rapidly assimilated and rarely questioned. When it comes to the shock of the new, scientific ideas and the resulting technology appear much more comfortable than post-Second World War art. Or should that be qualified by the statement that if the technology is seen (albeit via persuasive advertising) as an improvement to everyday life, then it will be unquestioningly accepted, whereas art is ignored since it is rarely seen as serving a purpose?

At this point it might be good to consider two distinct approaches to how the two disciplines can be integrated:
  1. visual representations of and/or responses to science
  2. the use of scientific theories and methods to produce art
Approach 1:
In the Eighteenth Century Joseph Wright of Derby produced several atmospheric scenes of experiments, but the art history of the past century has made such clear-cut reportage unfashionable. The visual sophistication of our age would probably deem equivalent work today both pedestrian and irrelevant to contemporary needs. After all, a straightforward painting of the Large Hadron Collider or a theorist lecturing in front of an equation-covered blackboard would hardly prove satisfying either from an aesthetic standpoint or as journalistic commentary. Changing technology has also eliminated the innate visual romanticism of peering through the eyepiece of a microscope or telescope; sitting at a computer screen is hardly inspiring material for the heirs of Wright of Derby.

Over the years I've attended several exhibitions that emphasised collaborations between the two disciplines and have to confess I usually find the works have little depth beyond obvious, facile connections. Last year I saw a series of works reminiscent of my juvenilia (see the previous post): a sequence of photographs of birds in flight, overlaid with the relevant motion equations. A slightly better result comes from the world of fashion, via a collaboration between designer Helen Storey and her developmental biologist sister Kate. In the late 1990s they created a series of dresses elucidating the first thousand hours of human life, from fertilisation through to recognisable human form.

One of my favourite examples is Yukinori Yanagi's World Flag Ant Farm, in which ants were introduced into a series of interconnected Perspex boxes containing national flags made of coloured sand. Once the human artist finished the initial setup, the wandering ants rearranged the pictorial elements as they used the sand to construct their colony. Yanagi stated his intention was to examine how much the animals rely on programmed instructions rather than free thought, but ironically the end result appeared far more expressive of individual freedom than the robot-like mentality considered essential for a hive mind.

Since 2005 Princeton University has been holding an irregular Art of Science competition, but again the resonance of the work varies enormously. Many entries are photographs of experiments or equipment, frequently at nano- to microscopic scales: good to look at but nothing that could not be faked by a skilled Photoshop user. However, a few submissions have achieved that ultimate goal of an aesthetic work integrated within an active experiment, including a visualisation of how computer memory degrades following power loss and a study of individual ants within a colony, identified by painting unique patterns of dots on them. By and large though, most examples I have seen are woefully inadequate attempts to combine art and science.

Approach 2:
Originating with Hamlet's dictum to the actors, it has been said that art's task is to hold a mirror up to nature. There have been concerted efforts by artists to deconstruct the world by adapting scientific knowledge, from the Impressionists' attempts to understand how objects are modelled by light (consider Monet's haystacks and Rouen Cathedral at different times of day and year), via the Pointillists' experiments to understand how the eye builds an image from minute elements, to the Futurists' and Vorticists' attempts to create apparent movement in a still image. Now that science shows us brave new worlds (apologies for mixing my Shakespeares) via electron microscopes, telescopes in numerous wavelengths, etc., what attempts have been made to illustrate this?

Luke Jerram is a colour-blind artist who has created glass sculptures of viruses at approximately one million times life size. What is so interesting, apart from the novelty value of the subject matter, is that unlike most representations in popular science books the sculptures are transparent and therefore colourless. The works therefore immediately impart useful knowledge: viruses exist at a scale below the wavelengths of visible light and so cannot be the beautiful if randomly-hued images we see in computer-generated illustrations. In fact, the only direct visualisation of viruses is produced by high-resolution transmission electron microscopy, the results being monochromatic, grainy and, from the layman's point of view, distinctly samey. Jerram's works are not only a complex example of art meeting science but, in a tribute to their accuracy, have been used in medical texts and journals.

American artist Hunter Cole has created interesting works using techniques derived from her background in genetics, such as drawings made with bioluminescent bacteria. At an even more experimental level, Brazilian Eduardo Kac has not just used life forms as media but has created novel organisms as the artworks themselves, such as a fluorescing rabbit courtesy of a jellyfish protein gene; Doctor Frankenstein, come on down! Finally, at yet another step, Luke Jerram's 2007 Dream Director installation even made the viewer the subject of an experiment, although not exactly under laboratory conditions: visitors could stay in the gallery overnight, sleeping in pods which played themed sounds triggered by their own rapid eye movements.

If there is anything the recent history of science, especially cutting-edge physics, has taught us, it is that we need metaphors to visualise ideas that cannot be directly observed by our limited senses. But as astrophysicist and science writer John Gribbin has frequently pointed out, linguistic metaphor is often inadequate to the task, causing the analogy to return upon itself. Thus without help from the visual arts, anyone who isn't a maths genius has little hope of understanding the more arcane aspects of post-classical physics. Both art and science challenge perceptions, but it is likely that the latter will increasingly need the former to elucidate novel facts and theories. So any artist seeking a purpose need look no further: here's to many a fruitful collaboration!

Tuesday 29 May 2012

How to be cyantific: connecting the laboratory to the artist's studio

Moving house - or more broadly speaking, hemispheres - last year was a good excuse for a spring clean on an epic scale. One of the items that didn't make the grade even as far as a charity shop was a framed painting I created several decades ago, a clumsy attempt to describe scientific imagery in acrylics. In front of a false-colour radar map of the surface of Venus were the head and neck of a raptor dinosaur above a bowler-hatted figure straight out of René Magritte. You can judge the work for yourself below; I seem to remember the bemusement of the framer, but as I said at the time, it wasn't meant to be to everyone's taste...

But if my daub was rather wide of the mark, just how successful have attempts been to represent the theory and practice of science in the plastic, non-linear arts such as painting and sculpture? Whereas musical and mathematical ability seem to readily connect, and there has been some admirable science-influenced poetry, by comparison the visual arts are somewhat lacking. Much has been written about the Surrealists' use of psychoanalysis, but as this discipline is frequently described as a pseudoscience I've decided to cut through the issue by ignoring it and concentrating on the 'hard' sciences instead.

Combining science and art - or failing to
One of the most difficult issues to resolve (especially for those who accept C.P. Snow's theory of 'two cultures') is that whilst most science books for a general readership describe a linear progression or definitive advancement in the history of science, art has no such obvious arrow of change. After all, a century has passed since the early non-realist movements (Cubism, les Fauves, etc.) but there are plenty of contemporary artists who avoid abstraction. Granted, they are unlikely to win any of the art world's top prizes, but the progression of science and its child technology over the past three or so centuries clearly differentiates the discipline from the arts, both the sequential schools of the West and the 'traditional' aesthetics of other cultures.

Of course, it's usual to place the characters of scientists and artists about as far apart as any human behaviour can get, but like most stereotypes it doesn't take much to prove this wildly inaccurate. Anyone aware of Einstein's views ("Imagination is more important than knowledge") or his last, unsuccessful decades spent on a unification theory that ignored quantum mechanics will understand that scientists can have personalities as imaginative and colourful as any artist's. Indeed, the cutting edge of theoretical science, especially physics, may rely on insight and creativity as much as on advanced mathematics, a far cry from the popular image of dull, plodding scientists following dry, repetitive processes.

Another aspect worth mentioning is that our species appears unique in the ability to create representations of the world that can be recognised as such by most if not all of us. Despite Congo the chimpanzee gaining enough kudos in the 1950s for Picasso and Miró to buy his paintings, as well as more recent media interest in elephant artworks, there is no evidence that under controlled experimental conditions non-human artists can produce obviously realistic images unaided. Then again, could it be that we are so biased in our recognition patterns that we fail to identify what passes for realism in other species? Might other animals interpret their work as representational when to us it resembles the energetic daubs of toddlers? (There are shades here of Douglas Adams's dolphins, who considered themselves more intelligent than humans because rather than build cities and fight wars, all they do is muck about in water having a good time...)

So where do we start? Firstly, what about unintentional, science-generated art? Over the past decade or so there has been a spate of large-format, text-light coffee table books consisting of images taken by space probes, telescopes and Earth-resources satellites. A recent internet success consisted of time-lapse photography of the Earth taken by crew aboard the International Space Station; clearly, no one spent a hundred billion US dollars or so just to make a breath-taking video, but such by-products are a clear example of how science can incidentally create aesthetic work. Nor is this just a contemporary phenomenon: the earliest examples I can think of are Leonardo da Vinci's dissection drawings, which in addition to being possibly the most detailed such illustrations before today's non-invasive scanning techniques are also beautiful works of art in themselves. But then Leonardo's intention appears to have been both to investigate the natural world for the sheer sake of learning and to improve his painting through knowledge of the underlying anatomy. I wonder if there are any contemporary artists who use MRI technology or similar as a technical aid to their draughtsmanship?

At the other end of the spectrum (groan), mathematician Marcus du Sautoy's 2010 BBC TV series The Beauty of Diagrams was an interesting discourse on how certain images created for a scientific purpose have become mainstream visual symbols. From Vitruvian Man, da Vinci's analysis of ideal human proportions, to the double-helix diagram of DNA (incidentally first drawn by Odile Crick, an artist married to a scientist), these works integrate the transmission of information with a beautiful aesthetic. The latter example is particularly interesting in that this attempt to illustrate complex, minuscule structures in an easily understandable format has since become a mainstay of science diagrams: a shorthand frequently interpreted by the non-specialist as a much closer representation of reality than the schematic it really is.

Physicist and writer John Gribbin has often stated that the cutting-edge science of the past century, especially physics, has had to resort to allegory to describe situations at scales far removed from human sensory experience. This implies that an essential method by which science can be conveyed is via written metaphor and visual symbolism. As we delve further into new phenomena, science may increasingly rely on art to describe ideas that cannot, for the foreseeable future, be glimpsed at first hand. But ironically this could have a deleterious effect on public understanding if the model is too successful, for it then becomes difficult to supplant with a more accurate theory. An obvious example is the architecture of the atom, with the familiar if highly inaccurate classical model of electrons orbiting the nucleus like a miniature solar system still prevalent long after the development of quantum electrodynamics.

You might ask how difficult it would be to describe probabilities and world paths in conventional art media, but Cubism was precisely a style that attempted to combine different viewpoints of a subject into one composition. If that appears too simplistic, it may seem more convincing once you know that physicist Niels Bohr was inspired by Cubist theories during the development of the complementarity principle on wave-particle duality. Cubism is of course only one of the more obvious visual tricks, but even the most photo-realistic painting requires techniques to convert three-dimensional reality (well, four, if you want to include time) into two dimensions. How often do we consider this conversion process in itself, which relies on a series of visual formulae to produce the desired result? It may not be science, but the production of most art isn't a haphazard or random series of actions.
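As a concrete instance of such a visual formula, here is a minimal Python sketch of the single-point perspective projection that underlies most realist painting since the Renaissance; the function name and the choice of a picture plane at distance d are my own illustrative assumptions, not a quotation from any art-theory text:

    # Single-point perspective: map a 3D point onto a 2D picture plane
    # held at distance d in front of the viewer's eye (the pinhole model).
    def project(x, y, z, d=1.0):
        """Project the point (x, y, z) onto the picture plane z = d."""
        scale = d / z                   # farther objects shrink proportionally
        return (x * scale, y * scale)

    # Two fence posts of equal height, the second twice as distant:
    print(project(1.0, 2.0, 4.0))       # (0.25, 0.5)
    print(project(1.0, 2.0, 8.0))       # (0.125, 0.25)

The halving of the projected coordinates with each doubling of distance is exactly the foreshortening a draughtsman applies by eye.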

It's easy to suggest that a fundamental difference between science and the plastic arts is that the former is ideally built on a combination of method and results, whilst the latter is firmly biased towards the works alone. An exception can be seen in abstract expressionism, a.k.a. action painting: at art college we were taught that to practitioners of this school the moment of creation was at least as important as the final result. To this end, Jackson Pollock was filmed painting as early as 1950, with numerous other artists of various movements following suit soon after. In general, though, the art world runs on the rich individuals and corporations who buy the works, not on the theories of critics.

And what of art theory? Most of it isn't relevant here, but one of the fundamentals of composition is the harmony and rhythm generated by the use of mathematical ratios and sequences. The golden section and the Fibonacci series are frequently found in organic structures, so in a sense their use is a confirmation of that old adage that the purpose of art is to hold a mirror up to nature. If that sounds trite, why not examine works by contemporary artists inspired by scientific theories or methodologies? That's coming in the next post...
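In the meantime, for the numerically curious, here is a minimal Python sketch (my own illustration, nothing more) of why those two compositional tools are really one: the ratio of successive Fibonacci numbers converges on the golden section, φ = (1 + √5) / 2 ≈ 1.618:

    from math import sqrt

    PHI = (1 + sqrt(5)) / 2             # the golden section, ~1.6180339887

    def fibonacci_ratios(n):
        """Yield the ratios of successive Fibonacci numbers."""
        a, b = 1, 1
        for _ in range(n):
            a, b = b, a + b
            yield b / a

    for step, ratio in enumerate(fibonacci_ratios(12), start=1):
        print(f"step {step:2}: ratio = {ratio:.10f}  error = {abs(ratio - PHI):.1e}")

After a dozen steps the ratio already matches φ to four decimal places, which is one reason the pair crop up together so often in discussions of composition.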

Sunday 1 April 2012

A very special relationship: NASA, BIS and the race to the moon

More years back than I care to remember I met a British satellite engineer who was part of a team investigating a loose component rattling around its latest project...which unfortunately was already in Earth orbit. By rolling the satellite via its attitude thrusters they hoped to discover the nature of the problematic item, which I glibly suggested might have been an absent-minded engineer's lunchbox. I don't believe my idea was followed up and as it was, I never did find out the outcome. Answers on a postcard, please!

The relevance of this anecdote is that, as discussed in an earlier post on boffins, it's often been said that Britain's technological trailblazing stopped some decades back. Now, thanks to the Freedom of Information Act, newly-released material suggests the pipe-smoking 'backroom boys' might have played a more pivotal role in astronautics than has generally been made public. Some aviation experts consider the fabled TSR2 strike aircraft (envisioned in 1956 and cancelled a decade later) the last project in which Britain took the lead, but the most recently released FoI records offer tantalising evidence otherwise.

I realise this idea requires concrete evidence, but we should remember that despite tiny budgets by American standards, Britain is the original home of numerous technological advances, from the Hawker Harrier 'jump' jet to the hovercraft. And note that the USA has never developed a supersonic airliner in the forty-plus years since Concorde first flew. One reason the UK has apparently failed to keep up could be that transatlantic politics have overridden the applied science. For example, the satellite engineer mentioned above also worked on the 1980s fiasco known as Project Zircon, a British military satellite allegedly cancelled due to skyrocketing costs (there's a jest of sorts in there, if you look hard enough). But what if an additional, if not the real primary, reason was pressure from the US Government? There have been hints over the years that the European Launcher Development Organisation, a predecessor of the European Space Agency, was forced to cancel its remote-controlled space tug project because NASA (and therefore the White House) deemed it too advanced and therefore a potential competitor. So if post-war British technology has been deemed a commercial or security risk to the USA, might the latter have applied pressure to cancel the project or even taken over the research, lock, stock and blueprint?

This might sound far-fetched, but many a former British security officer's memoirs have mentioned that the 'special relationship' between the two nations has led the UK to kowtow to the USA on numerous occasions. This ranges from automatically offering new defence-related technology, such as signals intelligence software, to the US, through to diverting national security listening resources to US-specified targets at the drop of a hat. So might it be possible that political pressure, rather than rising costs and technological failures, has caused the cancellation of avant-garde projects, or even that the US has unfairly appropriated British high-tech wizardry?

The main thrust of this post (pun on its way) concerns the Apollo/Saturn spacecraft and rocket system (geddit now?) and how the US apparently single-handedly managed to achieve a moon landing less than a decade after the start of manned spaceflight. After all, if you consider that the Saturn V was a completely reliable, purpose-built civilian launch vehicle, unlike earlier manned spacecraft which had relied on adapted ballistic missiles, and in addition was far larger and more powerful than any previous American rocket, it seems incredible how quickly the project came together. Also, one of the chief designers was Wernher von Braun, an idealistic dreamer whose primary lifelong interest appears to have been a manned mission to Mars and who, a decade before Apollo, had been developing plans for 160-foot-long rocket ships carrying crews of twenty astronauts! Even the doyen of technology prophets, Arthur C. Clarke, was sceptical that NASA could achieve President Kennedy's goal of a manned moon landing before 1970.

In which case, I hear you ask, how did Project Apollo succeed so magnificently, especially when the N1, the USSR's equivalent, pretty much failed to escape the launchpad? It wasn't with the help of alien technology, that's for sure. At this point it is worth going back into Clarke's past. In 1937 the Technical Committee of the British Interplanetary Society (BIS), of which Clarke was twice chairman, began a study for a manned moon-landing mission. The launch vehicle was modest next to the Saturn V and the N1, utilising tiers of several thousand small solid-fuel rockets, each 'step' akin to the stages of later real-life launch vehicles. Then in 1949, knowledge of the German V-2 rockets (in which Wernher von Braun had played a key role) led the BIS team to switch to liquid-fuelled engines.

But if the rocket seems highly impractical to modern eyes*, the manned component of the BIS scheme was remarkable for its similarity to NASA hardware, being in effect a combination of the Apollo CSM and LM craft. Many of its features are fundamentally identical to the real thing, from carbon dioxide scrubbers to landing parachutes. Even the EVA suits bear a striking similarity to the NASA design, albeit using less advanced materials. The only big difference I can see is the lack of an onboard computer in the BIS design: hardly surprising, considering the first programmable electronic computer, the room-sized Colossus at Bletchley Park, didn't become operational until 1944 (beat that, ENIAC!). I assume the poor navigator would have been stuck with a slide rule instead, provision having been made in the ship's larder for coffee to keep them awake.

*Since then, real launch vehicles have used the modular approach, including those of the private company OTRAG in the 1970s and '80s and even the Saturn V's predecessors, the Saturn I and IB, which clustered eight booster tanks around the core of the first stage.

But the moon-landing project wasn't totally restricted to paper: several instruments were actually built, including an inertial altimeter and a coelostat, the latter demonstrated at the Science Museum in London. The competence of the Technical Committee members shouldn't be underestimated: in addition to Arthur C. Clarke they included A.V. Cleaver (another sometime BIS chairman) and R.A. Smith, both of whom later worked on British military rocket and missile projects.

British Interplanetary Society moon lander
The British boffin's ultimate pipe dream

It might not seem plausible that these British speculations could have been converted into NASA blueprints, but a combination of carrot and stick during the dark, paranoid days of the Cold War might have been enough to silence the BIS team's complaints at the appropriation of their work. After all, the project generated a lot of attention even before the Second World War, with coverage in Time magazine and a visit from a presumed Nazi agent in 1939.

What's more, by the early 1950s Clarke was communicating with now US-based ex-V-2 rocketeers von Braun and Hermann Oberth, whilst R.A. Smith's son later worked for NASA on the Apollo programme! There is even an intriguing suggestion that the very idea of launching early satellites on adapted military missiles (a technique utilised by both the USA and USSR) was promoted in the former country by Alexander Satin, then chief engineer of the Air Branch of the Office of Naval Research, US Navy, after he witnessed a satellite project at the 1951 Second Astronautical Congress in London. And of course, that project's team included Clarke and Cleaver; the space community in those days must have been rather on the small side.

Despite the organisation's name, there have been many American BIS members over the decades, including senior NASA figures such as Dr Kurt Debus, Director of the John F. Kennedy Space Center during the 1960s, and Gerald Griffin, a Lead Flight Director during the Apollo programme. NASA's primary contractors for Apollo were equally staffed with BIS members, including Grumman's project manager for the Lunar Module (LM), Joseph Gavin Jr. I'm not suggesting that every blivet and gubbins (to use Clarkian terms) on the BIS lunar ship was directly translated into NASA hardware, but the speed with which Project Apollo succeeded, especially compared to the USSR's failure despite its initial head start, smacks of outside assistance. As an example of how rapidly NASA contractors appear to have cobbled together their designs, Thomas Kelly, Grumman's LM Chief Design Engineer, admitted he was one of only two employees working on LM designs in the several years leading up to the contract award in 1962.

In addition to the BIS material, there are X-Files-style hints that the British Government was making strides of a more nuts-and-bolts nature with its own lunar landing programme. In 1959 the UK's rocket launch site at Woomera, Australia, appears to have begun construction of a launch pad capable of handling the two- and three-stage man-rated rockets then under development by various British aerospace consortia, the most prominent of which included winged orbiters akin to more recent NASA lifting-body designs. (Incidentally, five UK companies at the time were involved in spacesuit development, with the final Apollo EVA suit owing a lot to an undergarment cooling system developed in the UK.)

Just to put a spanner in the works, one piece of negative evidence for my technology-censorship hypothesis is that NASA clearly took no notice of the BIS crew menu. Even after Apollo 11, large strides in technology continued to be made, but the work of the food technologists was not amongst them: all Apollo astronauts lost weight and suffered electrolyte imbalances, which clearly would not have happened had they stuck to the wholesome fare - ham and cheese sandwiches, porridge, and the like - envisioned by the British boffins. It's a shame that their health temporarily suffered, but at least Neil Armstrong and co. could take music cassettes of everyone from Dvořák to the Beatles on their journeys; imagine being stuck in a small cabin with scratchy recordings of Flanagan and Allen or Vera Lynn...

Monday 27 February 2012

Predators vs poisons: the ups and downs of biological control

Ever since Darwin, islands and island groups have been known as prominent natural laboratories of evolution. Their isolation leads to the adaptive radiation of species from a single common ancestor, the finches and giant tortoises of the Galapagos Islands providing a classic example. But a small population restricted in range also means that many island species are extremely susceptible to external factors, rapid extinction being the ultimate result - as can be seen from the dodo onwards. Living as I do on an island (New Zealand counts within the terms of this discussion, as I will explain) has led me to explore what a foreign invasion can do to a local population.

Either through direct hunting or the actions of imported Polynesian dogs and rats, almost half of New Zealand's native vertebrate fauna was wiped out within a few centuries of humans arriving; so much for the myth of pre-technological tribes living in ecological harmony! But the deliberate introduction of one species to prey on another is now a much-practised and scientifically-supported technique. One of the late Stephen Jay Gould's most moving essays concerned the plight of the Partula genus of snails on the Society Islands of Polynesia. The story starts with the introduction of edible Achatina snails to the islands as food, only for some to escape and become an agricultural pest. In 1977 the cannibal wolfsnail Euglandina was brought in as a method of biological control, the idea being that it would eat the crop-munchers. Unfortunately, this latest wave of immigrant gastropods ignored the Achatina and went after the local species instead. The results were devastating: in little more than a decade, many species of Partula had become extinct in their native habitat.

(As an interesting aside, the hero of Gould's Partula vs. Euglandina story is gastropod biologist Henry Crampton, whose half-century of research into the genus is presumably no longer relevant in light of the decimation of so many of its species. Yet Crampton, born in 1875, worked in typical Victorian quantitative fashion and during a single field trip managed to collect 116,000 specimens from just one island, Moorea. I have no idea how many individual snails existed at the time, but to me the removal of such an enormous number from the breeding population in the name of scientific research was unlikely to do the genus any good. I wonder whether comparable numbers of organisms are still being collected by researchers today: somehow I doubt it!)

The Society Islands are not the only place where the deliberate introduction of Euglandina has led to the unintended devastation of indigenous snail species: Hawaii's native Achatinella and Bermuda's Poecilozonites have suffered a similar fate to Partula. Gould used the example of Partula as a passionate plea (invoking 'genocide' and 'wholesale slaughter') to prevent further inept biological control programmes, but do these examples justify banning the method outright?

The impetus for this post came from a recent visit to my local wetlands reserve, where my daughters played junior field biologists and netted small fish in order to examine them in a portable environment container (alright, a jam jar) - before of course returning them to the stream alive. The main fish species they caught was Gambusia, which originates from the Gulf of Mexico but was introduced to New Zealand in the 1930s as a predator of mosquito larvae. However, akin to Euglandina it has had a severe impact on many other fish species and is now rightly considered a pest. In fact, it's even illegal to keep them in a home aquarium, presumably just in case you accidentally aid their dispersal. Australia has also tried introducing Gambusia to control its mosquito population, but there is little data to show it works there either. Australia also provides a good illustration of environmental degradation via the second- and third-hand problems arising from deliberate introduction: the cane toad, imported to control several previously introduced beetle species, instead rapidly decimated native fauna, poisoning the amphibians and reptiles further up the food chain that preyed upon it.

Gambusia: the aggressive mosquito fish
Gambusia affinis: a big problem in a small fish

This isn't to say that there haven't been major successes with the technique. An early example concerns a small insect called the cottony cushion scale, which began to have a major impact on citrus farming in late Nineteenth Century California. It was brought under control by the introduction of Australian fly and beetle species, without any obvious collateral damage, as the military might phrase it. But considering the extinction history of New Zealand since humans arrived, I've been amazed to discover just how many organisms have been deliberately introduced here as part of biological control schemes, many in the past quarter-century. For instance, twenty-one insect and mite species have been brought over to stem the unrestrained growth of weeds such as ragwort and gorse, although rates of success have been extremely mixed (the agents aimed at Old man's beard proving a complete failure, for example). As for controlling unwelcome fauna in New Zealand, one promising recent research programme involves modifying parasites to inhibit possum fertility: something of a necessity, considering possums (first imported from Australia in the 1830s and now numbering around sixty million) are prominent vectors of bovine tuberculosis.

Stephen Jay Gould was a well-known promoter of the importance of contingency within evolution, arguing that re-running the tape of any specific branch of life would lead to a different outcome. So the question has to be asked: how can biologists test the effect of an outsider species on an ecosystem under laboratory conditions, when only time will show whether the outcome is as intended? No amount of research will reveal whether some unknown factor might, at an unspecified time during or after the eradication programme, have a negative impact. It could be argued that in the past the relative cheapness of biological control compared to alternatives such as poisons or chemical sprays made it the preferable option; however, I imagine the initial costs, involving lengthy testing cycles, mean it is no longer a cut-price alternative.

Considering recent developments in genetic modification (GM), I wonder whether researchers have been looking into ways of minimising unforeseen dangers? For example, what about the possibility of tailoring the lifespan of the control organism, so that once the original invasive species has been eliminated the predator also rapidly dies out (perhaps by something as simple as being unable to switch to an alternative food source, of which there are already many examples in nature)? Or does that sound too much like the replicant-designing Dr Eldon Tyrell in Blade Runner?

One promising recent use of GM organisms as a biological control method has been in the fight to eradicate disease-carrying (female) mosquitoes. Any female offspring of the genetically altered male mosquitoes are incapable of flight and thus unable to infect humans or indeed reproduce. However, following extremely positive cage-based testing in Mexico, the researchers appear to have got carried away with their achievements, and before you could say 'peer review' they conducted assessments directly in the wild in Malaysia, where I assume there is little GM regulation or public consultation. Test results from one location were thus extrapolated to another with a very different biota, without regard for knock-on effects such as which unwelcome species might come out of the woodwork to fill the gap in the ecosystem. When the stakes are so high, the sheer audacity of the scientists involved appears breathtaking. Like Dr Tyrell, we play god at our peril; let us hope we don't come to an equally sticky end at the hands of our creations...

Monday 30 January 2012

Sell-by date: are old science books still worth reading?

As an outsider to the world of science I've recently been struck by an apparent dichotomy that I don't think I've ever heard discussed: if science is believed by non-practitioners to work on the basis of new theories replacing earlier ones, are out-of-date popular science (as opposed to text-) books a disservice, if not a positive danger, to the field?

I recently read three science books written for a popular audience in succession, the contrast between them serving as the inspiration for this post. The most recently published was Susan Conner and Linda Kitchen's Science's Most Wanted: the top 10 book of outrageous innovators, deadly disasters, and shocking discoveries (2002). Yes, it sounds pretty tacky, but I hereby protest that I wanted to read it as much to find out about the authors and their intended audience as for the subject material itself. Although only a decade old, the book is already out of date, in much the way a list of the top ten grossing films would be. The book lists different aspects of the scientific method and those involved, looking at issues ranging from collaborative couples (e.g. the Curies) to prominent examples of scientific fraud such as Archaeoraptor, the Chinese fake feathered-dinosaur fossil.

To some extent the book is a very poor example of the popular science genre, since I found quite a few incorrect but easily verifiable 'facts'. Even so, it proved an excellent illustration of how the transmission of knowledge can suffer in a rapidly changing, pop-cultural society. Whilst the obsession with novelty and the associated transience of ideas may appear to fit the principle that a more recent scientific theory always replaces an earlier one, this is too restrictive a definition of science. The discipline doesn't hold with novelty for its own sake, nor does an old theory that has been largely superseded by a later one become worthless. A good example of the latter is the interrelationship between Newton's classical law of gravitation (first published in 1687) and Einstein's General Relativity (1916), with the former still used most of the time (for calculating space probe trajectories, for instance).
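To make that concrete, here is a minimal Python sketch (my own illustration, using standard published values for the constants, not anything drawn from the books under discussion) of the 1687 law doing exactly the sort of workaday job it still performs:

    # Newton's inverse-square law: the acceleration towards a body of
    # mass M at distance r from its centre is a = G * M / r**2.
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24     # mass of the Earth, kg

    def gravitational_acceleration(r_metres):
        """Acceleration (m/s^2) towards Earth at distance r from its centre."""
        return G * M_EARTH / r_metres ** 2

    print(gravitational_acceleration(6_371_000))    # at the surface: ~9.8
    print(gravitational_acceleration(384_400_000))  # at lunar distance: ~0.0027

For almost any trajectory a probe will fly, the corrections General Relativity adds to these figures sit far below engineering tolerances, which is why the older theory remains the working tool.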

The second of the three books illustrates several different facets of scientific practice, all of them far removed from New Zealand-born physicist Ernest Rutherford's crude summary that "physics is the only real science. The rest are just stamp collecting." Stephen Jay Gould's first collection of essays, Ever Since Darwin (1977), contains his usual potpourri of scientific theories, observations and historical research. These range from simple corrections of 'facts' – e.g. Darwin was not the original naturalist on HMS Beagle – to why scientific heresy can serve important purposes (consider the much-snubbed Alfred Wegener, who promoted a precursor to plate tectonics long before the evidence was in), through to a warning of how literary flair can promote poor or even pseudo-science to an unwary public (in this instance, Immanuel Velikovsky's now largely forgotten attempts to link Biblical events to interplanetary catastrophes).

Interestingly enough, the latter element surfaced later in Gould's own career, when his 1989 exposition of the Cambrian Burgess Shale fossils, Wonderful Life, was attacked by Richard Dawkins with the exclamation that he wished Gould could think as clearly as he could write! In this particular instance the attack was part of a wider critique of Gould's theories of evolutionary mechanisms rather than of material superseded by new factual evidence. However, if I'm a typical member of the lay readership, the account of the weird and wonderful creatures far outweighs the professional arguments. Wonderful Life is still a great read as descriptive natural history and, I suppose, serves as a reminder that however authoritative the writer, don't accept everything at face value. But then that's a good lesson in all subjects!

But back to Ever Since Darwin. I was surprised by just how much of the factual material had dated over the past thirty-five years, in fields as disparate as palaeontology and planetary exploration. As an example, Essay 24 promotes the idea that the geophysical composition of a planetary body relies solely on that body's size, a hypothesis since firmly negated by space probe data. In contrast, it is the historical material that still shines as relevant and, in the generic sense, 'true'. I've mentioned before (link) that Bill Bryson's bestseller A Short History of Nearly Everything promotes the idea that science is a corpus of up-to-date knowledge rather than a theoretical framework and methodology of experimental procedures. By so short-changing science, Bryson's attitude could promote the idea that all old material is essentially worthless. Again, the love of novelty, now so ingrained in Western societies, can cause public confusion about the multi-layered discipline known as science.

Of course, this doesn't mean that something once considered a classic necessarily retains great worth, any more than every single building over half a century old is worthy of a preservation order. But just possibly (depending on your level of post-modernism and/or pessimism) any science book that stands the test of time does so because it contains self-evident truths. The final book of the three is a perfect example: Charles Darwin's On the Origin of Species, in this case the first edition of 1859. The book shows that Darwin's genius lay in tying together apparently disparate precursors to formulate his theory; in other words, natural selection was already on the thought horizon (as proven by Alfred Russel Wallace's 1858 manuscript). In addition, the distance between publication and today gives us an interesting insight into the scientist as human being, with all the cultural and linguistic baggage we rarely notice in our contemporaries. In some ways Darwin was very much a man of his time, attempting to soften the non-moralistic side of his theory by subtly suggesting that new can equal better, i.e. a form of progressive evolution. For example, he describes extinct South American megafauna as 'anomalous monsters', yet our overly familiar modern horse only survived via Eurasian migration, dying out completely in its native Americas. We can readily assume that had the likes of Toxodon survived but not Equus, the horse would seem equally 'anomalous' today.

Next, Darwin had limited fossil evidence to support him, whilst Nineteenth Century physics seemed to negate natural selection by not allowing enough time for it to take effect. Of course, if the reader knows what has been discovered in the same field since, they can begin to get an idea of the author's thought processes and indeed world view, and of just how comparatively little data he had to work with. For example, Darwin could only note that the sterility of hybrids varies, whereas we now understand, for instance, that most mules are sterile because of chromosomal mismatches between their horse and donkey parents. Yet this didn't prevent the majority of mid-Victorian biologists from accepting natural selection, an indication that science can be responsive to ideas supported only by circumstantial evidence; this is a very long way indeed from the notion of an assemblage of clear-cut facts laid out in logical succession.

I think it was the physicist and writer Alan Lightman who said: "Science is an ideal but the application of science is subject to the psychological complexities of the humans who practice it." Old science books may frequently be dated from a professional viewpoint but can still prove useful to the layman for at least the following reasons: understanding the personalities, mind-sets and modes of thought of earlier generations; observing how theories within a discipline have evolved as external evidence and fashionable ideas change; and realising that science as a method of understanding the universe is utterly different from all other aspects of human endeavour. Of course, this is always supposing that the purple prose doesn't obscure a multitude of scientific sins...