
Monday 13 August 2018

Life on Mars? How accumulated evidence slowly leads to scientific advances

Although the history of science is often presented as a series of eureka moments, with a single scientist's brainstorm paving the way for a paradigm-shifting theory, the truth is usually rather less dramatic. A good example is the formulation of plate tectonics, with the meteorologist Alfred Wegener's continental drift being rejected by the geological orthodoxy for over thirty years. It was only with the accumulation of data from the late 1950s onward that the mobility of Earth's crust slowly gained acceptance, thanks to the multiple strands of new evidence that supported it.

One topic that looks likely to increase in popularity amongst both the public and biologists is the search for life on Mars. Last month's announcement of a lake deep beneath the southern polar ice cap is the latest piece of observational data suggesting that Mars might still have environments suitable for microbial life. It joins an increasing body of evidence that conditions may still be capable of supporting life, long after the planet's biota-friendly heyday. However, the data hasn't always been so positive, having fluctuated in both directions over the past century or so. So what is the correspondence between positive results and the level of research into life on Mars?

The planet's polar ice caps were first discovered in the late Seventeenth Century, which, combined with the Earth-like length of the Martian day, implied the planet might be fairly similar to our own. This was followed a century later by observation of what appeared to be seasonal changes to surface features, leading to the understandable conclusion that Mars was a temperate, hospitable world covered with vegetation. Then another century on, an early use of spectroscopy erroneously described abundant water on Mars; although the mistake was later corrected, the near-contemporary reporting of non-existent Martian canals led to soaring public interest and intense speculation. The French astronomer Camille Flammarion helped popularise Mars as a potentially inhabited world, paving the way for H.G. Wells' War of the Worlds and Edgar Rice Burroughs' John Carter series.

As astronomical technology improved and the planet's true environment became known (low temperatures, thin atmosphere and no canals), Mars' popularity waned. By the time of Mariner 4's 1965 fly-by, the arid, cratered and radiation-smothered surface it revealed only served to reinforce the notion of a lifeless desert; the geologically inactive world was long past its prime and any life still existing there probably wouldn't be visible without a microscope.

Despite this disappointing turnabout, NASA somehow managed to gain the funding to incorporate four biological experiments on the two Viking landers that arrived on Mars in 1976. Three of the experiments gave negative results while the fourth was inconclusive, with most researchers hypothesising a geochemical rather than biological explanation for the outcome. This lack of positive results - accompanied by experimental cost overruns - probably contributed to a sixteen-year hiatus in missions to Mars (excluding two Soviet attempts at missions to the Martian moons). Clearly, Mars' geology by itself was not enough to excite the interplanetary probe funding czars.

In the meantime, it was some distinctly Earth-bound research that reignited interest in Mars as a plausible home for life. The 1996 report that Martian meteorite ALH84001 contained features resembling fossilised (if extremely small) bacteria gained worldwide attention, even though the eventual conclusion repudiated this. Analysis of three other meteorites originating from Mars showed that complex organic chemistry, lava flows and moving water were common features of the planet's past, although they offered no more than tantalising hints that microbial life may have flourished, possibly billions of years ago.

Back on Mars, NASA's 1997 Pathfinder lander delivered the Sojourner rover. Although it appeared to be little more than a very expensive toy, managing a total distance in its operational lifetime of just one hundred metres, the proof of concept led to much larger and more sophisticated vehicles culminating in today’s Curiosity rover.

The plethora of Mars missions over the past two decades has delivered immense amounts of data, including evidence that the planet used to have near-ideal conditions for microbial life - and still has a few types of environment that may be able to support minuscule extremophiles.

Together with research undertaken in Earth-bound simulators, the numerous Mars projects of the Twenty-first Century have to date swung the pendulum back in favour of a Martian biota. Here are a few prominent examples:

  • 2003 - atmospheric methane is discovered (the lack of active geology implying a biological rather than geochemical origin)
  • 2005 - atmospheric formaldehyde is detected (it could be a by-product of methane oxidation)
  • 2007 - silica-rich rocks, similar to those formed at hot springs, are found
  • 2010 - giant sinkholes are found (suitable as radiation-proof habitats)
  • 2011 - flowing brines and gypsum deposits discovered
  • 2012 - lichen survived for a month in the Mars Simulation Laboratory
  • 2013 - proof of ancient freshwater lakes and complex organic molecules, along with a long-lost magnetic field
  • 2014 - large-scale seasonal variation in methane is observed, greater than would be expected from a purely geochemical origin
  • 2015 - Earth-based research successfully incubates methane-producing bacteria under Mars-like conditions
  • 2018 - a brine lake some twenty kilometres across is found under the southern polar ice sheet

Although these facts accumulate into an impressive package in favour of Martian microbes, they should probably be treated as independent points rather than as one combined argument. For as well as factors supporting microbial life, research has also produced opposing ones. For example, last year NASA found that a solar storm had temporarily doubled surface radiation levels, meaning that even dormant microbes would have to live over seven metres down in order to survive. We should also bear in mind that for part of each orbit, Mars veers outside our solar system's Goldilocks Zone, and as such any native life would have its work cut out for it at aphelion.
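To put rough numbers on that last point, here's a back-of-envelope sketch in Python. The orbital elements are standard values, but note that the Goldilocks Zone boundaries are model-dependent assumptions that vary considerably between studies:

    # Rough arithmetic behind the aphelion remark above. The orbital
    # elements are standard values; habitable zone limits are assumed
    # and vary between models.
    a = 1.524   # semi-major axis of Mars' orbit, in astronomical units
    e = 0.0934  # orbital eccentricity of Mars
    perihelion = a * (1 - e)  # closest approach to the Sun
    aphelion = a * (1 + e)    # furthest point from the Sun
    print(f"Mars ranges between {perihelion:.2f} and {aphelion:.2f} au")
    # Prints: Mars ranges between 1.38 and 1.67 au - straddling many
    # estimates of the habitable zone's outer edge (roughly 1.4-1.7 au)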

A fleet of orbiters, landers, rovers and even a robotic helicopter is planned for further exploration in the next decade, so clearly the search for life on Mars is still deemed a worthwhile effort. Indeed, five more missions are scheduled for the next three years alone. Whether any will provide definitive proof is the big question, but equally, how much of the surface - and sub-surface - would need to be thoroughly searched before concluding that Mars has either never had microscopic life or that any such life has long since become extinct?

What is apparent from all this is that the number of Mars missions has fluctuated according to confidence in the hypothesis. In other words, the more the data supports the existence of suitable habitats for microbes, the greater the amount of research to find them. In a world of limited resources, even such profoundly interesting questions as extra-terrestrial life appear to gain funding based on the probability of near-future success. If the next generation of missions fails to find traces of even extinct life, my bet would be a rapid and severe curtailing of probes to the red planet.

There is a caricature of the stages that scientific hypotheses go through, which can ironically best be described using religious terminology: they start as heresy; proceed to acceptance; and are then carved into stone as orthodoxy. Of course, unlike with religions, the vast majority of practitioners accept the new working theory once the data has passed a certain probability threshold, even if it totally negates an earlier one. During the first stage - and as the evidence starts to be favourable - more researchers may join the bandwagon, hoping to be the first to achieve success.

In this particular case, the expense and sophistication of the technology prohibits entries from all except a few key players such as NASA and ESA. It might seem obvious that in expensive, high-tech fields, there has to be a correlation between hypothesis-supporting facts and the amount of research. But this suggests a stumbling block for out-of-the-box thinking, as revolutionary hypotheses fail to gain funding without at least some supporting evidence.

Does the cutting edge, at least in areas that require expensive experimental confirmation, therefore start life in a chicken-and-egg situation? Until data providentially appears, is it often the case that the powers-that-be have little incentive to fund left-field projects? That certainly seems to have been true for the meteorologist Alfred Wegener and his continental drift hypothesis, since it took several research streams to codify plate tectonics as the revolutionary solution.

Back to Martian microbes. Having now read in greater depth about the seasonal methane, I gather that the periodicity could be due to temperature-related atmospheric changes. This leaves only the scale of the variation as support for a biological rather than geochemical origin. Having said that, the joint ESA/Roscosmos ExoMars Trace Gas Orbiter may find a definitive answer as to its source in the next year or so, although even a negative result is unlikely to close the matter for some time to come. Surely this has got to be one of the great what-ifs of our time? Happy hunting, Mars mission teams!

Sunday 1 April 2018

Engagement with Oumuamua: is our first interstellar visitor an alien spacecraft?

It's often said that fact follows fiction, but some instances appear uncanny beyond belief. One relatively well-known example comes from the American writer Morgan Robertson, whose 1898 novella The Wreck of the Titan (originally entitled Futility) eerily prefigured the 1912 loss of the Titanic. The resemblances between the fictional precursor and the infamous passenger liner are remarkable, including the month of the sinking, the impact location, and similarities of size, speed and passenger capacity. I was first introduced to this series of quirky coincidences via Arthur C. Clarke's 1990 novel The Ghost from the Grand Banks, which not incidentally is about attempts to raise the Titanic. The reason for including the latter reference is that there may have just been an occurrence that involves another of Clarke's own works.

Clarke's 1973 award-winning novel Rendezvous with Rama tells of a 22nd century expedition to a giant interstellar object that is approaching the inner solar system. The fifty-four kilometre long cylinder, dubbed Rama, is discovered by an Earthbound asteroid detection system called Project Spaceguard, a name which since the 1990s has been adopted by real-life surveys aiming to provide early warning of Earth-crossing asteroids. Rama is revealed to be a dormant alien spacecraft, whose trajectory confirms its origin outside of our solar system. After a journey of hundreds of thousands of years, Rama appears to be on a collision course with the Sun, only to scoop up solar material as a fuel source before heading back into interstellar space (sorry for the spoiler, but if you haven't yet read it, why not?)

In October last year the astronomer Robert Weryk, at the Haleakala Observatory in Hawaii, found an unusual object forty days after its closest encounter with the Sun. Initially catalogued as C/2017 U1, the object was at first thought to be a comet, but with no sign of a tail or coma it was reclassified as the asteroid A/2017 U1. After another week's examination it was put into a brand-new class all by itself, as 1I/2017 U1, and this is when observers began to get excited, as its trajectory appeared to proclaim an interstellar origin.

As it was not spotted until it was about thirty-three million kilometres from the Earth, the object was far too small to be photographed in any detail; all that appears to telescope-mounted digital cameras is a single pixel. Its shape therefore had to be inferred from the light curve, which implied a longest-to-shortest axis ratio of 5:1 or even larger, with the longest dimension being between two hundred and four hundred metres. As this data became public, requests were made for a more familiar name than just 1I/2017 U1; perhaps unsurprisingly, Rama became a leading contender. However, the Hawaiian observatory's Pan-STARRS team finally opted for the common name Oumuamua, which in the local language means 'scout'.
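Incidentally, the inference from light curve to shape rests on simple arithmetic: the peak-to-trough brightness swing, measured in magnitudes, gives the ratio of the largest to smallest projected cross-section. Here's a minimal Python sketch, assuming a plain ellipsoid seen roughly side-on (the amplitudes used are approximate, and any tilt to our line of sight makes the true ratio larger still):

    # Convert a light-curve amplitude (in magnitudes) into a minimum
    # long-to-short axis ratio, assuming a simple ellipsoid viewed
    # roughly equator-on.
    def min_axis_ratio(delta_mag):
        # A difference of delta_mag magnitudes corresponds to a flux
        # ratio of 10^(delta_mag / 2.5); for an ellipsoid this tracks
        # the ratio of projected cross-sections.
        return 10 ** (delta_mag / 2.5)

    for delta_mag in (1.75, 2.5):  # approximate range of reported swings
        print(f"{delta_mag} mag swing -> axis ratio of at least "
              f"{min_axis_ratio(delta_mag):.0f}:1")

A swing of 1.75 magnitudes already implies a ratio of five to one, and the larger reported values push it towards ten - hence the '5:1 or even larger' estimate.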

Various hypotheses have been raised as to exactly what type of object Oumuamua is, from a planetary fragment to a Kuiper belt object similar to - although far smaller than - Pluto. However, the lack of off-gassing even at perihelion (closest approach to the Sun) implies that any icy material must lie below a thick crust, and the light curve suggests a denser material such as metal-rich rock. This sounds most unlike any known Kuiper belt object.

These unusual properties attracted the attention of senior figures in the search for extra-terrestrial intelligence. Project Breakthrough Listen, whose leadership includes SETI luminaries Frank Drake, Ann Druyan and Astronomer Royal Martin Rees, directed the world's largest manoeuvrable radio telescope towards Oumuamua. It failed to find any radio emissions, although the lack of a signal is tempered by the knowledge that SETI astronomers are now considering lasers as a potentially superior form of interstellar communication to radio.

The more that Oumuamua has been studied, the more surprising it appears. Travelling at over eighty kilometres per second relative to the Sun, its path shows that it did not originate from any of the twenty neighbouring star systems. Yet it homed in on our star, getting seventeen percent nearer to the Sun than Mercury does at its closest. This seems almost impossible to have occurred simply by chance - space is just too vast for an interstellar object to have achieved such proximity. So how likely is it that Oumuamua is a real-life Rama? Let's consider the facts:
  1. Trajectory. The area of a solar system with potentially habitable planets is nicknamed the 'Goldilocks zone', which for our system includes the Earth. It's such a small percentage of the system, extremely close to the parent star, that for a fast-moving interstellar object to approach at random seems almost impossible. Instead, Oumuamua's trajectory was perfectly placed to obtain a gravity assist from the Sun, allowing it to both gain speed and change course, with it now heading in the direction of the constellation Pegasus.
  2. Motion. Dr Jason Wright, an associate professor of astronomy and astrophysics at Penn State University, likened the apparent tumbling motion to that of a derelict spacecraft, only to retract his ideas when criticised for sensationalism.
  3. Shape. All known asteroids and Kuiper belt objects are much less elongated than Oumuamua, even though most are far too small to have settled into a spherical shape under their own gravity (the minimum diameter being around six hundred kilometres for predominantly rocky objects). The exact appearance is unknown, with the ubiquitous crater-covered asteroid artwork being merely an artist's impression. Astronautical experts have agreed that Oumuamua's shape is eminently suitable for minimising damage from particle impacts.
  4. Composition. One definitive piece of data is that Oumuamua doesn't emit the clouds of gas or dust usually associated with objects of a similar size. In addition, according to a report by the American Astronomical Society, it has an 'implausibly high density'. Somehow, it has survived a relatively close encounter with the Sun while remaining in one piece - at a maximum velocity of almost eighty-eight kilometres per second relative to our star (a figure sanity-checked in the sketch after this list)!
  5. Colour. There appears to be a red region on the surface, rather than the uniform colour expected for an object that has been bombarded with radiation on all sides whilst in deep space for an extremely long period.
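As an aside, that peak velocity is easy to sanity-check using energy conservation. A minimal Python sketch, taking published approximate figures (a hyperbolic excess speed of about 26 km/s and a perihelion of about 0.26 au) as its assumptions:

    import math

    GM_SUN = 1.327e20    # solar gravitational parameter, m^3/s^2
    AU = 1.496e11        # one astronomical unit, in metres

    v_inf = 26.3e3       # assumed speed 'at infinity' relative to the Sun, m/s
    r_peri = 0.255 * AU  # assumed perihelion distance, m

    # For a hyperbolic orbit, energy conservation (the vis-viva relation)
    # gives v^2 = v_inf^2 + 2 * GM / r at distance r from the Sun.
    v_peri = math.sqrt(v_inf**2 + 2 * GM_SUN / r_peri)
    print(f"perihelion speed = {v_peri / 1e3:.1f} km/s")
    # Prints: perihelion speed = 87.5 km/s - in line with the figure above.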
So where does this leave us? There is an enormous amount of nonsense written about alien encounters, conspiracy theories and the like, with various governments and the military seeking to hide their strategies in deliberate misinformation. For example, last year the hacker collective Anonymous stated that NASA would soon be releasing confirmation of contact with extraterrestrials; to date, in case you were wondering, there's been no such announcement. Besides which, wouldn't it be more likely to come from a SETI research organisation such as the Planetary Society or Project Breakthrough Listen?

Is there any evidence to imply a cover-up regarding Oumuamua? Here are some suggestions:
  1. The name Rama - already familiar to many from Arthur C. Clarke's novel and therefore evocative of an artificial object - was abandoned for a far less expressive and more obscure common name. Was this an attempt to distance Oumuamua from anything out of the ordinary?
  2. Dr Wright's proposals were luridly overstated in the tabloid media, forcing him to abandon further investigation. Was this a deliberate attempt by the authorities to make light of his ideas, so as to prevent too much analysis while the object was still observable?
  3. Limited attempts at listening for radio signals have been made, even though laser signalling is now thought to be a far superior method. So why have these efforts been so half-hearted for such a unique object?
  4. The only images available in the media are a few very samey artist's impressions of an elongated asteroid, some pock-marked with craters, others, especially animations, with striations (the latter reminding me more of fossilised wood). Not only are these pure speculation, but none feature the red region reported in the observations. It's almost as if the intention was to show a totally standard asteroid, albeit of unusual proportions. But this appearance is complete guesswork: Oumuamua has been shoe-horned into the form of a conventional natural object, despite its idiosyncrasies.
Thanks to Hollywood, most people's idea of aliens is of implacable invaders. If - and when - the public receive confirmation of intelligent alien life, will there be widespread panic and disorder? After all, Orson Welles' 1938 radio version of H.G. Wells' War of the Worlds led some listeners to flee their homes, believing a Martian invasion had begun. Would people today be any different? The current following of dangerous fads such as paleo diets and raw water, never mind the paranoid conspiracy theories that fill the World Wide Web, leads me to expect little change from our credulous forebears.

The issue, of course, comes down to one of security. Again, science fiction movies tend to overshadow real-life space exploration, but the fact is that we have no spacecraft capable of matching orbits with the likes of Oumuamua. In Arthur C. Clarke's Rendezvous with Rama, colonists on 22nd century Mercury become paranoid at the giant spacecraft's approach and attempt to destroy it with a nuclear missile (oops, another spoiler there). There is no 21st century technology that could match this feat, so if Oumuamua did turn out to be an alien craft, we would have to hope for the best. Therefore if, for example, the U.S. Government gained some data that even implied the possibility of artifice regarding Oumuamua, wouldn't it be in their best interest to keep it quiet, at least until the object is long gone?

In which case, promoting disinformation and encouraging wild speculation in the media would be the perfect way to disguise the truth. Far from being an advanced - if dead or dormant - starship, our leaders would rather we believed it to be a simple rocky asteroid, despite the evidence to the contrary. Less one entry for the Captain's log, and more a case of 'to boulderly go' - geddit?

Friday 11 August 2017

From steampunk to Star Trek: the interwoven strands between science, technology and consumer design

With Raspberry Pi computers having sold over eleven million units by the end of last year, consumer interest in older technology appears to have become big business. Even such decidedly old-school devices as crystal radio kits are selling well, whilst replicas of vintage telescopes are proof that not everyone has a desire for the cutting-edge. I'm not sure why this is so, but since even instant Polaroid-type cameras are now available again - albeit with a cute, toy-like styling - perhaps manufacturers are just capitalising on a widespread desire to appear slightly out of the ordinary. Even so, such products are far closer to the mainstream than left field: instant-developing cameras for example now reach worldwide sales of over five million per year. That's hardly a niche market!

Polaroid cameras aside, could it be the desire for a less minimal aesthetic that is driving such purchases? Older technology, especially if it is pre-integrated circuit, has a decidedly quaint look to it, sometimes with textures - and smells - to match. As an aside, it's interesting that whilst miniaturisation has reduced the energy consumption of many smaller pieces of technology, from the Frankenstein laboratory appearance of valve-based computing and room-sized mainframes to the smart watch et al, the giant scale of cutting-edge technology projects requires immense amounts of energy, with nuclear fusion reactors presumably having overtaken the previous perennial favourite example of space rockets when it comes to power usage.

The interface between sci-tech aesthetics and non-scientific design is a complicated one: it used to be the case that consumer or amateur appliances were scaled-down versions of professional devices, or could even be home-made - telescopes and crystal radios, for example. Nowadays there is a massive difference between the equipment in high-tech laboratories and the average home; even consumer-level 3D printers won't be able to reproduce gravitational-wave detectors or CRISPR-Cas9 genome editing tools any time soon.

The current trend in favour - or at least acknowledgement - of sustainable development is helping to nullify the pervasive Victorian notion that bigger, faster, noisier (and smellier) equates with progress. It's therefore interesting to consider the interaction of scientific ideas and instruments, new technology and consumerism over the past century or so. To my mind, there appear to be five main phases since the late Victorian period:
  1. Imperial steam
  2. Streamlining and speed
  3. The Atomic Age
  4. Minimalism and information technology
  5. Virtual light

1) Imperial steam

In the period from the late Nineteenth Century's first generation of professional scientists up to the First World War, there appears to have been an untrammelled optimism for all things technological. Brass, iron, wood and leather devices - frequently steam-powered - created an aesthetic that, seemingly without effort, carries an aura of romance to modern eyes.

Although today's steampunk/alternative history movement is indebted to later authors, especially Michael Moorcock, as much as it is to Jules Verne and H.G. Wells, the latter pair are only the two most famous of a whole legion of late Victorian and Edwardian writers who extolled - and occasionally agonised over - the wonders of the machine age.

I must confess I much prefer steam engines to electric or diesel locomotives, despite the noise, smuts and burning of fossil fuels. Although the pistons and connecting rods of these locomotives might be the epitome of design from this phase, it should be remembered that it was not unknown for Victorian engineers to add fluted columns and cornucopia reliefs to their cast iron and brass machinery, echoes of a pre-industrial past. An attempt was being made, however crude, to tie the might of steam power to the Classical civilisations that never went beyond the aeolipile toy turbine and the Antikythera mechanism.

2) Streamlining and speed

From around 1910, the fine arts and then decorative arts developed new styles obsessed with mechanical movement, especially speed. The dynamic work of the Futurists led the way, depicting the increasing pace of life in an age when humans and machines were starting to interact ever more frequently. The development of heavier-than-air flight even led to a group of 'aeropainters' whose work stemmed from their experience of flying.

Although scientific devices still had some of the Rube Goldberg/Heath Robinson appearance of their Nineteenth Century forebears, both consumer goods and vehicles picked up the concept of streamlining to suggest a sophisticated, future-orientated design. Items such as radios and toasters utilised early plastics, stainless steel and chrome to imply a higher level of technology than their interiors actually contained. This is in contrast to land, sea and aerial craft, for which the practical benefits of streamlining happily coincided with an attractive aesthetic, leading to design classics such as the Supermarine seaplanes (forerunners of the Spitfire) and the world speed record-holding A4 Pacific Class steam locomotives.

3) The Atomic Age

By the 1950s practically anything that could be streamlined was, whether buildings that looked like ocean liners or cars with rocket-like tailfins and dashboards fit for a Dan Dare spaceship. However, a new aesthetic was gaining popularity in the wake of the development of atomic weapons. It seems ironic that, somewhere between the optimism of an era of exciting new domestic gadgets and the potential for nuclear Armageddon, the Bohr (classical physics) model of the atom gained a key place in post-war design.

Combined with rockets and spacecraft, the imagery could readily be termed 'space cadet', but physics wasn't the only area of science to influence wider society. Biological research was undergoing a resurgence, which may explain why stylised x-ray forms, amoebas and bodily organs became ubiquitous on textiles, furnishings and fashion. Lighting fixtures were a standout example of items utilising designs based on the molecular models used in research laboratories (which famously gave Crick and Watson the edge in winning the race to understand the structure of DNA).

Monumental architecture also sought to represent the world of molecules on a giant scale, culminating in the 102 metre-high Atomium built in Brussels for the 1958 World's Fair. It could be said that never before had science- and technological-inspired imagery been so pervasive in non-STEM arenas.

4) Minimalism and information technology

From the early 1970s the bright, optimistic designs of the previous quarter century were gradually replaced by the cool, monochromatic sophistication of minimalism. Less is more became the ethos, with miniaturisation increasing as solid-state electronics and then integrated circuits became available. A plethora of artificial materials, especially plastics, meant that forms and textures could be incredibly varied yet refined.

Perhaps a combination of economic recession, mistrust of authority (including science and a military-led technocracy) and a burgeoning awareness of environmental issues led to the replacement of exuberant colour with muted, natural tones and basic if self-possessed geometries. Consumers could now buy microcomputers and video games consoles; what had previously only existed in high-tech labs or science fiction became commonplace in the household. Sci-fi media began a complex two-way interaction with cutting-edge science; it's amazing to consider that only two decades separated the iPad from its fictional Star Trek: The Next Generation predecessor, the PADD.

5) Virtual light

With ultra high-energy experiments such as nuclear fusion reactors and the ubiquity of digital devices and content, today's science-influenced designs aim to be simulacra of their professional big brothers. As stated earlier, although consumer technology is farther removed from mega-budget science apparatus than ever, the former's emphasis on virtual interfaces is part of a feedback loop between the two widely differing scales.

The blue and green glowing lights of everything from futuristic engines to holographic computer interfaces in many Hollywood blockbusters are representations both of the actual awesome power required by the likes of the Large Hadron Collider and of the visually unspectacular real-life lasers and quantum teleportation; the ultimate fusion (sorry, couldn't resist that one) being the use of the real National Ignition Facility target chamber as the engine core of the USS Enterprise in Star Trek: Into Darkness.

Clearly, this post-industrial/information age aesthetic is likely to be with us for some time to come, as consumer-level devices emulate the cool brilliance of professional STEM equipment; the outer casing is often simple yet elegant, aiming not to distract from the bright glowing pixels that take up so much of our time. Let's hope this seduction by the digital world can be moderated by a desire to keep the natural, material world working.

Thursday 31 January 2013

Profiling the future: science predictions of a bygone age

I recently heard a joke along the lines of: "Question: What would a scientist from one hundred years ago find most disconcerting about current technology? Answer: whilst there are cheap, mass-produced, pocket-sized devices that can hold a large proportion of mankind's knowledge, they are mostly used for viewing humorous videos of cats!" The obvious point to make (apart from all the missed potential) is that the future is likely to be far more unpredictable than even the best-informed science fiction writer is capable of formulating. But if SF authors are unlikely to make accurate predictions, what are the chances that trained scientists will be any good at prognostication either?

As a child I read with breathless wonder various examples of mainstream science prediction delineating the early Twenty-first Century: flying cars, underwater cities, domestic robots and enormous space colonies; after all, I did grow up in the 1970s! Unfortunately I wasn't to know that these grandiose visions were already fading by the time Apollo 11 touched down on the moon. Yet if this was caused by a decline in the Victorian ideal of progress (or should that be Progress) why didn't the authors of these volumes know about it?

Despite the apparent decline in mega-budget projects over the past forty years - the Large Hadron Collider and International Space Station excepted - popular science and technology exposition continued to promote wild, wonderful and occasionally downright wacky ideas into the 1980s. One of the best known examples of the genre is Arthur C. Clarke's Profiles of the Future, originally published in 1962 but with updated editions appearing in 1973, 1983 and 1999. As a leading SF writer and 'Godfather of the Communications Satellite', Clarke seemed better placed than most to make accurate predictions, making him a suitable example with which to explore this theme. Indeed, the first edition of Profiles… contains what was to become his First Law, a direct reference to one of the dangers of prophesying developments in science and technology: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong." Unfortunately, by always following this notion Clarke's prognostications frequently appear overly optimistic, displaying a schoolboy enthusiasm for advancement that downplays the interactions between science and society.

Interestingly, this optimism appears in exact opposition to earlier generations, wherein scientists and pioneer SF writers were frequently pessimistic as to the impact that new science and technology would have on civilisation. Whilst some of Jules Verne and H.G. Wells' fictional predictions have been realised, their most negative visions have yet to occur, unless you consider the West's current obsession with witless celebrities and consumerism as a veritable precursor of Wells' post-human Eloi. (Note: if you enjoy watching TV shows such as Celebrity Chefs' Pets' Got Talent, you should probably read Wells' The Time Machine as soon as possible…)

While the Nineteenth and early Twentieth Century equivalents of Michael Crichton were raising the possibility of technologically-led dystopias, their scientific contemporaries frequently abided by a philosophy antithetical to Clarke's First Law. The likes of Lord Kelvin, Ernest Rutherford and even Albert Einstein opposed theories now part and parcel of the scientific canon, ranging from black holes, meteorite impacts on Earth and quantum electrodynamics to the ensuing development of heavier-than-air flight, atomic bombs and even commercial radio transmission. Given how quickly advances in science and technology occurred during Clarke's first fifty years, perhaps he and his fellow prophets could be forgiven for thinking progress would remain on a steady, upward path. After all, in terms of astronautics alone, the quarter century from the V-2 to Apollo 11 vindicated many of their ideas and at the same time proved that some of the finest scientific minds of the early Twentieth Century - Rutherford, J.B.S. Haldane, various Astronomer Royals, et al - had been completely wrong.

However, even a brief analysis of recent history, say the post-Apollo era, shows that scientific developments are subject to the complicated interactions of culture, economics and leadership; and of course, simple serendipity. The first edition of Profiles of the Future stated that the Ground Effect Machine (A.K.A. hovercraft) would soon become a prominent form of land transport. In the context of the time - the SR.N1 having only made its first 'flight' three years earlier - this would seem a reasonable proposition, but once you stop to consider the vested interests in the established transport sector it is readily apparent that such a new kid on the block could not get established without overcoming major obstacles (of a non-technical variety). As Stephen Jay Gould was fond of pointing out, it is exceedingly difficult to replace even suboptimal technology once it has become established, the QWERTY keyboard layout being a prominent example.

Conversely, pioneers such as the British jet engine inventor Frank Whittle found themselves snubbed by an establishment that failed to see the advantages of disturbing the status quo. Another issue concerns how theories can easily get lost, only to be rediscovered later, such as the work of the genetics pioneer Gregor Mendel. By failing to take enough notice of these issues, Clarke's generation watched their predictions fall out of synchronisation after what appeared to be a promising start. In contrast, futurists with a keen interest in the sociological implications of new technology, Alvin Toffler perhaps being the best known, have long noted that progress can be non-linear and subject to the vagaries of the society in which it develops.

Although Arthur C. Clarke is remembered as a 'prophet of the space age', it is interesting to ask how original he was: inventive genius, smart extrapolator from the best of H.G. Wells (and numerous pulp SF writers), or just a superb mouthpiece for the cutting-edge technologists? The Saturn V architect Wernher von Braun, for example, wrote The Mars Project, a detailed 1948 study for a manned mission to Mars that showed parallels with Clarke's writings of the period. Bombarded as we are today by numerous examples of space travel in fact and fiction, it's hard to imagine a time when anyone discussing the possibility was deemed an eccentric. For instance, Robert Goddard, the American pioneer of liquid-fuelled rockets during the 1920s and 30s, faced enormous criticism from those who considered his physics flawed. Only with the development of the V-2 rocket (again involving von Braun) was there some science fact to back up the fiction, starting the shift in the public perception of astronautics from crackpot fantasy to realistic prospect. Ironically, the new advances also provided fuel for a moral opposition, C.S. Lewis being a prominent example, who argued that humans shouldn't develop space travel until their ethics had improved. Clarke may be known for his anti-nationalistic stance concerning space exploration, but during the late 1940s and early 1950s even he wrote both fact (The rocket and the future of warfare) and fiction (Earthlight) discussing its military potential.

Just because some of Clarke's ideas - in distinct opposition to all the naysayers - came to fairly rapid fruition doesn't make him a genius at prediction; in the broad sweep of developments he was frequently correct, but when it comes to the details there are marked differences. His landmark 1945 paper on global communications from geosynchronous orbit also suggested that atomic-powered rockets would be commonplace by the mid-1960s, a topic elaborated on by his British Interplanetary Society (BIS) colleagues several years later. Whilst Project NERVA did test such systems during that decade, various factors put this line of development on indefinite hold. Clarke also thought the orbital communications system would consist of three large manned stations rather than dozens of small, unmanned satellites. But then, the development of the microchip in 1959 led to a paradigm shift in miniaturisation largely unforeseen by any prognosticator. It's interesting that although Clarke was postulating remote-controlled war rockets as early as 1946, he didn't discuss automated space probes until much later: is it possible that the fiction writer within him wanted to downplay unmanned missions as dramatically weak material? Also, in an unusually modest statement, Clarke himself claimed that he had advanced the idea of orbital communications by approximately fifteen minutes!
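The arithmetic behind that 1945 insight is remarkably compact: Kepler's third law fixes the one orbital radius at which a satellite's period matches the Earth's rotation. A quick Python sketch using standard physical constants (nothing here is specific to Clarke's paper):

    import math

    GM_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
    T_SIDEREAL = 86164.1  # one sidereal day, in seconds

    # Kepler's third law, T^2 = 4 * pi^2 * a^3 / GM, solved for the
    # semi-major axis a of a circular geosynchronous orbit.
    a = (GM_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
    altitude = a - 6.378e6  # subtract Earth's equatorial radius, m

    print(f"orbital radius: {a / 1e3:,.0f} km")
    print(f"altitude above the equator: {altitude / 1e3:,.0f} km")
    # Prints an altitude of roughly 35,786 km - the 'Clarke orbit'.

The physics, in other words, was the easy part; as the rest of this post argues, it was the engineering detail and the sociology that defeated prediction.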

So if the technological aspects of Profiles… are reasonably unimpeachable, the failure to consider the infinite complexities of human beings and the societies they build means that many of Clarke's ideas remain unfulfilled or have been postponed indefinitely. Even for those examples that were achieved, such as the manned moon landings (albeit some years ahead of Clarke's most optimistic timeline), primary motivations such as the Cold War overshadowed the scientific aspect. Clarke admitted in later years that Project Apollo bore an uncanny resemblance to the first South Polar expedition, the latter being largely motivated by national pride. Indeed, Amundsen's 1911 expedition was not followed up for almost half a century. Clarke even suggested that had he and his BIS armchair astronaut colleagues known the true costs of a lunar landing mission, they would probably have given up their feasibility studies in the 1930s! So when as late as 1956 the then Astronomer Royal Richard van der Riet Woolley stated that such an expedition was impractical on grounds of cost alone, he was not far from the truth. As it was, even with a 'minor war'-sized budget, an enormous amount of largely unpaid overtime - and a resulting divorce rate within project staff - was key to achieving President Kennedy's goal.

Unfortunately, it was a long time before Clarke admitted that non-technical incentives play a key role, and he seems never to have fully reconciled himself to this. Although he occasionally promoted and inspired practical, achievable near-future goals such as educational broadcasting via satellite to rural communities in the developing world, his imagination was often looking into deep space and equally deep time. Yet his prominent profile meant that the ethos behind Profiles of the Future was frequently copied in glossy expositions by lesser authors and editors. When in his later years Clarke delineated specific forecasts using his standard criteria, they almost entirely failed to hit the mark: his 1999 speculative, if in places tongue-in-cheek, timeline for the Twenty-first Century has to date failed all of its predictions, with some unlikely to transpire for decades or possibly even centuries to come. That's not to say that we couldn't do with some of his prophecies coming true sooner rather than later: even relatively small advances such as the paperless office would be of enormous benefit, but how that could be achieved is anyone's guess!

As a writer of both fact and fiction, Clarke's works have a complex interaction between the world that is and the world as it could be. Many space-orientated professionals, from NASA astronauts to Carl Sagan, claimed inspiration from him, whilst the various Spaceguard surveys of near-Earth objects are named after the prototype in Clarke's 1973 novel Rendezvous with Rama. One of his key ideas was that intellectual progress requires a widening of horizons, whereas a lot of contemporary technological advances are primarily inward-looking, such as electronic consumer goods. But as I have mentioned before, won't we require thought leaders to share something of Clarke's philosophy in order to limit or reverse environmental disasters in the near future? Stephen Hawking for one has stated his belief that the long-term survival of humanity relies on us becoming a multi-planet species sooner rather than later, as unforeseen natural or man-made disasters are a question of when rather than if. Naïve they may appear to be to our jaded, post-modern eyes, but as a visionary with realist tendencies Clarke had an enormous impact on succeeding generations of scientists, engineers and enthusiasts. But to see how Clarke's successors are faring in our relatively subdued times, you'll have to wait until the next post…

Thursday 15 April 2010

In thrall to the machines: Or how to open a packet of biscuits

It says 'Tear here' so I gently pull the red strip, ripping a ragged diagonal line in completely the wrong place. More pulling and the shiny material shreds into a dozen thin strips, dislodging crumbs. A bit more and the top third suddenly rips off the packet, causing biscuits to cascade into the tin. So much for following the instructions. Then why did I tear here? Because it told me to, along with all the 'Lift this flap', 'Open other end' and numerous additional petty directives that rule the lives of us consumers.

Einstein has been quoted as saying "It has become appallingly obvious that our technology has exceeded our humanity." That may well have been his response to nuclear weapons and Mutually Assured Destruction, but are we, as some commentators suggest, in danger of becoming a variant of the degenerate, docile Eloi in H.G. Wells' The Time Machine? From birth we are brought up to obey a myriad of procedures that give the appearance of improving our quality of life but have their rationale in manufacturing efficiency and the corporate balance sheet, locking us into a sophisticated socioeconomic profile that overrides individualism. Until the machines we increasingly rely upon achieve a much more sophisticated level of communication, are we instead instructed to think in machine-like ways to achieve a viable interface? If so, at what cost to fundamental human traits such as initiative? Essentially, were the blank-faced bureaucrats of 2001: A Space Odyssey a more accurate prediction than Arthur C. Clarke's technophiliac Profiles of the Future?

Lest I sound like a socialist luddite, I have to admit to both utilising and enjoying much of the digital technology on offer, but as a means to an end, not an end in itself. And I only go so far - no Bluetooth headset for me! I also don't have a Wii, Playstation, DS, wall-mounted flatscreen television or Blu-ray player… yet I don't think I'm missing out on anything. But then I also don't consider it necessary to spend my time on public transport telling friends over the phone that yes, I'm on public transport!

The gee-whiz factor of bigger, faster and louder associated with the macho 'hard' technology of industrialisation has been largely superseded by digital and virtual technology that appeals to both genders. The irony is that whilst the latter is alleged to promote empowerment of the individual, we are in many ways just as subservient to the manufacturing corporations as ever. Where and when devices and software become available are driven by economic factors such as long-term release cycles, meaning upgrades appear staggered over a year or so rather than clumped together in a single update. So far this has done little to abate the enthusiasm for digital communications, entertainment and navigation technology, despite the impact on consumer debt and the enormous amount of time spent continually learning how to use it all. (I'm not a violent man, but in my opinion most instruction manual authors should be strangled at birth.)

But then it is astonishing how fast items such as mobile phones have been taken up by the general public for leisure use, much to the surprise of manufacturers who initially assumed a business-orientated user model. The proliferation of non-core functions has shown that most people find it easy to assimilate cutting-edge technology, despite remaining as much in the dark as ever regarding its varied theoretical and practical underpinnings. Surely there must be a danger in placing more and more of our daily lives in the hands of the few who sell us the hardware and software, whilst having no idea how any of it works?

A major cause for concern, as always, is that this ignorance has allowed the proliferation of scare-mongering stories concerning potential health hazards. As far as I am aware, drivers using mobile phones are in far greater danger than the average user is from radiation emissions, yet the debate continues. And speaking of vehicles, the fallibility of satellite navigation devices has yet to be properly addressed, despite police warnings. Drivers frequently seem so subservient to their satnav as to leave all common sense behind, as I found to my cost when an articulated lorry driver followed the directions for a shortcut down my obviously too-narrow residential street and promptly wrote my car off. Over-reliance on devices or software can also lead to problems if there is no non-digital back-up. I remember some years ago visiting a branch of a well-known restaurant chain whose staff utilised electronic ordering pads: due to a software failure they were having to work with old-fashioned pencil and paper, leading to a 45-minute backlog for diners. Clearly, basic arithmetic isn't the only skill to suffer these days!

The fact that extremely fine motor skills are usually essential for effective operation of computer and other interfaces, screen readers notwithstanding, is frequently overlooked. This, as much as technophobia, can prove a fundamental stumbling block to the older generations, who are encouraged to join the 'online community' or suffer ostracism. However good it may be for someone who is infirm or housebound to have a webcam, Skype or even just an internet connection, nothing can wholly substitute for direct face-to-face interaction. Indeed, are today's children growing up lacking (even more) social niceties, having largely replaced personal interaction with digital proxies such as texting and social networking websites? I suppose the proof will come in the next few years, when the first wholly-immersed generation reaches adulthood...

The development of Web 2.0 technologies, whereby the internet becomes a two-way interface, is a powerful tool for human interaction and grass-roots campaigning, and certainly one of the best things to come out of the digital revolution. But the sheer speed of the medium severely limits error-checking, leading to a vast amount of noise and thus sensory overload and overproduction of information. The slick multimedia presentation of information on the web can appeal far more, especially to children, than the old-fashioned printed word, which can lead to a lack of critical thinking. After all, if it looks pretty and sounds good, then surely it must be true? There have always been errors in textbooks and science popularisations, but the self-proofing of Web 2.0 material can only be several degrees worse. As yet the delivery technology is far superior to the ability to quality-control the content. Whether the ease of access to content outweighs the shortcomings is another area that will no doubt receive a great deal of attention in the next few years, from educationalists and parents alike.

Clearly, the future for humanity lies with a post-industrial society (as per Alvin Toffler's The Third Wave), wherein information and virtual products are at least as important as the material world. But with high technology in the hands of powerful multinational corporations and public knowledge largely restricted to front-end user status, we face a serious possibility of losing social and cognitive skills as more aspects of our lives become inextricably bound with the wonderful worlds of electrons and silicon. As for any Second Lifers out there, I'll save virtuality addiction for another time...