
Tuesday 23 April 2019

Lift to the stars: sci-fi hype and the space elevator

As an avid science-fiction reader during my childhood, I found one of the most outstanding extrapolations of future technology to be the space elevator. Popularised in Arthur C. Clarke's 1979 novel The Fountains of Paradise, the elevator was described as a twenty-second-century project. I've previously written about near-future plans for private sector spaceflight, but the elevator would be a paradigm shift in space transportation: a way of potentially reaching as far as geosynchronous orbit without the need for rocket engines.

Despite the apparent novelty of the idea - a tower stretching from Earth's surface (or indeed any planet's) to geosynchronous orbit and beyond - the first description dates back to 1895 and the writings of the Russian theoretical astronautics pioneer Konstantin Tsiolkovsky. Since the dawn of the Space Age, engineers and designers in various nations have either reinvented the elevator from scratch or elaborated on Tsiolkovsky's idea.

There have of course been remarkable technological developments over the intervening period, with carbyne, carbon nanotubes, tubular carbon 60 and graphene seen as potential materials for the elevator, but we are still a long way from being able to build a full-size structure. Indeed, there are now known to be many more impediments to the space elevator than first thought, including a man-made issue that didn't exist at the end of the nineteenth century. Despite this, there seems to be a remarkable number of recent stories about elevator-related experiments and the near-future feasibility of such a project.

An objective look at practical - as opposed to theoretical - studies shows that results to date have been decidedly underwhelming. The Space Shuttle programme started tethered satellite tests in 1992. After an initial failure (the first test achieved a distance of a mere 256 metres), a follow-up mission four years later deployed a tether that was a rather more impressive twenty kilometres long. Then last year the Japanese STARS-me experiment tested a miniature climber component in orbit, albeit over a minuscule distance of nine metres. Bearing in mind that a real tower would be over 35,000 kilometres long, it cannot be argued that the technology is almost available for a full-scale elevator.
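As a back-of-the-envelope check on that length (a minimal sketch using standard textbook constants only, nothing elevator-specific), the figure of roughly 35,800 kilometres falls straight out of Kepler's third law for an orbit whose period matches one sidereal day:

```python
# Minimal sketch: the altitude a space elevator cable must span, derived from
# Kepler's third law for an orbit whose period matches one sidereal day.
# Standard textbook constants only; nothing elevator-specific is modelled.
from math import pi

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3 s^-2
SIDEREAL_DAY = 86164.1      # Earth's rotation period, seconds
R_EQUATOR = 6.378e6         # Earth's equatorial radius, metres

r_geo = (GM_EARTH * SIDEREAL_DAY ** 2 / (4 * pi ** 2)) ** (1 / 3)
altitude_km = (r_geo - R_EQUATOR) / 1000

print(f"Geostationary altitude: ~{altitude_km:,.0f} km")   # roughly 35,800 km
```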

None of this has prevented continued research by the International Space Elevator Consortium (ISEC), which was formed in 2008 to promote the concept and the technology behind it. It's only to be expected that fans of the space elevator would be enthusiastic, but to my mind their assessment that we are 'tech ready' for its development seems optimistic to the point of being unbelievable.

A contrasting view is that of Google X's researchers, who mothballed their space elevator work in 2014 on the grounds that the requisite technology will not be available for decades to come. While the theoretical strength of carbon nanotubes meets the requirements, the longest cable manufactured to date is a mere seventy centimetres, showing the difficulty of achieving production at scale. A key stopping point apparently involves catalyst activity probability; until that problem is resolved, a space elevator less than one metre in length isn't going to convince me, at least.

What is surprising then is that in 2014, the Japanese Obayashi Corporation published a detailed concept that specified a twenty-year construction period starting in 2030. Not to be outdone, the China Academy of Launch Vehicle Technology released news in 2017 of a plan to actually build an elevator by 2045, using a new carbon nanotube fibre. Just how realistic is this, when so little of the massive undertaking has been prototyped beyond the most basic of levels?

The overall budget is estimated to be around US$90 billion, which suggests an international collaboration in order to offset the many years before the completed structure turns a profit. In addition to the materials issue, there are various other problems yet to be resolved. Chief among these are finding a suitable equatorial location (an ocean-based anchor has been suggested), capturing an asteroid for use as a counterweight, damping vibrational harmonics, removing space junk, protecting against micrometeoroid impacts and shielding passengers from the Van Allen radiation belts. Clearly, developing the construction material is only one small element of the ultimate effort required.

Despite all these issues, general-audience journalism regarding the space elevator - and therefore the resulting public perception - appears as optimistic as the Chinese announcement. How much these two feed back on each other is difficult to ascertain, but there certainly seems to be a case of running before learning to walk. It's strange that China made the claim, bearing in mind how many other rather important things the nation's scientists should be concentrating on, such as environmental degradation and pollution.

Could it be that China's STEM community have fallen for the widespread hype rather than the prosaic reality? It's difficult to say how this could be so, considering the country's sophisticated internet firewall that blocks much of the outside world's content. Clearly though, the world wide web is full of science and technology stories that consist of parrot-fashion copying, little or no analysis and clickbait-driven headlines.

A balanced, in-depth synthesis of the relevant research is often a secondary consideration. The evolutionary biologist Stephen Jay Gould once labelled the negative impact of such lazy journalism as "authorial passivity before secondary sources." In this particular case, the public impression of what is achievable in the next few decades seems closer to Hollywood science fiction than scientific fact.

Of course, the irony is that even the more STEM-minded section of the public is unlikely to read the original technical articles in a professional journal. Instead, we are reliant on general readership material and the danger inherent in its immensely variable quality. As far as the space elevator goes (currently, about seventy centimetres), there are far more pressing concerns requiring engineering expertise; US$90 billion could, for example, fund projects to improve quality of life in the developing world.

That's not to say that I believe China will construct a space elevator during this century, or that the budget could be found anywhere else, either. But there are times when there's just too much hype and nonsense surrounding science and not enough fact. It's easy enough to make real-world science appear dull next to the likes of Star Trek, but now more than ever we need the public to trust and support STEM if we are to mitigate climate change and all the other environmental concerns.

As for the space elevator itself, let's return to Arthur C. Clarke. Once asked when he thought humanity could build one, he replied: "Probably about fifty years after everybody quits laughing." Unfortunately, bad STEM journalism seems to have joined conservatism as a negative influence in the struggle to promote science to non-scientists. And that's no laughing matter.

Wednesday 13 June 2018

Debunking DNA: A new search for the Loch Ness monster

I was recently surprised to read that a New Zealand genomics scientist, Neil Gemmell of Otago University, is about to lead an international team in the search for the Loch Ness monster. Surely, I thought, that myth has long since been put to bed and is only something exploited for the purposes of tourism? I remember that some years ago a fleet of vessels using side-scan sonar covered much of the loch without discovering anything conclusive. Combine this with the fact that the most famous photograph is a known fake, plus the lack of evidence from the plethora of tourist cameras (never mind those of dedicated Nessie watchers) trained on the spot, and the conclusion seems obvious.

I've put together a few points that don't bode well for the search, even assuming that Nessie is a 'living fossil' (à la coelacanth) rather than a supernatural creature; the usual explanation is a cold water-adapted descendant of a long-necked plesiosaur - last known to have lived in the Cretaceous Period:
  1. Loch Ness was formed by glacial action around 10,000 years ago, so where did Nessie come from? 
  2. Glacial action implies no underwater caves for hiding in
  3. How could a single creature - or more plausibly a breeding population - persist over the long term (the earliest mentions date back thirteen hundred years)? 
  4. What does such a large creature eat without noticeably reducing the loch's fish population?
  5. Why have no remains ever been found, such as large bones, even on sonar?
All in all, I didn't think much of the expedition's chances, and so I initially assumed the new research would be a distinct waste of money that could be much better used elsewhere in Scotland. After all, the Shetland seabird population is rapidly decreasing thanks to over-fishing, plastic pollution and loss of plankton due to increasing ocean temperatures. It would make more sense to protect the likes of puffins (which have suffered a 98% decline over the past 20 years), along with guillemots and kittiwakes amongst others.

However, I then read that separate from the headline-grabbing monster hunt, the expedition's underlying purpose concerns environmental DNA sampling, a type of test never before used at Loch Ness. Gemmell's team have proffered a range of scientifically valid reasons for their project:
  1. To survey the loch's ecosystem, from bacteria upwards 
  2. To demonstrate the scientific process to the public (presumably versus all the pseudoscientific nonsense surrounding cryptozoology)
  3. To test for trace DNA from potential but realistic causes of 'monster' sightings, such as large sturgeon or catfish 
  4. To understand local biodiversity with a view to conservation, especially as regards the effect caused by invasive species such as the Pacific pink salmon. 
Should the expedition find any trace of reptile DNA, this would of course prove the presence of something highly unusual in the loch. Gemmell has admitted he doubts they will find traces of any monster-sized creatures, plesiosaur or otherwise, noting that the largest unknown species likely to be found are bacteria. Doesn't it seem strange though that sometimes the best way to engage the public - and gain funding - for real science is to use what at best could be described as pseudoscience?

Imagine if NASA could only get funding for Earth observation missions by including the potential to prove whether our planet was flat or not? (Incidentally, you might think a flat Earth was just the territory of a few nutbars, but a poll conducted in February this year suggests that fully two percent of Americans are convinced the Earth is a disk, not spherical).

Back to reality. Despite the great work of scientists who write popular books and hold lectures on their area of expertise, it seems that the media - particularly Hollywood - are the primary source of science knowledge for the general public. Hollywood's version of de-extinction science, particularly for ancient species such as dinosaurs, seems to be far better known than the relatively unglamorous reality. Dr Beth Shapiro's book How to Clone a Mammoth, for example, is an excellent introduction to the subject, but would find it difficult to compete alongside the adventures of the Jurassic World/Park films.

The problem is that many if not most people want to believe in a world that is more exciting than their daily routine would suggest, with cryptozoology offering itself as an alternative to hard science thanks to its vast library of sightings over the centuries. Of course it's easy to scoff: one million tourists visit Loch Ness each year but consistently fail to find anything; surely in this case absence of evidence is enough to prove evidence of absence?

The Loch Ness monster is of course merely the tip of the mythological creature iceberg. The Wikipedia entry on cryptids lists over 170 species - can they all be just as suspect? The deep ocean is the best bet today for large creatures new to science. In a 2010 post I mentioned that the still largely unexplored depths could possibly contain unknown megafauna, such as a larger version of the oarfish that could prove to be the fabled sea serpent.

I've long had a fascination with large creatures, both real (dinosaurs, of course) and imaginary. When I was eight years old David Attenborough made a television series called Fabulous Animals and I had the tie-in book. In a similar fashion to the new Loch Ness research project, Attenborough used the programmes to bring natural history and evolutionary biology to a pre-teen audience via the lure of cryptozoology. For example, he discussed komodo dragons and giant squid, comparing extant megafauna to extinct species such as woolly mammoth and to mythical beasts, including the Loch Ness Monster.

A few years later, another television series that I avidly watched covered some of the same ground, namely Arthur C. Clarke's Mysterious World. No fewer than four episodes covered submarine cryptozoology, including the giant squid, sea serpents and of course Nessie him- (or her-) self. Unfortunately the quality of such programmes has plummeted since, although as the popularity of the (frankly ridiculous) Finding Bigfoot - now in its seventh year - shows, the public have an inexhaustible appetite for this sort of stuff.

I've read that it is estimated only about ten percent of extinct species have been discovered in the fossil record, so there are no doubt some potential surprises out there (Homo floresiensis, anyone?) However, the evidence - or lack thereof - seems firmly stacked against the Loch Ness monster. What is unlikely, though, is that the latest expedition will dampen the spirits of the cryptid believers. A recent wolf-like corpse found in Montana, USA, may turn out to be a coyote-wolf hybrid, but this hasn't stopped the Bigfoot and werewolf fans from spreading X-Files-style theories across the internet. I suppose it’s mostly harmless fun, and if Professor Gemmell’s team can spread some real science along the way, who am I to argue with that? Long live Nessie!

Sunday 1 April 2018

Engagement with Oumuamua: is our first interstellar visitor an alien spacecraft?

It's often said that fact follows fiction, but some instances appear uncanny beyond belief. One relatively well-known example comes from the American writer Morgan Robertson, whose 1898 novella The Wreck of the Titan (originally entitled Futility) eerily prefigured the 1912 loss of the Titanic. The resemblances between the fictional precursor and the infamous passenger liner are remarkable, including the month of the sinking, the impact location, and similarities of size, speed and passenger capacity. I was first introduced to this series of quirky coincidences via Arthur C. Clarke's 1990 novel The Ghost from the Grand Banks, which not incidentally is about attempts to raise the Titanic. The reason for including the latter reference is that there may have just been an occurrence that involves another of Clarke's own works.

Clarke's 1973 award-winning novel Rendezvous with Rama tells of a 22nd century expedition to a giant interstellar object that is approaching the inner solar system. The fifty-four kilometre long cylinder, dubbed Rama, is discovered by an Earthbound asteroid detection system called Project Spaceguard, a name which since the 1990s has been adopted by real life surveys aiming to provide early warning for Earth-crossing asteroids. Rama is revealed to be a dormant alien spacecraft, whose trajectory confirms its origin outside of our solar system. After a journey of hundreds of thousands of years, Rama appears to be on a collision course with the Sun, only for it to scoop up solar material as a fuel source before heading back into interstellar space (sorry for the spoiler, but if you haven't yet read it, why not?)

In October last year astronomer Robert Weryk at the Haleakala Observatory in Hawaii found an unusual object forty days after its closest encounter with the Sun. The object was at first thought to be a comet, but with no sign of a tail or coma it was reclassified as an asteroid. After another week's examination it was put into a brand-new class of its own, under the designation 1I/2017 U1, and this is when observers began to get excited, as its trajectory appeared to proclaim an interstellar origin.

As it was not spotted until it was about thirty-three million kilometres from the Earth, the object was far too small to be photographed in any detail; all that appeared on telescope-mounted digital cameras was a single pixel. Its shape was therefore inferred from the light curve, which implied a longest-to-shortest axis ratio of 5:1 or even larger, with the longest dimension being between two hundred and four hundred metres. As this data became public, requests were made for a more familiar name than 1I/2017 U1; perhaps unsurprisingly, Rama became a leading contender. However, the Hawaiian observatory's Pan-STARRS team finally opted for the common name Oumuamua, which in the local language means 'scout'.
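The axis-ratio inference is worth unpacking. To first order, a rotating elongated body's brightness tracks its projected cross-section, so the peak-to-trough amplitude of the light curve gives a rough lower bound on the elongation. The sketch below makes that arithmetic explicit; the amplitudes are illustrative inputs rather than a fit to the actual photometry, and real estimates also depend on viewing geometry and albedo:

```python
# Simplified sketch: converting a light-curve amplitude into an axis ratio,
# assuming the brightness simply tracks the projected cross-section. Real
# estimates also depend on viewing geometry and albedo, so treat the numbers
# as rough lower bounds rather than measurements.

def axis_ratio_from_amplitude(delta_mag: float) -> float:
    """Peak-to-trough magnitude amplitude -> flux ratio, which under the
    projected-area assumption approximates the long-to-short axis ratio."""
    return 10 ** (delta_mag / 2.5)

# Illustrative amplitudes only, not a fit to the actual photometry
for dm in (1.75, 2.0, 2.5):
    ratio = axis_ratio_from_amplitude(dm)
    print(f"amplitude {dm:.2f} mag -> axis ratio ~{ratio:.1f}:1")
```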

Various hypotheses have been raised as to exactly what type of object Oumuamua is, from a planetary fragment to a Kuiper belt object similar to - although far smaller than - Pluto. However, the lack of off-gassing even at perihelion (closest approach to the Sun) implies that any icy material must lie below a thick crust, and the light curve suggests a denser material such as metal-rich rock. This sounds most unlike any known Kuiper belt object.

These unusual properties attracted the attention of senior figures in the search for extra-terrestrial intelligence. Project Breakthrough Listen, whose leadership includes SETI luminaries Frank Drake, Ann Druyan and Astronomer Royal Martin Rees, directed the world's largest manoeuvrable radio telescope towards Oumuamua. It failed to find any radio emissions, although the lack of a signal is tempered with the knowledge that SETI astronomers are now considering lasers as a potentially superior form of interstellar communication to radio.

The more that Oumuamua has been studied, the more surprising it appears. Travelling at over eighty kilometres per second relative to the Sun, its path shows that it did not originate from any of the twenty nearest star systems. Yet it homed in on our star, getting seventeen percent nearer to the Sun than Mercury does at its closest. This seems almost impossible to have occurred simply by chance - space is just too vast for an interstellar object to have achieved such proximity. So how likely is it that Oumuamua is a real-life Rama? Let's consider the facts:
  1. Trajectory. The area of a solar system with potentially habitable planets is nicknamed the 'Goldilocks zone', which for our system includes the Earth. It's such a small percentage of the system, extremely close to the parent star, that for a fast-moving interstellar object to approach at random seems almost impossible. Instead, Oumuamua's trajectory was perfectly placed to obtain a gravity assist from the Sun, allowing it to both gain speed and change course, with it now heading in the direction of the constellation Pegasus.
  2. Motion. Dr Jason Wright, an associate professor of astronomy and astrophysics at Penn State University, likened the apparent tumbling motion to that of a derelict spacecraft, only to retract his ideas when criticised for sensationalism.
  3. Shape. All known asteroids and Kuiper belt objects are much less elongated than Oumuamua, even though most are far too small to settle into spherical shape due to gravitational attraction (the minimum diameter being around six hundred kilometres for predominantly rocky objects). The exact appearance is unknown, with the ubiquitous crater-covered asteroid artwork being merely an artist's impression. Astronautical experts have agreed that Oumuamua's shape is eminently suitable for minimising damage from particles.
  4. Composition. One definitive piece of data is that Oumuamua doesn't emit clouds of gas or dust that are usually associated with objects of a similar size. In addition, according to a report by the American Astronomical Society, it has an 'implausibly high density'. Somehow, it has survived a relatively close encounter with the Sun while remaining in one piece - at a maximum velocity of almost eighty-eight kilometres per second relative to our star!
  5. Colour. There appears to be a red region on the surface, rather than the uniform colour expected for an object that has been bombarded with radiation on all sides whilst in deep space for an extremely long period.
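Returning to the approach distance mentioned before the list: as a sanity check (a minimal sketch using the standard value for Mercury's perihelion), 'seventeen percent nearer than Mercury at its closest' works out at roughly a quarter of an astronomical unit:

```python
# Quick check on the quoted approach distance: seventeen percent nearer to the
# Sun than Mercury at its perihelion. Standard values; purely illustrative.
AU_KM = 1.496e8                  # one astronomical unit in kilometres
MERCURY_PERIHELION_AU = 0.3075   # Mercury's closest approach to the Sun

perihelion_au = MERCURY_PERIHELION_AU * (1 - 0.17)
print(f"~{perihelion_au:.3f} AU, i.e. about "
      f"{perihelion_au * AU_KM / 1e6:.0f} million km from the Sun")
# -> ~0.255 AU, roughly 38 million km
```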
So where does this leave us? There is an enormous amount of nonsense written about alien encounters, conspiracy theories and the like, with various governments and military organisations seeking to hide their strategies behind deliberate misinformation. For example, last year the hacker collective Anonymous stated that NASA would soon be releasing confirmation of contact with extraterrestrials; to date, in case you were wondering, there's been no such announcement. Besides which, wouldn't it be more likely to come from a SETI research organisation such as the Planetary Society or Project Breakthrough Listen?

Is there any evidence to imply a cover-up regarding Oumuamua? Here are some suggestions:
  1. The name Rama - already familiar to many from Arthur C. Clarke's novel and therefore evocative of an artificial object - was abandoned for a far less expressive and more obscure common name. Was this an attempt to distance Oumuamua from anything out of the ordinary?
  2. Dr Wright's proposals were luridly overstated in the tabloid media, forcing him to abandon further investigation. Was this a deliberate attempt by the authorities to make light of his ideas, so as to prevent too much analysis while the object was still observable?
  3. Limited attempts at listening for radio signals have been made, even though laser signalling is now thought to be a far superior method. So why have these efforts been so half-hearted for such a unique object?
  4. The only images available in the media are a few very samey artist's impressions of an elongated asteroid, some pock-marked with craters, others, especially animations, with striations (the latter reminding me more of fossilised wood). Not only are these pure speculation but none feature the red area reported from the light curve data. It's almost as if the intention was to show a totally standard asteroid, albeit of unusual proportions. But this appearance is complete guesswork: Oumuamua has been shoe-horned into a conventional natural object, despite its idiosyncrasies.
Thanks to Hollywood, most people's idea of aliens is of implacable invaders. If - and when - the public receive confirmation of intelligent alien life, will there be widespread panic and disorder? After all, Orson Welles' 1938 radio version of H.G. Wells' The War of the Worlds led some listeners to flee their homes, believing a Martian invasion had begun. Would people today be any different? The current following of dangerous fads such as paleo diets and raw water, never mind the paranoid conspiracy theories that fill the World Wide Web, leads me to expect little change from our credulous forebears.

The issue of course, comes down to one of security. Again, science fiction movies tend to overshadow real life space exploration, but the fact is that we have no spacecraft capable of matching orbits with the likes of Oumuamua. In Arthur C. Clarke's Rendezvous with Rama, colonists on 22nd century Mercury become paranoid with the giant spacecraft's approach and attempt to destroy it with a nuclear missile (oops, another spoiler there). There is no 21st century technology that could match this feat, so if Oumuamua did turn out to be an alien craft, we would have to hope for the best. Therefore if, for example, the U.S. Government gained some data that even implied the possibility of artifice about Oumuamua, wouldn't it be in their best interest to keep it quiet, at least until it is long gone?

In which case, promoting disinformation and encouraging wild speculation in the media would be the perfect way to disguise the truth. Far from being an advanced - if dead or dormant - starship, our leaders would rather we believed it to be a simple rocky asteroid, despite the evidence to the contrary. Less one entry for the Captain's log, and more a case of 'to boulderly go' - geddit?

Saturday 1 April 2017

The moons of Saturn and echoes of a synthetic universe

As fans of Star Wars might be aware, George Lucas is nothing if not visually astute. His thumbnail sketches for the X-wing, TIE fighter and Death Star created the essence behind these innovative designs. So isn't it strange that there is a real moon in our solar system that bears an astonishing resemblance to one of Lucas's creations?

At the last count Saturn had 53 confirmed moons, with another 9 provisionally verified - and as such assigned numbers rather than names. One of the ringed planet's natural satellites is Mimas, discovered in 1789 and, at 396 kilometres in diameter, about as small as an object can be while still settling into an approximate sphere. The distinguishing characteristic of Mimas is a giant impact crater about 130 kilometres in diameter, named Herschel after the moon's discoverer, William Herschel. For anyone who has seen Star Wars (surely most of the planet by now), the crater gives Mimas an uncanny resemblance to the Death Star. Yet Lucas's original sketch for the battle station was drawn in 1975, five years before Voyager 1 took the first photograph with high enough resolution to show the crater.


Okay, so one close resemblance between art and nature could be mere coincidence. But amongst Saturn's retinue of moons is another with an even more bizarre feature. At 1,469 kilometres in diameter Iapetus is the eleventh largest moon in the solar system. After its discovery by Giovanni Cassini in 1671, it quickly became apparent that there was something extremely odd about it, with one hemisphere much brighter than the other.

As such, it attracted the attention of Arthur C. Clarke, whose novel 2001: A Space Odyssey described Japetus (as he called it) as the home of the Star Gate, an artificial wormhole across intergalactic space. He explained the brightness differentiation as being due to an eye-shaped landscape created by the alien engineers of the Star Gate: an enormous pale oval with a black dot at its centre. Again, Voyager 1 was the first spacecraft to photograph Iapetus close up…revealing just such a feature! Bear in mind that this was in 1980, whereas the novel was written between 1965 and 1968. Carl Sagan, who worked on the Voyager project, actually sent Clarke a photograph of Iapetus with the comment "Thinking of you..." Clearly, he had made the connection between reality and fiction.

As Sagan himself was apt to say, extraordinary claims require extraordinary evidence. Whilst a sample of two wouldn't make for a scientifically convincing result in most disciplines, there is definitely something strange about two Saturnian moons that are found to closely resemble elements in famous science fiction stories written prior to the diagnostic observations being made. Could there be something more fundamental going on here?

One hypothesis that has risen in popularity despite lacking any hard physical evidence is that of the simulated universe. Nick Bostrom, the director of the University of Oxford's Future of Humanity Institute, has spent over a decade promoting the idea. Instead of experimental proof, Bostrom uses probability theory to support his suppositions. At its simplest level, he notes that the astonishing increase in computing power over the past half century implies an ability in the near future to create detailed recreations of reality within a digital environment; basically, it's The Matrix for real (or should that be, for virtual?)

It might sound like the silliest science fiction, as no-one is likely to be fooled by current computer game graphics or VR environments, but with quantum computing on the horizon we may soon have processing capabilities far beyond those of the most powerful current mainframes. Since the ability to create just one simulated universe implies the ability to create limitless - even nested - versions of a base reality, each with potentially tweaked physical or biological laws for experimental reasons, the number of virtual realities must far outweigh the original model.
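Bostrom's counting argument can be reduced to a toy calculation: once even a few civilisations run many ancestor simulations, simulated observers vastly outnumber those in the base reality. A minimal sketch of that bookkeeping (every number below is invented purely for illustration):

```python
# Toy version of the counting argument: if mature 'base' civilisations run
# ancestor simulations, what fraction of all observers are simulated?
# Every number below is invented purely for illustration.

def simulated_fraction(base_civilisations: int,
                       fraction_running_sims: float,
                       sims_per_civilisation: int) -> float:
    simulated = base_civilisations * fraction_running_sims * sims_per_civilisation
    return simulated / (simulated + base_civilisations)

# e.g. 1,000 base civilisations, 1% of which run 1,000 simulations each
print(f"{simulated_fraction(1000, 0.01, 1000):.1%} of observers are simulated")
# -> about 90.9%
```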

As for the probability of it being true in our universe, this key percentage varies widely from pundit to pundit. Astronomer and presenter Neil deGrasse Tyson has publicly admitted he considers it a roughly even-odds likelihood, whilst SpaceX and Tesla entrepreneur Elon Musk is prepared to go much further, having stated that there is only a one in a billion chance that our universe is the genuine physical one!

Of course anyone can state a probability for a hypothesis as being fact without providing supporting evidence, but then what is to differentiate such an unsubstantiated claim from a religious belief? To this end, a team of researchers at the University of Bonn published a paper in 2012 called 'Constraints on the Universe as a Numerical Simulation', defining possible methods to verify whether our universe is real or virtual. Using technical terms such as 'unimproved Wilson fermion discretization' makes it somewhat difficult for anyone who isn't a subatomic physicist to get to grips with their argument (you can insert a smiley here) but the essence of their work involves cosmic rays. The paper states that in a virtual universe these are more likely to travel along the axes of a multi-dimensional, fundamental grid, rather than appear in equal numbers in all directions. In addition, they will exhibit energy restrictions at something called the Greisen-Zatsepin-Kuzmin cut-off (probably time for another smiley). Anyhow, the technology apparently exists for the relevant tests to be undertaken, assuming the funding could be obtained.

So could our entire lives simply be part of a twenty-second-century schoolchild's experiment or museum exhibit, where visitors can plug in, Matrix-style, to observe the stupidities of their ancestors? Perhaps historians of the future will be able to run such simulations as an aid to their papers on why the hell, for example, the United Kingdom opted out of the European Union and the USA elected Donald Trump?

Now there's food for thought.

Friday 26 August 2016

The benefit of hindsight: the truth behind several infamous science quotes

With utmost apologies to Jane Austen fans, it is a truth universally acknowledged that most people misinterpret science as an ever-expanding corpus of knowledge rather than as a collection of methods for investigating natural phenomena. A simplistic view for those who adhere to the former misapprehension might include questioning science as a whole when high-profile practitioners make an authoritative statement that is proven - in a scientific sense - to be incorrect.

Amongst the more obvious examples of this are the numerous citations from prominent STEM (Science, Technology, Engineering and Mathematics) professionals that are inaccurate to such an extreme as to appear farcical in light of later evidence. I have already discussed the rather vague art of scientific prognostication in several connected posts, but now want to directly examine several quotations concerning applied science. Whereas many quotes probably deserve the contempt in which popular opinion holds them, I believe the following require careful reading and a knowledge of their context before any meaningful judgement can be attempted.

Unlike Hollywood, STEM subjects are frequently too complex for simple black-versus-white analysis. Of course there have been rather risible opinions espoused by senior scientists, many of which - luckily - remain largely unknown to the wider public. The British cosmologist and astronomer Sir Fred Hoyle had a large number of these just to himself, from continued support for the Steady State theory long after the detection of the cosmic microwave background radiation, to the even less defensible claims that the Natural History Museum's Archaeopteryx fossil is a fake and that flu germs are really alien microbes!

Anyhow, here's the first quote:

1) Something is seriously wrong with space travel.

Richard van der Riet Woolley was the British Astronomer Royal at the dawn of the Space Age. His most infamous quote is the archetypal instance of Arthur C. Clarke's First Law:  "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

Although a prominent astronomer, van der Riet Woolley had little knowledge of the practical mechanics that would be required for spaceflight. By the mid-1930s the British Interplanetary Society had developed detailed (although largely paper-only) studies into a crewed lunar landing mission. In 1936 van der Riet Woolley publicly criticised such work, stating that the development of even an unmanned rocket would present fundamental technical difficulties. Bear in mind that this was only six years before the first V2 rocket, which was capable of reaching an altitude of just over 200 km!

In 1956, only one year before Sputnik 1 - and thirteen years prior to Apollo 11 - the astronomer went on to claim that near-future space travel was unlikely and a manned lunar landing "utter bilge, really". Of course this has been used as ammunition against him ever since, but the quote deserves some investigation. Van der Riet Woolley went on to reveal that his primary objection appeared to have changed (presumably post-V2 and its successors) from an engineering problem to an economic one, stating that it would cost as much as a "major war" to land on the moon.

This substantially changes the flavour of his quote, since it is after all reasonably accurate. In 2010 dollars, Project Apollo had an estimated budget of about US$109 billion - incidentally about 11% of the cost of the contemporary Vietnam War. In addition, we should bear in mind that a significant amount of the contractors' work on the project is said to have consisted of unpaid overtime. Is it perhaps time to reappraise the stargazer not as a reactionary curmudgeon but as an economic realist?

Indeed, had Apollo been initiated in a subsequent decade, there is reasonable evidence to suggest it would have failed to leave the ground, so to speak. The uncertainty of the post-Vietnam and Watergate period, followed by the collapse of the Soviet Union, suggest America's loss of faith in technocracy would have effectively cut Apollo off in its prime. After all, another colossal American science and engineering project, the $12 billion particle accelerator the Superconducting Super Collider, was cancelled in 1993 after being deemed unaffordable. Yet up to that point only about one-sixth of its estimated budget had been spent.

In addition, van der Riet Woolley was not alone among STEM professionals: for three decades from the mid-1920s the inventor of the triode vacuum tube, Lee De Forest, is said to have claimed that space travel was impractical. Clearly, the Astronomer Royal was not an isolated voice in the wilderness but part of a large consensus opposed to the dreamers in the British Interplanetary Society and their ilk. Perhaps we should allow him his pragmatism, even if it appears a polar opposite to one of Einstein's great aphorisms: "The most beautiful thing we can experience is the mysterious. It is the source of all true art and science."

Talking of whom…

2) Letting the genie out of the bottle.

In late 1934 an American newspaper carried this quotation from Albert Einstein: "There is not the slightest indication that [nuclear energy] will ever be obtainable. It would mean that the atom would have to be shattered at will." This seems rather amusing, considering the development of the first self-sustaining nuclear chain reaction only eight years later. But Einstein was first and foremost a theorist, a master of the thought experiment; his father's work in electrical engineering was not noticeably carried on by the son. There is obviously a vast difference between imagining riding a beam of light and the practical difficulties of assembling brand-new technologies with little in the way of precedent. So why did Einstein make such a definitive prediction?

It may also have been wishful thinking on Einstein's part; as a pacifist he would have dreaded the development of a new super-weapon. As the formulator of the equivalence between mass and energy, he could have felt in some way responsible for initiating the avalanche that eventually led to Hiroshima and Nagasaki. Yet there is no clear path between E=mc² and a man-made chain reaction; it took a team of brilliant experimental physicists and engineers in addition to theorists to achieve a practical solution, via the immense budget of $26 billion (in 2016 dollars).
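Still, the equivalence itself makes the stakes obvious. As a rough scale check (this says nothing about how a weapon or reactor is actually engineered), converting a single gram of mass corresponds to an energy release in the tens of kilotons of TNT:

```python
# Scale check on mass-energy equivalence: the energy corresponding to one gram
# of mass via E = m * c^2. This says nothing about how a weapon or reactor is
# engineered; it only illustrates the magnitude of the equivalence.
C = 2.998e8          # speed of light, m/s
KT_TNT = 4.184e12    # energy of one kiloton of TNT, joules

mass_kg = 0.001      # one gram
energy_j = mass_kg * C ** 2
print(f"1 g of mass ~ {energy_j:.2e} J ~ {energy_j / KT_TNT:.0f} kt of TNT")
# -> roughly 9e13 J, or about 21 kilotons
```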

It is hardly as if the good professor was alone in his views either, as senior officials also doubted the ability to harness atomic fission for power or weaponry. In 1945 when the Manhattan Project was nearing culmination, the highest-ranking member of the American military, Fleet Admiral William Leahy, apparently informed President Truman that the atomic bomb wouldn't work. Perhaps this isn't as obtuse as it sounds, since due to the level of security only a very small percentage of the personnel working on the project knew any of the details.

Leahy clearly knew exactly what the intended outcome was, but even as "an expert in explosives" had no understanding of the complexity of engineering involved. An interesting associated fact is that despite being a military man, the Admiral considered the atomic bomb unethical for its obvious potential as an indiscriminate killer of civilians. Weapons of mass destruction lack any of the valour or bravado of traditional 'heroic' warfare.  Is it possible that this martial leader wanted the bomb to fail for moral reasons, a case of heart over mind? In which case, is this a rare example in which the pacifism of the most well-known scientist was in total agreement with a military figurehead?

Another potential cause is the paradigm shift that harnessing the power of the atom required. In the decade prior to the Manhattan Project, New Zealand-born physicist Ernest Rutherford had referred to the possibility of man-made atomic energy as "moonshine", whilst another Nobel laureate, the American physicist Robert Millikan, had expressed similar sentiments in the 1920s. And this from men who were pioneers in understanding the structure of the atom!

As science communicator James Burke vividly described in his 1985 television series The Day the Universe Changed, major scientific developments often require substantial reappraisals in outlook, seeing beyond what is taken for granted. The cutting edge of physics is often described as being ruled by theorists in their twenties; eager young turks who are more prepared to ignore precedents. When he became a pillar of the establishment, Einstein ruefully commented: "To punish me for my contempt for authority, fate made me an authority myself."

Perhaps then, such fundamental shifts in technology as the development of space travel and nuclear fission require equally revolutionary changes in mindset, and we shouldn't judge the authors of our example quotes too harshly. Then again, if you are an optimist, Clarke's First Law might seem applicable here, in which case authority figures with some knowledge of the subject in hand should take note of the ingenuity of our species before declaring anything impossible. If there is a moral to this story, it is that - the speed of light in a vacuum and the Second Law of Thermodynamics aside - you should never say never...

Wednesday 24 February 2016

Drowning by numbers: how to survive the information age

2002 was a big year. According to some statistics, it was the year that digital storage capacity overtook analogue: books gave way to online information; binary became king. Or hyperbole to that effect. Between email, social media, websites and the interminable selfie, we are all guilty to a greater or lesser extent of creating data archived in digital format. The human race now generates zettabytes of data every year (a zettabyte being a trillion gigabytes, in case you're still dealing in such minute amounts of data).

So what's so bad about that? More and more we rely on syntheses of information in order to keep up with the exponentially growing body of knowledge revealed to our species by scientific and other methods. Counter to Plato's 2,400-year-old dialogue Phaedrus, we can no longer work out everything important for ourselves; instead, we must rely on analysis and results created by other, often long-dead, humans. Even those with superb memories cannot retain more than a minuscule fraction of the information known about even one discipline. In addition, we can now create data-rich types of content undreamed of in Plato's time. Some, MRSI medical scans being an ad hoc example, may require long-term storage. If quantum computing becomes mainstream, then that will presumably generate an exponential growth in data.

What then, are the primary concerns of living in a society that has such high demands for the creation and safe storage of data? I've been thinking about this for a while now and the following is my analysis of the situation.

1. Storage. In recent years it has become widely known that CDs, and to a lesser extent DVDs, are subject to several forms of disk rot. I've heard horror stories of people putting their entire photo and/or video collection onto portable hard drives, only for these to fail within a year or two, the data being irrevocably lost. The advent of cloud storage lessens the issue, but not completely. Servers are still subject to all sorts of problems, with even enterprise-level solutions suffering from insufficient disaster recovery and resilience (to use the terms we web developers use). I'm not saying audio tapes, vinyl records and VHS were any better, far from it, but there is a lot less data stored in these formats. There are times when good old-fashioned paper still rules - as it does in the legal and compliance sectors I've had contact with.

2. Security and privacy. As for safety, the arms race against hackers et al. is well and truly engaged. Incompetence also has its place. When living in the UK I once received a letter stating that my children's social services records, including their contact details, had become publicly available. This turned out to be due to the loss of a memory stick containing database passwords. As for identity theft, well, let's just say that Facebook is a rude word. I managed to track down an old friend after nearly twenty years incommunicado, finding details such as his address, wife's name and occupation, etc., mostly via Facebook, in less than half an hour. Lucky I'm not a stalker, really!

Even those who avoid social media may find themselves with some form of internet presence. I had a friend who signed a political petition on paper and then several years later found his name on a petition website. Let's hope it was the sort of campaign that didn't work against his career - these things can happen.

And then there's the fact that being a consumer means numerous manufacturers and retail outlets will have your personal details on file. I've heard that in some countries if you - and more particularly your smartphone - enter a shopping mall, you may get a message saying that as a loyal customer of a particular store there is a special sale on just for you, the crunch being that you only have a limited time, possibly minutes, to get to the outlet and make a purchase. Okay, that doesn't sound so bad, but the more storage locations that contain your personal details, the greater the chance they will be used against you. Paranoid? No, just careful. Considering how easy it was for me to become a victim of financial fraud about fifteen years ago, I have experience of these things.

As any Amazon customer knows, you are bombarded with offers tailored via your purchase record. How long will it be before smart advertising billboards recognise your presence, as per Steven Spielberg's Minority Report? Yes, this is the merchandiser's dream of ultimate granularity in customer targeting, but it is also a fundamental infringement of the customer's anonymity. Perhaps everyone will end up getting five seconds of public fame on a daily basis, thanks to such devices. Big Brother is truly watching you, even if most of the time it's for the purpose of flogging you consumer junk.

3. Efficiency. There are several million blog posts each day, several hundred billion emails and half a billion tweets. How can we possibly separate the wheat from the chaff (showing my age with that idiom) if we spend so much time ploughing through social media? I, for one, am not convinced there's much worth in a lot of this new-fangled stuff anyway (insert smiley here). I really don't want to know what friends, relatives or celebrities had for breakfast or which humorous cat videos they've just watched. Of course it's subjective, but I think there's a good case for claiming the vast majority of digital content is a complete load of rubbish. So how can we live useful, worthwhile or even fulfilled lives when surrounded by it? In other words, how do we find the little gems of genuine worth among the flood of noise? It seems highly probable that a lot of prominent nonsense theories, such as the moon landing hoax, wouldn't be anywhere near as popular if it weren't for the World Wide Web disseminating them.

4. Fatigue and overload. Research has shown that our contemporary news culture (short snippets repeated ad nauseam over the course of a day or so) leads to a weary attitude. Far from empowering us, bombarding everyone with the same information, frequently lacking context, can rapidly lead to antipathy. Besides which, if information is inaccurate in the first place it can quickly achieve canonical status as it gets spread across the digital world. As for the effect all this audio-visual over-stimulation is having on children's attention spans...now where was I?

5. The future. So are there any solutions to these issues? I assume that as we speak there are research projects aiming to develop heuristic programs that are the electronic equivalent of a personal assistant. If a user carefully builds their personality profile, the program would be expected to extract nuggets of digital gold from all the sludge. Yet even personally-tailored smart filters that provide daily doses of information, entertainment, commerce and all points in between have their own issues. For example, unless the software is exceptional (i.e. rather more advanced than anything commercially available today) you would probably miss out on laterally- or tangentially-associated content. Even for scientists, this sort of serendipity is a great boon to creativity, but it is rarely found in any form of machine intelligence. There's also the risk that corporate or governmental forces could bias the programming…or is that just the paranoia returning? All I can say is: knowledge is power.
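To make the idea concrete, here is a toy illustration of that kind of profile-driven filtering - nothing like the heuristic assistants imagined above, and the keyword weights are purely invented - which at least shows why serendipity is the first casualty: anything outside the profile scores zero or worse.

```python
# Toy illustration of profile-weighted filtering: score each item against a
# user's interest profile and keep the highest scorers. A real assistant would
# need far more than keyword matching; the weights below are invented.
from typing import Dict, List, Tuple

def score(text: str, profile: Dict[str, float]) -> float:
    words = set(text.lower().split())
    return sum(weight for term, weight in profile.items() if term in words)

def daily_digest(items: List[str], profile: Dict[str, float],
                 top_n: int = 3) -> List[Tuple[float, str]]:
    ranked = sorted(((score(item, profile), item) for item in items), reverse=True)
    return ranked[:top_n]

profile = {"astronomy": 2.0, "climate": 1.5, "kitten": -3.0}  # hypothetical weights
items = [
    "new astronomy results from the outer solar system",
    "yet another humorous kitten video compilation",
    "climate report highlights ocean warming",
]
for s, item in daily_digest(items, profile):
    print(f"{s:+.1f}  {item}")
```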

All in all, this sounds a touch pessimistic. I think Arthur C. Clarke once raised a concern about the inevitable decay within societies that overproduce information. The digital age is centred on the dissemination of content that is both current and popular, but not necessarily optimal. We are assailed by numerous sources of data, often created for purely commercial purposes and rarely for anything of worth. Let's hope we don't end up drowning in videos of pesky kittens. Aw, aren't they cute, though?

Wednesday 1 April 2015

A very Kiwi conspiracy: in search of New Zealand's giant sea serpent

As a young child I probably overdid it on books in the boys' own fantastic-facts genre, reading with breathless wonder about giant - and collectively extinct - megafauna such as ichthyosaurs and plesiosaurs. Therefore it's probably not surprising that a few years later I was captivated by Arthur C. Clarke's 1957 novel The Deep Range, featuring as it does a giant squid and a sea serpent, both very much alive. How seriously Clarke took such cryptozoology is unknown, although he clearly stated he considered it likely that the ocean depths harboured specimens up to twice the size of those known to science.

Of course it's easy to scoff at such notions, bombarded as we are with endless drivel about megalodon and mermaids, both from a myriad of websites and, even worse, the docufiction masquerading as fact on allegedly science-themed television channels (I'm talking about you, Discovery!) As Carl Sagan was known to say, "extraordinary claims require extraordinary evidence". Incidentally, if you've seen the clearly Photoshopped image of World War Two U-boats in front of the dorsal and tail fins of a megalodon, the implied total length of such an animal would be well over thirty metres. Most experts place the maximum length of this long-extinct species at under twenty metres, so why do so many fakes over-egg the monster pudding?

I digress. One obvious difference between today and the pre-industrial past is that there used to be myriads of sightings regarding sea monsters of all shapes and sizes, but nowadays there are comparatively few, especially considering the number of vessels at sea today. Whilst there is a vast collection of fakery on the World Wide Web, much of this material appears to have been inspired by the BBC 2003 series Sea Monsters (and the various imitations that have since been broadcast) and the ease with which images can now be realistically manipulated.

As for scientifically-verifiable material of unknown marine giants, there is almost none - colossal squid aside. As Steven Spielberg summed up a quarter century after his canonical UFO movie Close Encounters of the Third Kind, with all the smartphone cameras about there should be documentary evidence galore. Likewise, enormous marine beasties should now be recorded on an ever-more frequent basis. After all, it's hardly as if giant sea serpents are being fished into extinction! Yet the lack of evidence implies that once again, the human penchant for perceiving patterns where none exist has caused the creation of myths, not the observation of genuine marine megafauna.

At least that's what I thought, until a couple of serendipitous events occurred. Early last year I noticed the National Institute of Water and Atmospheric Research's second-largest vessel, MV Kaharoa, docked in Viaduct Harbour in Auckland. It had just returned from a month's research expedition to the Kermadec Islands, about 900 kilometres north-east of New Zealand. What was interesting was that I later found out the Kaharoa had been on an identical trip the previous year, ostensibly to record the condition of the snapper stocks. Yet NIWA usually organises these missions every second year rather than annually. So why was the vessel returning to the Kermadecs a year early? Although the programme is a joint venture between France, Scotland and New Zealand, the funding has to originate either from public money or from corporate grants. Therefore it's unlikely the decision for a 2014 mission was undertaken lightly.


MV Kaharoa

I'd forgotten this mildly diverting conundrum when, many months later, I was browsing the NIWA website and came across their Critter of the Week blog. It was fairly late at night and I'll confess to having imbibed several bottles of beer, but I was pretty astounded to see a fairly murky and obviously deep-water image containing what appeared to be nothing less than a hairy-maned sea serpent, with a note stating it was estimated to be around twenty metres in length. I quickly loaded some news channels, including the New Zealand Herald and the BBC's Science and Environment news home page, but without finding any references to such a beast. I then flicked to the main NIWA website, but again didn't come across anything related to the creature. I returned to the Critter of the Week blog, only to find the page was no longer there. How X-Files is that?

Of course I'd forgotten to screenshot the page or download the image, so there was no proof that I hadn't been hallucinating. Did I imagine it or just misinterpret a perfectly normal specimen? Or was the blog temporarily hacked by a nutter or conspiracy theorist, who added a spoof article? As I went through the options and discarded them, it gradually dawned on me that perhaps the Kaharoa's unexpected summer expedition had been organised with one particular purpose in mind: the search for an elusive giant spotted the previous year.

I usually consider myself to be fairly sane, so let's consider the facts in lieu of hard evidence:
  1. NIWA excel at finding new creatures: they have reported 141 species unknown to science within the past three years;
  2. The Kermadecs are home to some very large animals for their type, including oversize oysters, the giant limpet Patella kermadecensis and the amphipod Alicella gigantea, which is ten times the size of most species in the same taxonomic order;
  3. NIWA scientists have been known to comment with surprise on how many deep water species have recently been discovered - even if a specimen hasn't actually been captured - for regions that they have repeatedly studied over some years;
  4. Expeditions are only just starting to explore the region between the depths of 2000 and 8000 metres;
  5. Although the Kermadecs are on the edge of a marine desert, a combination of hot water and minerals upwelling from hydrothermal vents, plus the seabird guano that provides nutrition for the near-surface phytoplankton, helps to kick-start diverse food webs;
  6. There is an increasing quantity of meltwater from the Antarctic ice shelf, which being less dense than seawater may affect the depth of the thermocline, a region of highly variable temperature, which in turn could be altering the ecology of the region;
  7. MV Kaharoa was carrying baited Hadal-landers, ideal for recording deep sea fauna, whereas snapper usually live in the top two hundred metres.

Apart from my own close encounter of the fishy kind, has there been any other recent evidence of what could be termed a giant sea serpent in New Zealand waters? Just possibly. A Google Earth image of Oke Bay in the Bay of Islands shows the wake of something that has been estimated to be around twelve metres long. The wake doesn't fit the diagnostic appearance of great whales or of a boat's engine. Could this therefore be proof of sea serpents in the area? I have to say it looks more like an image-rendering glitch to me, but then I'm no expert. On the plus side, the most likely candidate for such a creature is the giant oarfish Regalecus glesne, which I discussed in a post five years ago and which authoritative sources suggest can attain a maximum length of eleven metres. So clearly, the Oke Bay image is within the realm of possibility. As for the lack of documentary evidence compared to earlier centuries, could it be that the vast amount of noise pollution from ships' engines keeps the creatures far from standard shipping lanes?

Where does this leave the Critter of the Week content that so briefly slipped - presumably accidentally - onto the live site? One possible clue that led marine biologists back to the Kermadecs could be the 2012 Te Papa Tongarewa Museum report on a colossal squid dissection, which states that chunks of herring-type flesh were found in its stomach and caecum. The oarfish is traditionally dubbed the 'king of herrings', so it is just possible that titanic struggles between squid and oarfish are occurring in the ocean deep even now. And where better for an expedition to search for an elusive monster without fear of interruption than these relatively remote islands?

Unfortunately this is all surmise, as NIWA have refused to respond to my queries. It may be a long shot, but if anyone has noticed Te Papa taking delivery of a lengthy, narrow cross-section tank, or very large vats of formalin, why not let me know? The truth is out there, somewhere...probably...

Sunday 30 November 2014

Consumer complexity: engineering the public out of understanding

Last weekend my car stopped working. If a little knowledge is a dangerous thing, then an hour of internet research is probably worse. I convinced myself it was either the transmission or the gearing; it turned out to be a lack of petrol, the fuel gauge and warning light having simultaneously failed. At this point - breathing a sigh of relief that I wasn't facing an enormous repair bill so soon after an annual service - I realised that my knowledge of cars is extremely limited, despite having driven them for almost thirty years.

Obviously I'm far from unique in this respect. In years past New Zealanders in particular were renowned for maintaining old cars long after other developed nations had scrapped them, with Australians referring to their neighbour as the place where Morris Minors went to die. However, anti-corrosion legislation put an end to such 'canny Kiwi' tinkering, so the country has presumably lost this resourcefulness when it comes to keeping ancient vehicles on the road.

Of course cars just aren't built to last any more: modern vehicles continue to become ever more fuel efficient and are built of lightweight materials, but I suspect few will last as long as the classic cars still running after half a century or more. Built-in obsolescence is partly to blame, but the sophistication of today's designs means that their repair and maintenance is becoming ever more difficult without a complete workshop and a diagnostic computer. As a teenager I learnt how to change my car's spark plugs, but I have since been told this should now only be undertaken by professionals, as the tolerances required cannot be achieved by hand!

It isn't just motor vehicles that are affected by ever-increasing complexity: high-tech consumer gadgets, especially those with integrated circuits (which, let's face it, is most of them these days), are seemingly built to prevent tampering or repairs by the end user. Yet this is a fairly recent phenomenon. In my grandparents' generation the most sophisticated item in the house was likely to be a radio that used vacuum tube technology, but a cheaper alternative was available in the form of a do-it-yourself galena or pyrite crystal radio. Even children - Arthur C. Clarke amongst them - were able to build these self-powered devices, which worked rather well except that they had no speaker, so the user had to listen via headphones. It might seem unlikely that such a device was easy to construct until you remember that pioneer aircraft were built by bicycle manufacturers!

In contrast, the most advanced technological item my parents would have owned until their twenties - when television sets started to become affordable - would have been a mass-produced transistor radio. Compared with the valve-infested sideboard gramophone, these radios were simple: problems such as loose wires could be repaired with basic tools - a small screwdriver, needle-nose pliers and a low-wattage soldering iron. Whilst requiring a bit of skill and some understanding of wiring, such repairs were still within the reach of many consumers.

Today, my experience suggests that the expendable consumerism which first became overt in the late 1960s is a key mindset in developed nations, with do-it-yourself work on gadgetry largely absent. In fact, it is frequently cheaper to buy a replacement item than to have it repaired, or to purchase the tools needed to attempt those repairs yourself. The speed with which newer models are released is such that it may even prove impossible to source a replacement part only a few years after the item was purchased. This inevitably increases our distance from the inner workings of the ever more numerous high-tech gadgets we now surround ourselves with. Surely it is a great irony that despite our ability to operate all of them, the vast majority of users have little idea of the fundamentals of the technologies involved?

My own experience of attempting to fix consumer electronics is rather limited, but I can see that manufacturers are deliberately trying to prevent such repairs by using techniques such as hidden screw heads and one-way pins, ensuring that any attempt to dismantle an item will snap parts within the casing. Additionally, the more sophisticated the technology, the more sensitive it seems to be. An example from a rather different sphere of activity comes from 1976, when a defecting Soviet Air Force pilot delivered a state-of-the-art fighter jet into the hands of Western intelligence. The MiG-25 'Foxbat' was found to use valve-based rather than solid-state avionics, yet despite its primitive appearance the electronics were both extremely powerful and able to withstand immense physical stress, which is obviously of great importance in such aircraft.

Back to household gadgetry: I've seen an old cathode ray tube television repaired after water was accidentally tipped down the back of it, whilst flat-screen computer monitors that were inadvertently cleaned with water - not by me, I hasten to add - were sent straight to the scrap heap. That isn't to say that there aren't a few brave souls who post internet videos on how to disassemble devices such as iPads in order to fix hardware issues, but I think you would have to be either very confident or quite rich before attempting such repairs. There are also websites dedicated to technology hackers, who enhance, customise or otherwise amend consumer gadgets beyond their out-of-the-box capabilities. Again, I don't have the confidence for this sort of thing, especially since there are hidden dangers: a digital camera, for example, contains a flash capacitor that can be charged to several hundred volts - and deliver a nasty shock to the unwary. Ouch!
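To put a rough number on that hazard - the capacitance and voltage below are assumed, typical-order values rather than measurements from any particular camera - the energy stored in a charged capacitor is:

$$E = \tfrac{1}{2}CV^{2} \approx \tfrac{1}{2} \times 100\,\mu\mathrm{F} \times (300\,\mathrm{V})^{2} \approx 4.5\,\mathrm{J}$$

Several joules released in a few milliseconds is more than enough to leave a lasting impression on an unwary fingertip.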

So the next time someone declares their bewilderment at the ever-widening array of consumer gadgetry, or bores you with a piece of New Age nonsense, remember that although we are surrounded by some extremely sophisticated devices, various causes have conspired to remove insight into their inner workings. Our consumerist age is geared towards acceptance of such items whilst limiting our involvement to that of end user. And of course I haven't even mentioned the ultimate fundamentals behind all this integrated circuitry: quantum electrodynamics...

Wednesday 20 November 2013

Newton and Einstein: fundamental problems at the heart of science

As previously discussed, Arthur C. Clarke's First Law is as follows: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong." Now there have been many examples of prominent scientists who were proved wrong but refused to give up their pet idea - think of astronomer Fred Hoyle and the Steady State Theory - or who bizarrely negated their own hypothesis, such as natural selection's co-discoverer Alfred Russel Wallace and his supernatural explanation of the human mind.

But although with hindsight we can easily mock pioneers who failed to capitalise on a theory that later proved canonical (assuming any theory except the second law of thermodynamics can ever be said to be the final word on the matter), there are some scientists who have followed profoundly unorthodox paths of thought. In fact, I would go so far as to say that certain famous figures would find it almost impossible to maintain positions in major research institutes today. This might not matter if these were run-of-the-mill scientists, but I'm talking about two of the key notables of the discipline: Sir Isaac Newton and Albert Einstein.

The public perception of scientists has changed markedly over the past half century, from rational authority figures, via power-mad destroyers, to the uncertainties of today, when the often farcical arguments surrounding climate change have further undermined faith in scientific 'truth'. But the recognition of Newton and Einstein's achievements has never wavered, making them unassailable figures in the history of science. Indeed, if there were ever to be two undisputed champions of physics, or even of all science - as chosen by contemporary scientists, let alone the public - this contrasting pair would likely be among the most popular. Yet underneath their profound curiosity and dogged search for truth there are fundamental elements to their personal research that make the offbeat ideas of Wallace, Hoyle & co. appear merely mildly idiosyncratic by comparison.

1) Sir Isaac Newton
While some historians have tried to pass off Newton's non-scientific work as typical of his age, his writings on alchemy, eschatology and the occult in general are at least as voluminous as those on physics. Some of the more recent examinations of his work have suggested that without these pseudo-scientific studies, Newton would not have gained the mindset required to generate the scientific corpus he is renowned for. Although he claimed to have no need for hypotheses or 'occult qualities', preferring to examine natural phenomena in order to gain understanding, many of Newton's surviving notes suggest the very opposite. Whether he was using numerology to research the date of the end of the world, or alchemy to search for the Philosopher's Stone, the real Newton was clearly a many-faceted man. This led the economist (and owner of some of Newton's papers) John Maynard Keynes to label him "the last of the magicians". Indeed, key aspects of Newton's personality appear entirely in tune with pseudo-science.

It is well known that Newton was a secretive man, given to hiding his discoveries for decades and unwilling to share his theories. This was partly due to his wish to avoid wasting time on the less intelligent (i.e. just about everybody else) and partly to his fear of plagiarism, having frequently experienced conflicts with contemporary natural philosophers. To some extent this unwillingness to publish only exacerbated the issue, such as when Leibniz published his version of calculus some years after Newton had completed his unpublicised 'fluxions'.

Today, establishing scientific priority relies upon prompt publication, but Newton's modus operandi was much closer to the technique of the alchemist. Far from being a non-systematic forerunner of chemistry, alchemy was a subjective discipline, couched in metaphor and the lost wisdom of 'ancient' sages (who, after Newton's time, were frequently discovered to be early Medieval or Ptolemaic Egyptian frauds). The purity of the practitioner was deemed fundamental to success and various pseudoscientific 'influences' could prevent repeatability of results.

In addition, such knowledge as could be discovered was only to be shared between a few chosen adepts, not disseminated to a wide audience for further examination and discussion. In personality then, Newton was far more like the pre-Enlightenment alchemist than many of his contemporaries. He believed in a sense of his own destiny: that he had been chosen by God to undertake the sacred duty of decoding now-hidden patterns in the universe and history. When Descartes postulated a 'clockwork universe', Newton opposed it on the grounds that it had no place for a constantly intervening deity. And surprising as it may seem, in that respect he had a lot in common with Einstein.

2) Albert Einstein
Einstein was in many ways a much more down-to-earth and fully rounded human being than Newton. Whereas the latter frequently neglected such basic human needs as food and sleep, Einstein indulged in pipe tobacco and playing the violin (shades of Sherlock Holmes, indeed!). However, he was just as much a determined thinker when it came to solving fundamental riddles of nature. A good anecdote, possibly true, tells of how, whilst searching for a makeshift tool to straighten a bent paperclip, Einstein came across a box of new paperclips. Rather than simply use one of the new ones, he shaped it into the tool required to fix the original paperclip. When questioned, he replied that once he had started a task it was difficult for him to curtail it.

But one of the oft-quoted remarks about him is that Einstein would have been better off spending his last two or three decades fishing, rather than pursuing a unified field theory. The reason is that despite being a pioneer of the quantum theory of light, he could not accept some of the concepts of quantum mechanics - in particular that it was a fundamental theory based on probability, rather than simply a stepping stone towards some underlying aspect of nature as yet unknown.

Even today there are only interpretations of quantum mechanics, not a complete explanation of what is actually occurring. However, Einstein considered these interpretations more akin to philosophy than science, and believed that following, for example, the Copenhagen interpretation prevented deeper thought about the true nature of reality. Unfortunately for him, the majority of physicists climbed aboard the quantum mechanics bandwagon, leaving Einstein and a few colleagues to try to find holes in such strange predictions as entanglement, which Einstein dismissed with the unflattering term "spooky action at a distance".

Although it was only some decades after his death that such phenomena were experimentally confirmed, Einstein insisted that the counter-intuitive aspects of quantum mechanics merely showed the theory's incompleteness. So what lay at the heart of his fundamental objections? After all, his creative brilliance had shown itself in his discovery of the mechanism behind Newtonian gravitation, no mean feat for so bizarre a theory. But his glorious originality came at a price: as with many other scientists and natural philosophers, from Johannes Kepler via Newton to James Clerk Maxwell, Einstein sought answers that were aesthetically pleasing. In effect, the desire for truth was driven by a search for beautiful patterns. Like Newton, Einstein wanted to understand the mind of God, however different the two men's concepts of a deity were (in Einstein's case, seeking the secrets of 'the Old One').

By believing that at the heart of reality there is a beautiful truth, did Einstein hamper his ability to come to terms with such ugly and unsatisfying concepts as the statistical nature of the sub-atomic world? In this respect he seems old-fashioned, even quaint, by the exacting standards required - at least theoretically - in contemporary research institutes. Critical thinking unhampered by aesthetic considerations has long been shown to be a myth when it comes to scientific insights, but did Einstein take aesthetics too far in his inability to accept the most important physics developed during the second half of his life? In some respects, his work after the mid-1920s seems as anachronistic as Newton's pseudo-scientific interests.

Even from these minimal sketches, it is difficult to believe that Newton would ever have gained an important academic post were he alive today, whilst Einstein, certainly in the latter half of his life, would probably have been relegated to a minor research laboratory at best. So although they may be giants in the scientific pantheon, it is an irony that neither would have gained such acceptance by the establishment had they been working today. If there's a moral to be drawn here, presumably it is that even great scientists are just as much a product of their time as any other human being, even if they occasionally see further than the rest of us intellectual dwarves.

Wednesday 27 February 2013

An index of possibilities: is science prognostication today worthwhile or just foolish?

A few evenings ago I saw the International Space Station. It was dusk, and walking home with the family we were looking at Jupiter when a bright, moving light almost directly overhead caught our attention. Too high for an aircraft and too bright for an ordinary satellite, a quick check on the Web when we got home confirmed it was the ISS. Some 370 kilometres above our heads, a one-hundred-metre-long, permanently crewed construction confirmed everything I had read in my childhood: we had become a space-borne species. But if so few of the other scientific and technological advances I was supposed to be enjoying in adulthood have come true, has the literature of science prediction in these areas also changed markedly?

It is common to hear nowadays that science is viewed as just one of many equally valid methods of describing reality. So whilst most homes in the developed world contain a myriad of up-to-date high-technology devices, many of the users of these items haven't the faintest idea how they work. Sadly, neither do they have much interest in finding out. It's a scary thought that more and more of the key devices we rely on every day are designed and manufactured by a tiny percentage of specialists in the know; we are forever increasing the ease with which our civilisation could be knocked back to the steam age - if not the stone age.

Since products of such advanced technology are now familiar in the domestic environment and not just in the laboratory, why are there seemingly fewer examples of popular literature praising the ever-improving levels of knowledge and application compared to Arthur C. Clarke's 1962 prophetic classic Profiles of the Future and its less critical imitators that so caught my attention as a child? Is it that the level of familiarity has led to the non-scientist failing to find much interest or inspiration in what is now such an integrated aspect of our lives? With scientific advance today frequently just equated with cutting-edge consumerism we are committing an enormous error, downplaying far more interesting and important aspects of the discipline whilst cutting ourselves off from the very processes by which we can gain genuine knowledge.

Therefore it looks as if there's something of an irony: non-scientists either disregard scientific prognostication as impractical idealism ("just give me the new iPad, please") or consider themselves much more tech-savvy than the previous generation (not an unfair observation, if for obvious reasons - my pre-teen children can work with our 4 GB laptop whilst my first computer had 48 KB of RAM). Of course it's not all doom and gloom. Although landmark projects such as the New Horizons mission to Pluto have gone largely unnoticed, at least by anyone I know, the Large Hadron Collider (LHC) and the Mars Curiosity rover receive regular attention in the popular media.

Perhaps the most regularly occurring theme in science news articles over the past decade or so has been climate change, but with the various factions and exposé stories confusing the public on an already extremely complex issue, could it be that many people are turning their backs on reading about predicted technological advances because (a) technology may have greatly contributed to global warming; and (b) they don't want to consider a future that could be extremely bleak unless we ameliorate or solve the problem? The Astronomer Royal and former President of the Royal Society Martin Rees is one of many authors to offer a profoundly pessimistic view of mankind's future. His 2003 book Our Final Hour suggests that either by accident or design, at some point before AD 2100 we are likely to initiate a technological catastrophe here on Earth, and that the only way to guarantee our species' survival is to establish colonies elsewhere as soon as possible.

But there are plenty of futurists with the opposite viewpoint to Rees and like-minded authors, including the grandly-titled World Future Society, whose annual Outlook reports are written with the aim of inspiring action towards improving our prospects. Most importantly, by including socio-economic aspects they may fare better than Arthur C. Clarke and his generation, whose space cadet optimism now seems hopelessly naïve.

One way near-future extrapolation may gain accuracy is for specialists to concentrate on their own areas of expertise. To this end, many scientists and popularisers have focused on trendy topics such as nanotechnology, with Ray Kurzweil perhaps the best-known example. This isn't to say that there aren't some generalist techno-prophets still around, but Michio Kaku's work along these lines has proved very mixed in quality, whilst the BBC Futures website is curiously old school, with plenty of articles on macho projects (e.g. military and transport hardware) that are mostly still in the CAD program and will probably remain that way for many years to come.

With so many factors influencing which science and technology projects get pursued, it seems worth asking whether in-depth scientific knowledge is really any more useful than a passing acquaintance with current developments when it comes to accurate prognostication, with luck perhaps playing the primary role. One of my favourite examples of art-inspired science is the iPad, released to an eager public in 2010, some twenty-three years after the fictional PADD was first shown on Star Trek: The Next Generation (TNG) - although ironically the latter is closer in size to non-Apple tablets. In an equally interesting reversal, there is now a US$10 million prize on offer for the development of a hand-held, Wi-Fi-enabled health monitoring and diagnosis device along the lines of the Star Trek tricorder. No doubt Gene Roddenberry would have been pleased that his optimistic ideas are being implemented so rapidly; but then even NASA have at times hired his TNG graphic designer!

I'll admit that even I have made my own modest if inadvertent contribution to science prediction. In an April Fools' post in 2010 I light-heartedly suggested that perhaps sauropod dinosaurs could have used methane emissions as a form of self-defence. Well, not quite, but a British study in the May 2012 edition of Current Biology hypothesises that the climate of the period could have been significantly affected by dino-farts. As they say, truth is always stranger than fiction…

Thursday 31 January 2013

Profiling the future: science predictions of a bygone age

I recently heard a joke along the lines of: "Question: What would a scientist from one hundred years ago find most disconcerting about current technology? Answer: whilst there are cheap, mass-produced, pocket-sized devices that can hold a large proportion of mankind's knowledge, they are mostly used for viewing humorous videos of cats!" The obvious point to make (apart from all the missed potential) is that the future is likely to be far more unpredictable than even the best-informed science fiction writer is capable of formulating. But if SF authors are unlikely to make accurate predictions, what are the chances that trained scientists will be any good at prognostication either?

As a child I read with breathless wonder various examples of mainstream science prediction delineating the early Twenty-first Century: flying cars, underwater cities, domestic robots and enormous space colonies; after all, I did grow up in the 1970s! Unfortunately I wasn't to know that these grandiose visions were already fading by the time Apollo 11 touched down on the Moon. Yet if this was caused by a decline in the Victorian ideal of progress (or should that be Progress?), why didn't the authors of these volumes know about it?

Despite the apparent decline in mega-budget projects over the past forty years - the Large Hadron Collider and International Space Station excepted - popular science and technology exposition continued to promote wild, wonderful and occasionally downright wacky ideas into the 1980s. One of the best-known examples of the genre is Arthur C. Clarke's Profiles of the Future, originally published in 1962 but with updated editions appearing in 1973, 1983 and 1999. As a leading SF writer and 'Godfather of the Communications Satellite', Clarke seemed better placed than most to make accurate predictions, making him a suitable example with which to explore this theme. Indeed, the first edition of Profiles… contains what was to become his First Law, a direct reference to one of the dangers of prophesying developments in science and technology: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong." Unfortunately, by always following this notion Clarke's prognostications frequently appear overly optimistic, displaying a schoolboy enthusiasm for advancement that downplays the interactions between science and society.

Interestingly, this optimism stands in exact opposition to earlier generations, wherein scientists and pioneer SF writers were frequently pessimistic as to the impact that new science and technology would have on civilisation. Whilst some of Jules Verne and H.G. Wells' fictional predictions have been realised, their most negative visions have yet to occur - unless you consider the West's current obsession with witless celebrities and consumerism a veritable precursor of Wells' post-human Eloi. (Note: if you enjoy watching TV shows such as Celebrity Chefs' Pets' Got Talent you should probably read Wells' The Time Machine as soon as possible…)

While the Nineteenth and early Twentieth Century equivalents of Michael Crichton were raising the possibility of technologically-led dystopias, their scientific contemporaries frequently abided by a philosophy antithetical to Clarke's First Law. The likes of Lord Kelvin, Ernest Rutherford and even Albert Einstein opposed theories now part and parcel of the scientific canon, ranging from black holes, meteorite impacts on Earth and quantum electrodynamics to the development of heavier-than-air flight, atomic bombs and even commercial radio transmission. Given how quickly advances in science and technology occurred during Clarke's first fifty years, perhaps he and his fellow prophets could be forgiven for thinking progress would remain on a steady, upward path. After all, in terms of astronautics alone, the quarter century from the V-2 to Apollo 11 vindicated many of their ideas and at the same time proved that some of the finest scientific minds of the early Twentieth Century - Rutherford, J.B.S. Haldane, various Astronomers Royal, et al - had been completely wrong.

However, even a brief analysis of recent history - say, the post-Apollo era - shows that scientific developments are subject to the complicated interactions of culture, economics and leadership; and of course, simple serendipity. The first edition of Profiles of the Future stated that the Ground Effect Machine (a.k.a. the hovercraft) would soon become a prominent form of land transport. In the context of the time - the SR.N1 having made its first 'flight' only three years earlier - this would seem a reasonable proposition, but once you stop to consider the vested interests in the established transport sector it is readily apparent that such a new kid on the block could not get established without overcoming major obstacles (of a non-technical variety). As Stephen Jay Gould was fond of pointing out, it is exceedingly difficult to replace even suboptimal technology once it has become established, the QWERTY keyboard layout being a prominent example.

Conversely, pioneers such as the British jet engine inventor Frank Whittle found themselves snubbed by an establishment that failed to see the advantages of disturbing the status quo. Another issue concerns how theories can easily get lost and only later be rediscovered, such as the work of the genetics pioneer Gregor Mendel. By failing to take enough notice of these issues, Clarke's generation watched their predictions fall out of synchronisation after what appeared to be a promising start. In contrast, futurists with a keen interest in the sociological implications of new technology, Alvin Toffler perhaps being the best known, have long noted that progress can be non-linear and subject to the vagaries of the society in which it develops.

Although Arthur C. Clarke is remembered as a 'prophet of the space age', it is interesting to ask how original he was: inventive genius, smart extrapolator from the best of H.G. Wells (and numerous pulp SF writers), or just a superb mouthpiece for the cutting-edge technologists? The Saturn V architect Wernher von Braun, for example, wrote The Mars Project, a detailed 1948 study for a manned mission to Mars that showed parallels with Clarke's writings of the period. Bombarded as we are today by numerous examples of space travel in fact and fiction, it's hard to imagine a time when anyone discussing the possibility was deemed an eccentric. For instance Robert Goddard, the American pioneer of liquid-fuelled rockets during the 1920s and 30s, faced enormous criticism from those who considered his physics flawed. Only with the development of the V-2 rocket (again involving von Braun) was there some science fact to back up the fiction, and the start of the change in the public perception of astronautics from crackpot notion to realistic prospect. Ironically, the new advances also provided fuel for moral opposition, C.S. Lewis being a prominent example of those who argued that humans shouldn't develop space travel until their ethics had improved. Clarke may be known for his anti-nationalistic stance concerning space exploration, but during the late 1940s and early 1950s even he wrote both fact (The Rocket and the Future of Warfare) and fiction (Earthlight) discussing its military potential.

Just because some of Clarke's ideas - in distinct opposition to all the naysayers - came to fairly rapid fruition doesn't make him a genius at prediction; in the broad sweep of developments he was frequently correct, but when it came to the details there were marked differences. His landmark 1945 paper on global communications from geosynchronous orbit also suggested that atomic-powered rockets would be commonplace by the mid-1960s, a topic elaborated on by his British Interplanetary Society (BIS) colleagues several years later. Whilst Project NERVA did test such systems during that decade, various factors put this line of development on indefinite hold. Clarke also thought the orbital communications system would consist of three large, manned stations rather than dozens of small, unmanned satellites. But then the development of the microchip in 1959 led to a paradigm shift in miniaturisation largely unforeseen by any prognosticator. It's interesting that although Clarke was postulating remote-controlled war rockets as early as 1946, he didn't discuss automated space probes until much later: is it possible that the fiction writer within him wanted to downplay unmanned missions as lacking in dramatic potential? Also, in an unusually modest statement, Clarke himself claimed that he had advanced the idea of orbital communications by approximately fifteen minutes!
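As a reminder of why that particular orbit matters - the figures below are standard textbook values rather than anything taken from Clarke's paper - Kepler's third law fixes the orbital radius at which a satellite's period matches the Earth's sidereal day:

$$a = \left(\frac{GM_{\oplus}T^{2}}{4\pi^{2}}\right)^{1/3} \approx \left(\frac{3.986\times10^{14}\,\mathrm{m^{3}\,s^{-2}}\times(86\,164\,\mathrm{s})^{2}}{4\pi^{2}}\right)^{1/3} \approx 42{,}164\,\mathrm{km}$$

Subtract the Earth's radius of roughly 6,378 km and you get the familiar altitude of about 35,800 km, where a satellite appears to hover over one spot on the equator - which is why a mere three stations, spaced 120 degrees apart, could between them cover almost the entire globe.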

So if the technological aspects of Profiles… are reasonably unimpeachable, the failure to consider the infinite complexities of human beings and the societies they build means that many of Clarke's ideas remain unfulfilled or have been postponed indefinitely. Even for those that were achieved, such as the manned Moon landings - albeit some years ahead of Clarke's most optimistic timeline - the primary motivations, such as the Cold War, overshadowed the scientific aspect. Clarke admitted in later years that Project Apollo bore an uncanny resemblance to the first South Polar expedition, the latter being largely motivated by national pride. Indeed, Amundsen's 1911 expedition was not followed up for almost half a century. Clarke even suggested that had he and his BIS armchair-astronaut colleagues known the true costs of a lunar landing mission, they would probably have given up their feasibility studies in the 1930s! So when, as late as 1956, the then Astronomer Royal Richard van der Riet Woolley stated that such an expedition was impractical on grounds of cost alone, he was not far from the truth. As it was, even with a 'minor war'-sized budget, an enormous amount of largely unpaid overtime - and a resulting high divorce rate among project staff - was needed to achieve President Kennedy's goal.

Unfortunately, it was a long time before Clarke admitted that non-technical incentives play a key role, and he seems never to have fully reconciled himself to this. Although he occasionally promoted and inspired practical, achievable near-future goals, such as educational broadcasting via satellite to rural communities in the developing world, his imagination was often looking into deep space and equally deep time. Yet his prominent profile meant that the ethos behind Profiles of the Future was frequently copied in glossy expositions by lesser authors and editors. When in his later years Clarke delineated specific forecasts using his standard criteria, they almost entirely failed to hit the mark: his 1999 speculative, if in places tongue-in-cheek, timeline for the Twenty-first Century has so far failed in all of its predictions, with some unlikely to transpire for decades or possibly even centuries to come. That's not to say that we couldn't do with some of his prophecies coming true sooner rather than later: even relatively small advances such as the paperless office would be of enormous benefit, but how that could be achieved is anyone's guess!

As a writer of both fact and fiction, Clarke produced a body of work with a complex interplay between the world that is and the world as it could be. Many space-orientated professionals, from NASA astronauts to Carl Sagan, claimed inspiration from him, whilst the various Spaceguard surveys of near-Earth objects are named after the prototype in Clarke's 1973 novel Rendezvous with Rama. One of his key ideas was that intellectual progress requires a widening of horizons, whereas a lot of contemporary technological advances, such as electronic consumer goods, are primarily inward-looking. But as I have mentioned before, won't we require thought leaders who share something of Clarke's philosophy in order to limit or reverse environmental disasters in the near future? Stephen Hawking for one has stated his belief that the long-term survival of humanity relies on us becoming a multi-planet species sooner rather than later, as unforeseen natural or man-made disasters are a question of when rather than if. Naïve as his ideas may appear to our jaded, post-modern eyes, as a visionary with realist tendencies Clarke had an enormous impact on succeeding generations of scientists, engineers and enthusiasts. But to see how Clarke's successors are faring in our relatively subdued times, you'll have to wait until the next post…

Monday 30 July 2012

Buy Jupiter: the commercialisation of outer space

I recently saw a billboard for the Samsung Galaxy SIII advertising a competition to win a "trip to space", in the form of a suborbital hop aboard a Virgin Galactic SpaceShipTwo. This phrase strikes me as highly interesting: a trip to space, not into space, as if the destination were just another beach holiday resort. The accompanying website uses the same wording, so clearly the choice of words wasn't caused by space issues (that's space for the text, not space as in outer). Despite there having been fewer than a dozen space tourists to date, is space travel now considered routine and the rest of the universe ripe for commercial gain, as per the Pan Am shuttle and Hilton space station in 2001: A Space Odyssey? Or is this all somewhat premature, with the hype firmly ahead of the reality? After all, the first fee-paying space tourist, Dennis Tito, launched only eleven years ago, in 2001.

Vodafone is only the second company after Guinness Breweries to offer space travel prizes, although fiction was way ahead of the game: in Arthur C. Clarke's 1952 children's novel Islands in the Sky the hero manages a trip into low Earth orbit thanks to a competition loophole.  However, the next decade could prove the turning point. Virgin Galactic already have over 500 ticket-holders whilst SpaceX, developer of the first commercial orbital craft - the unmanned Dragon cargo ship - plan to build a manned version that could reduce orbital seat costs by about 60%.

If anything, NASA is pushing such projects via its Commercial Orbital Transportation Services (COTS) programme, including the aim of using for-profit services for the regular supply of cargo and crew to the International Space Station (ISS). The intention is presumably for NASA to concentrate on research and development rather than routine operations, but strong opposition to such commercialisation comes from an unusual direction: former NASA astronauts, including Apollo pioneers Neil Armstrong and Eugene Cernan, deem the COTS programme a threat to US astronautic supremacy. This seems to be more an issue of patriotism and politics than a consideration of technological or scientific importance. With China set to overtake the USA in scientific output next year, and talk of a three-crew temporary Chinese space station within four years, the Eclipse of the West has already spread beyond the atmosphere. Then again, weren't pre-Shuttle era NASA projects, like their Soviet counterparts, primarily driven by politics, prestige and military ambitions, with technological advances a necessary by-product and science very much of secondary importance?

Commerce in space could probably be said to have begun with the first privately funded communications satellite, Telstar 1, in 1962. The big change for this decade is the ability to launch ordinary people rather than trained specialists into space, although as I have mentioned before, the tourist jaunts planned by Virgin Galactic hardly go where no-one has gone before. The fundamental difference is that such trips are deemed relatively safe undertakings, even if the ticket costs are several orders of magnitude greater than those of any terrestrial holiday. A trip on board SpaceShipTwo is currently priced at US$200,000, whilst a visit to the International Space Station will set you back one hundred times that amount. This is clearly somewhat closer to the luxury flying boats of the pre-jet era than to any modern package tour.

What is almost certain is that despite Virgin Galactic's assessment of the risk as being akin to that of 1920s airliners, very few people know enough of aviation history's safety record to make this comparison meaningful. After all, two of the five Space Shuttle orbiters were lost - and five is the same number of craft intended for the SpaceShipTwo fleet. Although Virgin Galactic plays the simplicity card for their design - i.e. the fewer the components, the less the chance of something going wrong - it should be remembered that the Columbia and Challenger shuttles were lost due to previously known and identified problems with the external fuel tank and solid rocket boosters respectively. In other words, when there is a known technical issue but the risk is considered justifiable, human error enters the equation.
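For a rough sense of scale - a back-of-the-envelope figure using the completed Shuttle programme, not anything published by Virgin Galactic - the Shuttle flew 135 missions and lost two vehicles with their crews:

$$\frac{2}{135} \approx 1.5\%\ \text{per flight}$$

That is a level of per-flight risk no modern airline passenger would recognise, whatever the 1920s comparison is meant to convey.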

In addition, human error isn't restricted to the engineers and pilots: problems could arise from anything from passenger illness (about half of all astronauts suffer space sickness, with headaches and nausea for up to several days after launch) to disruptive behaviour of the sort I have witnessed on airliners. Whether the loss of business tycoons or celebrities would bring more attention to the dangers of space travel remains to be seen. Unfortunately, the increase in the number and variety of spacecraft means an accident is almost certainly a case of when, not if.

Planet Saturn via my Skywatcher 130PM telescope - location, location, location

But if fifteen minutes of freefall seems the sublime, there are also some ridiculous space-orientated ventures, if the ludicrous claims found on certain websites are anything to go by. Although the 1967 Outer Space Treaty does not allow land on other bodies to be owned by a nation state, companies such as Lunar Embassy have sold plots on the Moon to over 3 million customers. It is also possible to buy acres on Mars and Venus, even if the chance of doing anything with them is somewhat limited. I assume most customers treat their land rights as a novelty item, about as useful as, say, a pet rock, but with some companies issuing mineral-rights deeds for regions of other planets, could this have serious implications in the future? Right now it might seem like a joke, but as the Earth's resources dwindle and fossil fuels run low, could private companies race to exploit extra-terrestrial resources such as lunar helium-3?

Various cranks/forward thinkers (delete as appropriate) have applied to buy other planets since at least the 1930s, but with programmes like COTS supporting private aerospace initiatives, including unmanned lunar landers, there is at least the potential for legal wrangling over mining rights throughout the solar system. The US-based company Planetary Resources has announced its intention to launch robot mining expeditions to some of the 1,500 or so near-Earth asteroids, missions that are the technological equivalent of a lunar return mission.

But if there are enough chunks of space rock to go round, what about the unique resources that could rapidly become as crowded as low Earth orbit? For example, the Earth-Moon system's five Lagrange points provide useful gravitational parking spots for scientific missions (only two of them, L4 and L5, are truly stable), whilst geosynchronous orbit is vital for commercial communication satellites. So far, national governments have treated outer space like Antarctica, but theoretically a private company could cause trouble if the law fails to keep up with the technology, in much the same way that the internet has been a happy harbour for media pirates.

Stephen Hawking once said: "To confine our attention to terrestrial matters would be to limit the human spirit." Then again, no-one should run before they can walk, never mind fly. We've got a long way to go before we reach the giddy heights of wheel-shaped Hiltons, but as resources dwindle and our population soars, at some point it will presumably become a necessity to undertake commercial space ventures, rather than just move Monte Carlo into orbit. Now, where's the best investment going to be: an acre on Mars or two on the Moon?