
Monday 23 November 2020

Self-destructive STEM: how scientists can devalue science

Following on from last month's exploration of external factors inhibiting the scientific enterprise, I thought it would be equally interesting to examine issues within the sector that can negatively influence STEM research. There is a range of factors that vary from the sublime to the ridiculous, showing that science and its practitioners are as subject to the whims of humanity as any other discipline.

1) Conservatism

The German physicist Max Planck once said that a "new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." With peer review of submitted articles, it's theoretically possible that a new hypothesis could be prevented from seeing the light of day due to being in the wrong place at the wrong time; or more precisely, because the reviewers personally object to the ideas presented.

Another description of this view is that there are three stages before the old guard accept the theories of the Young Turks, with an avant-garde idea eventually being taken as orthodoxy. One key challenge is the dislike shown by established researchers towards outsiders who promote a new hypothesis in a specialisation they have no formal training in.

A prominent example of this is the short shrift given to meteorologist Alfred Wegener when he described continental drift to the geological establishment; it took over thirty years and a plethora of evidence before plate tectonics was found to correlate with Wegener's seemingly madcap ideas. More recently, some prominent palaeontologists wrote vitriolic reviews of the geologist-led account of the Chicxulub impact as the main cause of the K-T extinction event. 

This also shows the effect impatience may have; if progress in a field is slow or seemingly negative, it may be prematurely abandoned by most if not all researchers as a dead end.

2) Putting personal preferences before evidence 

Although science is frequently sold to the public as having a purely objective attitude towards natural phenomena, disagreements at the cutting edge are common enough to become cheap ammunition for opponents of STEM research. When senior figures within a field disagree with younger colleagues, it's easy to see why there might be a catch-22 situation in which public funding is only available when there is consensus and yet consensus can only be reached when sufficient research has placed a hypothesis on a fairly firm footing.

It is well known that Einstein wasted the last thirty or so years of his life trying to find a unified field theory without including quantum mechanics. To his tidy mind, the uncertainty principle and entanglement didn't seem to be suitable as foundation-level elements of creation, hence his famous quote usually truncated as "God doesn't play dice". In other words, just about the most important scientific theory ever didn't fit into his world picture - and yet the public's perception of Einstein during this period was that he was the world's greatest physicist.

Well-known scientists in other fields have damaged their reputations late in their careers. Two prominent examples are the astronomer Fred Hoyle and the microbiologist Lynn Margulis. Hoyle promoted increasingly fruity ideas as he got older, including the claim that the Archaeopteryx fossil at London's Natural History Museum was a fake. Margulis, for her part, stayed closer to her area of expertise - endosymbiotic theory for eukaryotic cells - but claimed that her discoveries could account for an extremely wide range of biological phenomena, including the cause of AIDS. It doesn't take much to realise that if two such highly esteemed scientists can publish nonsense, then uninformed sections of the public might want to question the validity of a much wider variety of established scientific truths.

3) Cronyism and the academic establishment

While nepotism might not appear often in the annals of science history, there have still been plenty of instances in which favoured individuals gain a position at the expense of others. This is of course a phenomenon as old as natural philosophy, although thankfully the rigid social hierarchy that affected the careers of nineteenth century luminaries such as physicist Michael Faraday and dinosaur pioneer Gideon Mantell is no longer much of an issue. 

Today, competition for a limited number of places in university research faculties can lead to results as unfair as in any humanities department.  A congenial personality and an ability to self-publicise may tip the balance on gaining tenure as a faculty junior; scientists with poor interpersonal skills can fare badly. As a result, their reputation can be denigrated even after their death, as happened with DNA pioneer Rosalind Franklin in James Watson's memoirs. 

As opponents of string theory are keen to point out, graduates are often forced to jump on bandwagons in order to gain vital grants or academic tenure. This suggests that playing safe by studying contemporary 'hot' areas of research is preferred to investigating a wider range of new ones. Nobel Laureate and former Stephen Hawking collaborator Roger Penrose describes this as being particularly common in theoretical physics, whereby the new kids on the block have to join the entourage of an establishment figure rather than strike out with their own ideas.

Even once a graduate student has gained a research grant, it doesn't mean that their work will be fairly recognised. Perhaps the most infamous example of this occurred with the 1974 Nobel Prize in Physics. One of the two recipients was Antony Hewish, who gained the prize for his "decisive role in the discovery of pulsars". Yet it was his student Jocelyn Bell who promoted the pulsar hypothesis while Hewish was claiming the signal to be man-made interference.

4) Jealousy and competitiveness

Although being personable and a team player can be important, anyone deemed to be too keen on self-aggrandising may attract the contempt of the scientific establishment. Carl Sagan was perhaps the most prominent science communicator of his generation but was blackballed from the US National Academy of Sciences due to being seen as too popular! This is despite some serious planetary astronomy in his earlier career, including work on various Jet Propulsion Laboratory probes. 

Thankfully, attitudes towards sci-comm have started to improve. The Royal Society has advocated that prominent scientists should become involved in promoting their field, countering the common judgement among STEM practitioners that public engagement is the remit of those at the lower end of scientific ability. Even so, there remains the perception that those engaged in communicating science to the general public are not proficient enough for a career in research. Conversely, research scientists should be able to concentrate on their work rather than having to spend large amounts of their time seeking grants or undertaking administration - but such ideals are not likely to come to pass in the near future!

5) Frauds, hoaxes and general misdemeanours 

Scientists are as human as everyone else and, given the temptation, have been known to resort to underhand behaviour in order to obtain positions, grants and renown. Such behaviour has been occurring since the Enlightenment and varies from the deliberate use of selective evidence through to full-blown fraud that has major repercussions for a field of research.

One well-known example is the Piltdown Man hoax, which wasn't uncovered for forty years. This was due more to the material fitting in with contemporary social attitudes than to the quality - or lack thereof - of the finds. However, other than drawing public attention to how scientists can be fooled, it didn't damage science in the long run.

A far more insidious instance is that of Cyril Burt's research into the heritability of intelligence. After his death, others tried to track down Burt's assistants, only to find they didn't exist. This of course cast serious doubt on the reliability of both his data and conclusions, but even worse, his work was used by several governments in the late twentieth century as the basis for social engineering.

Scandals are not unknown in recent years, providing ammunition for those wanting to deny recognition of fundamental scientific theories (rarely the practical applications). In this age of social media, it can take only one person's mistake - deliberate or otherwise - to set in motion a global campaign that rejects the findings of science, regardless of the evidence in its favour. As the anti-vaccination lobby have proven, science communication still has a long way to go if we are to combine the best of both worlds: a healthy scepticism with an acceptance of how the weird and wonderful universe really works, and not how we would like it to.

Tuesday 27 October 2020

Bursting the bubble: how outside influences affect scientific research

In these dark times, when some moron (sorry, non-believer in scientific evidence) can easily reach large numbers of people on social media with their conspiracy theories and pseudoscientific nonsense, I thought it would be an apt moment to look at the sort of issues that block the initiation, development and acceptance of new scientific ideas. We are all aware of the long-term feud between some religions and science but aside from that, what else can influence or inhibit both theoretical and applied scientific research?

There are plenty of other factors, from simple national pride to the ideologies of the far left and right that have prohibited theories considered inappropriate. Even some of the greatest twentieth century scientists faced persecution; Einstein was one of the many whose papers were destroyed by the Nazis simply for falling under the banner 'Jewish science'. At least this particular form of state-selective science was relatively short-lived: in the Soviet Union, theories deemed counter to dialectical materialism were banned for many decades. A classic example of this was Stalin's promotion of the crackpot biologist Trofim Lysenko - who denied the modern evolutionary synthesis - and whose scientific opponents were ruthlessly persecuted. 

Even in countries with freedom of speech, if there is a general perception that a particular area of research has negative connotations then, no matter how unfounded that perception is, public funding may suffer accordingly. After the high-profile adulation of STEM in the 1950s and 1960s (ironic, considering the threat of nuclear war), subsequent decades have seen a decreasing trust in both science and its practitioners. For example, the Ig Nobel awards have for almost thirty years been a high-profile way of publicising scientific projects deemed frivolous or a waste of resources. A similar attitude is frequently heard in arts graduate-led mainstream media; earlier this month, a BBC radio topical news comedy complimented a science venture for "doing something useful for once."

Of course, this attitude is commonly related to how research is funded, the primary question being why should large amounts of resources go to keep STEM professionals employed if their work fails to generate anything of immediate use? I've previously discussed this contentious issue, and despite the successes of the Large Hadron Collider and Laser Interferometer Gravitational-Wave Observatory, there are valid arguments in favour of them being postponed until our species has dealt with fundamental issues such as climate change mitigation. 

There are plenty of far less grandiose projects that could benefit from even a few percent of the resources given to the international, mega-budget collaborations that gain the majority of headlines. Counter to the 'good science but wrong time' argument is the serendipitous nature of research; many unforeseen inventions and discoveries have been made by chance, with few predictions hitting the mark.

The celebrity-fixated media tends to skew the public's perception of scientists, representing them as solitary geniuses rather than team players. This has led to oversimplified distortions, such as that inflicted on Stephen Hawking for the last few decades of his life. Hawking was treated as a wise oracle on all sorts of science- and future-related questions, some far from his field of expertise. This does neither the individuals involved nor the scientific enterprise any favours. It makes it appear as if a mastermind can pull rabbits out of a hat, rather than hardworking groups spending years on slow, methodical and - let's face it - what appears from the outsider's viewpoint to be somewhat dull research.

The old-school caricature of the wild-haired, lab-coated boffin is thankfully no longer in evidence, but there are still plenty of popular misconceptions that even dedicated STEM media channels don't appear to have removed. For example, almost everyone I meet fails to differentiate between the science of palaeontology and the non-science of archaeology, the former of course usually being solely associated with dinosaurs. If I had to condense the popular media approach to science, it might be something along these lines:

  • Physics (including astronomy). Big budget and difficult to understand, but sometimes exciting and inspiring
  • Chemistry. Dull but necessary, focusing on improving products from food to pharmaceuticals
  • Biology (usually excluding conventional medicine). Possibly dangerous, both to human ego and our ethical and moral compass (involve religion at this point if you want to) due to both working theories (e.g. natural selection) and practical applications, such as stem cell research. 

Talking of applied science, a more insidious form of pressure has sometimes been used by industry, either to keep consumers purchasing their products or to prevent them moving to rival brands. Various patents, such as those for longer-lasting products, have been snapped up and hidden by companies protecting their interests, while the treatment meted out to scientific whistle-blowers has been legendary. Prominent examples range from Rachel Carson's exposé of DDT, which led to attacks on her credibility, to industry lobbying of governments to prevent the banning of CFCs after they were found to be destroying the ozone layer.

When the might of commerce is combined with wishful thinking by the scientist involved, it can lead to dreadful consequences. Despite a gathering body of evidence for smoking-related illnesses, the geneticist and tobacco industry spokesman Ronald Fisher - himself a keen pipe smoker - argued for a more complex relationship between smoking and lung disease. The industry used his prominence to obscure the truth, no doubt shortening the lives of immense numbers of smokers.

If there's a moral to all this, it is that even at a purely theoretical level science cannot be isolated from all manner of activities and concerns. Next month I'll investigate negative factors within science itself that have had deleterious effects on this uniquely human sphere of accomplishment.

Thursday 11 October 2018

Sonic booms and algal blooms: a smart approach to detoxifying waterways

A recent report here in New Zealand has raised some interesting issues around data interpretation and the need for independent analysis to minimise bias. The study has examined the state of our fresh water environment over the past decade, leading to the conclusion that our lakes and rivers are improving in water quality.

However, some of the data fails to support this: populations of freshwater macro invertebrates remain low, following a steady decline over many decades. Therefore while the overall tone of the report is one of optimism, some researchers have claimed that the data has been deliberately cherry-picked in order to present as positive a result as possible.

Of course, there are countless examples of interested parties skewing scientific data for their own ends, with government organisations and private corporations among the most common culprits. In this case, the recorded drop in nitrate levels has been promoted at the expense of the continued low population of small-scale fauna. You might well ask what use these worms, snails and insects are, but even a basic understanding of food webs shows that numerous native bird and freshwater fish species rely on these invertebrates for food. As I've mentioned so often, the apparently insignificant may play a fundamental role in sustaining human agriculture (yes, some other species practise farming too!)

So what is it that is preventing the invertebrates' recovery? The answer seems to be an increase in photosynthetic cyanobacteria, or as they are more commonly - and incorrectly - known, blue-green algae. If the group is recognised at all, it's as the health food supplement spirulina, available in smoothies and tablet form. However, most cyanobacteria species are not nearly as useful - or pleasant. To start with, their presence in water lowers the oxygen content, and thanks to fertiliser runoff - nitrogen and phosphorus in particular - they bloom exponentially wherever intensive farming occurs close to fresh water courses. Another agriculture-related issue is due to clearing the land for grazing: without trees to provide shade, rivers and streams grow warmer, encouraging algal growth. Therefore as global temperatures rise, climate change is having yet another negative effect on the environment.

Most species of cyanobacteria contain toxins that can severely affect animals much larger than fresh water snails. Dogs have been reported as dying within as little as a quarter of an hour of eating it, with New Zealand alone losing over one hundred and fifty pet canines in the past fifteen years; it's difficult to prevent consumption, since dogs seem to love the smell! Kiwis are no strangers to the phylum for other reasons, as over one hundred New Zealand rivers and lakes have been closed to swimmers since 2011 due to cyanobacterial contamination.

Exposure to contaminated water or eating fish from such an environment is enough to cause external irritation to humans and may even damage our internal organs and nervous system. Drinking water containing blue-green algae is even worse; given that young children are comparable in size to some dogs, it is thought that exposure could prove fatal to them. Research conducted over the past few years also suggests that high-level contamination can lead to Lou Gehrig's disease, A.K.A. amyotrophic lateral sclerosis, the same condition that Stephen Hawking suffered from.

What research, you might ask, is being done to find a solution to this unpleasant organism? Chemical additives including copper sulphate and calcium hypochlorite have been tried, but many are highly expensive, while the toxicity of others is such that fish and crustacean populations also suffer, so this is hardly a suitable answer.

A more elegant solution has been under trial for the past two years, namely the use of ultrasound to sink the blue-green algae too deep to effectively photosynthesise, thus slowly killing it. A joint programme between New Zealand and the Netherlands uses a high-tech approach to identifying and destroying ninety per cent of each bloom. Whereas previous ultrasonic methods tended to be too powerful, thereby releasing algal toxins into the water, the new technique directly targets the individual algal species. Six tests per hour are used to assess water quality and detect the species to be eradicated. Once identified, the sonic blasts are calibrated for the target species and water condition, leading to a slower death for the blue-green algae that avoids cell wall rupture and so prevents the toxins from escaping.
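To make the process concrete, here is a minimal sketch in Python of the kind of monitor-and-treat loop described above. Only the six-samples-per-hour cadence and the idea of species-specific, low-power calibration come from the description; the species names, frequencies, power levels and thresholds are purely illustrative assumptions, not the actual parameters of the New Zealand-Netherlands programme.

```python
# Hypothetical sketch of a monitor-and-treat cycle for cyanobacterial blooms.
# All numbers and species entries below are illustrative assumptions.

from dataclasses import dataclass
import random
import time

@dataclass
class WaterSample:
    temperature_c: float
    chlorophyll_index: float   # crude proxy for bloom density (0 to 1)
    dominant_species: str

# Illustrative calibration table: species -> (frequency in kHz, power level)
CALIBRATION = {
    "Microcystis": (28.0, 0.40),
    "Anabaena": (34.0, 0.35),
    "Planktothrix": (40.0, 0.30),
}

BLOOM_THRESHOLD = 0.6  # only treat when the bloom proxy exceeds this

def take_sample() -> WaterSample:
    """Stand-in for the sensor package; returns simulated readings."""
    return WaterSample(
        temperature_c=random.uniform(15, 25),
        chlorophyll_index=random.random(),
        dominant_species=random.choice(list(CALIBRATION)),
    )

def emit_ultrasound(freq_khz: float, power: float, temperature_c: float) -> None:
    """Stand-in for the transducer driver: low power sinks the cells
    slowly rather than rupturing their walls and releasing toxins."""
    # Assumed rule of thumb: nudge the power down slightly in warmer water.
    adjusted = power * (1 - 0.01 * (temperature_c - 20))
    print(f"Emitting {freq_khz} kHz at power {adjusted:.2f}")

def monitoring_cycle(samples_per_hour: int = 6) -> None:
    """Six samples per hour, matching the cadence described in the post."""
    for _ in range(samples_per_hour):
        sample = take_sample()
        if sample.chlorophyll_index > BLOOM_THRESHOLD:
            freq, power = CALIBRATION[sample.dominant_species]
            emit_ultrasound(freq, power, sample.temperature_c)
        time.sleep(0)  # a real system would wait ~10 minutes between samples

if __name__ == "__main__":
    monitoring_cycle()
```

The key design point the sketch tries to capture is the one made above: the blast is deliberately weak and tuned per species, so the algae die slowly by sinking rather than bursting and releasing their toxins into the water.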

Returning to the earlier comment about why the report's conclusions appear to have placed an unwarranted positive spin on the data: the current and previous New Zealand Governments have announced initiatives to clean up our environment and so live up to the tourist slogan of '100% Pure'. The latest scheme requires making ninety percent of the nation's fresh water environments swimmable by 2040, which seems to be something of a tall order without radical changes to agriculture and the heavily polluting dairy sector in particular. Therefore the use of finely targeted sonic blasting couldn't come a moment too soon.

Our greed and short-sightedness have allowed cyanobacteria to greatly increase at the expense of the freshwater ecosystem, not to mention domesticated animals. Now advanced but small-scale technology has been developed to reduce it to non-toxic levels, but is yet to be implemented beyond the trial stage. Hopefully this eradication method will become widespread in the near future, a small victory in our enormous fight to right the wrongs of over-exploitation of the environment. But as with DDT, CFCs and numerous others, it does make me wonder how many more man-made time bombs could be ticking out there...

Tuesday 12 December 2017

Robotic AI: key to utopia or instrument of Armageddon?

Recent surveys around the world suggest the public feel they don't receive enough science and non-consumer technology news in a format they can readily understand. Despite this, one area of STEM that captures the public imagination is an ever-growing concern with the development of self-aware robots. Perhaps Hollywood is to blame. Although there is a range of well-known cute robot characters, from WALL-E to BB-8 (both surely designed with a firm eye on the toy market), Ex Machina's Ava and the synthetic humans of the Blade Runner sequel appear to be shaping our suspicious attitudes towards androids far more than real-life projects are.

Then again, the idea of thinking mechanisms and the fears they bring out in us organic machines has been around far longer than Hollywood. In 1863 the English novelist Samuel Butler wrote an article entitled Darwin among the Machines, wherein he recommended the destruction of all mechanical devices since they would one day surpass and likely enslave mankind. So perhaps the anxiety runs deeper than our modern technocratic society. It would be interesting to see - if such concepts could be explained to them - whether an Amazonian tribe would rate intelligent, autonomous devices as dangerous. Could it be that it is the humanoid shape that we fear rather than the new technology, since R2-D2 and co. are much-loved, whereas the non-mechanical Golem of Prague and Frankenstein's monster are pioneering examples of anthropoid-shaped violence?

Looking in more detail, this apprehension appears to be split into two separate concerns:

  1. How will humans fare in a world where we are not the only species at our level of consciousness - or possibly even the most intelligent?
  2. Will our artificial offspring deserve or receive the same rights as humans - or even some animals (i.e. appropriate to their level of consciousness)?

1) Utopia, dystopia, or somewhere in the middle?

The development of artificial intelligence has had a long and tortuous history, with the top-down and bottom-up approaches (plus everything in between) still falling short of the hype. Robots as mobile mechanisms however have recently begun to catch up with fiction, gaining complete autonomy in both two- and four-legged varieties. Humanoid robots and their three principal behavioural laws have been popularised since 1950 via Isaac Asimov's I, Robot collection of short stories. In addition, fiction has presented many instances of self-aware computers with non-mobile extensions into the physical world. In both types of entity, unexpected programming loopholes prove detrimental to their human collaborators. Prominent examples include HAL 9000 in 2001: A Space Odyssey and VIKI in the Asimov-inspired feature film called I, Robot. That these decidedly non-anthropomorphic machines have been promoted in dystopian fiction runs counter to the idea above concerning humanoid shapes - could it be instead that it is a human-like personality that is the deciding fear factor?

Although similar attitudes might be expected of a public with limited knowledge of the latest science and technology (except where given the gee-whiz or Luddite treatment by the right-of-centre tabloid press) some famous scientists and technology entrepreneurs have also expressed doubts and concerns. Stephen Hawking, who appears to be getting negative about a lot of things in his old age, has called for comprehensive controls around sentient robots and artificial intelligence in general. His fears are that we may miss something when coding safeguards, leading to our unintentional destruction. This is reminiscent of HAL 9000, who became stuck in a Moebius loop after being given instructions counter to his primary programming.

Politics and economics are also a cause for concern in this area. A few months ago, SpaceX and Tesla's Elon Musk stated that global conflict is the almost inevitable outcome of nations attempting to gain primacy in the development of AI and intelligent robots. Both Mark Zuckerberg and Bill Gates promote the opposite opinion, with the latter claiming such machines will free up more of humanity - and finances - for work that requires empathy and other complex emotional responses, such as education and care for the elderly.

All in all, there appears to be a very mixed bag of responses from sci-tech royalty. However, Musk's case may not be completely wrong: Vladimir Putin recently stated that the nation who leads AI will rule the world. Although China, the USA and India may be leading the race to develop the technology, Russia is prominent amongst the countries engaged in sophisticated industrial espionage. It may sound too much like James Bond, but clearly the dark side of international competition should not be underestimated.

There is a chance that attitudes are beginning to change in some nations, at least for those who work in the most IT-savvy professions. An online survey across the Asia-Pacific region in October and November this year compiled some interesting statistics. In New Zealand and Australia only 8% of office professionals expressed serious concern about the potential impact of AI. However, this was in stark contrast to China, where 41% of interviewees claimed they were extremely concerned. India lay between these two groups at 18%. One factor these four countries had in common was the very high interest in the use of artificial intelligence to free humans from mundane tasks, with the figures here varying from 87% to 98%.

Talking of which, if robots do take on more and more jobs, what will everyone do? Most people just aren't temperamentally suited to the teaching or caring professions, so could it be that those who previously did repetitive, low-initiative tasks will be relegated to a life of enforced leisure? This appears reminiscent of the far-future, human-descended Eloi encountered by the Time Traveller in H.G. Wells' The Time Machine; some wags might say that you only have to look at a small sample of celebrity culture and social media to see that this has already happened...

Robots were once restricted to either the factory or the cinema screen, but now they are becoming integrated into other areas of society. In June this year Dubai introduced a wheeled robot policeman onto its streets, with the intention of making one quarter of the police force equally mechanical by 2030. It seems to be the case that wherever there's the potential to replace a human with a machine, at some point soon a robot will be trialling that role.

2) Robot rights or heartless humans?

Hanson Robotics' Sophia gained international fame when Saudi Arabia made her the world's first silicon citizen. A person in her own right, Sophia is usually referred to as 'she' rather than 'it' - or at least as a 'female robot' - and one who has professed the desire to have children. But would switching her off constitute murder? So far, her general level of intelligence (as opposed to specific skills) varies widely, so she's unlikely to pass the Turing test in most subjects. One thing is for certain: for an audience used to the androids of the Westworld TV series or Blade Runner 2049, Sophia is more akin to a clunky toy.

However, what's interesting here is not so much Sophia's level of sophistication as the human response to her and other contemporary human-like machines. The British tabloid press have perhaps somewhat predictably decided that the notion of robots as individuals is 'bonkers', following appeals to give rights to sexbots - who are presumably well down the intellectual chain from the cutting edge of Sophia. However, researchers at the Massachusetts Institute of Technology and officers in the US military have shown aversion to causing damage to their robots, which in the case of the latter was termed 'inhumane'. This is thought-provoking since the army's tracked robot in question bore far greater resemblance to WALL-E than to a human being.

A few months ago I attended a talk given by New Zealand company Soul Machines, which featured a real-time chat with Rachel, one of their 'emotionally intelligent digital humans'. Admittedly Rachel is entirely virtual, but her ability to respond to words (both the tone in which they are said and their meaning) as well as to physical and facial gestures presented an uncanny facsimile of human behaviour. Rachel is a later version of the AI software that was first showcased in BabyX, who easily generated feelings of sympathy when she became distraught. BabyX is perhaps the first proof that we are well on the way to creating a real-life version of David, the child android in Spielberg's A.I. Artificial Intelligence; robots may soon be able to generate powerful, positive emotions in us.

Whilst Soul Machines' work is entirely virtual, the mechanical shell of Sophia and other less intelligent bipedal robots shows that the physical problem of subtle, independent movement has been almost solved. This begs the question, when Soul Machines' 'computational model of consciousness' is fully realised, will we have any choice but to extend human rights to them, regardless of whether these entities have mechanical bodies or only exist on a computer screen?

To some extent, Philip K. Dick's intention in Do Androids Dream of Electric Sheep? to show that robots will always be inferior to humans due to their facsimile emotions was reversed by Blade Runner and its sequel. Despite their actions, we felt sorry for the replicants since although they were capable of both rational thought and human-like feelings, they were treated as slaves. The Blade Runner films, along with the Cylons of the Battlestar Galactica reboot, suggest that it is in our best interest to discuss robot rights sooner rather than later, both to prevent the return of slavery (albeit of a non-organic variety) and to limit a prospective AI revolution. It might sound glib, but any overly-rational self-aware machine might consider itself the second-hand product of natural selection and therefore the successor of humanity. If that is the case, then what does one do with an inferior predecessor that is holding it back from its true potential?

One thing for certain is that AI robot research is unlikely to be slowing down any time soon. China is thought to be on the verge of catching up with the USA whilst an Accenture report last year suggested that within the next two decades the implementation of such research could add hundreds of billions of dollars to the economies of participating nations. Perhaps for peace of mind AI manufacturers should follow the suggestion of a European Union draft report from May 2016, which recommended an opt-out mechanism, a euphemistic name for a kill switch, to be installed in all self-aware entities. What with human fallibility and all, isn't there a slight chance that a loophole could be found in Asimov's Three Laws of Robotics, after which we find out if we have created partners or successors..?

Tuesday 28 November 2017

Research without borders: why international cooperation is good for STEM

I've just finished reading Bryan Sykes' (okay, I know he's a bit controversial) The Seven Daughters of Eve, about the development of mitochondrial DNA research for population genetics. One chapter mentioned Dr Sykes' discovery of the parallel work of Hans-Jürgen Bandelt, whose Mathematics Genealogy Project provided a structure diagram perfectly suited to explaining Sykes' own evolutionary branching results. This discovery occurred largely by chance, suggesting that small research groups must rely either on serendipity or on knowledge of the latest professional papers in order to find other teams whose work might be useful.

This implies that the more international the character of scientific and technological research, the more likely there will be such fortuitous occurrences. Britain's tortuous path out of the European Union has led various organisations on both sides of the Channel to claim that this can only damage British STEM research. The Francis Crick Institute, a London-based biomedical research centre that opened last year, has staff originating from over seventy nations. This size and type of establishment cannot possibly rely on being supplied with researchers from just one nation. Yet EU scientists resident in Britain have felt 'less welcome' since the Brexit referendum, implying a potential loss of expertise in the event of a mass withdrawal.

In recent years, European Union research donations to the UK have exceeded Britain's own contributions by £3 billion, meaning that the additional £300 million newly announced for research and development over the coming four years is only ten percent of what the EU has provided - and the UK Government is clearly looking to the private sector to make up the shortfall. It should also be recognised that although there are high numbers of non-British nationals working in Britain's STEM sector, the country also has a fair number of its own STEM professionals working overseas in EU nations.

The United Kingdom is home to highly expensive, long-term projects that require overseas funding and expertise, including the Oxfordshire-based Joint European Torus nuclear fusion facility. British funding and staff also contribute to numerous big-budget international projects, from the EU-driven Copernicus Earth observation satellite programme to the non-EU CERN. The latter is best-known for the Large Hadron Collider, the occasional research home of physicist and media star Brian Cox (how does he find the time?) and involves twenty-two key nations plus researchers from more than eighty other countries. Despite the intention to stay involved in at least the non-EU projects, surveys suggest that post-Brexit there will be greater numbers of British STEM professionals moving abroad. Indeed, in the past year some American institutions have actively pursued the notion of recruiting more British scientists and engineers.

Of course, the UK is far from unique in being involved in so many projects requiring international cooperation. Thirty nations are collaborating on the US-based Deep Underground Neutrino Experiment (DUNE); the recently-successful Laser Interferometer Gravitational-Wave Observatory (LIGO) involves staff from eighteen countries; and the Square Kilometre Array radio telescope project utilises researchers of more than twenty nationalities. Although the USA has a large population when compared to European nations, one report from 2004 states that approaching half of US physicists were born overseas. Clearly, these projects are deeply indebted to non-nationals.

It isn't just STEM professionals that rely on journeying cross-border, either. Foreign science and technology students make up considerable percentages in some developed countries: in recent years, over 25% of the USA's STEM graduate students and even higher numbers of its master's degree and doctorate students were not born there. Canada, Australia, New Zealand and several European countries have similar statistics, with Indian and Chinese students making up a large proportion of those studying abroad.

As a small nation with severely limited resources for research, New Zealand does extremely well out of the financial contributions from foreign students. Each PhD student spends an average of NZ$175,000 on fees and living costs, never mind additional revenue from the likes of family holidays, so clearly the economics alone make sense. Non-nationals can also introduce new perspectives and different approaches, potentially lessening inflexibility due to cultural mind sets. In recent years, two New Zealand-based scientists, microbiologist Dr Siouxsie Wiles and nanotechnologist Dr Michelle Dickinson (A.K.A. Nanogirl) have risen to prominence thanks to their fantastic science communication work, including with children. Both were born in the UK, but New Zealand sci-comm would be substantially poorer without their efforts. Could it be that their sense of perspective homed in on a need that locally-raised scientists failed to recognise?

This combination of open borders for STEM professionals and international collaboration on expensive projects proves, if anything, that science cannot be separated from society as a whole. Publicly-funded research requires not only a government willing to see beyond its short-term spell in office but a level of state education that satisfies the general populace as to why public money should be granted for such undertakings. Whilst I have previously discussed the issues surrounding the use of state funding for mega-budget research with no obvious practical application, the merits of each project should still be discussed on an individual basis. In addition, as a rule of thumb it seems that the larger the project, the higher the percentage of non-nationals required to staff it.

The anti-Brexit views of prominent British scientists such as Brian Cox and the Astronomer Royal, Lord Rees of Ludlow, are well known. Let's just hope that the rising xenophobia and anti-immigration feeling that led to Brexit doesn't turn it into a 'brain exit'. There's been enough of that already and no nation - not even the USA - has enough brain power or funding to go it alone on the projects that really need prompt attention (in case you're in any doubt, alternative energy sources and climate change mitigation spring to mind). Shortly before the Brexit referendum, Professor Stephen Hawking said: "Gone are the days when we could stand on our own, against the world. We need to be part of a larger group of nations." Well if that's not obvious, I don't know what is!

Tuesday 29 August 2017

Cerebral celebrities: do superstar scientists harm science?

One of my earliest blog posts concerned the media circus surrounding two of the most famous scientists alive today: British physicist Stephen Hawking and his compatriot the evolutionary biologist Richard Dawkins. In addition to their scientific output, they are known in public circles thanks to a combination of their general readership books, television documentaries and charismatic personalities. The question has to be asked though, how much of their reputation is due to their being easily-caricatured and therefore media-friendly characters rather than what they have contributed to human knowledge?

Social media has done much to democratise the publication of material from a far wider range of authors than previously possible, but the current generation of scientific superstars who have arisen in the intervening eight years appear to be party to a feedback loop that places personality as the primary reason for their media success. As a result, are science heroes such as Neil deGrasse Tyson and Brian Cox merely adding the epithet 'cool' to STEM disciplines as they sit alongside the latest crop of media and sports stars? With their ability to fill arenas usually reserved for pop concerts or sports events, these scientists are seemingly known far and wide for who they are as much as for what they have achieved. It might seem counterintuitive to think that famous scientists and mathematicians could be damaging STEM, but I'd like to put forward five ways by which this could be occurring:

1: Hype and gossip

If fans of famous scientists spend their time reading, liking and commenting at similarly trivial levels, they may miss important material from other, less famous sources. A recent example that caught my eye was a tweet by British astrophysicist and presenter Brian Cox, containing a photograph of two swans he labelled 'Donald' and 'Boris'. I assume this was a reference to the current US president and British foreign secretary, but with over a thousand 'likes' by the time I saw it, I wonder what other, more serious, STEM-related stories might have been missed in the rapid ebb and flow of social media.

As you would expect with popular culture fandom the science celebrities' material aimed at a general audience receives the lion's share of attention, leaving the vast majority of STEM popularisations under-recognised. Although social media has exacerbated this, the phenomenon does pre-date it. For example, Stephen Hawking's A Brief History of Time was first published in 1988, the same year as Timothy Ferris's Coming of Age in the Milky Way, a rather more detailed approach to similar material that was left overshadowed by its far more famous competitor. There is also the danger that celebrities with a non-science background might try to cash in on the current appeal of science and write poor-quality popularisations. If you consider this unlikely, you should bear in mind that there are already numerous examples of extremely dubious health, diet and nutrition books written by pop artists and movie stars. If scientists can be famous, perhaps the famous will play at being science writers.

Another result of this media hubbub is that in order to be heard, some scientists may be guilty of the very hype usually blamed on the journalists who publicise their discoveries. Whether to guarantee attention or to promote themselves in order to gain further funding, an Australian research team recently came under fire for discussing a medical breakthrough as if a treatment was imminent, despite having so far only experimented on mice! This sort of hyperbole both damages the integrity of science in the public eye and can lead to such dangerous outcomes as the MMR scandal, which resulted in large numbers of children not being immunised.

2: Hero worship

The worship of movie stars and pop music artists is nothing new and the adulation accorded them reminds me of the not dissimilar veneration shown to earlier generations of secular and religious leaders. The danger here then is for impressionable fans to accept the words of celebrity scientists as if they were gospel and so refrain from any form of critical analysis. When I attended an evening with astrophysicist Neil deGrasse Tyson last month I was astonished to hear some fundamental misunderstandings of science from members of the public. It seemed as if Dr Tyson had gained a personality cult whose members hung on his every utterance but frequently failed to understand the wider context or key issues regarding the practice of science. By transferring hero worship from one form of human activity to another, the very basis - and differentiation - that delineates the scientific enterprise may be undermined.

3: Amplifying errors

Let's face it, scientists are human and make mistakes. The problem is that if the majority of a celebrity scientist's fan base are prepared to lap up every statement, then the lack of critical analysis can generate further issues. There are some appalling gaffes in the television documentaries and popular books of such luminaries as Sir David Attenborough (as previously discussed) and even superstar Brian Cox is not immune: his 2014 book Human Universe described lunar temperatures dropping below -2000 degrees Celsius! Such basic errors imply that the material is ghost-written or edited by authors with little scientific knowledge and no time for fact checking. Of course this may embarrass the science celebrity in front of their potentially jealous colleagues, but more importantly can serve as ammunition for politicians, industrialists and pseudo-scientists in their battles to persuade the public of the validity of their own pet theories - post-truth will out, and all that nonsense.

4: Star attitude

With celebrity status comes the trappings of success, most usually defined as a luxury lifestyle. A recent online discussion here in New Zealand concerned the high cost of tickets for events featuring Neil deGrasse Tyson, Brian Greene, David Attenborough, Jane Goodall and, later this year, Brian Cox. Those for Auckland-based events were more expensive than tickets to see Kiwi pop star Lorde and similar in price to those for rugby matches between the All Blacks and the British Lions. By making the tickets this expensive there is little chance of attracting new fans; it seems to be more a case of preaching to the converted.

Surely it doesn't have to be this way: the evolutionary biologist Beth Shapiro, author of How to Clone a Mammoth, gave an excellent free illustrated talk at Auckland Museum a year ago. It seems odd that the evening with Dr Tyson, for example, consisting of just himself, interviewer Michelle Dickinson (A.K.A. Nanogirl) and a large screen, cost approximately double that of the Walking with Dinosaurs Arena event at the same venue two years earlier, which utilised US$20 million worth of animatronic and puppet life-sized dinosaurs.

Dr Tyson claims that by having celebrity interviewees on his StarTalk series he can reach a wider audience, but clearly this approach is not feasible when his tour prices are so high. At least Dr Goodall's profits went into her conservation charity, but if you consider that Dr Tyson had an audience of probably over 8000 in Auckland alone, paying between NZ$95 and NZ$349 (except for the NZ$55 student tickets), you have to wonder where all this money goes: is he collecting 'billions and billions' of fancy waistcoats? It doesn't look as if this trend will stop soon either, as Bill Nye (The Science Guy) has just announced that he will be touring Australia later this year, with tickets starting at around NZ$77.
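As a rough back-of-envelope check, using only the audience size and ticket prices quoted above (the split across price tiers is unknown, so only the lower and upper bounds can be computed):

```python
# Back-of-envelope bounds on gross ticket revenue for the Auckland event.
# Audience size and prices are the figures quoted in the post; the actual mix
# of price tiers is unknown, so only bounds are shown.
audience = 8000
low_price, high_price = 95, 349  # NZ$ per ticket

print(f"Lower bound: NZ${audience * low_price:,}")   # NZ$760,000
print(f"Upper bound: NZ${audience * high_price:,}")  # NZ$2,792,000
```

Even the lower bound dwarfs the takings of most free or museum-hosted science talks, which is the point of the comparison above.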

5: Skewing the statistics

The high profiles of sci-comm royalty and their usually cheery demeanour imply that all is well in the field of scientific research, with adequate funding for important projects. However, even a quick perusal of less well-known STEM professionals on social media proves that this is not the case. An example that came to my attention back in May was that of the University of Auckland microbiologist Dr Siouxsie Wiles, who had to resort to crowdfunding for her research into fungi-based antibiotics after five consecutive funding submissions were rejected. Meanwhile, Brian Cox's connection to the Large Hadron Collider gives the impression that even such blue-sky research as the LHC can be guaranteed enormous budgets.

As much as I'd like to thank these science superstars for promoting science, technology and mathematics, I can't quite shake the feeling that their cult status is too centred on them rather than on the scientific enterprise as a whole. Now more than ever science needs a sympathetic ear from the public, but this should be brought about by a massive programme to educate the public (they are the taxpayers, after all) as to the benefits of such costly schemes as nuclear fusion reactor design and climate change research. Simply treating celebrity scientists in the same way as movie stars and pop idols won't help an area of humanity under siege from so many influential political and industrial leaders with their own private agendas. We simply mustn't allow such people to misuse the discipline that has raised us from apemen to spacemen.

Monday 30 January 2017

Hold the back page: 5 reasons science journalism can be bad for science

Although there's an extremely mixed quality to television science documentaries these days (with the Discovery Channel firmly at the nadir) - and in stark contrast to the excellent range of international radio programmes available - the popular press bombards us daily with news articles discussing science and technology. Both traditional print and online publications reach an enormous percentage of the public who would never otherwise read stories connected to STEM (Science, Technology, Engineering and Mathematics). Therefore these delivery channels and the journalists who write material for them face an immense challenge: how to make science accessible and comprehensible as well as interesting. How well they are doing can be judged by the general public's attitude towards the subject...which is currently not that great.

In November 2016 Oxford Dictionaries stated that their Word of the Year was 'post-truth', which refers to 'circumstances in which objective facts are less influential...than appeals to emotion and personal belief.' Clearly, this is the antithesis of how good science should proceed. Combine this with the enormous output of social media, which gives the impression that anyone's opinion is as valid as a trained professional's, and you can see why things aren't going well for critical thought in general. Did you know that a Google search for 'flat earth' generates over 12 million results? What a waste of everyone's time and data storage! As they said about Brexit: pride and prejudice has overcome sense and sensibility. Here then are five reasons why popular science journalism, mostly covering general news publications but occasionally dipping into specialist magazines too, can be detrimental to the public's attitude towards science.

1) Most science writers on daily newspapers or non-specialist periodicals don't have any formal science training. Evolutionary biologist Stephen Jay Gould once pointed out that journalists have a tendency to read summaries rather than full reports or scientific papers, thus distancing themselves from the original material before they even write about it. The problem is that an approach that works for the humanities may not be suitable for science stories. We're not critiquing movies or gourmet cuisine, folks!

As a humorous example of where a lack of research has led to a prevalent error, a 1984 April Fools' Day spoof research paper by American journalism student Diana ben-Aaron was published in 350 newspapers before the original publisher admitted that Retrobreeding the Woolly Mammoth was phoney. One of the facts that ben-Aaron made up (and which still remains unknown) is that the woolly mammoth had fifty-eight chromosomes. This number is now ubiquitous across the World Wide Web from Wikipedia to the Washington Post, although I'm pleased to see that the National Geographic magazine website correctly states the situation. Clearly, anyone who follows the President Trump approach that "All I know is what's on the Internet" isn't going to get the correct answer.

This isn't to say that even a scientifically-trained journalist would understand stories from all sectors: the pace of advance in some fields is so fast that no-one can afford the time to maintain a sophisticated understanding of areas beyond their own specialism. But it isn't just particular research that is a concern: general concepts and methodology can be ignored or misunderstood, whilst a lack of mathematical training can easily restrict an understanding of how statistics work, with error bars and levels of significance often overlooked or misrepresented.

Related to this ambiguity and margin for error, journalists love to give definitive explanations, which is where there can be serious issues. Science is a way of finding ever more accurate explanations for the universe, not a collection of unchangeable laws (excepting the Second Law of Thermodynamics, of course). Therefore today's breakthrough may be reversed by tomorrow's report of sample contamination, unrepeatable results or other failure. It's rarely mentioned that scientists are willing to live with uncertainty - it's a key component of the scientific enterprise, after all. Yet in the event of an about-turn or setback it's usually the scientists involved who get blamed, with accusations ranging from wasting public money to taking funding from something more worthwhile. Meanwhile, the journalist who wrote the original distorted account rarely gets held responsible. As for one-sided scare stories such as nicknaming GM crops 'Frankenfoods', these lower what should be a serious public debate to an infantile level from which it is extremely difficult to recover.

2) How many science documentaries have you seen where the narrator says something along the lines of “and then the scientists found something that stunned them”? Such is the nature of story-making today, where audiences are deemed to have such short attention spans that every five minutes they require either a summary of the last ten minutes or a shock announcement. This week I saw a chart about bias within major news organisations: both CNN and USA Today were labelled as 'sensational or clickbait'. I've repeatedly read about scientists who were prompted by journalists towards making a controversial or sensational quote, which if published would distort their work but provide a juicy headline. It seems that limiting hyperbole is a critical skill for any scientist being interviewed.

Journalists don't owe invertebrate paleontologists, for example, a free lunch, but there is a lot of good professional and occasionally amateur science being conducted away from the spotlight. Concentrating on the more controversial areas of research does little to improve science in the public's eye. Even reporting of such abstract (but mega-budget) experiments as the Large Hadron Collider seems to be based around headlines about 'The God Particle' (nearly six million results on Google), A.K.A. the Higgs boson (fewer than two million results). Next thing, they'll be nicknaming the LHC 'The Hammer of Thor' or something equally cretinous. Although come to think of it…

The World Wide Web is far worse than printed news, with shock headlines ('It Was The Most XXX Ever Found - "It Blew My Mind," Expert Says') and over-inflated summaries that would make even lowbrow tabloids blush. Even specialist periodicals are not immune to the syndrome, with New Scientist magazine being particularly at fault. In 2009 it published the silly headline 'Darwin was wrong' which drew the ire of many biologists whilst providing a new form of ammunition for creationists. In 2012 their special 'The God Issue' turned out to contain less than fifteen pages on religion - but then it is meant to be a popular science periodical! In this vein the Ig Nobels seem to get more attention than the Nobel Prizes as journalists look for a quirky man-bites-dog angle to convince the public that a science story is worth reading.

3) Talking of which, journalists want to reach the widest possible audience, and therefore looking for a human angle is a prominent way to lure in readers. The two most recent Brian Cox television documentary series, Human Universe and Forces of Nature, have concentrated on stories around families and children, with the science elements being interwoven almost effortlessly into the narrative.

In print and digital formats this bias means that the focus is frequently on articles that might directly affect humanity, especially medical, agricultural and environmental stories. This puts an unbalanced emphasis on certain areas of science and technology, leaving other specialisations largely unreported. This might not appear bad in itself, but lack of visibility can cause difficulties when it comes to maintaining public funding or attracting private philanthropy for less commercial and/or more theoretical science projects.

Another method used to make science more palatable is to concentrate on individual geniuses rather than team efforts. I assume only a very small proportion of the public know that theoretical physicists do their best work before they are thirty years old, yet the seventy-five-year-old Stephen Hawking (whose name is now a trademark, no less) is quoted almost every week as if he were Moses. He's well worth listening to but even so, Professor Hawking seems to have become a spokesperson for almost any aspect of science the media want a quote on.

4) With competition tougher than ever thanks to social media and smartphone photography, journalists face ever tighter deadlines to publish before anyone else. This can obviously lead to a drop in accuracy, with even basic fact-checking sometimes lacking. For example, a year or two ago I tweeted the British paleopathologist and presenter Dr Alice Roberts to point out that the BBC Science and Environment News web page stated humans were descended from chimpanzees! She must have contacted them fairly rapidly, as the content was corrected soon after, but if even the BBC can make such basic blunders, what hope is there for less reputable news-gathering sources? As with much of contemporary business, the mentality seems to be to get something to market as quickly as possible; if it happens to be a smartphone that frequently catches fire, we'll deal with that later. The Samsung Galaxy Note 7's recent debacle is the gadget equivalent of the BBC error: beating the opposition takes precedence over exactitude.

It's one thing to define science as striving towards ever more accurate descriptions of reality rather than a series of set-in-stone commandments, but publishing incorrect details of basic, well-established facts can only generate mistrust of journalists among both scientific professionals and members of the public who discover the mistake. Surely there's time for a little cross-checking with reference books and/or websites in order to prevent the majority of these howlers? Having said that, I find it scary that a major media organisation can commit such blunders. I wonder what the outcry would be if the BBC's Entertainment and Arts News page claimed that Jane Austen wrote Hamlet?

5) Finally, there's another explanation that has less to do with science journalists themselves and more to do with what constitutes a newsworthy story. Negativity is the key here, and science news is swept along with it. For example, the BBC Science and Environment News web page currently has three articles on climate change and animal extinctions, an expensive project technology failure, earthquake news and a pharmaceutical story. Like a lot of political reporting, coverage of STEM subjects concentrates on the bad side of the fence. Unfortunately, the dog-bites-man ordinariness of, for example, 'Project X succeeds in finding something interesting' usually precludes it from being deemed media-worthy. The ethos seems to be either find a unique angle or publish something pessimistic.

One tried and tested method to capture attention is to concentrate on scandal and error: science is just as full of problems as any other aspect of humanity. Of course it is good to examine the failure of high-tech agriculture that led to the UK's BSE 'mad cow' disease outbreaks in the 1980s and 90s, but the widespread dissemination of the supposed link between MMR and autism has caused immense damage around the world, thanks to a single report being unthinkingly conveyed as rock-hard evidence.

Bearing in mind that journalism is meant to turn a profit, perhaps we shouldn't be surprised at how misrepresented scientific research can be. It's difficult enough to find the most objective versions of reality, considering all the cognitive bias at work in these post-truth times. There are no obvious answers as to how to resolve the issue of poor-quality science reporting without delaying publication and/or employing scientifically-trained staff. The market forces that drive journalism unfortunately mean that STEM stories rarely do science justice and often promote a negative attitude among the wider public. Which is hardly what we need right now!

Wednesday 25 May 2016

From Dr Strangelove to Dr Evil: Hollywood's anti-science stance

Despite decades of hard work by the likes of Bill Nye, Stephen Hawking, Carl Sagan and Stephen Jay Gould, there is still an enormous amount of public suspicion surrounding scientists and their work. From wavering opinion concerning climate change to the negative publicity revolving around genetically-modified crops (A.K.A. 'Frankenfoods'), it seems that popular opinion of scientists isn't far above that meted out in recent years to politicians and merchant bankers.

Tabloid media cannot be solely to blame for this, although the ridiculous scaremongering stories given front page attention, frequently involving medical science, are certainly no help. Instead, I would argue that some of the blame for the public attitude to STEM (Science, Technology, Engineering and Mathematics) comes from that ubiquitous global communicator, mainstream Hollywood. So where did the world's movie capital get its ideas from?

The denigration of science and its technological applications has probably existed as long as modern science itself. Before there were films to spread the negativity, literature held a mixed opinion of the discipline. Could some of the most famous apparently anti-scientific publications from Europe have inspired Hollywood's pioneers, many of whom were European émigrés?

The third book of Jonathan Swift's Gulliver's Travels concerns the scientific elite of a floating island called Laputa. First published in 1726, during the so-called Age of Enlightenment, the book is typical of Swift's no-holds-barred approach to satire, making much use of the learning of the day. Although far more concerned with social and political issues than with any anti-scientific stance, the material is still echoed today in the popular media.

Granted, many would agree that some of the more expensive STEM research projects, such as the Large Hadron Collider, could wait until global issues concerning hunger, medicine, environmental degradation - and poverty in general - are solved, but then wealth is rarely evenly distributed. After all, the USA apparently spends twice as much on pet grooming as it does on nuclear fusion research. Isn't that bizarre in itself, given that we consider ourselves so much more rational than all other animals and that the human brain is supposedly the most complex object in the known universe? That's a pretty scary thought!

As for Mary Shelley's classic novel, whose title is evoked during criticism of GM foods, she may have been inspired by the general feeling of doom then in the air - almost literally, in fact, due to the 1815 eruption of Mount Tambora, whose volcanic dust created 1816's 'Year Without a Summer'. As an aside, the astonishingly lurid colours of J.M.W. Turner's sunsets of the period were another artistic response associated with the high-altitude volcanic aerosols.

In addition to the extremely cold, wet conditions of that year, Shelley is thought to have stayed near the original Frankenstein Castle in Germany, where alchemy and other dubious dark arts were reputed to have been practised. Add in Luigi Galvani's experiments on frogs' legs - originally performed several decades earlier but still much imitated in Shelley's time, including on human cadavers - and the novel is clearly a reflection of the widespread anxieties of its era.

With the expansion of industrial cities and their associated squalor, the Nineteenth Century saw the rise of philosophies that associated technological advances (and their scientific underpinnings) with a debasement of humanity. William Blake's description of 'satanic mills' epitomises this mode of thought, which can be seen in as diverse a range of expression as the Pre-Raphaelite Brotherhood of artists, the Arts and Crafts movement, and even the political writings of Marx and Engels. To blame the greed of the new captains of industry on science is obviously unfair, but then scientists were a far easier target. After all, the English chemist and political radical Joseph Priestley fled to the United States after an authority-sponsored mob burnt down his house in 1791.

Blake's overwrought emoting ("Science is the Tree of Death") is amongst the strongest negativity of the period, but can we blame him, considering that science was - then as now - often wrongly blamed as the root cause of the widespread destruction of nature to make way for a soulless, artificial environment? But it wasn't just the changes to society and landscape that Blake took exception to: he detested the mechanistic vision of the universe built upon the work of Galileo and Newton, believing that too much knowledge destroyed wonder and awe.

This is as subjective a viewpoint as any discussion of a work of art and can easily be rebutted, although the attitude behind it should be taken seriously. Happily, today's plethora of glossy coffee-table books on such scientifically-gleaned wonders as Hubble Space Telescope imagery shows there is still plenty to be in awe of.

Mainstream cinema frequently paints a simplistic A-versus-B picture of the world (think classic westerns or war films). But science rarely fits into such neat parcels: consider how the more accurate general theory of relativity lives alongside its Newtonian predecessor. In addition, it's very tricky to make interesting drama within a traditional narrative structure that uses scientist protagonists unless it's a disaster movie (even the likes of Jurassic Park falls within this category).

It isn't difficult to recall many negative examples of scientists in Hollywood movies, from at best those too wrapped up in their own work to notice its wider effects, to at worst insane megalomaniacs intent on either world domination or destruction. In contrast, how many sympathetic movie scientists are there?

It seems a shame that so ubiquitous a form of entertainment consistently portrays so little sympathy towards science. Even the film version of Carl Sagan's novel Contact lacked the cosmic spiritual elements of the source material, as if afraid that a combination of astrophysics and the mystical wouldn't be comprehensible to audiences (2001 syndrome, perhaps?) Science fiction films these days often seem keen to boast of their technical consultants, so what about a more sympathetic attitude to the practitioners of science itself? After all, most scientists don't live with their private armies in secret headquarters, planning to take over the world...

Monday 29 October 2012

From geek to guru: can professional scientists be successful expositors (and maintain careers in both fields)?

The recent BBC TV series Orbit: Earth's Extraordinary Journey gave me food for thought: although presenter Helen Czerski is a professional physicist, she was burdened with a co-presenter who has no formal connection with science, namely Kate Humble. You have to ask: why was Humble needed at all? I'll grant that there could have been a logistics issue, namely getting all the locations filmed in the right season within one year, but if that was the case, why not use another scientist, perhaps from a different discipline? Were the producers afraid a brace of scientists would put the public off the series?

The old days of senior figures pontificating as if in a university lecture theatre are long gone, with blackboard diagrams and scruffy hair replaced by presenters who are keen to prove their non-geek status via participation in what essentially amount to danger sports in the name of illustrating examples. Okay, so the old style could be very dry and hardly likely to be inspirational to the non-converted, but did Orbit really need a non-scientist when Helen Czerski (who is hardly new to television presenting) can deliver to camera whilst skydiving? In addition, there are some female presenters, a prominent British example being Alice Roberts, who have been allowed to solely present several excellent series, albeit involving science and humanities crossovers (and why not?)

But going back to Kate Humble: some TV presenters seem to cover such a range of subject matter that it makes you wonder if they are just hired faces with no real interest in (and/or knowledge of) what they are espousing: "just read the cue cards convincingly, please!" Richard Hammond - presenter of the light entertainment show Top Gear and the (literally) explosive Brainiac: Science Abuse - has flirted with more in-depth material in Richard Hammond's Journey To The Centre Of The Planet, Richard Hammond's Journey To The Bottom Of The Ocean and Richard Hammond's Invisible Worlds. Note the inclusion of his name in the titles - just in case you weren't aware who he is. Indeed, his Top Gear co-presenter James May seems to be genre-hopping in a similar vein, with James May's Big Ideas, James May's Things You Need to Know, James May on the Moon and James May at the Edge of Space amongst others, again providing a hint as to who is fronting the programmes. Could it be that public opinion of scientists is poor enough - part geek, part Dr Strangelove - to force producers to employ non-scientist presenters with a well-established TV image, even if that image largely consists of racing cars?

Popular science books from Cosmos to A Brief History of Time

Having said that, science professionals aren't infallible communicators: Sir David Attenborough, a natural sciences graduate and fossil collector since childhood, made an astonishing howler in his otherwise excellent BBC documentary First Life. During an episode that ironically included Richard 'Mr Trilobite' Fortey himself, Sir David described these organisms as being so named due to their head/body/tail configuration. In fact, the group's name stems, somewhat obviously, from their three lobes - the central and two lateral lobes of their body plan. It was an astounding slip-up and gave me food for thought as to whether anyone on these series ever double-checks the factual content, just to make sure it wasn't copied off the back of a cereal packet.

Another possible reason for using non-science presenters is that in order to make a programme memorable, producers aim to differentiate their expositors as much as possible. I've already discussed the merits of two of the world's best known scientists, Stephen Hawking and Richard Dawkins, and the unique attributes they bring to their programmes, even if in Dawkins' case this revolves around his attitude to anyone who has an interest in any form of unproven belief. I wonder if he extends his disapprobation to string theorists?

What is interesting is that whereas the previous generation of popular science expositors achieved fame through their theories and eventually bestselling popularisations, the current crop, of whom Helen Czerski is an example, have become well known directly through television appearances. That's not to say that the majority of people who have heard of Stephen Hawking and Richard Dawkins have read The Selfish Gene or A Brief History of Time. After all, the former was first published in 1976 and achieved renown in academic circles long before the public knew of Dawkins. Some estimates suggest as few as 1% of the ten million or so buyers of the latter have actually read it in its entirety; indeed, there has been something of a small industry in readers' companions, not to mention Hawking's own A Briefer History of Time, intended to convey some of the more difficult elements of the original book in easier-to-digest form. In addition, the US newspaper Investor's Business Daily published an article in 2009 implying they thought Hawking was an American! So can fame be defined solely as being able to match a face to a name?

In the case of Richard Dawkins it could be argued that he had a remit as a professional science communicator, at least from 1995 to 2008, due to his position during this time as the first Simonyi Professor for the Public Understanding of Science. What about other scientists who have achieved some degree of recognition outside their fields of study thanks to effective science communication? Theoretical physicist Michio Kaku has appeared in over fifty documentaries and counting and has written several bestselling popular science books, whilst if you want a sound bite on dinosaurs, Dale Russell is your palaeontologist. But it's difficult to think of any one scientist capable of inspiring the public as much as Carl Sagan did post-Cosmos. Sagan, though, was the antithesis of the shy and retiring scientist stereotype and faced peer accusations of deliberately cultivating fame (and of course, fortune) to the extent of jumping on scientific bandwagons solely in order to gain popularity. As a result, at the height of his popularity and with a Pulitzer Prize-winning book behind him, Sagan failed to gain entry to the US National Academy of Sciences. It could be argued that no one has taken his place because they don't want their scientific achievements belittled or ignored by the senior science establishment: much better to be seen as a scientist with a sideline in presenting than as a communicator with a science background. So in this celebrity-obsessed age, is it better to be a scientific shrinking violet?

At this point you might have noticed that I've missed out Brian Cox (or Professor Brian Cox, as it states on the cover of his books, just in case you thought he was an ex-keyboard player who had somehow managed to wangle his way into CERN). If anyone could wish to be Sagan's heir - and Cox admits to Sagan being a key inspiration - then surely he is that scientist. With a recent guest appearance as himself on Doctor Who and an action-hero credibility, his TV series having featured him flying in a vintage supersonic Lightning jet and quad-biking across the desert, Cox is an informal, seemingly non-authoritarian version of Sagan. A key question is whether he will become an egotistical prima donna and find himself divorced from the Large Hadron Collider in return for lucrative TV and tie-in book deals.

Of course, you can't have science without communication. After all, what's the opposite of popular science: unpopular science? The alternative to professionals enthusing about their subject is to have a mouth-for-hire, however well presented, delineating material they neither understand nor care about. And considering the power that non-thinking celebrities appear to wield, it's vital that science gets the best communicators it can, recruited from within its own ranks. The alternative can clearly be seen in last year's celebrity suggestion that the oceans are salty due to whale sperm. Aargh!

Monday 30 July 2012

Buy Jupiter: the commercialisation of outer space

I recently saw a billboard for the Samsung Galaxy SIII advertising a competition to win a "trip to space", in the form of a suborbital hop aboard a Virgin Galactic SpaceShipTwo. This phrase strikes me as highly interesting: a trip to space, not into space, as if the destination were just another beach holiday resort. The accompanying website uses the same wording, so clearly the choice of words wasn't caused by space issues (that's space for the text, not space as in outer). Despite there having been fewer than a dozen space tourists to date, is space travel now considered routine and the rest of the universe ripe for commercial gain, as per the Pan Am shuttle and Hilton space station in 2001: A Space Odyssey? Or is this all somewhat premature, with the hype firmly ahead of the reality? After all, the first fee-paying space tourist, Dennis Tito, launched only eleven years ago, in 2001.

Vodafone is only the second company after Guinness Breweries to offer space travel prizes, although fiction was way ahead of the game: in Arthur C. Clarke's 1952 children's novel Islands in the Sky the hero manages a trip into low Earth orbit thanks to a competition loophole.  However, the next decade could prove the turning point. Virgin Galactic already have over 500 ticket-holders whilst SpaceX, developer of the first commercial orbital craft - the unmanned Dragon cargo ship - plan to build a manned version that could reduce orbital seat costs by about 60%.

If anything, NASA is pushing such projects via its Commercial Orbital Transportation Services (COTS) programme, including the aim of using for-profit services for the regular supply of cargo and crew to the International Space Station (ISS). The intention is presumably for NASA to concentrate on research and development rather than routine operations, but strong opposition to such commercialisation comes from an unusual direction: former NASA astronauts, including Apollo pioneers Neil Armstrong and Eugene Cernan, deem the COTS programme a threat to US astronautic supremacy. This seems to be more an issue of patriotism and politics than a consideration of technological or scientific importance. With China set to overtake the USA in scientific output next year and talk of a three-crew temporary Chinese space station within four years, the Eclipse of the West has already spread beyond the atmosphere. Then again, weren't pre-Shuttle era NASA projects, like their Soviet counterparts, primarily driven by politics, prestige and military ambitions, with technological advances a necessary by-product and science very much of secondary importance?

Commerce in space could probably be said to have begun with the first communications satellite, Telstar 1, in 1962. The big change for this decade is the ability to launch ordinary people rather than trained specialists into space, although as I have mentioned before, the tourist jaunts planned by Virgin Galactic hardly go where no one has gone before. The fundamental difference is that such trips are deemed relatively safe undertakings, even if the ticket costs are several orders of magnitude greater than any terrestrial holiday. A trip on board SpaceShipTwo is currently priced at US$200,000, whilst a visit to the International Space Station will set you back one hundred times that amount. This is clearly somewhat closer to the luxury flying boats of the pre-jet era than any modern package tour.

What is almost certain is that despite Virgin Galactic's assessment of the risk as being akin to that of 1920s airliners, very few people know enough of aviation history's safety record to make this statistic meaningful. After all, two of the five Space Shuttle orbiters were lost - five also being the number intended for the SpaceShipTwo fleet. Although Virgin Galactic plays the simplicity card for their design - i.e. the fewer the components, the less the chance of something going wrong - it should be remembered that the Columbia and Challenger shuttles were lost due to previously known and identified problems with the external fuel tank and solid rocket boosters respectively. In other words, when there is a known technical issue but the risk is considered justifiable, human error enters the equation.

In addition, human error isn't restricted to the engineers and pilots: problems could arise from anything from passenger illness (about half of all astronauts get spacesick, suffering headaches and nausea for up to several days after launch) to the sort of disruptive behaviour I have witnessed on airliners. Whether the loss of business tycoons or celebrities would bring more attention to the dangers of space travel remains to be seen. Unfortunately, the increase in the number and type of spacecraft means it is almost certainly a case of when, not if.

Planet Saturn via a Skywatcher telescope

Location location location (via my Skywatcher 130PM)

But if fifteen minutes of freefall seems a sublime experience, there are also some ridiculous space-orientated ventures, if the ludicrous claims found on certain websites are anything to go by. Although the 1967 Outer Space Treaty does not allow land on other bodies to be owned by a nation state, companies such as Lunar Embassy have sold plots on the Moon to over 3 million customers. It is also possible to buy acres on Mars and Venus, even if the chance of doing anything with them is somewhat limited. I assume most customers treat their land rights as a novelty item, about as useful as, say, a pet rock, but with some companies issuing mineral-rights deeds for regions of other planets, could this have serious implications in the future? Right now it might seem like a joke, but as the Earth's resources dwindle and fossil fuels run low, could private companies race to exploit extraterrestrial resources such as lunar helium-3?

Various cranks/forward thinkers (delete as appropriate) have applied to buy other planets since at least the 1930s, but with COTS supporting private aerospace initiatives such as unmanned lunar landers there is at least the potential for legal wrangling over mining rights throughout the solar system. The US-based company Planetary Resources has announced its intention to launch robot mining expeditions to some of the 1,500 or so most accessible near-Earth asteroids, missions that are the technological equivalent of a lunar return mission.

But if there are enough chunks of space rock to go round, what about the unique resources that could rapidly become as crowded as low Earth orbit? For example, the Earth-Moon system's five Lagrange points (only two of which are genuinely gravitationally stable) are useful stations for scientific missions, whilst geosynchronous orbit is vital for commercial communication satellites. So far, national governments have treated outer space like Antarctica, but theoretically a private company could cause trouble if the law fails to keep up with the technology, in much the same way that the internet has been a happy harbour for media pirates.

Stephen Hawking once said "To confine our attention to terrestrial matters would be to limit the human spirit". Then again, no-one should run before they can walk, never mind fly. We've got a long way to go before we reach the giddy heights of wheel-shaped Hiltons, but as resources dwindle and our population soars, at some point it will presumably become a necessity to undertake commercial space ventures, rather than just move Monte Carlo into orbit. Now, where's the best investment going to be: an acre of Mars or two on the Moon?

Sunday 3 January 2010

What's in a label? How words shape reality

With the start of a new year it seems appropriate to look at how our perception of the universe is created via language - after all, there's no position in space marking an orbital starting point. We grow up with a notion of reality that is largely defined by convenience and by historical accidents embedded in our language and therefore our thought patterns (and vice versa). For at least the last six hundred years many societies have called our planet Earth, whilst of course Ocean would be more appropriate. Whilst this is just an obvious chauvinism for a land-based species, there are other terms that owe everything to history. We count in base ten, position zero longitude through the Greenwich Meridian and usually show the Earth from one perspective, despite there not being an arrow in our galaxy stating 'this way up' (but then, had the Ancient Egyptians' view prevailed, Australia and New Zealand would be in the Northern Hemisphere).

So how far can we go with constructs? Our calendar is an archaic, sub-optimal mish-mash: the Roman year originally began in March, which is why the last four months of our year are still inaccurately named seven through ten. The changeover from the Julian to the Gregorian calendar varied from nation to nation, meaning well-known events such as the birth of George Washington and the Bolshevik Revolution have several dates depending on the country defining that piece of history. As for the majority of humans agreeing that we are now in AD 2010: thanks to a fifteen-hundred-year-old miscalculation by Dionysius Exiguus, our current year should really be at least AD 2014 if we accept that an historical figure called Jesus of Nazareth was born during the lifetime of Herod the Great, who died in 4 BC. It appears that even the fundamentals that guide us through life are subjective at the very least, if not far from accurate in many cases.

The philosopher of science Thomas Kuhn argued that all scientific research is a product of the culture of the scientists engaged upon it; so whilst we might argue that Galileo was the first scientist in a strictly modern use of the word, can there be a definitive boundary between the quasi-mystical thought processes of Copernicus and Kepler (and even Newton) and those of the modern exponents typified by Einstein and Hawking? Whilst we would like to believe in a notion of pure objectivity, scientists are just as subjective as everyone else and their theories are therefore built on assumptions directly related to history, both cultural and biological.

We use labels to comfort ourselves, even boost our egos, via unconscious assumptions that are gradually looking more ridiculous as we delve ever deeper into the mysteries of creation. For example, the past sixty-five million years has been a period frequently named 'the Age of Mammals'. Yet as Stephen Jay Gould was fond of pointing out, most of the world's biomass is microbial and we macroscopic life forms are comparative newcomers, restricted to a far reduced range of environments compared to bacteria, protists and other small-scale organisms.

Despite such sense-expanding tools as infrared telescopes and electron microscopes, we still process sensory input and use primarily audio-visual output to define scientific theories and methodology. We are in thrall to the languages we use to define our thoughts, both conversational language and mathematics. Although the lingua franca of science has varied over the centuries, all languages from Latin to English have one thing in common: they are used to tell us stories. At a basic level, the history of science is riddled with fables and apocrypha, from Newton being hit by an apple (and inventing the reflecting telescope) to Galileo dropping weights from the Leaning Tower of Pisa, even Columbus believing the world was a sphere (he didn't - he thought it was pear-shaped!)

So if scientific history cannot be relied upon, what about the hypotheses and theories themselves? In the words of John Gribbin, we construct 'Just So' stories to create a comprehensible version of reality. Presumably this reliance on metaphor will only increase as our knowledge becomes further divorced from everyday experience while our technology fails to keep pace in confirming new theories; for example, it is far from likely that we will ever be able to directly view a superstring.

In addition, language doesn't just restrict our ideas: if a term has a scientific sense differing from its vernacular meaning, problems frequently arise. A classic example would be 'quantum leap', which to most people means an enormous step forward but to physicists is an electron's minuscule change of energy level. However, even personal computer pioneer Sir Clive Sinclair used the term in its former sense for his 1984 Quantum Leap microcomputer (at least I assume he did, although QL owners may disagree...)

Speaking of which, perhaps when we finally build (or machines build for us) computers capable of true artificial intelligence, new ways of exploring the universe not tied down to conventional linguistic-based thought patterns may arise. Then again, since we will be the parents of these machines, this may not be feasible. As one of Terry Pratchett's characters stated: "I think perhaps the most important problem is that we are trying to understand the fundamental workings of the universe via a language devised for telling one another where the best fruit is." But all things considered, we haven't done that badly so far.


Sunday 6 December 2009

Hawking and Dawkins: the dynamic duo

There was a time not so long ago when the defining attributes of famous British scientists were little more than a white coat, wild hair, and possibly a monocle. Today, it seems the five-second sound bite mentality of the MTV generation requires any scientist who can top a man-in-the-street poll to have some atypical personality traits, to say the least. So are the current British science superstars good role models in the way they represent science to the public, or having achieved fame are they content to ride the media gravy train, with science taking a backseat (in the last carriage, if you want to continue the metaphor)?

If today's celebrities are frequently reduced to mere caricatures of their former selves (supposing they had anything more in the first place), how can the complex subtleties of modern science survive the media simplification process? If there is one thing that defines our current state of scientific understanding, it is surely that the universe is very subtle indeed. A recent episode of The Armstrong and Miller Show highlighted this beautifully via a sketch of Ben Miller (who in real life swapped a physics PhD for luvviedom) as a professor being interviewed about his latest theory. Each time he was asked if it was possible to provide a brief description of his theory in layman's terms, he succinctly replied, "no".

Arguably the two biggest names today, at least in Britain, are Stephen Hawking and Richard Dawkins. After appearances on everything from Star Trek to The Simpsons, Hawking has overtaken Einstein as the scientific genius everyone has heard of. But, like Einstein's last few decades, has Hawking reached the height of fame long after completing his best work, a genius revered without comprehension by a public unaware of the latest developments in astrophysics? If it's true that theoretical physicists' main period of productivity is usually in their twenties, Hawking wouldn't be any different from other physicists his age (remembering he retired from the Lucasian Chair several months ago).

Hawking himself implies that his fame is compounded of demand from a lazy and scientifically non-savvy media (as in "who's the current Einstein?") twinned with the tedious if understandable interest surrounding his condition. It's probably fair to say that a physically fit Professor Hawking wouldn't be considered nearly such interesting copy. Of course, to write the best-selling (nine million copies!) A Brief History of Time was a fantastic achievement, not least for its brevity. If it (and Hawking's later ventures) succeed in promoting scientific knowledge and methodologies then all well and good, but it's not difficult to get the feeling that he is primarily viewed as a brand name. Very little of the blame can be laid on Hawking himself, but the question that must be asked is whether the interest in him diverts the limited media attention span for science away from a younger generation of scientists.

Richard Dawkins on the other hand seems to have deliberately cultivated media attention, no doubt revelling in his description as Darwin's Rottweiler. As holder of the Charles Simonyi Professorship until late last year he had an official position from which to promote public understanding, but for me his single-minded crusade has become rather tiresome. His role model, Thomas Henry Huxley, promoted science as "nothing but trained and organized common sense" whilst in addition espousing, via his "trade mark" agnosticism, the notion that one should not believe or disbelieve a proposition without justifiable evidence. Surely Huxley's agnosticism and the ideal of the scientific method are indistinguishable?

In contrast, Dawkins' approach is to browbeat all opposition, religious, scientific or otherwise, with techniques that ironically have rather more in common with "faith viruses" than with science. His documentary The Root of All Evil? allegedly omitted interviews with religious moderates to concentrate on the oddballs. It's understandable that documentary producers like a clear-cut argument, but skewing the evidence to fit the theory is inexcusable for a scientist. Dawkins' use of probability is his most objective method in support of atheism, but when the law of parsimony, otherwise known as Occam's razor, cannot obviously be applied to resolve many aspects of the sub-atomic world, how can a glib argument along the lines of "I believe there's a less than even chance of the existence of a deity, therefore there isn't a deity" be accepted any more than a literal interpretation of Genesis? Warning of the increasing dangers of fundamentalism to both science and society as a whole is admirable, but to promote a simplistic thesis regarding complex, largely non-scientific issues seems more an exercise in self-promotion than anything else. And Dawkins has the cheek to say that the word 'reductionism' makes him want to reach for a weapon...

It pains me to say it, but I'm not sure either of the dynamic duo, atypical scientists as they undoubtedly are, can be said to be ideal promoters of science. If such excellent communicators as Martin Rees, Richard Fortey or Brian Cox were as well known as Hawking and Dawkins, might we see more science exposition and fewer media shenanigans? At the end of the day fame is very fickle, if the example of Magnus Pyke is anything to go by. Ubiquitous in the 1970s and '80s, Pyke appeared in everything from a best-selling pop single (and its video) to a washing machine commercial. Voted third behind only Einstein and Newton as the best-known scientist ever in a 1975 New Scientist poll, this charismatic and socially-aware 'boffin' is unfortunately almost forgotten today. But then an American business magazine recently claimed that Hawking was an American, no doubt lulled by the speech synthesiser into a false sense of security...
