Wednesday, 12 December 2018

New neurons: astrocytes, gene therapy and the public fear of brain modification

Ever since the first cyberpunk novels of the early 1980s - and the massive increase of public awareness of the genre thanks to Hollywood - the idea of artificially enhanced humans has been a topic of intense discussion. Either via direct augmentation of the brain or the development of a brain-computer interface (BCI), the notion of Homo superior has been associated with a dystopian near-future that owes much to Aldous Huxley's Brave New World. After reading about current research into repairing damaged areas of the brain and spinal cord, I thought it would be good to examine this darkly tinged area.

Back in 2009 I posted about how science fiction has to some extent been confused with science fact; this, coupled with the fairly appalling quality of much mainstream media coverage of science stories, has led to public fear where none is necessary and a lack of concern where there should be heaps. When it comes to anything suggestive of enhancing the mind, many people immediately fall back on pessimistic fictional examples, from Frankenstein to Star Trek's Borg. This use of anti-scientific material in the consideration of real-world STEM is not an optimal response, to say the least.

Rather than working to augment normal humans, real research projects on the brain are usually funded on the basis that they will generate improved medical techniques for individuals with brain or spinal cord injuries. However, a combination of the fictional tropes mentioned above and the plethora of internet-disseminated conspiracy theories, usually concerning alleged secret military projects, has caused the public to concentrate on entirely the wrong aspects.

The most recent material I have read concerning cutting-edge work on the brain covers three teams' use of astrocytes to repair damaged areas. This is an alternative to converting induced pluripotent stem cells (iPSCs) to nerve cells, which has shown promise for many other types of cell. Astrocytes are amazing things, able to connect with several million synapses. Apparently Einstein's brain had far more of them than usual in the region connected with mathematical thinking. The big question would be whether this accumulation was due to nature or nurture, the latter being the high level of exercise Einstein demanded of this region of his brain.

Astrocyte research for brain and spinal cord repair has been ongoing since the 1990s, in order to discover if they can be reprogrammed as functional replacements for lost neurons without any side effects. To this end, mice have been deliberately brain-damaged and then attempts made to repair that damage via converted astrocytes. The intention is to study if stroke victims could be cured via this method, although there are hopes that eventually it may also be a solution for Parkinson's disease, Alzheimer's and even ALS (motor neurone disease). The conversion from astrocyte to neuron is courtesy of a virus that introduces the relevant DNA, although none of the research has as yet proven that the converted cells are fully functional neurons.

Therefore, it would seem we are some decades away from claiming that genetic manipulation can cure brain-impairing diseases. But geneticists must share some of the blame for giving the public the wrong impression. The hyperbole surrounding the Human Genome Project gave both the public and medical workers a false sense of optimism regarding the outcome of the genome mapping. In the late 1990s, a pioneer gene therapist predicted that by 2020 virtually every disease would include gene therapy as part of the treatment. We are only just over a year short of this date, yet most research is still in first-phase trials - and these concern only diseases that lack a conventional cure. It turned out that the mapping was just the simplest stage of a multi-part programme to understand the complexities of which genes code for which disorders.

Meanwhile, gene expression in the form of epigenetics has inspired a large and extremely lucrative wave of pseudo-scientific quackery that belongs in the same genre as homeopathy, crystal healing and all the other New Age flim-flam that uses real scientific terminology to part the gullible from their cash. The poor standard of science education outside of schools (and in many regions, probably within them too) has led to the belief that changing your lifestyle can fix genetic defects or effect cures for serious brain-based illnesses.

Alas, although gene expression can be affected by environmental influences, we are ultimately at the mercy of what we inherited from our parents. Until the astrocyte research has been verified, or a stem cell solution found, the terrible truth is that the victims of strokes and other brain-based maladies must rely upon established medical treatments.

This isn't to say that we may in some cases be able to reduce or postpone the risk with a better lifestyle; diet and exercise (of both the body and brain) are clearly important, but they won't work miracles. We need to wait for the outcome of the current research into astrocytes and iPSCs to find out if the human brain can be repaired after devastating attacks from within or without. Somehow I doubt that Homo superior is waiting round the corner, ready to take over the world from us unenhanced humans…

Thursday, 29 November 2018

Setting low standards: bovine TB, badger culls and political pressure on science

If there's a single type of news story that's almost guaranteed to generate widespread sympathy across the British Isles it is one concerning the mistreatment of animals. Over the past five years, badger culls aimed at preventing the spread of bovine tuberculosis have generated much public debate, with opinions varying from those who think badgers are completely innocent victims to some who want to see the species eradicated anywhere domestic cattle are kept. Since the number of farmed cattle in the British Isles is close to ten million, this presumably means the no-badger zone is rather on the large side!

When debates concerning agriculture start to get overheated it usually reduces to a battleground between farmers and so-called townies, with mudslinging and emotive slogans taking precedence over the facts. In this particular case the badgers have an unusual ally in the form of rock musician and amateur astronomer Brian May, who has received much of the criticism usually reserved for tree huggers, animal rights campaigners and environmentalist types in general.

As I've mentioned before, a species often receives support based more on its cuteness factor than anything else (I consider the irascible and curmudgeonly Mr Badger in The Wind in the Willows a fairly accurate representation of the true critter), so the farming community has seen fit to complain that ignorant, urban-based activists are unaware of the challenges Mother Nature throws at the agricultural sector.

Such stereotyping and reductionism does nothing to alleviate the issue, which other nations face in similar circumstances. New Zealand, for example, has a rapidly escalating battle over the use of 1080 to poison introduced predators. Even though many environmental organisations such as Forest and Bird proclaim it the most effective method, the debate is far from settled, with the anti-1080 movement making emotive pleas in a campaign that at times combines hysteria and aggression in equal measure.

The UK's Department for Environment, Food and Rural Affairs (Defra) has funded an independent scientific review from Oxford University as to the efficacy of the cull, resulting in popular press reports that the evidence does not support it. Indeed, the high ratio of dead badgers in return for a 'modest' reduction in the disease has been given as a key reason to stop the culls. This might appear to be a nod towards animal welfare, until you read that other issues include their cost and complexity and a desire for the Government to gain in the opinion polls. A key scientific argument against the effectiveness of the culls comes from rural vets, who cite data suggesting that even at maximum success, the reduction in new cases of cattle TB would only be 12-16% - in exchange for culling over 70% of local badger populations.

So what does this example say about humanity's attitude towards the environment and the use of science to reinforce that attitude? In terms of numbers of individuals, humans and our domesticated species (both livestock and household pets) vastly outnumber the inhabitants of the wilderness. The once diverse ecosystem has been vastly reduced, predominantly in the temperate regions suitable for intensive farming. But in return for this largely irreversible loss we have gained year-round access to an incredible variety of inexpensive foodstuffs; clearly, our gastronomic gains take precedence over the wider ecosystem.

In the case of wild badgers as disease vectors, it isn't just the livelihoods of individual farmers that are at stake. The European Union's threat to impose trade sanctions on the UK, such as a ban on the export of live cattle, must be considered as a potential loss at the national level. Little wonder then that the British Government implemented the cull after what has been termed 'a randomised trial period' or more impressively, 'over fifteen years of intensive research.' Even so, was the result of all this enough to justify the finality of the chosen method - or was the scientific data manipulated in the name of political expediency?

One telling example of how the culling might have been ordered due to political pressure rather than any scientific smarts was the use of evidence from other nations that are successfully controlling bovine TB. Australia and New Zealand have been held up as examples of how control of the disease vectors can vastly reduce or indeed remove the problem altogether. Except of course that those two nations don't have any badgers; it is the possum, a semi-arboreal marsupial, that is responsible for the spread of tuberculosis there. It seems to me that two creatures from such vastly different lineages should never have been seen as workable comparisons; the natural world just doesn't fall into the neat categories we would like it to. As a matter of fact, the UK Government has partly blamed the lack of success on the badgers themselves for failing to follow predicted behaviour. In 2013 the then Environment Secretary Owen Paterson stated that the animals had cheated by 'moving the goal posts'!

The Oxford University research reports that far more cases of bovine TB result from transmission between cattle rather than directly from badgers, explaining that farmers are not following Defra guidelines to minimise the spread. Even Defra itself states that there has been nowhere near enough implementation of badger-proof feed storage and fencing, while its chief scientific adviser, Ian Boyd, has been quoted as admitting that badgers may only be responsible for as little as 6% of bovine TB! This incidentally comes from the man who in 2013 wanted complete control over what scientific results were reported to Government ministers, presumably so as to maintain a clear-cut, pro-STEM political lobby. Hmm, methinks I smell something fishy...

What can we conclude from these shenanigans? If scientific research doesn't provide reliable support for a method, shouldn't the mistake be admitted and a new approach implemented? Science is humanity's sole invention with built-in error correction, but when it gets embroiled in politics, unabashed use of political tools such as spin can prove fatal. In this particular case, the fatalities in the short term were the badgers. In the long run, an unbalanced ecosystem would have resulted. And we all know which species likes to think of itself as the pinnacle of creation. There's enough denial of scientific results as it is, without distortion for the sake of political convenience. Let's hope Defra has the courage to own up and try other tactics against the wily badger.

Wednesday, 14 November 2018

Swapping gasoline with gas: are hydrogen fuel cells the future of land transport?

When I was a child, I recall being impressed by the sophistication of hydrogen fuel cells, a power source used in spacecraft that generated water as a by-product. What I didn't realise at the time was that the basis for fuel cell technology had been invented back in the 1830s. Now that automobile manufacturers are promoting fuel cell vehicles for consumers, is it time for the technology to expand from niche usage to mass market?

Road vehicles of all sorts have had more than their fair share of ups and downs, not least due to the conservatism of that unholy alliance between the internal combustion engine and fossil fuel sectors. Although there were hydrogen-powered test vehicles in the 1970s, it wasn't until 1991 that the development phase was completed. There are currently three car manufacturers with fuel cell models intended for personal customers: the Honda Clarity, Hyundai Nexo and Toyota Mirai. The latter two are intended to take off (not literally) across South Korea and Australia respectively over the next few years, apparently selling at a loss on the assumption of beating rivals Nissan and BMW into the market. Brand loyalty being what it is, and all.

So what do fuel cell vehicles have that makes them a credible alternative to gas guzzlers and electric cars? Their primary benefit in this time-poor age is that they take only minutes to refuel – and with a range considerably greater than that of electric vehicles. Even so, this is hardly likely to be a convincing argument for petrol heads.

To anyone with even a vague knowledge of interwar air travel, hydrogen brings to mind the Hindenburg and R101 disasters. The gas is far from safe in large quantities, hence the rapid end of airship development; even with helium as a substitute, today's airships are smaller, specialist vehicles, their lack of speed making them an unlikely substitute for passenger jets. Although fuel cells themselves provide a safe power source, large quantities of hydrogen need to be transported to the refuelling stations. A neat solution is to transport it in the form of ammonia (admittedly hardly a pleasant substance itself) and then convert it to hydrogen at the point of use.

What is less easily resolved is the cost of manufacturing the gas and the resulting high price for the customer. Most of the world's hydrogen is produced from natural gas; it can be made from renewable sources, but only at much greater expense. Wind-to-hydrogen methods are being tested, but in general there is a distinct lack of environmental friendliness to the gas production process that counteracts the emission-friendly usage in the vehicles themselves. To date, analysis is inconclusive as to whether en masse replacement of fossil fuel vehicles with fuel cell equivalents would reduce greenhouse gas emissions. Indeed, some reports claim they use three times the amount of electricity per vehicle than the equivalent battery-powered car!
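The "three times the electricity" claim can be sanity-checked with a back-of-envelope calculation. The stage efficiencies below are purely illustrative assumptions of my own (not figures from any cited report), but they show how losses compound along each energy pathway:

```python
# Rough comparison of electricity needed per km by a battery-electric
# vehicle versus a fuel cell vehicle running on electrolytic hydrogen.
# All stage efficiencies are illustrative assumptions, not measured data.

BEV_CHAIN = {
    "charging": 0.90,             # grid-to-battery charging losses
    "battery_round_trip": 0.95,   # energy in vs energy out of the pack
    "motor_drivetrain": 0.90,
}

FCV_CHAIN = {
    "electrolysis": 0.70,             # electricity -> hydrogen
    "compression_transport": 0.85,    # getting gas to the pump
    "fuel_cell": 0.55,                # hydrogen -> electricity on board
    "motor_drivetrain": 0.90,
}

def chain_efficiency(chain):
    """Multiply the stage efficiencies of an energy pathway."""
    eff = 1.0
    for stage_eff in chain.values():
        eff *= stage_eff
    return eff

bev = chain_efficiency(BEV_CHAIN)
fcv = chain_efficiency(FCV_CHAIN)

print(f"Battery pathway efficiency:  {bev:.0%}")
print(f"Hydrogen pathway efficiency: {fcv:.0%}")
print(f"Electricity needed per km, relative to a BEV: {bev / fcv:.1f}x")
```

With these assumed figures the hydrogen pathway needs roughly two and a half to three times the electricity per kilometre, which is at least in the same ballpark as the reports mentioned above.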

In addition to the price of hydrogen, fuel cells use rare elements such as platinum, contributing to the production costs. But most importantly of all, how will the vehicle manufacturers resolve the chicken-and-egg issue of providing adequate infrastructure when there is only a small customer base? Without enough refuelling stations and repair depots, most regions are unlikely to attract new customers, but how can a corporation afford to put these facilities in place before there is a demand for them? Most private vehicle owners would require an immediate advantage to migrate to the new technology, regardless of any environmental benefit. Unlike the early days of the internal combustion engine, fuel cell vehicles do not offer the paradigm shift that the automobile had over the horse-drawn carriage.

So with continuous improvements in battery technology, is there in fact any need for fuel cell vehicles? Aren't electric cars the best alternative to the internal combustion engine? If so, wouldn't it make more sense to concentrate on battery development and not waste effort on a far from optimal alternative that might turn out to be a dead end? Perhaps this is a case of corporate bet hedging. After all, the telecommunications industry was taken completely unawares by the personal consumer demand for mobile phones - a device that was aimed squarely at business users - so fuel cells may serve as a Plan B should the growth of electric vehicles falter. At least vehicle manufacturers aren't anti-innovation this time, unlike their voracious gobbling up of advanced steam car development in the early 1970s.

If not for private road vehicles, could there be a future for fuel cell technology in public transport? China and some European nations such as Germany have been trialling hydrogen-powered buses and tram cars, whilst Boeing is one of the aircraft manufacturers investigating the use of fuel cells in small aircraft and unmanned aerial vehicles. That isn't to say the future of commercial air travel excludes the turbofan engine; fuel cells will probably only ever be used for auxiliary power units.

I wouldn't want to disparage innovation but can't help thinking that in this instance, the self-regulating capitalist model is failing to cope with the paradigm shifts required to face the challenges of climate change. Would it be better for governments to incentivise the front-runner replacements for environmentally poor technologies, in this particular case favouring electric-powered vehicles? Solutions are needed now and I'm just not sure that there is the time to solve all the issues surrounding hydrogen fuel cells and the necessary infrastructure. Perhaps this technology should be saved for a rainy day sometime in the future, once our current crises are over and dealt with?

Monday, 29 October 2018

Space is the place: did life begin in the cosmic void?

A few weeks ago I was watching a television documentary about the search for intelligent aliens, featuring the usual SETI experts Jill Tarter and Seth Shostak, when I realised that we rarely see any crossover with research into non-intelligent extra-terrestrial life. Whereas the former is often seen by outsiders as pie-in-the-sky work by idealistic dreamers, the latter has more of a down-to-Earth feel about it, even though it has at times also suffered from a lack of credibility.

Based on current thinking, it seems far more probable that life in the universe will mostly be very small and entirely lacking consciousness, in other words, microbial. After all, life on Earth arose pretty much as soon as the environment was stable enough, around 3.7 billion years ago or even earlier. In contrast, lifeforms large enough to be visible without a microscope evolved around 1 billion or so years ago (for photosynthetic plants) and only about 580 million years ago for complex marine animals.

The recent publicity surrounding the seasonal variations in methane on Mars has provided ever more tantalising hints that microbial life may survive in ultraviolet-free shelters near the Martian surface, although it will be some years before a robot mission sophisticated enough to visit sink holes or canyon walls can investigate likely habitats. (As for the oft-talked about but yet to be planned crewed mission, see this post from 2015.)

It therefore seems worth concentrating on finding biological or pre-biological compounds in extra-terrestrial objects as much as listening for radio signals. The search can be via remote sensing (e.g. of molecular clouds, comets and asteroids) as well as investigating meteorites - bearing in mind that the Earth receives up to one million kilogrammes of material per day, although less than one percent of it is large enough to be identified as such.
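Taking those two figures at face value (the one-million-kilogramme daily influx and the one-percent identifiable fraction, both as quoted above), a quick bit of arithmetic shows the scale involved:

```python
# Simple unit conversion on the cosmic-influx figures quoted in the text;
# the input numbers are the post's own, the rest is arithmetic.

daily_influx_kg = 1_000_000       # "up to one million kilogrammes" per day
identifiable_fraction = 0.01      # "less than one percent" is recoverable

annual_influx_tonnes = daily_influx_kg * 365 / 1000
identifiable_per_day_kg = daily_influx_kg * identifiable_fraction

print(f"Annual influx: up to {annual_influx_tonnes:,.0f} tonnes")
print(f"Potentially identifiable material: up to "
      f"{identifiable_per_day_kg:,.0f} kg per day")
```

Even at the upper bound, only a tiny fraction of that material ever survives as a recognisable meteorite, which is why remote sensing remains an essential complement to sample analysis.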

The problem is that this area of research has at times had a fairly poor reputation due to the occasional premature claim of success. Stories then become widespread via non-specialist media in such a way that the resulting hype frequently bears little relation to the initial scientific report. In addition, if further evidence reverses that conclusion, the public's lack of understanding of the error-correcting methods of science leads to disillusion at best and apathy at worst.

One key hypothesis that has received more than its fair share of negative publicity has been that of panspermia, which suggests not just the chemicals of biology but life itself has been brought to Earth by cosmic impactors. The best known advocates are Fred Hoyle and Chandra Wickramasinghe, but their outspoken championing of an hypothesis severely lacking in evidence has done little to promote the idea. For while it is feasible - especially with the ongoing discovery of extremophiles everywhere from deep ocean vents to the coolant ponds of nuclear reactors - to envisage microbial life reaching Earth from cometary or asteroid material, the notion that these extra-terrestrials have been responsible for various epidemics seems to be a step too far.

It's long been known that comets contain vast amounts of water; indeed, simulations suggest that until the Late Heavy Bombardment around four billion years ago there may have been far less water on Earth than subsequently. Considering the volumes of water ice now being discovered on Mars and the Moon, the probability of life-sustaining environments off the Earth has gained a respectable boost.

It isn't just water, either: organic compounds that are precursors to biological material have been found in vast quantities in interstellar space; and now they are being found in the inner solar system too. That's not to say that this research has been without controversy as well. Beginning in the early 1960s, Bartholomew Nagy stirred debate by announcing the discovery of sophisticated pre-biological material in impactors such as the Orgueil meteorite. Examination by other teams found that contamination had skewed the results, implying that Nagy's conclusions were based on inadequate research. Although more recent investigation of meteorites and spectrophotometry of carbonaceous chondrite asteroids have supplied genuine positives, the earlier mistakes have sullied the field.

Luckily, thorough examination of the Australian Murchison meteorite has promoted the discipline again, with numerous amino acids being confirmed as of non-terrestrial origin. The RNA nucleobase uracil has also been found in the Murchison meteorite, with ultraviolet radiation in the outer space vacuum being deemed responsible for the construction of these complex compounds.

Not that there haven't been other examples of premature results leading to unwarranted hype. Perhaps the best known example of this was the 1996 announcement of minute bacteria-like forms in the Martian ALH84001 meteorite. The international headlines soon waned when a potential non-biological origin was found.

In addition to examination of these objects, experiments are increasingly being performed to test the resilience of life forms in either vacuum chambers or real outer space, courtesy of the International Space Station. After all, if terrestrial life can survive in such hostile environments, it becomes more likely that alien microbiology could arrive on Earth via meteorite impact or cometary tail (and at least one amino acid, glycine, has been found on several comets).

Unmanned probes are now replicating these findings, with the European Space Agency's Rosetta spacecraft finding glycine in the dust cloud around Comet 67P/Churyumov-Gerasimenko in 2016. Although these extra-terrestrial objects may lack the energy source required to kick-start life itself, some are clearly harbouring many of the complex molecules used in life on Earth.

It has now been proven beyond any doubt that organic and pre-biological material is common in space. The much higher frequency of impacts in the early solar system suggests that key components of Earth's surface chemistry - and its water - were delivered via meteorites and comets. Unfortunately, the unwary publication of provisional results, when combined with the general public's feeble grasp of scientific methodology, has hindered support for what is surely one of the most exciting areas in contemporary science. A multi-faceted approach may in time supply the answers to the ultimate questions surrounding the origin of life and its precursor material. This really is a case of watch (this) space!

Thursday, 11 October 2018

Sonic booms and algal blooms: a smart approach to detoxifying waterways

A recent report here in New Zealand has raised some interesting issues around data interpretation and the need for independent analysis to minimise bias. The study has examined the state of our fresh water environment over the past decade, leading to the conclusion that our lakes and rivers are improving in water quality.

However, some of the data fails to support this: populations of freshwater macro invertebrates remain low, following a steady decline over many decades. Therefore while the overall tone of the report is one of optimism, some researchers have claimed that the data has been deliberately cherry-picked in order to present as positive a result as possible.

Of course, there are countless examples of interested parties skewing scientific data for their own ends, with government organisations and private corporations among the most common culprits. In this case, the recorded drop in nitrate levels has been promoted at the expense of the continued low population of small-scale fauna. You might well ask what use these worms, snails and insects are, but even a basic understanding of food webs shows that numerous native bird and freshwater fish species rely on these invertebrates for food. As I've mentioned so often, the apparently insignificant may play a fundamental role in sustaining human agriculture (yes, some other species practise farming too!).

So what is it that is preventing the invertebrates' recovery? The answer seems to be an increase in photosynthetic cyanobacteria, more commonly - and incorrectly - known as blue-green algae. If the phylum is recognised at all, it's as the health food supplement spirulina, available in smoothies and tablet form. However, most cyanobacteria species are not nearly as useful - or pleasant. To start with, their presence in water lowers the oxygen content, and thanks to fertiliser runoff - nitrogen and phosphorus in particular - they bloom exponentially wherever intensive farming occurs close to fresh water courses. Another agriculture-related issue stems from clearing the land for grazing: without trees to provide shade, rivers and streams grow warmer, encouraging algal growth. Therefore as global temperatures rise, climate change is having yet another negative effect on the environment.

Most species of cyanobacteria contain toxins that can severely affect animals much larger than freshwater snails. Dogs have been reported as dying in as little as a quarter of an hour after eating it, with New Zealand alone losing over one hundred and fifty pet canines in the past fifteen years; it's difficult to prevent consumption, since dogs seem to love the smell! Kiwis are no strangers to the phylum for other reasons, as over one hundred New Zealand rivers and lakes have been closed to swimmers since 2011 due to cyanobacterial contamination.

Exposure to contaminated water or eating fish from such an environment is enough to cause external irritation in humans and may even damage our internal organs and nervous system. Drinking water containing blue-green algae is even worse; given that young children are comparable in size to some dogs, it is supposed that such exposure could prove fatal to them. Research conducted over the past few years also suggests that high-level contamination can lead to Lou Gehrig's disease, A.K.A. amyotrophic lateral sclerosis, the same condition that Stephen Hawking suffered from.

What research, you might ask, is being done to discover a solution to this unpleasant organism? Chemical additives including copper sulphate and calcium hypochlorite have been tried, but many are highly expensive, while the toxicity of others is such that fish and crustacean populations also suffer; this is hardly a suitable answer.

A more elegant solution has been under trial for the past two years, namely the use of ultrasound to sink the blue-green algae too deep to effectively photosynthesise, thus slowly killing it. A joint programme between New Zealand and the Netherlands uses a high-tech approach to identifying and destroying ninety per cent of each bloom. Whereas previous ultrasonic methods tended to be too powerful, thereby releasing algal toxins into the water, the new technique directly targets the individual algal species. Six tests per hour are used to assess water quality and detect the species to be eradicated. Once identified, the sonic blasts are calibrated for the target species and water condition, leading to a slower death for the blue-green algae that avoids cell wall rupture and so prevents the toxins from escaping.

Returning to my earlier comment about the report's unwarranted positive spin: the current and previous New Zealand Governments have announced initiatives to clean up our environment and so live up to the tourist slogan of '100% Pure'. The latest scheme requires making ninety percent of the nation's fresh water environments swimmable by 2040, which seems something of a tall order without radical changes to agriculture, and the heavily polluting dairy sector in particular. Therefore the use of finely targeted sonic blasting can't come soon enough.

Our greed and short-sightedness have allowed cyanobacteria to increase greatly at the expense of the freshwater ecosystem, not to mention domesticated animals. Advanced but small-scale technology has now been developed to reduce them to non-toxic levels, but it has yet to be implemented beyond the trial stage. Hopefully this eradication method will become widespread in the near future, a small victory in our enormous fight to right the wrongs of over-exploitation of the environment. But as with DDT, CFCs and numerous others, it does make me wonder how many more man-made time bombs could be ticking out there...

Thursday, 27 September 2018

The anaesthetic of familiarity: how our upbringing can blind us to the obvious

In the restored Edwardian school classroom at Auckland's Museum of Transport and Technology (MOTAT) there is a notice on the wall stating 'Do not ask your teacher questions.' Fortunately, education now goes some way in many nations to emphasising the importance of individual curiosity rather than mere obedience to authority. Of course, there are a fair number of politicians and corporation executives who wish it wasn't so, as an incurious mind is easier to sway than a questioning one. As my last post mentioned, the World Wide Web can be something of an ally for them, since the 'winner takes all' approach of a review-based system aids the slogans and rhetoric of those who wish to control who we vote for and what we buy.

Even the most liberal of nations and cultures face self-imposed hurdles centred on whether a given option is the best solution or merely the most familiar one from our formative years. This post therefore looks at another side of the subjective thinking discussed earlier this month, namely a trap that Richard Dawkins has described as the "anaesthetic of familiarity". Basically, this is when conventions are so widely accepted as to be seen as the default option instead of merely one of a series of choices. Or, as the British philosopher Susan Stebbing wrote in her 1939 book Thinking to Some Purpose: "One of the gravest difficulties encountered at the outset of the attempt to think effectively consists in the difficulty of recognizing what we know as distinguished from what we do not know but merely take for granted."

Again, this mindset is much loved by the manufacturing sector; in addition to such well-known ploys as deliberate obsolescence and staggered release cycles, there are worse examples, especially in everyday consumerism. We often hear how little nutritional value many highly processed foods contain, but think what this has done for the vitamin and mineral supplement industry, whose annual worldwide sales now approach US$40 billion!

Citizens of developed nations today face very different key issues to our pre-industrial ancestors, not the least among them being a constant barrage of decision making. Thanks to the enormous variety of choices available concerning almost every aspect of our daily lives, we have to consider everything from what we wear to what we eat. The deluge of predominantly useless information that we receive in the era of the hashtag makes it more difficult for us to concentrate on problem solving, meaning that the easiest way out is just to follow the crowd.

Richard Dawkins' solution to these issues is to imagine yourself as an alien visitor and then observe the world as a curious outsider. This seems to me to be beyond the reach of many, for whom daily routine appears to be their only way to cope. If this sounds harsh, it comes from personal experience; I've met plenty of people who actively seek an ostrich-like head-in-the-sand approach to life to avoid the trials and tribulations - as well as the wonders - of this rapidly-changing world.

Instead, I would suggest an easier option when it comes to some areas of STEM research: ensure that a fair proportion of researchers and other thought leaders are adult migrants from other nations. Then they will be able to apply an outside perspective, hopefully identifying givens that are too obvious to be spotted by those who have grown up with them.

New Zealand is a good example of this, with arguably its two best-known science communicators having been born overseas: Siouxsie Wiles and Michelle Dickinson, A.K.A. Nanogirl. Dr Wiles is a UK-trained microbiologist at the University of Auckland. She frequently appears on Radio New Zealand and undertakes television and social media work to promote both science in general and her specialism, the fight against bacterial infection.

Dr Dickinson is a materials engineering lecturer and nanomaterials researcher at the University of Auckland who studied in both the UK and USA. Her public outreach work includes books, school tours and both broadcast and social media. She has enough sci-comm kudos that last year, despite not having a background in astronomy, she interviewed Professor Neil deGrasse Tyson during the Auckland leg of his A Cosmic Perspective tour.

The work of the examples above is proof that newcomers can recognise a critical need that their home-grown equivalents have missed. What is interesting is that despite coming from English-speaking backgrounds - and therefore having limited cultural disparity with their adoptive New Zealand - there must have been enough that was different to convince Doctors Wiles and Dickinson of the need for a hands-on, media-savvy approach to science communication.

This is still far from the norm: many STEM professionals believe there is little point in promoting their work to the public except via print-based publications. Indeed, some famous science communicators such as Carl Sagan and Stephen Jay Gould were widely criticised during their lifetimes by the scientific establishment for what were deemed undue efforts at self-promotion and the associated debasement of science by combining it with show business.

As an aside, I have to say that as brilliant as some volumes of popular science are, they do tend to preach to the converted; how many non-science fans are likely to pick up a book on, say, string theory, just for a bit of light reading or self-improvement (the latter being a Victorian convention that appears to have largely fallen from favour)? Instead, the outreach work of the expat examples above is aimed at the widest possible audience without over-simplification or distortion of the principles being communicated.

This approach may not solve all issues about how to think outside the box - scientists may be so embedded within their culture as to not realise that there is a box - but surely by stepping outside the comfort zone we grew up in we may find problems that the local population hasn't noticed?

Critical thinking is key to the scientific enterprise but, it would appear, to little else in human culture. If we can find methods to avoid the anaesthetic of familiarity and acknowledge that what we deem normal can be far from optimal, then these should be promoted with gusto. If the post-modern creed is that all world views are equally valid and science is just another form of culture-biased story-telling, then now more than ever we need cognitive tools to break through the subjective barriers. If more STEM professionals are able to cross borders and work in unfamiliar locations, isn't there a chance they can recognise issues that fall under the local radar and so supply the new perspective we need if we are to fulfil our potential?

Wednesday, 12 September 2018

Seasons of the mind: how can we escape subjective thinking?

According to some people I've met, the first day of spring in the Southern Hemisphere has been and gone with the first day of September. Perhaps not coincidentally, there are also some, myself included, who think that it has suddenly started to feel a bit warmer. Apparently, though, the official start date is the spring equinox, during the third week of September. So on the one hand the weather has been warming since the start of the month, but on the other, why should a planet follow neat calendrical conventions, i.e. starting on the first of a month? Just how accurate is the official definition?

There are many who like to reminisce about how much better the summer weather was back in their school holidays. The rose-tinted memories of childhood can seem idyllic, although I also recall summer days of non-stop rain (I did grow up in the UK, after all). Our personal experiences, particularly during our formative years, can therefore promote an emotion-based response so deeply ingrained that we fail to consider it may be inaccurate. Subjectivity and wishful thinking are key to the human experience: how often do we remember the few hits and not the far more numerous misses? As science is practised by humans, it is subject to the same lack of objectivity as anything else; only its built-in error checking can steer practitioners onto a more rational course than that of other disciplines.

What got me to ponder the above was that on meeting someone a few months ago for the first time, almost his opening sentence was a claim that global warming isn't occurring and that instead we are on the verge of an ice age. I didn't have time for a discussion on the subject, so I filed that one for reply at a later date. Now seems like a good time to ponder what it is that leads people to make such assertions, seemingly contrary to the evidence.

I admit to being biased on this particular issue, having last year undertaken research for a post on whether agriculture has postponed the next glaciation (note that this woolly - but not mammoth, ho-ho - terminology is one of my bugbears: we are already in an ice age, but currently in an interglacial stage). Satellite imagery taken over the past few decades shows clear evidence of large-scale reductions in global ice sheets. For example, the northern polar ice cap has been reduced by a third since 1980, with what remains only half its previous thickness. Even so, are three decades a long enough period from which to make accurate predictions? Isn't using a timescale comparable to a human lifespan just as bad as relying on personal experience?
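Those two figures compound, incidentally. As a rough back-of-the-envelope illustration (the exact numbers vary by dataset and season, so treat this purely as arithmetic, not as a climate dataset), losing a third of the area and half of the thickness leaves only about a third of the original ice volume:

```python
# Back-of-the-envelope estimate of northern polar ice volume loss since 1980,
# using the figures quoted above (illustrative only; real values vary by
# dataset and season).

area_factor = 1 - 1/3        # extent "reduced by a third"
thickness_factor = 1/2       # "only half its previous thickness"

volume_factor = area_factor * thickness_factor   # fraction of 1980 volume left
print(f"Volume remaining: {volume_factor:.0%}")  # prints "Volume remaining: 33%"
```

In other words, the headline "a third less ice" understates the loss by roughly a factor of two once thinning is taken into account.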

The UK's Met Office has confirmed that 2018 was that nation's hottest summer since records began - which in this instance go back only as far as 1910. In contrast, climate change sceptics use a slight growth in Antarctic sea ice (contrary to its steadily decreasing continental ice sheet) as evidence of climate equilibrium. Now I would argue that this growth is just a local drop in the global ocean, but I wonder if my ice age enthusiast cherry-picked this data to formulate his ideas? Even so, does he believe that all the photographs and videos of glaciers, etc. have been faked by the twenty or so nations that have undertaken Earth observation space missions? I will find out at some point!

If we try to be as objective as possible, how can we confirm with complete certainty the difference between long term climate change and local, short term variability? In particular, where do you draw the line between the two? If we look at temporary but drastic variations over large areas during the past thousand years, there is a range of time scales to explore. The 15th to 18th centuries, predominantly the periods 1460-1550 and 1645-1715, contained climate variations now known as mini ice ages, although these may have been fairly restricted in geographic extent. Some briefer but severe, wide-scale swings can be traced to single events, such as the four years of cold summers following the Tambora eruption of 1815.

Given such variability over the past millennium, in itself a tiny fragment of geological time, how much certainty surrounds the current changes? The public have come to expect yes or no answers delivered with aplomb, yet some areas of science such as climate studies involve chaos mathematics, thus generating results based on levels of probability. What the public might consider vacillation, researchers consider the ultimate expression of scientific good practice. Could this lack of black-and-white certainty be why some media channels insist on providing a 'counterbalancing' viewpoint from non-expert sources, as ludicrous as this seems?
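That probabilistic character stems from sensitivity to initial conditions, which can be illustrated with the logistic map - a standard toy demonstration of chaos, and emphatically not a climate model:

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# a classic toy illustration of chaotic behaviour (not a climate model!).
# Two starting values differing by one part in ten million soon produce
# utterly different trajectories, which is why chaotic systems can only
# be forecast in terms of probabilities.

def logistic_orbit(x0, r=4.0, steps=50):
    """Return the sequence of values generated by iterating the map."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-7)   # a measurement "error" of one part in 2 million
divergence = max(abs(p - q) for p, q in zip(a, b))
print(f"Initial difference: 1e-07, maximum divergence: {divergence:.2f}")
```

Since any real-world measurement carries at least this much uncertainty, ensembles of runs and probability bands are the honest way to present such results - exactly the "vacillation" that frustrates the public.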

In-depth thinking about a subject relies upon compartmentalisation and reductionism. Otherwise, we would forever be bogged down in the details and never be able to form an overall picture. But this quantising of reality is not necessarily a good thing if it generates a false impression regarding cause and effect. By suffering from what Richard Dawkins calls the "tyranny of the discontinuous mind" we are prone to creating boundaries that just don't exist. In which case, could a line ever be found between short-term local variation and global climate change? Having said that, I doubt many climate scientists would use this as an excuse to switch to weather forecasting instead. Oh dear: this is beginning to look like a 'does not compute' error!

In a sense, of course, we are exceptionally lucky to have developed science at all. We rely on language to define our ideas, so need a certain level of linguistic sophistication to achieve this focus; tribal cultures whose number systems contain no precise values beyond two are unlikely to make much headway in, for example, classifying the periodic table.

Unfortunately, our current obsession with generating information of every quality imaginable and then uploading it to all available channels for the widest possible audience inevitably leads to a tooth-and-claw form of meme selection. The upshot of this bombardment of noise and trivia is that an enormous amount of time is required just to filter it, with the knock-on effect that minimal time is left for identifying the most useful or accurate content rather than simply the most disseminated.

Extremist politicians have long been adept at exploiting this weakness to expound polarising phraseology that initially sounds good but lacks depth; they achieve cut-through with the simplest and loudest of arguments, fulfilling the desire most people have to fit into a rigid social hierarchy - as seen in many other primate species. The problem is that, in a similar vein to centrist politicians who can see both sides of an argument but whose rational approach negates emotive rhetoric, scientists are often stuck with the unappealing options of either taking a stand when the outcome is not totally clear, or facing accusations of evasion. There is a current trend, particularly espoused by politicians, to disparage experts, but discovering how the universe works doesn't guarantee hard-and-fast answers supplied exactly when required and which provide comfort blankets in a harsh world.

Where then does this leave critical thinking, let alone science? Another quote from Richard Dawkins is that "rigorous common sense is by no means obvious to much of the world". This pessimistic view of the human race is supported by many a news article but somewhat negated by the immense popularity of star science communicators, at least in a number of countries.

Both the methods and results of science need to find a space amongst the humorous kitten videos, conspiracy theorists and those who yearn for humanity to be the pinnacle and purpose of creation. If we can comprehend that our primary mode of thinking includes a subconscious baggage train of hopes, fears and distorted memories, we stand a better chance of seeing the world for how it really is and not how we wish it to be. Whether enough of us can dissipate that fog remains to be seen. Meanwhile, the ice keeps melting and the temperature rising, regardless of what you might hear...

Monday, 27 August 2018

Hammer and chisel: the top ten reasons why fossil hunting is so important

At a time when the constantly evolving world of consumer digital technology seems to echo the mega-budget, cutting-edge experiments of the LHC and LIGO, is there still room for such an old-fashioned, low-tech science as paleontology?

The answer is of course yes, and while non-experts might see little difference between its practice today and that of its Eighteenth and Nineteenth Century pioneers, contemporary paleontology does on occasion utilise MRI scanners among other sophisticated equipment. I've previously discussed the delights of fossil hunting as an easy way to involve children in science, yet the apparent simplicity of its core techniques masks the key role that paleontology still plays in evolutionary biology.

Since the days of Watson and Crick, molecular biology has progressed in leaps and bounds, yet the contemporary proliferation of cheap DNA-testing kits and television shows devoted to gene-derived genealogy obscures just how tentatively some of their results should be accepted. The levels of accuracy quoted in non-specialist media are often far greater than can actually be attained. For example, the data on British populations has so far failed to separate those with Danish Viking ancestry from descendants of earlier Anglo-Saxon immigration, leading to population estimates at odds with the archaeological evidence.


Here then is a list of ten reasons why fossil hunting will always be a relevant branch of science, able to supply information that other scientific disciplines cannot:
  1. Locations. Although genetic evidence can show the broad sweeps connecting extant (and occasionally, recently-extinct) species, the details of where animals, plants or fungi evolved, migrated to - and when - relies on fossil evidence.
  2. Absolute dating. While gene analysis can estimate the date of the last common ancestor shared by contemporary species, the results are provisional at best for when certain key groups or features evolved. Thanks to radiometric dating at some fossiliferous locales, paleontologists are able to assign absolute dates to fossil-bearing strata elsewhere that lack radioactive mineralogy of their own.
  3. Initial versus canonical. Today we think of land-living tetrapods (i.e. amphibians, reptiles, mammals and birds) as having a maximum of five digits per limb. Although these are reduced in many species - as with horses' hooves - five is considered canonical. However, fossil evidence shows that early terrestrial vertebrates had up to eight digits on some or all of their limbs. We know genetic mutation can add extra digits, but this doesn't help reconstruct the polydactyly of ancestral species; only fossils provide confirmation.
  4. Extinct life. Without finding their fossils, we wouldn't know of even such long-lasting and multifarious groups as the dinosaurs: how could we guess at the existence of a Parasaurolophus from looking at its closest extant cousins, such as penguins, pelicans or parrots? There are also many broken branches in the tree of life, with such large-scale dead-ends as the pre-Cambrian Ediacaran biota. These lost lifeforms teach us something about the nature of evolution yet leave no genetic evidence.
  5. Individual history. Genomes show the cellular state of an organism, but thanks to fossilised tooth wear, body wounds and stomach contents (including gastroliths) we have important insights into day-to-day events in the life of ancient animals. This has led to fairly detailed biographies of some creatures, prominent examples being Sue the T-Rex and Al the Allosaurus, their remains being comprehensive enough to identify various pathologies.
  6. Paleoecology. Coprolites (fossilised faeces), along with the casts of burrows, help build a detailed environmental picture that zoology and molecular biology cannot provide. Sometimes the best source of vegetation data comes from coprolites containing plant matter, due to the differing circumstances of decomposition and mineralisation.
  7. External appearance. Thanks to the likes of scanning electron microscopes, fossils of naturally mummified organisms or mineralised skin can offer details that are unlikely to be discovered by any other method. A good example from the past two decades is the colour of feathered dinosaurs, obtained from the shape of their melanosomes.
  8. Climate analysis. Geological investigation can provide ancient climate data, but fossil evidence, such as the giant insects of the Carboniferous period, confirms the hypotheses. After all, dragonflies with seventy-centimetre wingspans couldn't survive at today's level of atmospheric oxygen.
  9. Stratigraphy. Paleontology can help geologists trying to sequence an isolated section of folded stratigraphy that doesn't have radioactive mineralogy. By assessing the relative order of known fossil species, the laying down of the strata can be placed in the correct sequence.
  10. Evidence of evolution. Unlike the theories and complex equipment used in molecular biology, anyone without expert knowledge can visit fossils in museums or in situ. They offer a prominent resource as defence against religious fundamentalism, as their ubiquity makes them difficult to explain by alternative theories. The fact that species are never found in strata outside their era supports the scientific view of life's development rather than those found in religious texts (the Old Testament, for example, erroneously states that birds were created prior to all other land animals).
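The absolute dating in point 2 rests on the standard radiometric age equation, t = ln(1 + D/P) / λ, where D/P is the measured daughter-to-parent isotope ratio and λ the decay constant derived from the half-life. Here is a minimal sketch of that relation (deliberately simplified: it assumes a closed system with no daughter isotope at formation, and ignores the branching decay that complicates real potassium-argon work):

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Age in years from t = ln(1 + D/P) / lambda, assuming a closed system
    with no daughter isotope present when the rock formed."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / decay_constant

# With potassium-40's half-life of ~1.25 billion years, a daughter/parent
# ratio of 1.0 means exactly one half-life has elapsed...
print(radiometric_age(1.0, 1.25e9))  # ~1.25e9 years
# ...and a ratio of 3.0 (three-quarters decayed) means two half-lives.
print(radiometric_age(3.0, 1.25e9))  # ~2.5e9 years
```

Dating a volcanic ash layer above and below a fossil bed in this way brackets the age of the fossils themselves, which is how strata lacking datable minerals inherit absolute dates.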
To date, no DNA older than about 800,000 years has been recovered, meaning that many of the details of the history of life rely primarily on fossil evidence. It's therefore good to note that even in an age of high-tech science, the painstaking techniques of paleontology can shed light on biology in a way unobtainable by more recent additions to the scientific toolkit. Of course, the discipline is far from fool-proof: it is thought that only about ten percent of all species have ever come to light in fossil form, with the found examples heavily skewed in favour of shallow marine environments.

Nevertheless, paleontology is a discipline that constantly proves its immense value in expanding our knowledge of the past in a way no religious text could ever do. It may be easy to understand what fossils are, but they are assuredly worth their weight in gold: precious windows onto an unrecoverable past.

Monday, 13 August 2018

Life on Mars? How accumulated evidence slowly leads to scientific advances

Although the history of science is often presented as a series of eureka moments, with a single scientist's brainstorm paving the way for a paradigm-shifting theory, the truth is usually rather less dramatic. A good example is the formulation of plate tectonics, with the meteorologist Alfred Wegener's continental drift being rejected by the geological orthodoxy for over thirty years. It was only with the accumulation of data from the late 1950s onward that the mobility of Earth's crust slowly gained acceptance, thanks to the multiple strands of new evidence that supported it.

One topic that looks likely to increase in popularity amongst both the public and biologists is the search for life on Mars. Last month's announcement of a lake deep beneath the southern polar ice cap is the latest piece of observational data suggesting that Mars might still have environments suitable for microbial life. It joins an increasing body of evidence that conditions may still be capable of supporting life, long after the planet's biota-friendly heyday. However, the data hasn't always been so positive, having fluctuated in both directions over the past century or so. So how closely has the level of research into life on Mars tracked the positive results?

The planet's polar ice caps were first discovered in the late Seventeenth Century, which, combined with the Earth-like duration of the Martian day, implied the planet might be fairly similar to our own. This was followed a century later by observation of what appeared to be seasonal changes to surface features, leading to the understandable conclusion that Mars was a temperate, hospitable world covered with vegetation. Then another century on, an early use of spectroscopy erroneously described abundant water on Mars; although the mistake was later corrected, the near-contemporary reporting of non-existent Martian canals led to soaring public interest and intense speculation. The French astronomer Camille Flammarion helped popularise Mars as a potentially inhabited world, paving the way for H.G. Wells' War of the Worlds and Edgar Rice Burroughs' John Carter series.

As astronomical technology improved and the planet's true environment became known (low temperatures, thin atmosphere and no canals), Mars' popularity waned. By the time of Mariner 4's 1965 fly-by, the arid, cratered and radiation-smothered surface it revealed only served to reinforce the notion of a lifeless desert; the geologically inactive world was long past its prime and any life still existing there probably wouldn't be visible without a microscope.

Despite this disappointing turnabout, NASA somehow managed to gain the funding to incorporate four biological experiments on the two Viking landers that arrived on Mars in 1976. Three of the experiments gave negative results while the fourth was inconclusive, most researchers hypothesising a geochemical rather than biological explanation for the outcome. After a decade and a half of continuous missions to Mars, this lack of positive results - accompanied by experimental cost overruns - probably contributed to a sixteen-year hiatus (excluding two Soviet attempts at missions to the Martian moons). Clearly, Mars' geology by itself was not enough to excite the interplanetary probe funding czars.

In the meantime, it was some distinctly Earth-bound research that reignited interest in Mars as a plausible home for life. The 1996 report that Martian meteorite ALH84001 contained features resembling fossilised (if extremely small) bacteria gained worldwide attention, even though the eventual consensus repudiated this. Analysis of three other meteorites originating from Mars showed that complex organic chemistry, lava flows and moving water were common features of the planet's past, although they offered no more than tantalising hints that microbial life may have flourished, possibly billions of years ago.

Back on Mars, NASA's 1997 Pathfinder lander delivered the Sojourner rover. Although it appeared to be little more than a very expensive toy, managing a total distance in its operational lifetime of just one hundred metres, the proof of concept led to much larger and more sophisticated vehicles culminating in today’s Curiosity rover.

The plethora of Mars missions over the past two decades has delivered immense amounts of data, including evidence that the planet once had near-ideal conditions for microbial life - and still has a few types of environment that may be able to support minuscule extremophiles.

Together with research undertaken in Earth-bound simulators, the numerous Mars projects of the Twenty-first Century have to date swung the pendulum back in favour of a Martian biota. Here are a few prominent examples:

  • 2003 - atmospheric methane is discovered (the lack of active geology implying a biological rather than geochemical origin)
  • 2005 - atmospheric formaldehyde is detected (it could be a by-product of methane oxidation)
  • 2007 - silica-rich rocks, similar to hot springs, are found
  • 2010 - giant sinkholes are found (suitable as radiation-proof habitats)
  • 2011 - flowing brines and gypsum deposits discovered
  • 2012 - lichen survived for a month in the Mars Simulation Laboratory
  • 2013 - proof of ancient freshwater lakes and complex organic molecules, along with a long-lost magnetic field
  • 2014 - large-scale seasonal variation in methane, greater than usual if of geochemical origin
  • 2015 - Earth-based research successfully incubates methane-producing bacteria under Mars-like conditions
  • 2018 - a brine lake some twenty kilometres across is found under the southern polar ice sheet

Although these facts accumulate into an impressive package in favour of Martian microbes, they should probably be treated as independent points, not as one combined argument. For as well as finding factors supporting microbial life, other research has produced opposing ones. For example, last year NASA found that a solar storm had temporarily doubled surface radiation levels, meaning that even dormant microbes would have to live over seven metres down in order to survive. We should also bear in mind that for some of each orbit, Mars veers outside our solar system's Goldilocks Zone and as such any native life would have its work cut out for it at aphelion.

A fleet of orbiters, landers, rovers and even a robotic helicopter is planned for further exploration in the next decade, so clearly the search for life on Mars is still deemed a worthwhile effort. Indeed, five more missions are scheduled for the next three years alone. Whether any will provide definitive proof is the big question but, conversely, how much of the surface - and sub-surface - would need to be thoroughly searched before concluding either that Mars has never had microscopic life or that it long since became extinct?

What is apparent from all this is that the quantity of Mars-based missions has fluctuated according to confidence in the hypothesis. In other words, the more that data supports the existence of suitable habitats for microbes, the greater the amount of research to find them. In a world of limited resources, even such profoundly interesting questions as extra-terrestrial life appear to gain funding based on the probability of near-future success. If the next generation of missions fails to find traces of even extinct life, my bet would be a rapid and severe curtailing of probes to the red planet.

There is a caricature of the stages that scientific hypotheses go through, which can ironically best be described using religious terminology: they start as heresy; proceed to acceptance; and are then carved into stone as orthodoxy. Of course, unlike with religions, the vast majority of practitioners accept the new working theory once the data has passed a certain probability threshold, even if it totally negates an earlier one. During the first stage - and as the evidence starts to be favourable - more researchers may join the bandwagon, hoping to be the first to achieve success.

In this particular case, the expense and sophistication of the technology prohibits entries from all except a few key players such as NASA and ESA. It might seem obvious that in expensive, high-tech fields, there has to be a correlation between hypothesis-supporting facts and the amount of research. But this suggests a stumbling block for out-of-the-box thinking, as revolutionary hypotheses fail to gain funding without at least some supporting evidence.

Does the cutting edge, at least in areas that require expensive experimental confirmation, therefore start life in a chicken-and-egg situation? Until data providentially appears, is it often the case that the powers-that-be have little incentive to fund left-field projects? That certainly seems to have been true for the meteorologist Alfred Wegener and his continental drift hypothesis, since it took several research streams to codify plate tectonics as the revolutionary solution.

Back to Martian microbes. Having now read in greater depth about seasonal methane, it appears that the periodicity could be due to temperature-related atmospheric changes. This only leaves the scale of variation as support for a biological rather than geochemical origin. Having said that, the joint ESA/Roscosmos ExoMars Trace Gas Orbiter may find a definitive answer as to its source in the next year or so, although even a negative result is unlikely to close the matter for some time to come. Surely this has got to be one of the great what-ifs of our time? Happy hunting, Mars mission teams!

Monday, 30 July 2018

Biophilic cities: why green is the new black

I've previously discussed the notion that children who spend more time outside in natural surroundings are more likely to have improved mental and physical health compared to their indoor, gadget-centred peers, but does the same hold true for adults as well? After all, there have been many claims that the likes of the fractal geometry of natural objects, the sensual stimulation, the random behaviour of animals, even feeling breezes or better air quality can have a positive or 'wellness' (horrific term though it is) effect.

It is pretty much a given that the larger the percentage of nature existing within conurbations, the greater the improvement to the local environment. This begins at the practical level, with vegetation mitigating extremes of heat while its roots help reduce flooding. In addition, fauna and flora gain more room to live in, with a greater number of species able to survive than just the usual urban adaptees such as rats and pigeons. What about the less tangible benefits to humans, culminating in a better quality of life? Science isn't wishful thinking, so what is the evidence that more nature-filled urban environments improve life for all citizens, not just children?

Studies suggest that having window views of trees can increase concentration and wellbeing in the workplace, while for hospital patients there is a clear correlation between types of view and both the length of recovery periods and painkiller usage. Therefore it seems that even the appearance of close-at-hand nature can have an effect, without the necessity of immersion. Having said that, there are clear advantages to having a public green space, since it allows a wide range of activities such as flying kites, playing ball games, jogging and boot camps, or just having a picnic.

Our largely sedentary, over-caloried lives necessitate as much physical activity as we can get, but there is apparently something greater than just physical exercise behind nature as a promoter of wellbeing. Investigation appears to show that spaces with trees and the hint of wilderness are far more beneficial than the unnatural and restricted geometries of manicured lawns and neatly maintained flower beds. It seems that we are still very much beholden to the call of the wild. If this is a fundamental component of our highly civilised lives, are urban planners aware of this and do they incorporate such elements into our artificial environments?

The concept of integrating nature into our towns and cities certainly isn't a new one. As a child, I occasionally visited Letchworth Garden City, a town just north of London. As the name suggests, it was an early form of 'Green Belt' planning, created at the start of the Twentieth century and divided into sectors for residential, industrial and agricultural usage. In its first half century it tried to live up to its intention to be self-sufficient in food, water and power generation, but this later proved impractical. I don't recall it being anything special, but then its heyday as a mecca for the health conscious (at a time when the likes of exercise and vegetarianism were associated with far left-wing politics) has long since passed. As to whether the inhabitants have ever been mentally - or even physically - advantaged compared to the older conurbations elsewhere in the UK, I cannot find any evidence.

Across the Atlantic, the great American architect Frank Lloyd Wright conceived of something similar but on a far larger scale. His Broadacre City concept was first published in 1932, with the key idea that every family would live on an acre-sized plot. However, Wright's concept - apart from being economically prohibitive - relied on private cars (later updated to the 'aerotor', a form of personal helicopter) for most transportation; sidewalks were largely absent from his drawings and models. Incidentally, some US cities today have partially adopted the sidewalk-free model but without Wright's green-oriented features. For example, there are suburbs in oil-centric Houston that are only reachable by car; you have to drive even to reach shopping malls visible from your own home, with high pedestrian mortality rates proving the dangers of attempting to walk anywhere. Back to Wright: like many of his schemes, his own predilections and aesthetic sensibilities seem to have influenced the design rather more than any evidence-based insight into social engineering.

In recent years the term 'biophilic cities' has been used to describe conurbations attempting to increase their ratio of nature to artifice, often thanks to a combination of public campaigning and far-sighted local government. Although these schemes cover much wider ground than human wellbeing alone (prominent issues being reduced power usage and waste, greater recycling, ecological diversity, etc.), one side effect of the improvements is a better quality of life. Thirteen cities joined the Biophilic Cities project in 2013, but others are just as committed in the long term to offsetting the downsides of urban living. Here are three cities I have visited that are dedicated to improving their environment:

  1. Singapore. Despite the abundance of tower blocks, especially in its southern half, this city that is also a nation has a half-century history of planting vegetation in order to live up to the motto ‘Singapore - City in a Garden’. For all its large-scale adoption of high-tech, high-rise architecture, Singapore has preserved an equivalent area of green space and now ranks top of the Green View Index. Even the maximal artificiality of the main highways is tempered by continuous rows of tall, closely-packed trees, while building regulations dictate the replacement of ground-level vegetation lost to development. A new 280-metre tall office, retail and residential building, due for completion in 2021, is set to incorporate overtly green elements such as a rainforest plaza. It could be argued that it's easy for Singapore to undertake such green initiatives considering that much of the city didn't exist before the late Twentieth century and what did has been subject to wide-scale demolition. Nevertheless, Singapore's government clearly has a long-term strategy to incorporate nature into the city, with resulting improvements in the mental and physical wellbeing of its inhabitants.
  2. Toronto. Although not as ecologically renowned as Vancouver, the local government and the University of Toronto are engaged in a comprehensive series of plans to improve the quality of life for both humans and the rest of nature. From the green roof bylaw and eco-friendly building subsidies to the Live Green Toronto program, there is a set of strategies to aid both the local environment and the planet in general. This is already paying dividends in a large reduction in air pollution-related medical cases, while quality-of-life improvements are shown by the substantial bicycle-friendly infrastructure and an increase in safe swimming days. There's still plenty to do in order to achieve the long-term goals, particularly around traffic-related issues, but the city and its inhabitants are clearly aiming high.
  3. Wellington. New Zealand's capital has wooded parks and tree-filled valleys that the council promotes as part of the city's quality of life. The recreated wetlands at Waitangi Park and the Zealandia (formerly Karori) predator-proof wildlife sanctuary are key components in the integration of large-scale nature into the urban environment. Indeed, the latter is proving so successful that rare native birds such as the kaka are being increasingly found in neighbourhood gardens. Both the city and regional councils are committed to improving both the quality of life for citizens as well as for the environment in general, from storm water filtering in Waitangi Park to the wind turbines on the hilltops of what may be the world's windiest city.

These cities are just the tip of the iceberg when it comes to conurbations around the world seeking to make amends for the appalling environmental and psychological consequences of cramming immense numbers of humans into a small region that cannot possibly supply all their needs. In some respects these biophilic cities appear too good to be true, as their schemes reduce pollution and greenhouse gas emissions, improve the local ecosystem, and at the same time appear to aid the physical and mental wellbeing of their inhabitants. Yet it shouldn't be surprising really; cities are a recent invention and before that a nomadic lifestyle embedded us in landscapes that were mostly devoid of human intervention. If we are to achieve any sort of comfortable equilibrium in these hectic times, then surely covering bare concrete with greenery is the key? You don't have to be a hippy tree hugger to appreciate what nature can bring to our lives.

Sunday, 15 July 2018

Minding the miniscule: the scale prejudice in everyday life

I was recently weeding a vegetable bed in the garden when out of the corner of my eye I noticed a centipede frantically heading for cover after I had inadvertently disturbed its hiding spot. In my experience, most gardeners are oblivious to the diminutive fauna and flora around them unless these are pests targeted for removal or obliteration. It's only when the likes of a biting or stinging organism - or even just a large and/or hairy yet harmless spider - comes into view that people consciously think about the miniature cornucopia of life around them.

Even then, we consider our needs rather greater than theirs: how many of us stop to consider the effect we are having when we dig up paving slabs and find a bustling ant colony underneath? In his 2004 essay Dolittle and Darwin, Richard Dawkins pondered what contemporary foible or -ism future generations will castigate us for. Something I consider worth looking at in this context is scale-ism, which might be defined as the failure to apply a suitable level of consideration to life outside of 'everyday' measurements.

I've previously discussed near-microscopic water-based life, but even larger fauna visible without optical aids are easy to overlook when humans live in a primarily artificial environment - as over half our species now does. Several ideas spring to mind as to why breaking this scale-based prejudice could be important:
  1. Unthinking destruction or pollution of the natural environment doesn't just cause problems for 'poster' species, predominantly cuddly mammals. The invertebrates that live on or around larger life-forms may be critical to these ecosystems or even further afield. Removal of one, seemingly inconsequential, species could allow another to proliferate at potentially great cost to humans (for example, as disease vectors or agricultural pests). Food webs don't begin at the chicken and egg level we are used to from pre-school picture books onwards.
  2. The recognition that size doesn't necessarily equate to importance is critical to the preservation of the environment not just for nature's sake but for the future of humanity. Think of the power of the small water mould Phytophthora agathidicida which is responsible for killing the largest residents of New Zealand's podocarp forests, the ancient coniferous kauri Agathis australis. The conservation organisation Forest and Bird claims that kauri are the lynchpin for seventeen other plant species in these forests: losing them will have a severe domino effect.
  3. Early detection of small-scale pests may help to prevent their spread but this requires vigilance from the wider public, not just specialists; failure to recognise that tiny organisms may be far more than a slight nuisance can be immensely costly. In recent years there have been two cases in New Zealand where the accidental import of unwanted insects had severe if temporary repercussions for the economy. In late 2017 three car carriers were denied entry to Auckland when they were found to contain the brown marmorated stink bug Halyomorpha halys. If they had not been detected, it is thought this insect would have caused NZ$4 billion in crop damage over the next twenty years. Two years earlier, the Queensland fruit fly Bactrocera tryoni was found in central Auckland. As a consequence, NZ$15 million was spent eradicating it, a small price to pay for the NZ$5 billion per annum it would have cost the horticulture industry had it spread.
Clearly, these critters are to be ignored at our peril! Although the previous New Zealand government introduced the Predator Free 2050 programme, conservation organisations are claiming the lack of central funding and detailed planning makes the scheme unrealistic by a large margin (if anything, the official website suggests that local communities should organise volunteer groups and undertake most of the work themselves!) Even so, this scheme is intended to eradicate alien mammal species, presumably on the grounds that despite their importance, pest invertebrates are just too small to keep excluded permanently - the five introduced wasp species springing to mind at this point.
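The economics of early detection in those two incursions can be put on a single scale with some back-of-envelope arithmetic. This sketch uses only the NZ$ figures quoted above:

```python
# Back-of-envelope comparison of the two Auckland biosecurity incursions.
# All figures are the NZ$ estimates quoted in the text, not independent data.

stink_bug_damage = 4e9        # projected crop damage over twenty years
stink_bug_years = 20
fruit_fly_cleanup = 15e6      # actual cost of eradicating Bactrocera tryoni
fruit_fly_losses_pa = 5e9     # projected annual horticulture losses if established

# Annualised damage the stink bug interception avoided
print(f"Stink bug: ~NZ${stink_bug_damage / stink_bug_years / 1e6:.0f}M avoided per year")

# How many years of projected fruit-fly losses one eradication bill equates to
print(f"Fruit fly: eradication cost 1/{fruit_fly_losses_pa / fruit_fly_cleanup:.0f} of a single year's losses")
```

On these numbers, vigilance is cheap: the fruit fly eradication cost roughly a three-hundredth of one year's projected losses had the insect become established.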

It isn't just smaller-scale animals that are important (and how many people have you met who think the word 'animal' means only a creature with a backbone, not insects and other invertebrates?); minute and inconspicuous plants and fungi also need considering. As Auckland Botanic Gardens curator Bec Stanley is keen to point out, most of the public appear to suffer from plant blindness. Myrtle rust is a fungus that attacks native plants such as the iconic pōhutukawa or New Zealand Christmas tree, having most probably been carried on the wind to New Zealand from Australia. It isn't just New Zealand's Department of Conservation that is asking the public to watch out for it: the Ministry for Primary Industries also requests notification of its spread across the North Island, due to the potential damage to commercial species such as eucalyptus. This is yet another example of a botanical David versus Goliath situation going on right under our oblivious noses.

Even without the economic impact, paying attention to the smaller elements within our environment is undoubtedly beneficial. Thinking more holistically and less parochially is often a good thing when it comes to science and technology; paradigm shifts are rarely achieved by being comfortable and content with the status quo. Going beyond the daily centimetre-to-metre range that we are used to dealing with allows us to comprehend a bit more of the cosmic perspective that Neil deGrasse Tyson and other science communicators endeavour to promote - surely no bad thing when it comes to lowering boundaries between cultures in a time with increasingly sectarian states of mind?

Understanding anything a little out of the humdrum can be interesting in and of itself. As Brian Cox's BBC documentary series Wonders of Life showed, a slight change of scale can lead to apparent miracles, such as the insects that can walk up glass walls or support hundreds of times their own weight and shrug off equally outsized falls. Who knows, preservation or research into some of our small-scale friends might lead to considerable benefits too, as with the recent discovery of the immensely strong silk produced by Darwin's bark spider Caerostris darwini. Expanding our horizons isn't difficult, it just requires the ability to look down now and then and see what else is going on in the world around us.

Wednesday, 27 June 2018

A necessary evil? Is scientific whaling worthwhile - or even valid science?

There are some phrases - 'creation science' and 'military intelligence' spring readily to mind - that are worth rather more attention than a first or second glance. Another example is 'scientific whaling', which I believe deserves wider dissemination in the global public consciousness. I previously mentioned this predominantly Japanese phenomenon back in 2010 and it has subsequently had the habit of occasionally appearing in the news. It likewise has a tendency to aggravate emotions rather than promote rational discourse, making it difficult to discern exactly what is going on and whether it fulfils the first part of the phrase.

I remember being about ten years old when a classmate's older sister visited our school and gave a talk describing her work for Greenpeace. At the time the organisation was in the midst of its Save the Whale campaign, which from my memory was at the heart of environmental activism in the 1970s. As such, it gained a high level of international publicity and support, perhaps more so than any previous conservation campaign.

Although this finally led to a ban on whale hunting in 1986, several nations opted out. In addition to a small-scale continuation in some indigenous, traditional, whale-hunting communities, Iceland and Norway continue to hunt various species. As a result, various multi-national corporations have followed public opinion and removed their operations from these nations. Japan, on the other hand - with a much larger economy and population, yet home to a far greater whale-hunting operation - is a very different prospect.

There was an international outcry back in March when Norway announced that it was increasing its annual whaling quota by 28%. It's difficult to understand the motivation behind this rise, bearing in mind that Norway's shrinking whaling fleet is already failing to meet government quotas. Thanks to warming oceans, the remaining whale populations are moving closer to the North Pole, depriving the Norwegians of an easy catch. What is caught is used for human consumption as well as for pet and livestock food, as it is in Iceland, where the same tourists who go on whale-watching trips are then encouraged to tuck into cetacean steaks and whale burgers (along with the likes of puffin and other local delicacies).

Although we think of pre-1980s whaling as a voracious industry, there have been periods of temporary bans dating back to at least the 1870s, admittedly driven by profit-led concern over declining stocks rather than animal welfare or environmentalism in general. It wasn't just the meat that was economically significant; it's easy to forget that before modern plastics were invented, baleen served a multitude of purposes, while the bones and oil of cetaceans were also important materials.

But hasn't modern technology superseded the need for whale-based products? Thanks to a scientific research exemption, Japanese vessels in Antarctica and the North Pacific can work to catch quotas set by the Japanese government, independent of the International Whaling Commission. The relevant legislation also gives the Japanese Institute of Cetacean Research permission to sell whale meat for human consumption, even if it was obtained within the otherwise commercially off-limits Southern Ocean Whale Sanctuary. That's some loophole! So what research is being undertaken?

The various Japanese whaling programmes of the past thirty years have been conducted principally in the name of population management for Bryde's, fin, minke and sei whales. The role of these four species within their local ecosystems and the mapping of levels of toxic pollutants are among the research objectives. The overarching aim is simple: to evaluate whether the stocks are robust enough to allow the resumption of large-scale yet sustainable commercial whaling. In other words, Japan is killing a smaller number of whales in order to assess when it can start killing a greater number of whales!

Following examination of the Japanese whaling programmes, including the JARPA II study, environmental groups including the World Wildlife Fund, as well as the Australian Government, have declared Japan's scientific whaling not fit for purpose. The programmes have led to a very limited number of published research papers, especially when compared to the data released by other nations using non-lethal methods of assessment.

There is now an extremely wide range of non-lethal data collection techniques, such as biopsy sampling and GPS tagging. Small drones nicknamed 'snotbots' are being used to obtain samples from blowhole emissions, while even good old-fashioned sighting surveys, which rely on identifying individuals from diagnostics such as tail flukes, can be used for population statistics. Japanese scientists have repeatedly stated that they would stop whale hunting if other techniques proved as effective, yet the quality and quantity of the research they have published since the 1980s belies this claim.

After examining the results, even some Japanese researchers have admitted that killing whales has not proven to be an accurate way to gain data. Indeed, sessions in 2014 at the United Nations' International Court of Justice confirmed that if anything the Japanese whale quotas are far too small to provide definitive evidence for their objectives. To put it another way, Japan's Institute of Cetacean Research would have to kill far more whales to confirm whether the populations are healthy enough to bear the brunt of pre-1980s-scale commercial whaling! Anyone for a large dollop of irony?

Looking at the wider picture, does Japan really need increased volumes of cetacean flesh anyway? After the Second World War, food shortages led to whale meat becoming a primary protein source. Today, Japanese consumption has dropped to just one percent of what it was in the decade post-war. The domestic stockpile is no doubt becoming a burden, since whale meat is now even used in subsidised school lunches, despite the danger of heavy metal poisoning.

Due to the reduction in market size, Japan's scientific whaling programmes are no longer economically viable. So how is it that the long-term aim is to increase the catch to fully commercial levels - and who do they think will be eating it? Most countries abide by International Whaling Commission legislation, so presumably it will be for the domestic market. Although approximately half the nation's population supports whale hunting, possibly due to its traditional roots (or as a reaction to perceived Western cultural imperialism?), most no longer eat whale meat. So why are the Japanese steadfast in pursuing research that produces poor science, is unprofitable and internationally divisive, and generates an unwanted surplus?

The answer is: no-one really knows, at least outside of the Institute of Cetacean Research; and they're not saying. If ever there was a case of running on automatic pilot, this seems to be it. The name of science is being misused in order to continue with the needless exploitation of marine resources in the Pacific and Southern oceans. Thousands of whales have been unnecessarily slaughtered (I realise that's an emotive word, but it's worth using under the circumstances) at a time when non-lethal techniques are proving their superior research value. Other countries are under pressure to preserve fish stocks and reduce by-catch - by comparison Japan's attitude appears anachronistic in the extreme. By allowing the loophole of scientific whaling, the International Whaling Commission has compromised both science and cetaceans for something of about as much value as fox hunting.

Wednesday, 13 June 2018

Debunking DNA: A new search for the Loch Ness monster

I was recently surprised to read that a New Zealand genomics scientist, Neil Gemmell of Otago University, is about to lead an international team in the search for the Loch Ness monster. Surely, I thought, that myth has long since been put to bed and is now only exploited for the purposes of tourism? I remember that some years ago a fleet of vessels using side-scan sonar covered much of the loch without discovering anything conclusive. When combined with the fact that the most famous photograph is a known fake and the lack of evidence from the plethora of tourist cameras (never mind those of dedicated Nessie watchers) trained on the loch, the conclusion seems obvious.

I've put together a few points that don't bode well for the search, even assuming that Nessie is a 'living fossil' (à la coelacanth) rather than a supernatural creature; the usual explanation is a cold water-adapted descendant of the long-necked plesiosaurs, last known to have lived in the Cretaceous Period:
  1. Loch Ness was formed by glacial action around 10,000 years ago, so where did Nessie come from?
  2. That same glacial action implies there are no underwater caves to hide in.
  3. How could a single creature maintain a long-term population? (The earliest mentions date back thirteen hundred years.)
  4. What does such a large creature eat without noticeably reducing the loch's fish population?
  5. Why have no remains, such as large bones, ever been found - even on sonar?
All in all, I didn't think much of the expedition's chances and therefore I initially thought that the new research would be a distinct waste of money that could be much better used elsewhere in Scotland. After all, the Shetland seabird population is rapidly decreasing thanks to over-fishing, plastic pollution and loss of plankton due to increasing ocean temperatures. It would make more sense to protect the likes of puffins (who have suffered a 98% decline over the past 20 years), along with guillemots and kittiwakes amongst others.
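To put that puffin figure in annual terms - assuming, purely for illustration, a constant compound rate of decline, which real populations rarely follow - the arithmetic looks like this:

```python
# A 98% decline over 20 years, converted to an equivalent compound annual rate.
remaining_fraction = 0.02   # a 98% decline leaves 2% of the original population
years = 20

annual_factor = remaining_fraction ** (1 / years)   # year-on-year multiplier
print(f"~{(1 - annual_factor) * 100:.1f}% fewer puffins each year")  # ~17.8%
```

Losing nearly a fifth of the birds every year, compounded for two decades, is how a population quietly collapses.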

However, I then read that separate from the headline-grabbing monster hunt, the expedition's underlying purpose concerns environmental DNA sampling, a type of test never before used at Loch Ness. Gemmell's team have proffered a range of scientifically valid reasons for their project:
  1. To survey the loch's ecosystem, from bacteria upwards
  2. To demonstrate the scientific process to the public (presumably versus all the pseudoscientific nonsense surrounding cryptozoology)
  3. To test for trace DNA from potential but realistic causes of 'monster' sightings, such as large sturgeon or catfish
  4. To understand local biodiversity with a view to conservation, especially as regards the effects of invasive species such as the Pacific pink salmon.
Should the expedition find any trace of reptile DNA, this would of course prove the presence of something highly unusual in the loch. Gemmell has admitted he doubts they will find traces of any monster-sized creatures, plesiosaur or otherwise, noting that the largest unknown species likely to be found are bacteria. Doesn't it seem strange though that sometimes the best way to engage the public - and gain funding - for real science is to use what at best could be described as pseudoscience?
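The eDNA approach itself is conceptually simple: DNA fragments shed into the water are sequenced and matched against reference libraries to build a species inventory. Real pipelines use tools such as BLAST against curated databases; the toy sketch below, with invented species and sequences, only illustrates the matching idea:

```python
# Toy illustration of environmental DNA (eDNA) identification: fragments
# recovered from water samples are matched against reference sequences.
# Species and sequences here are invented for demonstration purposes.

reference = {
    "Atlantic salmon": "ATGGCCTTAGCCGTAAC",
    "European eel":    "ATGCGGTTAACCGGTTA",
    "Pike":            "TTGACCGGAATCCGTAA",
}

samples = ["GCCTTAGCC", "CCGGAATCC", "GCCTTAGCC"]  # fragments from the water column

# A species is 'detected' if its reference sequence contains a sampled fragment
detected = {species for fragment in samples
            for species, seq in reference.items() if fragment in seq}
print(sorted(detected))  # → ['Atlantic salmon', 'Pike']
```

The real analysis is statistical rather than exact substring matching, but the principle is the same: no physical specimen is needed, only the traces an animal leaves behind.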

Imagine if NASA could only get funding for Earth observation missions by including the potential to prove whether or not our planet was flat. (Incidentally, you might think a flat Earth is just the territory of a few nutbars, but a poll conducted in February this year suggests that fully two percent of Americans are convinced the Earth is disk-shaped, not spherical.)

Back to reality. Despite the great work of scientists who write popular books and give lectures on their areas of expertise, it seems that the media - particularly Hollywood - are the primary source of science knowledge for the general public. Hollywood's version of de-extinction science, particularly for ancient species such as dinosaurs, is far better known than the relatively unglamorous reality. Dr Beth Shapiro's book How to Clone a Mammoth, for example, is an excellent introduction to the subject, but would find it difficult to compete alongside the adventures of the Jurassic Park/World films.

The problem is that many if not most people want to believe in a world that is more exciting than their daily routine would suggest, with cryptozoology offering itself as an alternative to hard science thanks to its vast library of sightings over the centuries. Of course it's easy to scoff: one million tourists visit Loch Ness each year but consistently fail to find anything; surely in this case absence of evidence is enough to prove evidence of absence?

The Loch Ness monster is of course merely the tip of the mythological creature iceberg. The Wikipedia entry on cryptids lists over 170 species - can they all be just as suspect? The deep ocean is the best bet today for large creatures new to science. In a 2010 post I mentioned that the still largely unexplored depths could possibly contain unknown megafauna, such as a larger version of the oarfish that could prove to be the fabled sea serpent.

I've long had a fascination with large creatures, both real (dinosaurs, of course) and imaginary. When I was eight years old David Attenborough made a television series called Fabulous Animals and I had the tie-in book. In a similar fashion to the new Loch Ness research project, Attenborough used the programmes to bring natural history and evolutionary biology to a pre-teen audience via the lure of cryptozoology. For example, he discussed komodo dragons and giant squid, comparing extant megafauna to extinct species such as woolly mammoth and to mythical beasts, including the Loch Ness Monster.

A few years later, another television series that I avidly watched covered some of the same ground, namely Arthur C. Clarke's Mysterious World. No fewer than four episodes covered submarine cryptozoology, including the giant squid, sea serpents and of course Nessie him- (or her-) self. Unfortunately the quality of such programmes has plummeted since then, although as the popularity of the (frankly ridiculous) series Finding Bigfoot, now in its seventh year, shows, the public has an inexhaustible appetite for this sort of stuff.

I've read that it is estimated only about ten percent of extinct species have been discovered in the fossil record, so there are no doubt some potential surprises out there (Homo floresiensis, anyone?) However, the evidence - or lack thereof - seems firmly stacked against the Loch Ness monster. What is unlikely, though, is that the latest expedition will dampen the spirits of the cryptid believers. A wolf-like corpse recently found in Montana, USA, may turn out to be a coyote-wolf hybrid, but this hasn't stopped the Bigfoot and werewolf fans from spreading X-Files-style theories across the internet. I suppose it's mostly harmless fun, and if Professor Gemmell's team can spread some real science along the way, who am I to argue? Long live Nessie!

Wednesday, 30 May 2018

Photons vs print: the pitfalls of online science research for non-scientists


It's common knowledge that school teachers and university lecturers are tired of discovering that their students' research is often limited to a single search phrase on Google or Bing. Ignoring the minimal amount of rewriting that often accompanies this shoddy behaviour - leading to some very same-y coursework - one of the most important questions to arise is how easy it is to confirm the veracity of online material compared to conventionally-published sources. This is especially important when it comes to science research, particularly when the subject matter involves new hypotheses and cutting-edge ideas.

One of the many problems with the public's attitude to science is that it is nearly always thought of as an expanding body of knowledge rather than as a toolkit to explore reality. Popular science books such as Bill Bryson's 2003 best-seller A Short History of Nearly Everything follow this convention, disseminating facts whilst failing to illuminate the methodologies behind them. If non-scientists don't understand how science works is it little wonder that the plethora of online sources - of immensely variable quality - can cause confusion?

The use of models and the concurrent application of two seemingly conflicting theories (such as Newton's Universal Gravitation and Einstein's General Theory of Relativity) can only be understood with a grounding in how the scientific method(s) proceed. By assuming that scientific facts are largely immutable, non-scientists can become unstuck when trying to summarise research outcomes, regardless of the difficulty in understanding the technicalities. Of course this isn't true for every theory: the Second Law of Thermodynamics is unlikely to ever need updating; but as the discovery of dark energy hints, even Einstein's work on gravity might need amending in future. Humility and caution should be the bywords of hypotheses not yet verified as working theories; dogma and unthinking belief have their own place elsewhere!
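A concrete illustration of this concurrent use: for everyday orbital mechanics Newtonian gravity is entirely adequate, but general relativity predicts a small additional perihelion advance per orbit of 6πGM/(c²a(1−e²)). Plugging in textbook values for Mercury (the constants below are standard figures, quoted here from memory) recovers the famous ~43 arcseconds per century that Newton's theory cannot account for:

```python
import math

# Einstein's perihelion-advance correction to a Newtonian orbit:
# delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2)) radians per orbit.
GM_SUN = 1.32712e20       # gravitational parameter of the Sun, m^3/s^2
C = 2.99792458e8          # speed of light, m/s
A_MERCURY = 5.7909e10     # semi-major axis of Mercury's orbit, m
E_MERCURY = 0.2056        # orbital eccentricity of Mercury
PERIOD_DAYS = 87.969      # Mercury's orbital period, days

advance_rad = 6 * math.pi * GM_SUN / (C**2 * A_MERCURY * (1 - E_MERCURY**2))
orbits_per_century = 100 * 365.25 / PERIOD_DAYS
arcsec_per_century = advance_rad * orbits_per_century * math.degrees(1) * 3600

print(f"{arcsec_per_century:.1f} arcseconds per century")  # ~43.0
```

The correction is minuscule for a single orbit, which is precisely why Newton remains perfectly serviceable for most purposes while Einstein takes over at the margins - two 'conflicting' theories in productive coexistence.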

In a 1997 talk Richard Dawkins stated that the methods of science are 'testability, evidential support, precision, quantifiability, consistency, intersubjectivity, repeatability, universality, and independence of cultural milieu.' The last phrase implies that the methodologies and conclusions for any piece of research should not differ from nation to nation. Of course the real world intrudes into this model and so culture, gender, politics and even religion play their part as to what is funded and how the results are presented (or even which results are reported and which obfuscated).

For those who want to stay ahead of the crowd by disseminating the most recent breakthroughs it seems obvious that web resources are far superior to most printed publications, professional journals excepted - although the latter are rarely suitable for non-specialist consumption. The expenses associated with producing popular science books mean that online sources are often the first port of call.

Therein lies the danger: in the rush to skim seemingly inexhaustible yet easy to find resources, non-professional researchers frequently fail to differentiate between articles written by scientists, those by journalists with science training, those by unspecialised writers, largely on general news sites, and those by biased individuals. It's usually quite easy to spot material from cranks, even within the quagmire of the World Wide Web (searching for proof that the Earth is flat will generate tens of millions of results) but online content written by intelligent people with an agenda can be more difficult to discern. Sometimes, the slick design of a website offers reassurance that the content is more authentic than it really is, the visual aspects implying an authority that is not justified.

So in the spirit of science (okay, so it's hardly comprehensive being just a single trial) I recently conducted a simple experiment. Having read an interesting hypothesis in a popular science book I borrowed from the library last year, I decided to see what Google's first few pages had to say on the same subject, namely that the Y chromosome has been shrinking over the past few hundred million years to such an extent that its days - or in this case, millennia - are numbered.

I had previously read about the role of artificial oestrogens and other disruptive chemicals in the loss of human male fertility, but the decline in the male chromosome itself was something new to me. I therefore did a little background research first. One of the earliest sources I could find for this contentious idea was a 2002 paper in the journal Nature, in which the Australian geneticist Professor Jennifer Graves described the steady shrinking of the Y chromosome in the primate order. Her extrapolation of the data, combined with the knowledge that several rodent groups have already lost their Y chromosome, suggested that the Homo sapiens equivalent has perhaps no more than ten million years left before it disappears.

2003 saw the publication of British geneticist Bryan Sykes' controversial book Adam's Curse: A Future Without Men. His prediction, based on the rate of atrophy in the human Y chromosome, was that it would only last another 125,000 years. To my mind, this eighty-fold difference in timescales suggests that in these early days of the hypothesis, very little could be confirmed with any degree of certainty.

Back to the experiment itself. The top results for 'Y chromosome disappearing' and similar search phrases lead to articles published between 2009 and 2018. They mostly fall into one of two categories: (1) that the Y chromosome is rapidly degenerating and that males, at least of humans and potentially all other mammal species, are possibly endangered; and (2) that although the Y chromosome has shrunk over the past few hundred million years, it has been stable for the past 25 million and so is no longer deteriorating. A third, far less common category concerns informal polls of chromosomal researchers, who have been fairly evenly divided between the two opinions and thus nicknamed the "leavers" and the "remainers". Considering the wildly differing timescales mentioned above, perhaps this lack of consensus is proof of science in action; there just hasn't been firm enough evidence for either category to claim victory.

What is common to many of the results is that inflammatory terms and hyperbole are prevalent, with little of the caution you would hope to find around cutting-edge research. Article titles include 'Last Man on Earth?', 'The End of Men' and 'Sorry, Guys: Your Y Chromosome May Be Doomed', with paragraph text containing provocative phrases such as 'poorly designed' and 'the demise of men'. This approach is friendly to search engines while amalgamating socio-political concerns with the science.

You might expect the results to show a change in trend over time, first preferring one category and then the other, but this doesn't appear to be the case. Rearranged in date order, the search results across the period 2009-2017 include both opinions running concurrently. This year, however, has seen a change, with the leading 2018 search results so far only offering support to the rapid degeneration hypothesis. The reason for this difference is readily apparent: the publication of a Danish study that bolsters support for it. This new report is available online but is difficult for a non-specialist to digest. Therefore, most researchers such as myself would have to either rely upon second-hand summaries or, if there were enough time, wait for the next popular science book that discusses it in layman's terms.

As it is, I cannot tell from my skimming approach to the subject whether the new research is thorough enough to be completely reliable. For example, it only examined the genes of sixty-two Danish men, and I have no idea whether this is a large enough sample to be considered valid beyond doubt. However, all of the 2018 online material I read accepted the report without question, which at least suggests that after a decade and a half of vacillating between two theories, there may now be an answer. Even so, having examined the content in the "remainers" category, I wonder how the new research confirms a long-term trend rather than a short-term blip in chromosomal decline. I can't help thinking that the sort of authoritative synthesis found in the better sort of popular science books would answer these queries - such is my faith in the general superiority of print volumes!
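To give a feel for why a sample of sixty-two raises an eyebrow, here is a minimal back-of-the-envelope sketch - nothing to do with the Danish study's actual methods, and the 50% proportion and comparison sample sizes are purely illustrative assumptions - showing how the uncertainty around an estimated proportion shrinks as the sample grows:

```python
import math

def proportion_ci_halfwidth(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion p estimated from a sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose, purely for illustration, some trait were seen in half the sample.
for n in (62, 620, 6200):
    hw = proportion_ci_halfwidth(0.5, n)
    print(f"n = {n:5d}: estimate 0.50 +/- {hw:.3f}")
```

With n = 62 the margin of error is roughly plus or minus twelve percentage points; it takes a hundred times as many subjects to get it below two. Small samples aren't worthless, but they do leave exactly the kind of wide error bars that the breathless 2018 coverage never mentions.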

Of course books have been known to emphasise pet theories and denigrate those of opponents, but the risk of similar issues for online content is far greater. Professor Graves' work seems to dominate the "leavers" category, via her various papers subsequent to her 2002 original, but just about every reference to them is contaminated with overly emotive language. I somehow doubt that if her research was only applicable to other types of animals, say reptiles, there would be nearly so many online stories covering it, let alone the colourful phrasing that permeates this topic. The history of the Y chromosome is as extraordinary as the chromosome itself, but treating serious scientific speculation - and some limited experimental evidence - with tabloid reductionism and show business hoopla won't help when it comes to non-specialists researching the subject.

There may be an argument here for the education system to systematically teach such basics as common sense and rigour, in the hopes of giving non-scientists a better chance of detecting baloney. This of course includes the ability to accurately filter online material during research. Personally, I tend to do a lot of cross-checking before committing to something I haven't read about on paper. If even such highly-resourced and respected websites as the BBC Science News site can make howlers (how about claiming that chimpanzees are human ancestors?) why should we take any of these resources on trust? Unfortunately, the seductive ease with which information can be found on the World Wide Web does not in any way correlate with its quality. As I found out with the shrinking Y chromosome hypothesis, there are plenty of traps for the unwary.