Wednesday, 27 June 2018

A necessary evil? Is scientific whaling worthwhile - or even valid science?

There are some phrases - 'creation science' and 'military intelligence' spring readily to mind - that are worth rather more attention than a first or second glance. Another example is 'scientific whaling', which I believe deserves wider dissemination in the global public consciousness. I previously mentioned this predominantly Japanese phenomenon back in 2010 and it has subsequently had the habit of occasionally appearing in the news. It likewise has a tendency to aggravate emotions rather than promote rational discourse, making it difficult to discern exactly what is going on and whether it fulfils the first part of the phrase.

I remember being about ten years old when a classmate's older sister visited our school and gave a talk describing her work for Greenpeace. At the time this organisation was in the midst of the Save the Whale campaign, which from my memory appears to have been at the heart of environmental activism in the 1970s. As such, it gained a high level of international publicity and support, perhaps more so than any previous conservation campaign.

Although this finally led to a moratorium on commercial whaling in 1986, several nations opted out. In addition to a small-scale continuation in some indigenous, traditional, whale-hunting communities, Iceland and Norway continue to hunt various species. As a result, various multi-national corporations have followed public opinion and removed their operations from these nations. Japan, on the other hand - with a much larger economy and population, and home to a far greater whale-hunting operation - is a very different prospect.

There was an international outcry back in March when Norway announced that it was increasing its annual whaling quota by 28%. It's difficult to understand the motivation behind this rise, bearing in mind that Norway's shrinking whaling fleet is already failing to meet government quotas. Thanks to warming oceans, the remaining whale populations are moving closer to the North Pole, depriving the Norwegians of an easy catch. What is caught is used for human consumption as well as for pet and livestock food, as it is in Iceland, where the same tourists who go on whale-watching trips are then encouraged to tuck into cetacean steaks and whale burgers (along with the likes of puffin and other local delicacies).

Although we think of pre-1980s whaling as a voracious industry, there have been periods of temporary bans dating back to at least the 1870s, admittedly driven by profit-led concern about declining stocks rather than animal welfare or environmentalism in general. It wasn't just the meat that was economically significant; it's easy to forget that before modern plastics were invented, baleen served a multitude of purposes, while the bones and oil of cetaceans were also important materials.

But hasn't modern technology superseded the need for whale-based products? Thanks to a scientific research exemption, Japanese vessels in Antarctica and the North Pacific can work to catch quotas set by the Japanese government, independent of the International Whaling Commission. The relevant legislation also gives the Japanese Institute of Cetacean Research permission to sell whale meat for human consumption, even if it was obtained within the otherwise commercially off-limits Southern Ocean Whale Sanctuary. That's some loophole! So what research is being undertaken?

The various Japanese whaling programmes of the past thirty years have been conducted principally in the name of population management for Bryde's, fin, minke and sei whales. The role of these four species within their local ecosystems and the mapping of levels of toxic pollutants are among the research objectives. The overarching aim is simple: to evaluate whether the stocks are robust enough to allow the resumption of large-scale yet sustainable commercial whaling. In other words, Japan is killing a smaller number of whales to assess when it can start killing a greater number of whales!

Following examination of the Japanese whaling programmes, including the JARPA II study, environmental groups such as the World Wildlife Fund, as well as the Australian Government, have declared Japan's scientific whaling not fit for purpose. The programmes have led to a very limited number of published research papers, especially when compared to the data released by other nations using non-lethal methods of assessment.

There is now an extremely wide range of non-lethal data collection techniques, such as biopsy sampling and GPS tagging. Small drones nicknamed 'snotbots' are being used to obtain samples from blowhole emissions, while even good old-fashioned sighting surveys, which rely on identifying individuals from distinctive features such as tail flukes, can be used for population statistics. Japanese scientists have repeatedly stated that they would stop whale hunting if other techniques proved as effective, yet the quality and quantity of the research they have published since the 1980s belies this claim.

After examining the results, even some Japanese researchers have admitted that killing whales has not proven to be an accurate way to gain data. Indeed, sessions in 2014 at the United Nations' International Court of Justice confirmed that, if anything, the Japanese whale quotas are far too small to provide definitive evidence for their objectives. To put it another way, Japan's Institute of Cetacean Research would have to kill far more whales to confirm whether the populations are healthy enough to bear the brunt of commercial whaling on a pre-1980s scale! Anyone for a large dollop of irony?

Looking at the wider picture, does Japan really need increased volumes of cetacean flesh anyway? After the Second World War, food shortages led to whale meat becoming a primary protein source. Today, Japanese consumption has dropped to just one percent of what it was in the post-war decade. The domestic stockpile is no doubt becoming a burden: whale meat is now even used in subsidised school lunches, despite the danger of heavy metal poisoning.

Due to the reduction in market size, Japan's scientific whaling programmes are no longer economically viable. So how is it that the long-term aim is to increase the catch to fully commercial levels - and who do they think will be eating it? Most countries abide by the International Whaling Commission's legislation, so presumably it will be for the domestic market. Although approximately half the nation's population support whale hunting, possibly due to its traditional roots (or as a reaction to perceived Western cultural imperialism?), most no longer eat whale meat. So why are the Japanese steadfast in pursuing research that produces poor science, is unprofitable, is internationally divisive, and generates an unwanted surplus?

The answer is: no-one really knows, at least outside of the Institute of Cetacean Research; and they're not saying. If ever there was a case of running on automatic pilot, this seems to be it. The name of science is being misused in order to continue with the needless exploitation of marine resources in the Pacific and Southern oceans. Thousands of whales have been unnecessarily slaughtered (I realise that's an emotive word, but it's worth using under the circumstances) at a time when non-lethal techniques are proving their superior research value. Other countries are under pressure to preserve fish stocks and reduce by-catch - by comparison Japan's attitude appears anachronistic in the extreme. By allowing the loophole of scientific whaling, the International Whaling Commission has compromised both science and cetaceans for something of about as much value as fox hunting.

Wednesday, 13 June 2018

Debunking DNA: A new search for the Loch Ness monster

I was recently surprised to read that a New Zealand genomics scientist, Neil Gemmell of Otago University, is about to lead an international team in the search for the Loch Ness monster. Surely, I thought, that myth has long since been put to bed and is only exploited for the purposes of tourism? I remember some years ago that a fleet of vessels using side-scan sonar had covered much of the loch without discovering anything conclusive. When combined with the fact that the most famous photograph is a known fake, and the lack of evidence from the plethora of tourist cameras (never mind those of dedicated Nessie watchers) that have been trained on the spot, the conclusion seems obvious.

I've put together a few points that don't bode well for the search, even assuming that Nessie is a 'living fossil' (à la coelacanth) rather than a supernatural creature; the usual explanation is a cold water-adapted descendant of a long-necked plesiosaur - last known to have lived in the Cretaceous Period:
  1. Loch Ness was formed by glacial action around 10,000 years ago, so where did Nessie come from? 
  2. Glacial action implies no underwater caves for hiding in
  3. How can a single creature maintain a long-term population (the earliest mentions date back thirteen hundred years)? 
  4. What does such a large creature eat without noticeably reducing the loch's fish population?
  5. Why have no remains ever been found, such as large bones, even on sonar?
All in all, I didn't think much of the expedition's chances, and so I initially thought that the new research would be a distinct waste of money that could be much better used elsewhere in Scotland. After all, the Shetland seabird population is rapidly decreasing thanks to over-fishing, plastic pollution and loss of plankton due to increasing ocean temperatures. It would make more sense to protect the likes of puffins (which have suffered a 98% decline over the past 20 years), along with guillemots and kittiwakes amongst others.

However, I then read that separate from the headline-grabbing monster hunt, the expedition's underlying purpose concerns environmental DNA sampling, a type of test never before used at Loch Ness. Gemmell's team have proffered a range of scientifically valid reasons for their project:
  1. To survey the loch's ecosystem, from bacteria upwards 
  2. Demonstrate the scientific process to the public (presumably versus all the pseudoscientific nonsense surrounding cryptozoology)
  3. Test for trace DNA from potential but realistic causes of 'monster' sightings, such as large sturgeon or catfish 
  4. Understand local biodiversity with a view to conservation, especially as regards the effect caused by invasive species such as the Pacific pink salmon. 
Should the expedition find any trace of reptile DNA, this would of course prove the presence of something highly unusual in the loch. Gemmell has admitted he doubts they will find traces of any monster-sized creatures, plesiosaur or otherwise, noting that the largest unknown species likely to be found are bacteria. Doesn't it seem strange though that sometimes the best way to engage the public - and gain funding - for real science is to use what at best could be described as pseudoscience?

Imagine if NASA could only get funding for Earth observation missions by including the potential to prove whether our planet was flat or not? (Incidentally, you might think a flat Earth was just the territory of a few nutbars, but a poll conducted in February this year suggests that fully two percent of Americans are convinced the Earth is a disk, not spherical).

Back to reality. Despite the great work of scientists who write popular books and lecture on their areas of expertise, it seems that the media - particularly Hollywood - are the primary source of science knowledge for the general public. Hollywood's version of de-extinction science, particularly for ancient species such as dinosaurs, seems to be far better known than the relatively unglamorous reality. Dr Beth Shapiro's book How to Clone a Mammoth, for example, is an excellent introduction to the subject, but would find it difficult to compete alongside the adventures of the Jurassic World/Park films.

The problem is that many if not most people want to believe in a world that is more exciting than their daily routine would suggest, with cryptozoology offering itself as an alternative to hard science thanks to its vast library of sightings over the centuries. Of course it's easy to scoff: one million tourists visit Loch Ness each year but consistently fail to find anything; surely in this case absence of evidence is enough to prove evidence of absence?

The Loch Ness monster is of course merely the tip of the mythological creature iceberg. The Wikipedia entry on cryptids lists over 170 species - can they all be just as suspect? The deep ocean is the best bet today for large creatures new to science. In a 2010 post I mentioned that the still largely unexplored depths could possibly contain unknown megafauna, such as a larger version of the oarfish that could prove to be the fabled sea serpent.

I've long had a fascination with large creatures, both real (dinosaurs, of course) and imaginary. When I was eight years old David Attenborough made a television series called Fabulous Animals and I had the tie-in book. In a similar fashion to the new Loch Ness research project, Attenborough used the programmes to bring natural history and evolutionary biology to a pre-teen audience via the lure of cryptozoology. For example, he discussed Komodo dragons and giant squid, comparing extant megafauna to extinct species such as the woolly mammoth and to mythical beasts, including the Loch Ness Monster.

A few years later, another television series that I avidly watched covered some of the same ground, namely Arthur C. Clarke's Mysterious World. No fewer than four episodes covered submarine cryptozoology, including the giant squid, sea serpents and of course Nessie him (or her) self. Unfortunately the quality of such programmes has plummeted since, although as the popularity of the (frankly ridiculous) series Finding Bigfoot, now in its seventh year, shows, the public have an inexhaustible appetite for this sort of stuff.

I've read that it is estimated only about ten percent of extinct species have been discovered in the fossil record, so there are no doubt some potential surprises out there (Homo floresiensis, anyone?). However, the evidence - or lack thereof - seems firmly stacked against the Loch Ness monster. What is unlikely, though, is that the latest expedition will dampen the spirits of the cryptid believers. A recent wolf-like corpse found in Montana, USA, may turn out to be a coyote-wolf hybrid, but this hasn't stopped the Bigfoot and werewolf fans from spreading X-Files-style theories across the internet. I suppose it's mostly harmless fun, and if Professor Gemmell's team can spread some real science along the way, who am I to argue with that? Long live Nessie!

Wednesday, 30 May 2018

Photons vs print: the pitfalls of online science research for non-scientists


It's common knowledge that school teachers and university lecturers are tired of discovering that their students' research is often limited to a single search phrase on Google or Bing. Ignoring the minimal amount of rewriting that often accompanies this shoddy behaviour - leading to some very same-y coursework - one of the most important questions to arise is how easy it is to confirm the veracity of online material compared with conventionally published sources. This is especially important when it comes to science research, particularly when the subject matter involves new hypotheses and cutting-edge ideas.

One of the many problems with the public's attitude to science is that it is nearly always thought of as an expanding body of knowledge rather than as a toolkit to explore reality. Popular science books such as Bill Bryson's 2003 best-seller A Short History of Nearly Everything follow this convention, disseminating facts whilst failing to illuminate the methodologies behind them. If non-scientists don't understand how science works is it little wonder that the plethora of online sources - of immensely variable quality - can cause confusion?

The use of models and the concurrent application of two seemingly conflicting theories (such as Newton's Universal Gravitation and Einstein's General Theory of Relativity) can only be understood with a grounding in how the scientific method proceeds. By assuming that scientific facts are largely immutable, non-scientists can come unstuck when trying to summarise research outcomes, regardless of the difficulty in understanding the technicalities. Of course this isn't true for every theory: the Second Law of Thermodynamics is unlikely ever to need updating; but as the discovery of dark energy hints, even Einstein's work on gravity might need amending in future. Humility and caution should be the bywords of hypotheses not yet verified as working theories; dogma and unthinking belief have their own place elsewhere!

In a 1997 talk Richard Dawkins stated that the methods of science are 'testability, evidential support, precision, quantifiability, consistency, intersubjectivity, repeatability, universality, and independence of cultural milieu.' The last phrase implies that the methodologies and conclusions for any piece of research should not differ from nation to nation. Of course the real world intrudes into this model and so culture, gender, politics and even religion play their part as to what is funded and how the results are presented (or even which results are reported and which obfuscated).

For those who want to stay ahead of the crowd by following the most recent breakthroughs, it seems obvious that web resources are far superior to most printed publications, professional journals excepted - although the latter are rarely suitable for non-specialist consumption. The expense associated with producing popular science books means that online sources are often the first port of call.

Therein lies the danger: in the rush to skim seemingly inexhaustible yet easy-to-find resources, non-professional researchers frequently fail to differentiate between articles written by scientists, those by journalists with science training, those by unspecialised writers (largely on general news sites), and those by biased individuals. It's usually quite easy to spot material from cranks, even within the quagmire of the World Wide Web (searching for proof that the Earth is flat will generate tens of millions of results), but online content written by intelligent people with an agenda can be more difficult to discern. Sometimes the slick design of a website offers reassurance that the content is more authentic than it really is, the visual aspects implying an authority that is not justified.

So in the spirit of science (okay, so it's hardly comprehensive being just a single trial) I recently conducted a simple experiment. Having read an interesting hypothesis in a popular science book I borrowed from the library last year, I decided to see what Google's first few pages had to say on the same subject, namely that the Y chromosome has been shrinking over the past few hundred million years to such an extent that its days - or in this case, millennia - are numbered.

I had previously read about the role of artificial oestrogens and other disruptive chemicals in the loss of human male fertility, but the decline in the male chromosome itself was something new to me. I therefore did a little background research first. One of the earliest sources I could find for this contentious idea was a 2002 paper in the journal Nature, in which the Australian geneticist Professor Jennifer Graves described the steady shrinking of the Y chromosome in the primate order. Her extrapolation of the data, combined with the knowledge that several rodent groups have already lost their Y chromosomes, suggested that the Homo sapiens equivalent has perhaps no more than ten million years left before it disappears.

2003 saw the publication of British geneticist Bryan Sykes' controversial book Adam's Curse: A Future Without Men. His prediction, based on the rate of atrophy in the human Y chromosome, was that it will only last another 125,000 years. To my mind, this eighty-fold difference in timescales suggests that, in those early days of the hypothesis, very little could be confirmed with any degree of certainty.
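For what it's worth, the eighty-fold figure is just the ratio of the two published estimates; a trivial back-of-the-envelope check (using only the numbers quoted above) confirms it:

```python
# The two published estimates for the remaining lifespan of the human
# Y chromosome, as quoted above.
graves_estimate_years = 10_000_000   # Graves (2002): ~10 million years
sykes_estimate_years = 125_000       # Sykes (2003): ~125,000 years

# Graves' extrapolation is eighty times longer than Sykes' prediction.
ratio = graves_estimate_years / sykes_estimate_years
print(ratio)  # 80.0
```

When two estimates of the same quantity differ by nearly two orders of magnitude, it is a fair bet that the underlying data cannot yet constrain the hypothesis.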

Back to the experiment itself. The top results for 'Y chromosome disappearing' and similar search phrases lead to articles published between 2009 and 2018. They mostly fall into one of two categories: (1) that the Y chromosome is rapidly degenerating and that males, at least of humans and potentially all other mammal species, are possibly endangered; and (2) that although the Y chromosome has shrunk over the past few hundred million years, it has been stable for the past 25 million and so is no longer deteriorating. A third and far less common category concerns the informal polls taken of chromosomal researchers, who have been fairly evenly divided between the two opinions and thus nicknamed the "leavers" and the "remainers". Considering the wildly differing timescales mentioned above, perhaps this lack of consensus is proof of science in action; there just hasn't been firm enough evidence for either category to claim victory.

What is common to many of the results is that inflammatory terms and hyperbole are prevalent, with little of the caution you would hope to find accompanying cutting-edge research. Article titles include 'Last Man on Earth?', 'The End of Men' and 'Sorry, Guys: Your Y Chromosome May Be Doomed', with paragraph text containing provocative phrases such as 'poorly designed' and 'the demise of men'. This approach is search-engine friendly while amalgamating socio-political concerns with the science.

You might expect that the results would show a trend over time, first preferring one category and then the other, but this doesn't appear to be the case. Rearranged in date order, the search results across the period 2009-2017 include both opinions running concurrently. This year, however, has seen a change, with the leading 2018 search results so far only offering support to the rapid degeneration hypothesis. The reason for this difference is readily apparent: the publication of a Danish study that bolsters support for it. This new report is available online, but is difficult for a non-specialist to digest. Therefore, most researchers such as myself would have to either rely upon second-hand summaries or, if there was enough time, wait for the next popular science book that discusses it in layman's terms.

As it is, I cannot tell from my skimming approach to the subject whether the new research is thorough enough to be completely reliable. For example, it only examined the genes of sixty-two Danish men, so I have no idea if this is a large enough sample to be considered valid beyond doubt. However, all of the 2018 online material I read accepted the report without question, which at least suggests that after a decade and a half of vacillating between two hypotheses, there may now be an answer. Even so, having examined the content in the "remainers" category, I wonder how the new research confirms a long-term trend rather than a short-term blip in chromosomal decline. I can't help thinking that the sort of authoritative synthesis found in the better sort of popular science books would answer these queries, such is my faith in the general superiority of print volumes!

Of course books have been known to emphasise pet theories and denigrate those of opponents, but the risk of similar issues for online content is far greater. Professor Graves' work seems to dominate the "leavers" category, via her various papers subsequent to her 2002 original, but just about every reference to them is contaminated with overly emotive language. I somehow doubt that if her research was only applicable to other types of animals, say reptiles, there would be nearly so many online stories covering it, let alone the colourful phrasing that permeates this topic. The history of the Y chromosome is as extraordinary as the chromosome itself, but treating serious scientific speculation - and some limited experimental evidence - with tabloid reductionism and show business hoopla won't help when it comes to non-specialists researching the subject.

There may be an argument here for the education system to systematically teach such basics as common sense and rigour, in the hopes of giving non-scientists a better chance of detecting baloney. This of course includes the ability to accurately filter online material during research. Personally, I tend to do a lot of cross-checking before committing to something I haven't read about on paper. If even such highly-resourced and respected websites as the BBC Science News site can make howlers (how about claiming that chimpanzees are human ancestors?) why should we take any of these resources on trust? Unfortunately, the seductive ease with which information can be found on the World Wide Web does not in any way correlate with its quality. As I found out with the shrinking Y chromosome hypothesis, there are plenty of traps for the unwary.

Tuesday, 15 May 2018

Troublesome trawling: how New Zealand's fishing industry hid the truth about by-kill

I recently signed a petition to reduce by-kill in New Zealand waters by installing cameras on all commercial fishing vessels. Forest & Bird and World Wildlife Fund New Zealand are jointly campaigning for this monitoring, as only a small percentage of boats currently have cameras. The previous New Zealand government agreed to the wider introduction, but Fisheries Minister Stuart Nash is considering reversing this due to industry pressure. Considering that the current administration is a coalition involving the Green Party, this seems highly ironic. Is this yet another nail in the coffin of New Zealand's tourist brand as 100% Pure?

Despite requests from the fishing industry not to release it to the public, on-board footage shows the extent of the by-kill. High numbers of rare and endangered species have been drowned in nets, from seabirds such as wandering albatross and yellow-eyed penguins/hoiho, to cetaceans (there are thought to be only fifty or so Māui's dolphin/popoto left), seals and sea lions.

Many of the cameras already installed on New Zealand boats failed in the first three months due to inadequate waterproofing; when allied with the fact that the supplier of the technology was an integrated part of the seafood industry, there's more than a whiff of something fishy going on. Although official statistics are often considered to be of dubious quality, Occam's razor can be used to decipher the by-kill figures as they have been reported in the past decade. Only three percent of New Zealand's set net boats are officially monitored, yet they account for the vast majority of the recorded by-kill. Given a choice between sheer coincidence (i.e. only monitored vessels are catching large numbers of non-target species) and severe under-reporting from unmonitored boats, it is obvious that the latter hypothesis follows the law of parsimony.
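The parsimony argument can be made concrete with a little arithmetic. The exact share of recorded by-kill attributable to monitored boats isn't stated in the official figures I've seen, so the 90% value in this sketch is purely illustrative of the 'vast majority'; the point is how lopsided the per-boat rates become if we take the coincidence hypothesis at face value:

```python
def per_boat_bykill_ratio(monitored_fraction, monitored_share_of_records):
    """Return how many times more by-kill an average monitored boat
    records compared with an average unmonitored boat, taking the
    official figures at face value."""
    monitored_rate = monitored_share_of_records / monitored_fraction
    unmonitored_rate = (1 - monitored_share_of_records) / (1 - monitored_fraction)
    return monitored_rate / unmonitored_rate

# 3% of set net boats are monitored; assume (illustratively) that they
# account for 90% of all recorded by-kill.
print(round(per_boat_bykill_ratio(0.03, 0.90)))  # 291
```

Under the coincidence hypothesis, each monitored boat would have to be catching non-target species at roughly three hundred times the rate of its unmonitored neighbours. Systematic under-reporting by unmonitored boats is by far the more parsimonious explanation.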

Sadly, widespread deception by New Zealand's fishing industry isn't something new. A third-party report involving undercover operatives stated that between 1950 and 2010 up to 2.7 times the official tonnage of fish was actually being caught, peaking in 1990. All this comes from an industry that is laden with checks and balances, not to mention sustainability certificates. Killing marine mammals within the country's Exclusive Economic Zone isn't just a minor inconvenience: since 1978 it's been illegal, with severe fines and even prison sentences for those convicted. Small wonder, then, that the majority of by-kill has been undeclared.

What is equally sad is the lack of interest from the New Zealand public in resolving the problems. After all, over ninety percent of the population are not vegetarian, so we must assume the vast majority enjoy seafood in their diet. The rapid replacement of over-fished sharks with Humboldt squid in the Sea of Cortez off Mexico's Pacific coast shows how the removal of a key species can severely affect food webs. If New Zealanders are to continue to enjoy eating fish with their chips, the sea needs better protection.

Over the past decade, other nations have shown commitment to reducing by-kill and lessening waste. In 2010 the British celebrity chef Hugh Fearnley-Whittingstall launched the Fish Fight campaign to stop the discard of about half of the European Union's catch (due to it being either undersized or from non-quota species). Immense public support over the subsequent four years led to phased changes in the European Union's Common Fisheries Policy, proof that citizen action can make fundamental improvements.

Incidentally, it wasn't even a case of division along party lines; I was living in the UK at the time and wrote to my Labour Member of Parliament, who replied in a typically circumlocutory fashion that she would look into the matter! Even the then Conservative Prime Minister David Cameron agreed that EU policy needed a radical overhaul, a rare instance of cross-party sense and sensibility over pride and prejudice.

So what solutions are there to reducing by-kill? After all, installing cameras would only be the first step in assessing the scale of the problem, not removing it. Since Australia started monitoring its long-line tuna fleet, there has been a whopping seven hundred percent increase in the reporting of seabird and marine mammal by-kill. Some seaboard states in the USA have already banned set netting, which is still in widespread use in New Zealand. Several areas around the New Zealand coast such as between Kaipara Harbour and Mokau already prohibit this method of fishing - in this case to protect the few remaining Māui's dolphin - so there are precedents.

In addition, there are programmes currently testing new technology that may provide the answer. In 2002 the now charitable trust Southern Seabird Solutions was created to reduce by-kill of albatross and other endangered pelagic species. This alliance of fishing industry leaders, recreational fishers, researchers and government analysts is trialling wondrously-named devices such as the Brady Bird Baffler, Hook Pod, tori lines and warp scarers.

Elsewhere, nocturnal experiments have been conducted using acoustic pingers to deter dolphins, although the results to date aren't especially promising. Equally dubious is the amended trawl net design for squid fishing vessels that incorporates the Sea Lion Exclusion Device (SLED); only today, it was reported that a juvenile sea lion had been found dead in such a net. Clearly, STEM ingenuity is being brought into play, but it will require further development and widespread introduction of the best solutions, without industry interference, in order to minimise by-kill.

There are also some simple changes of practice that don't require equipment, only for the boat crews to be more aware of wildlife and act accordingly. Such procedures include moving away from areas with marine mammals present, not dumping offal, recovering lost gear, and changing the operating depth and retrieval speed of nets.

As usual, the financial considerations have taken precedence over the ecological ones. New Zealand has a comparatively small economy and as seafood is the nation's fifth largest export earner - over one billion dollars annually - it is hardly surprising that successive governments have tended to side with industry rather than environmentalists. However, could it be that there is now enough apprehension about the general state of the oceans to overhaul the sector's laissez faire practice? After all, in 2007 a fishing ban in New Zealand waters was placed on orange roughy, whose rapid decline caused huge concern.

There are of course plenty of other environmental issues affecting marine life: plastic pollution (including microbeads); increasing temperature and acidity, the latter especially drastic for shellfish; offshore algal blooms due to agricultural nutrient run-off; and numerous problems created by the oil and gas industry, from spillages to the far less reported exploratory air guns that impact cetacean behaviour.

The longer I've been writing this blog the more I'm convinced that science cannot be considered independent of the wider society in which it exists. Social, political and religious pressures and viewpoints can all adversely affect which research is funded, what time constraints are imposed, and how the results are presented - or even skewed in favour of a particular outcome.

In the case outlined above, government ministries hid evidence to protect short-term industry profits at the expense of long-term environmental degradation - and of course the increase in public spending the latter will require for mitigation. New Zealand's precious dairy sector is already taking a pounding for the problems it has knowingly generated, so no doubt the fishing industry is keen to avoid a similar fate.

By allowing such sectors to regulate and police themselves and thus avoid public transparency, the entire nation suffers in the long run. We don't know what effect the decline or disappearance of populations of, for example, wandering albatross and Māui dolphins might have on the (dis)appearance of snapper or blue cod at the dinner table, but as the alarming loss of Mediterranean and Californian anchovies and sardines suggests, negative cascades in the food chain can occur with extreme rapidity. Natural selection is a wonderful method of evolution but we are pushing it to the limit if we expect it to cope with the radical changes we are making to the environment at such a high speed. By-kill is something we can reduce, but only if industry and governments give science and the public a 'fair go'. Now isn't that something New Zealanders are supposed to be good at?

Saturday, 14 April 2018

Avian Einsteins: are some bird species as clever as primates?


One of the strangest examples of animal behaviour I've ever seen in real life took place in my neighbourhood last year, with what for all intents and purposes appeared to be a vigil or wake. Half a dozen Common myna birds (Acridotheres tristis) were gathered in a circle around a dead or dying member of their species, making the occasional muted noise. Unfortunately I was rushing to get to work and didn't stop to take a photograph, which was a shame as the birds ignored me even when I passed within a few metres of them.

Even though the street was a cul-de-sac, I couldn't help thinking that sitting in the road was not the safest place for the birds to congregate, considering that they could have stayed close by on the grass verge; instead their proximity to the central, non-moving individual seemed to override their concerns for personal safety. 

Some biologists have suggested that this behaviour, mostly known from corvids (that is, the crow family), is due to the birds' instinctive need to advertise the area to others as particularly dangerous. Although there are plenty of cats in my neighbourhood, this idea doesn't seem to make sense, at least in this particular instance. There were several trees that would have served as convenient perching locations for the myna birds, who weren't nearly as loud as they usually are.

Without getting too anthropomorphic about it, they were a lot less garrulous than normal, implying a sombre occasion. Far from providing warnings about the locality, the birds were extremely quiet for a gathering of this size; I should know, as myna birds are probably the third or fourth most common species in my garden and their routine screeches and squawks are far from subtle, to say the least. 

So is it possible that despite one of their number no longer moving or making sounds, its fellow birds understood that this inanimate object was one of their kind? I've occasionally found dead birds of other species such as goldfinches, song thrushes and blackbirds in my garden and none have been the subject of similar behaviour. As an amateur scientist - or indeed anyone with curiosity might do - I researched the subject and found that crows are well known for gathering around bodies of the same species while magpies (another corvid) have even been reported as covering up dead fellows with twigs and the like. Are these reports all April Fools' jokes or are some species of Aves unsung geniuses?

Further enquiry led me to discover that the corvid family, which includes ravens and jays, is the pinnacle of avian intelligence, closely followed by parrots. I initially thought that my observations of the myna bird, a member of the Sturnidae family, constituted something new, until I read a 2014 report from the School of Biological Sciences in Malaysia stating that laboratory testing proved them to be better at counting food items than House crows (Corvus splendens). In certain situations then, myna birds are up there with the brainiest of their kind.

In January this year I received another surprise on reading that the three most common avian raptors in northern Australia's tropical savannas - the Brown falcon (Falco berigora), the Black kite (Milvus migrans) and the Whistling kite (Haliastur sphenurus) - have been reported as deliberately spreading bush fires. It appears that after lightning has started a wild fire, the birds pick up burning twigs in their beaks or claws and drop them on untouched forest or grassland some tens of metres away. This then causes prey items such as lizards, snakes, rodents and amphibians to flee the new fire zone, only to be picked off by the waiting raptors.

Although birds of prey in North and South America, West Africa and Papua New Guinea are known to hunt on the edges of wild fires, the ingenuity of their Australian counterparts is without precedent. What's more, they appear to have been using this behaviour for thousands of years, since it is clearly recorded in local Aboriginal legends concerning 'fire hawks'; until now, though, white settlers had dismissed the stories as implausible.

Intelligent corvids

A 2016 report by the U.S. National Academy of Sciences has uncovered biological evidence to support advanced avian intelligence. Although their brains are a lot smaller than those of mammals, especially primates, this is obviously due to the overall diminutive size of the animals themselves. The brain mass to body mass ratio of some bird families is far larger than expected for animals of their size, approximating that of the most intelligent mammals.

Although bird brains have a somewhat different structure to mammalian brains, the corresponding higher-functioning regions are both comparatively large and have a neuron density double that of primate equivalents. Therefore it appears corvids and some other birds have undergone parallel evolution that has maximised their cognition.
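One common way of putting the brain-to-body comparison on a numerical footing is the encephalisation quotient (EQ): observed brain mass divided by the mass expected for a typical animal of that body size. A minimal sketch follows, using Jerison's classic scaling constants and rough, commonly quoted masses for illustration - none of these figures come from the report itself:

```python
# Encephalisation quotient (EQ): observed brain mass divided by the brain
# mass expected for that body mass, using Jerison's empirical scaling
# E_expected = 0.12 * P^(2/3), with both masses in grams.

def encephalisation_quotient(brain_g: float, body_g: float) -> float:
    expected = 0.12 * body_g ** (2 / 3)
    return brain_g / expected

# Illustrative, approximate masses (not measured values):
print(round(encephalisation_quotient(1350, 70_000), 1))  # human: ~6.6
print(round(encephalisation_quotient(15, 1_200), 1))     # raven: ~1.1
```

The point of the exercise is that the exponent is less than one, so small animals are expected to have proportionally larger brains anyway; a corvid scoring above 1 on this scale is punching above its weight even after that correction.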

Birds are good at far more than just adapting to new conditions and environments, with the most social (as opposed to solitary) species leading the way in problem solving and abstract thinking. Here are a few more examples that show their cognition can go far beyond basic instinct:
  1. Self-recognition: Eurasian magpies (Pica pica) pass the mirror test, meaning they can recognise their reflection as themselves rather than as another member of their species. 
  2. Tool usage: various birds use twigs and cactus spines to extract insects, much as chimpanzees insert sticks into termite mounds.  
  3. Deception: Woodhouse's scrub jays (Aphelocoma woodhouseii) have been observed moving food caches to deceive onlookers and keep the food to themselves.
  4. Planning: Crows have used multi-step planning in tests to retrieve progressively longer sticks with which to reach food. This ability isn't new either, since the 1st century Roman polymath Pliny the Elder observed corvids undertaking similar behaviour to that described in Aesop's fable The Crow and the Pitcher.
  5. Exploiting artificial environments: The kea (Nestor notabilis), New Zealand's alpine parrot, has learnt to unzip rucksack pockets to obtain food. Despite such objects being unlike anything in nature, some bird species clearly understand how to exploit man-made items.
When I was a child, the term 'bird brain' was employed for derogatory purposes while 'talking' caged birds such as cockatiels, budgies and parakeets were thought of as just mimics without any understanding of what they were saying. This continuation of the Western tradition that humanity is the pinnacle of creation, far superior to all other lifeforms, is now under serious attack. Our prejudices have caused us to ignore the evidence right under our noses, but as per my post on animals that farm, we humans have very few unique traits left. Avian intelligence is undoubtedly different from ours, but perhaps less so than that of dolphins, whose watery environment means they are unlikely to ever be tool makers.

Another issue is that compared to, say, mammals, there is a far smaller variety of bird forms; even such specialised species as penguins, ostriches and kiwi don't stray far from the generic Aves design, meaning we tend to associate bird intelligence with the most ubiquitous - and comparatively slow-witted - urban species such as house sparrows and feral pigeons.

Beginning in the 1970s, researchers have explored the sometimes controversial notion that if the dinosaurs hadn't died out at the end of the Cretaceous, a small- to medium-sized carnivore such as Troodon would have eventually evolved into a reptile with human-level intelligence. Crows and their kind may not have a primate-sized brain, but these dinosaur descendants are evidently far superior to the dim stereotype we usually assign to them. They may be small, but clearly in this case, size doesn't seem to matter: our feathered friends are capable of far greater mental activity than their songs, squawks and screeches imply.

Sunday, 1 April 2018

Engagement with Oumuamua: is our first interstellar visitor an alien spacecraft?

It's often said that fact follows fiction but there are times when some such instances appear to be uncanny beyond belief.  One relatively well-known example comes from the American writer Morgan Robertson, whose 1898 novella The Wreck of the Titan (originally entitled Futility) eerily prefigured the 1912 loss of the Titanic. The resemblances between the fictional precursor and the infamous passenger liner are remarkable, including the month of the sinking, the impact location, and similarities of size, speed and passenger capacity. I was first introduced to this series of quirky coincidences via Arthur C. Clarke's 1990 novel The Ghost from the Grand Banks, which not incidentally is about attempts to raise the Titanic. The reason for including the latter reference is that there may have just been an occurrence that involves another of Clarke's own works.

Clarke's 1973 award-winning novel Rendezvous with Rama tells of a 22nd century expedition to a giant interstellar object that is approaching the inner solar system. The fifty-four kilometre long cylinder, dubbed Rama, is discovered by an Earthbound asteroid detection system called Project Spaceguard, a name which since the 1990s has been adopted by real life surveys aiming to provide early warning for Earth-crossing asteroids. Rama is revealed to be a dormant alien spacecraft, whose trajectory confirms its origin outside of our solar system. After a journey of hundreds of thousands of years, Rama appears to be on a collision course with the Sun, only for it to scoop up solar material as a fuel source before heading back into interstellar space (sorry for the spoiler, but if you haven't yet read it, why not?)

In October last year astronomer Robert Weryk at the Haleakala Observatory in Hawaii found an unusual object forty days after its closest encounter with the Sun. Initially catalogued as C/2017 U1, the object was at first thought to be a comet, but with no sign of a tail or coma it was reclassified as the asteroid A/2017 U1. After another week's examination it was put into a new class all by itself, as 1I/2017 U1, and this is when observers began to get excited, as its trajectory appeared to confirm an interstellar origin.

As it was not spotted until about thirty-three million kilometres from the Earth, the object was far too small to be photographed in any detail; all that appears to telescope-mounted digital cameras is a single pixel. Therefore its shape was inferred from the light curve, which implied a longest-to-shortest axis ratio of 5:1 or even larger, with the longest dimension being between two hundred and four hundred metres. As this data became public, requests were made for a more familiar name than just 1I/2017 U1; perhaps unsurprisingly, Rama became a leading contender. However, the Hawaiian observatory's Pan-STARRS team finally opted for the common name Oumuamua, which in the local language means 'scout'.
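The axis-ratio inference works because an elongated body presents a varying cross-section as it tumbles, and the astronomical magnitude scale is logarithmic. The sketch below is a deliberately simple geometric model - it assumes brightness simply tracks the projected area of an ellipsoid seen roughly equator-on, ignoring albedo variation and viewing angle, and the function name is my own:

```python
# If brightness is proportional to projected cross-section, the max/min
# brightness ratio of a rotating ellipsoid equals its long/short axis
# ratio. Magnitudes are logarithmic: delta_m = 2.5 * log10(bright/faint),
# so the implied minimum elongation is 10 ** (delta_m / 2.5).

def min_axis_ratio(delta_mag: float) -> float:
    """Minimum long:short axis ratio implied by a light-curve amplitude."""
    return 10 ** (delta_mag / 2.5)

print(round(min_axis_ratio(1.75), 1))  # ~5:1
print(round(min_axis_ratio(2.5), 1))   # ~10:1
```

On this simple model, Oumuamua's unusually large brightness swings of roughly 1.75 to 2.5 magnitudes translate into the 5:1-or-larger elongation quoted above; a typical asteroid's light curve varies by only a few tenths of a magnitude.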

Various hypotheses have been raised as to exactly what type of object Oumuamua is, from a planetary fragment to a Kuiper belt object similar to - although far smaller than - Pluto. However, the lack of off-gassing even at perihelion (closest approach to the Sun) implies that any icy material must lie below a thick crust, and the light curve suggests a denser material such as metal-rich rock. This sounds most unlike any known Kuiper belt object.

These unusual properties attracted the attention of senior figures in the search for extra-terrestrial intelligence. Project Breakthrough Listen, whose leadership includes SETI luminaries Frank Drake, Ann Druyan and Astronomer Royal Martin Rees, directed the world's largest manoeuvrable radio telescope towards Oumuamua. It failed to find any radio emissions, although the lack of a signal is tempered by the knowledge that SETI astronomers are now considering lasers as a potentially superior form of interstellar communication to radio.

The more that Oumuamua has been studied, the more surprising it appears. Travelling at over eighty kilometres per second relative to the Sun, its path shows that it did not originate from any of the twenty nearest star systems. Yet it homed in on our star, getting seventeen percent nearer to the Sun than Mercury does at its closest. It seems almost impossible for this to have occurred simply by chance - space is just too vast for an interstellar object to have achieved such proximity. So how likely is it that Oumuamua is a real-life Rama? Let's consider the facts:
  1. Trajectory. The area of a solar system with potentially habitable planets is nicknamed the 'Goldilocks zone', which for our system includes the Earth. It's such a small percentage of the system, extremely close to the parent star, that for a fast-moving interstellar object to approach at random seems almost impossible. Instead, Oumuamua's trajectory was perfectly placed to obtain a gravity assist from the Sun, allowing it to both gain speed and change course, with it now heading in the direction of the constellation Pegasus.
  2. Motion. Dr Jason Wright, an associate professor of astronomy and astrophysics at Penn State University, likened the apparent tumbling motion to that of a derelict spacecraft, only to retract his ideas when criticised for sensationalism.
  3. Shape. All known asteroids and Kuiper belt objects are much less elongated than Oumuamua, even though most are far too small to settle into a spherical shape due to gravitational attraction (the minimum diameter being around six hundred kilometres for predominantly rocky objects). The exact appearance is unknown, with the ubiquitous crater-covered asteroid artwork being merely an artist's impression. Astronautical experts have agreed that Oumuamua's shape is eminently suitable for minimising damage from particles.
  4. Composition. One definitive piece of data is that Oumuamua doesn't emit the clouds of gas or dust usually associated with objects of a similar size. In addition, according to a report by the American Astronomical Society, it has an 'implausibly high density'. Somehow, it has survived a relatively close encounter with the Sun while remaining in one piece - at a maximum velocity of almost eighty-eight kilometres per second relative to our star!
  5. Colour. There appears to be a red region on the surface, rather than a uniform colour expected for an object that has been bombarded with radiation on all sides whilst in deep space for an extremely long period.
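The 'seventeen percent nearer than Mercury' comparison in the trajectory point is easy to sanity-check against the commonly quoted perihelion distances (the two figures below are approximate published values, not my own measurements):

```python
# Sanity check: how much closer to the Sun did Oumuamua get than Mercury?
# Distances in millions of kilometres, using commonly quoted values.

MERCURY_PERIHELION_MKM = 46.0    # Mercury's closest approach to the Sun
OUMUAMUA_PERIHELION_MKM = 38.1   # Oumuamua's perihelion, ~0.255 AU

fraction_nearer = 1 - OUMUAMUA_PERIHELION_MKM / MERCURY_PERIHELION_MKM
print(f"{fraction_nearer:.0%}")  # prints "17%"
```

Note that both distances are well inside the orbit of Venus, which is what makes a chance trajectory so improbable: the target it threaded is a tiny fraction of the solar system's volume.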
So where does this leave us? There is an enormous amount of nonsense written about alien encounters, conspiracy theories and the like, with various governments and the military seeking to hide their strategies in deliberate misinformation. For example, last year the hacker collective Anonymous stated that NASA would soon be releasing confirmation of contact with extraterrestrials; to date, in case you were wondering, there's been no such announcement. Besides which, wouldn't it be more likely to come from a SETI research organisation such as the Planetary Society or Project Breakthrough Listen?

Is there any evidence to imply a cover-up regarding Oumuamua? Here are some suggestions:
  1. The name Rama - already familiar to many from Arthur C. Clarke's novel and therefore evocative of an artificial object - was abandoned for a far less expressive and more obscure common name. Was this an attempt to distance Oumuamua from anything out of the ordinary?
  2. Dr Wright's proposals were luridly overstated in the tabloid media, forcing him to abandon further investigation. Was this a deliberate attempt by the authorities to make light of his ideas, so as to prevent too much analysis while the object was still observable?
  3. Limited attempts at listening for radio signals have been made, even though laser signalling is now thought to be a far superior method. So why have these efforts been so half-hearted for such a unique object?
  4. The only images available in the media are a few very samey artist's impressions of an elongated asteroid, some pock-marked with craters, others, especially animations, with striations (the latter reminding me more of fossilised wood). Not only are these pure speculation but none feature the red area reported from the light curve data. It's almost as if the intention was to show a totally standard asteroid, albeit of unusual proportions. But this appearance is complete guesswork: Oumuamua has been shoe-horned into a conventional natural object, despite its idiosyncrasies.
Thanks to Hollywood, most people's ideas of aliens are as implacable invaders. If - and when - the public receive confirmation of intelligent alien life, will there be widespread panic and disorder? After all, Orson Welles' 1938 radio version of H.G. Wells' The War of the Worlds led some listeners to flee their homes, believing a Martian invasion had begun. Would people today be any different? The current following of dangerous fads such as paleo diets and raw water, never mind the paranoid conspiracy theories that fill the World Wide Web, leads me to expect little change from our credulous forebears.

The issue, of course, comes down to one of security. Again, science fiction movies tend to overshadow real life space exploration, but the fact is that we have no spacecraft capable of matching orbits with the likes of Oumuamua. In Arthur C. Clarke's Rendezvous with Rama, colonists on 22nd century Mercury become paranoid with the giant spacecraft's approach and attempt to destroy it with a nuclear missile (oops, another spoiler there). There is no 21st century technology that could match this feat, so if Oumuamua did turn out to be an alien craft, we would have to hope for the best. Therefore if, for example, the U.S. Government gained some data that even implied the possibility of artifice about Oumuamua, wouldn't it be in their best interest to keep it quiet, at least until it is long gone?

In which case, promoting disinformation and encouraging wild speculation in the media would be the perfect way to disguise the truth. Far from being an advanced - if dead or dormant - starship, our leaders would rather we believed it to be a simple rocky asteroid, despite the evidence to the contrary. Less one entry for the Captain's log, and more a case of 'to boulderly go' - geddit?

Sunday, 18 March 2018

Smart phone, dumb people: is technology really reducing our intelligence?

IQ testing is one of those areas that always seems to polarise opinion, with many considering it useful for children as long as it is understood to be related to specific areas of intelligence rather than a person's entire intellectual capabilities. However, many organisations, including some employers, use IQ tests as a primary filter, so unfortunately it cannot be ignored as either irrelevant or outdated. Just as much of the education system is still geared towards passing exams, IQ tests are seen as a valid method to sort potential candidates. They may not be completely valid, but are used as a short-cut tool that serves a limited purpose.

James Flynn of the University of Otago in New Zealand has undertaken long-term research into intelligence, so much so that the 'Flynn Effect' is the name given to the worldwide increase in intelligence since IQ tests were developed over a century ago. The reasons behind this increase are not fully understood, but are probably due to the complex interaction of numerous environmental factors such as enriched audio-visual stimulation, better - and more interactive - education methods, even good artificial lighting for longer hours of reading and writing. It is interesting that as developing nations rapidly gain these improvements to society and infrastructure, their average IQ shows a correspondingly rapid increase when compared to the already developed West and its more staid advancement.

Research suggests that while young children's IQ continues to increase in developed nations, albeit at a reduced rate, the intelligence of teenagers in these countries has been in slow decline over the past thirty years. What is more, the higher the income decile, the larger the decrease. This hints that the causes are more predominant in middle-class lifestyles; basically, family wealth equates to loss of IQ! Data for the UK and Scandinavian countries indicates that a key factor may be the development of consumer electronics, starting with VCRs, games consoles and home computers and now complemented by smart phones, tablets and social media. This would align with the statistics, since the drop is highest among children likely to have greatest access to the devices. So could it be true that our digital distractions are dumbing us down?

1) Time

By spending more time on electronic devices, children live in a narrower world, where audio-visual stimulation aims for maximum enjoyment with minimal effort, the information and imagery flying by at dizzying speed. This isn't just the AV presentation of course: digital content itself closely aligns to pop cultural cornerstones, being glamorous, gimmicky, transient and expendable. As such, the infinitesimally small gradations of social status and friendship that exist amongst children and teenagers require enormous effort on their part to maintain a constant online presence, both pro-actively and reactively responding to their peers' (and role models') endless inanities.

The amount of effort it would take to filter this is mind-boggling and presumably takes away a lot of time that could be much better spent on other activities. This doesn't have to be something as constructive as reading or traditional studying: going outdoors has been shown to have all sorts of positive effects, as described in Richard Louv's 2005 best-seller Last Child in the Woods: Saving Our Children From Nature-Deficit Disorder.

Studies around the world have shown that mere immersion in nature, not just strenuous physical activity, has all sorts of positive effects, including on mood. Whether because humans have an innate need to observe the intricate fractal patterns of vegetation (grass lawns and playing fields have been found to be ineffective) or to watch the seemingly unorganised behaviour of non-human life forms, the effect is real enough that the Japanese government has promoted Shinrin-yoku or 'forest air bathing' as a counterbalance to the stresses of urbanised existence. It sounds a bit New Age, but there is enough research to back up the idea that time spent in the natural environment can profoundly affect us.

Meanwhile, other nations appear to have given in, as if admitting that their citizens have turned into digitally-preoccupied zombies. Last year, the Dutch town of Bodegraven decided to reduce accidents to mobile-distracted pedestrians by installing red and green LED strips at a busy road junction, so that phone users could tell if it was safe to cross without having to look up!

2) Speed

One obvious change in the past four decades has been in the increased pace of life in developed nations. As we have communication and information retrieval tools that are relatively instantaneous, so employers expect their workforce to respond in tune with the speed of these machines. This act-now approach hardly encourages in-depth cogitation but relies upon seat-of-the-pants thinking, which no doubt requires a regular input of caffeine and adrenaline. The emphasis on rapid turnaround, when coupled with lack of patience, has led to an extremely heavy reliance on the first page of online search results: being smart at sifting through other people's data is fast becoming a replacement for original thought, as lazy students have discovered and no doubt as many school teachers and university lecturers could testify.

Having a convenient source of information means that it is easier for anyone to find a solution to almost anything rather than working something out for themselves. This can lead to a decline in initiative, something which separates thought leaders from everyone else. There is a joy to figuring out something, which after all is a key motivation for many STEM professionals. Some scientists and engineers have explained that being able to understand the inner workings of common objects was a key component of their childhood, leading to an obvious career choice. For example, New Zealand-based scientist and science communicator Michelle Dickinson (A.K.A. Nanogirl) spent her childhood dismantling and repairing such disparate household items as home computers and toasters, echoing Ellie Arroway, the heroine in Carl Sagan's novel Contact, who as a child repaired a defective valve radio before going on to become a radio astronomer.

Of course, these days it would be more difficult to repair contemporary versions of these items, since they are often built so that they cannot even be opened except in a machine shop. Laptops and tablets are prime examples and I've known cases where the likes of Microsoft simply replace rather than repair a screen-damaged device. When I had a desktop computer I frequently installed video and memory cards myself - and how-to videos for such jobs are ubiquitous on YouTube. The latest generation of technology doesn't allow for such do-it-yourself upgrades, to the manufacturer's advantage and the consumer's detriment. As an aside, it's worrying that so many core skills such as basic repairs or map navigation are being lost; in the event of a massive power and/or network outage due to the likes of a solar flare, there could be a lot of people stuck in headless chicken mode. Squawk!

3) Quality

While the World Wide Web covers every subject imaginable (if being of immensely variable quality), that once fairly reliable source of information, television, has largely downgraded the sometimes staid but usually authoritative documentaries of yesteryear into music promo-style pieces of infotainment. Frequently unnecessary computer graphics and overly-dramatic reconstructions and voice-overs are interwoven between minuscule sound bites from the experts, the amount of actual information being conveyed reduced to a bare minimum.

In many cases, the likes of the Discovery Channel are even disguising pure fiction as fact, meaning that children - and frequently adults - are hard-pressed to differentiate nonsense from reality. This blurring of demarcation does little to encourage critical or even sustained thinking; knowledge in the media and online has been reduced to a consumer-led circus with an emphasis on marketing and hype. Arguably, radio provides the last media format where the majority of content maintains a semblance of sustained, informative discussion on STEM issues.

4) Quantity

The brave new world of technology that surrounds us is primarily geared towards consumerism; after all, even social media is fundamentally a tool for targeted marketing. If there's one thing that manufacturers do not want it is inquisitive customers, since the buzzwords and hype often hide a lack of quality underneath. Unfortunately, the ubiquity of social media and online news in general means that ridiculous ideas rapidly become must-have fads.

Even such commodities as food and drink have become mired in trendy products like charcoal-infused juice, unpasteurised milk and now raw water, attracting the same sort of uncritical punters who think that nutrition gurus know what really constituted human diets in the Palaeolithic. The fact that some of Silicon Valley's smartest have failed to consider the numerous dangers of raw water shows that again, analytical thinking is taking a back seat to whatever is the latest 'awesome' and 'cool' lifestyle choice.

Perhaps then certain types of thinking are becoming difficult to inculcate and sustain in our mentally vulnerable teenagers due to the constant demands of consumerism and its oh-so-seductive delivery channels. Whether today's youth will fall into the viewing habits of older generations, such as the myriad 'food porn' shows, remains to be seen; with so much on offer, is it any wonder people spend entire weekends binge watching series, oblivious to the wider world?

The desire to fit into a peer group and not be left behind by lack of knowledge about some trivia or other, for example about the latest series on Netflix, means that so much time is wasted on activities that only require a limited number of thought processes. Even a good memory isn't required anymore, with electronic calendars and calculators among the simplest of tools available to replace brain power. Besides which, the transience in popular culture means there's little need to remember most of what happened last week!

Ultimately, western nations are falling prey to the insular decadence well known from history as great civilisations pass their prime. Technology and the pace of contemporary life dictated by it must certainly play a part in any decline in IQ, although the human brain being what it is - after all, the most complex object in the known universe - I wouldn't dare guess how much is due to them.

There are probably other causes that are so familiar as to be practically invisible. Take for instance background noise, both visual and aural, which permeates man-made environments. My commute yesterday offered typical examples of the latter sort, from schoolchildren on my train playing music on their phones that could be heard some metres away, to the two building sites I walked past, plus a main road packed with vehicles up to the size of construction trucks. As a final bonus, I passed ten shops and cafes that were all playing loud if inane pop music that could be heard on the street through open doors. Gone are the days of tedious elevator muzak: even fairly expensive restaurants play material so fast and loud it barely merits the term 'background music'. If such sensory pollution is everywhere, when do we get to enjoy quality cogitation time?

If you think that consumerism isn't as all-encompassing as I state, then consider that the USA spends more per year on pet grooming than it does on nuclear fusion research. I mean, do you honestly really need a knee-high wall-mounted video phone to keep in touch with your dog or cat while you're at work? Talking of which, did you know that in 2015 the Kickstarter crowdfunding platform's Exploding Kittens card game raised almost US$9 million in less than a month? Let's be frank, we've got some work to do if we are to save subsequent generations from declining into trivia-obsessed sheeple. Baa!

Saturday, 3 March 2018

Hi-tech roadblock: is some upcoming technology just too radical for society to handle?

Many people still consider science to be a discipline wholly separate from other facets of human existence. If there's one thing I've learnt during the eight years I've been writing this blog it's that there are so many connections between STEM and society that much of the scientific enterprise cannot be considered in isolation.

Cutting-edge theories can take a long time to be assimilated into mainstream society, in some cases their complexity (for example, quantum mechanics) or their emotive value (most obviously, natural selection) meaning that they get respectively misinterpreted or rejected. New technologies emerge out of scientific principles and methodology, if not always from the archetypal laboratory. STEM practitioners are sometimes the driving force behind new devices aimed at the mass market; could it be that their enthusiasm and in-depth knowledge prevent them from realising that the world isn't yet ready for their brainchild? In some cases the "Hey, wow, cool, look what we can do!" excitement masks the elaborate web of socio-economic factors that mean the invention will never be suitable for a world outside the test environment.

There are plenty of examples of pioneering consumer-oriented technology that either could never fit into its intended niche (such as the UK's Sinclair C5 electric vehicle of the mid-1980s), or misread public demand, the Sony Betamax video recorder having been aimed at home movie makers rather than audiences who just wanted to watch pre-recorded material (hence losing out to the inferior-quality VHS format).

At the opposite pole, mobile phone manufacturers in the early 1990s completely underestimated the public interest in their products, which were initially aimed at business users. Bearing in mind that there is considerable worldwide interest in certain new radical technologies that will presumably be aimed at the widest possible market, I thought I'd look at their pros and cons so as to ascertain whether non-STEM factors are likely to dictate their fortunes.

1) Driverless automobiles

There has been recent confirmation that in the next month or so vehicle manufacturers may be able to test their autonomous cars on California's state highways. With Nissan poised to test self-driving taxis in time for a 2020 launch, the era of human drivers could be entering its last few decades. Critics of the technology usually focus on the potential dangers, as shown by the system's first fatality in May 2016.

But what of the reverse? Could the widespread introduction of driverless road vehicles - once the public is convinced of their superior safety attributes - be opposed by authorities or multinational corporations? After all, in 2016 almost 21% of drivers in the USA received a speeding ticket, generating enormous revenue. Exact figures for these fines are unknown, but estimates for annual totals usually centre around six billion dollars. In addition to the fines themselves adding to national or local government coffers (for all sorts of traffic misdemeanours including parking offences), insurance companies benefit from the increase in premiums for drivers with convictions.
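The six-billion-dollar estimate is easy to sanity-check with some back-of-envelope arithmetic. In this sketch, the figure of roughly 225 million licensed US drivers is my own ballpark assumption, not from any official source:

```python
# Back-of-envelope check of the ~$6 billion annual speeding-fine estimate.
licensed_drivers = 225_000_000   # assumed 2016-era ballpark figure
ticketed_fraction = 0.21         # the ~21% quoted above
total_revenue = 6_000_000_000    # commonly cited annual estimate

tickets_per_year = licensed_drivers * ticketed_fraction
average_fine = total_revenue / tickets_per_year
print(f"~{tickets_per_year/1e6:.0f} million tickets, "
      f"averaging ~${average_fine:.0f} each")
```

An implied average fine of a little over a hundred dollars is at least plausible, which suggests the headline figures hang together.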

Whether vested interests would find the economic losses suitably offset by the prevention of thousands of deaths due to driver error remains to be seen. This stance might seem unjustly anti-corporate, but when the past half-century's history of private profit ahead of public interest is examined (for example, the millions paid by the fossil fuel and tobacco industries to support their products) there are obvious precedents.

One key scientific principle is parsimony, a.k.a. Occam's razor. According to this principle, the simplest explanation is usually the correct one, at least in classical science; quantum mechanics plays by its own rules. An example counter to this line of thought can be seen in the work of the statistician, geneticist and tobacco industry spokesman R.A. Fisher, a keen pipe smoker who argued that rather than a cause-and-effect between smoking and lung cancer, there was a more complicated correlation between people who were both genetically susceptible to lung disease and hereditarily predisposed to nicotine addiction! Cigarette, anyone?

As for relinquishing the steering wheel to a machine, I think that a fair proportion of the public enjoy the 'freedom' of driving and that a larger contingent than just boy racers won't give up manual control without a fight, i.e. state intervention will be required to put safety ahead of individuality.

2) Extending human lifespan

It might seem odd that anyone would want to oppose technology that could increase longevity, but there would have to be some fairly fundamental changes to society to accommodate anything beyond the most moderate of extended lifespans. According to a 2009 report in The Lancet medical journal, about half of all children born since 2000 could reach their hundredth birthday.

Various reports state that from 2030-2050 - about as far in the future as anyone can offer realistic prognostication for - the proportion of retirees, including far greater numbers of Alzheimer's and dementia sufferers, will require many times more geriatricians than are practising today. The ratio of working-age population to retirees will also drop, from 5:1 to 3:1 in the case of the USA, implying a far greater pensions crisis than that already looming. Numerous companies are using cutting-edge biotech to find cell renewal techniques, including the fifteen teams racing for the Palo Alto Longevity Prize, so the chances of a breakthrough are fairly high.
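It's worth spelling out what a fall in the worker-to-retiree ratio from 5:1 to 3:1 means per worker; a minimal sketch of the arithmetic:

```python
# Each worker's notional share of supporting one retiree, given the
# worker-to-retiree ratio.
def support_share(workers_per_retiree: float) -> float:
    """Fraction of one retiree's support borne by each worker."""
    return 1 / workers_per_retiree

before = support_share(5)   # 5:1 ratio -> 0.20 of a retiree per worker
after = support_share(3)    # 3:1 ratio -> one-third of a retiree per worker
increase = (after - before) / before
print(f"Per-worker burden rises by {increase:.0%}")
```

In other words, a seemingly modest shift in the ratio raises each worker's burden by about two-thirds, which is why pension planners treat these demographics so seriously.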

Japan offers a hint of how developed nations will alter once extended lifespans are available on a widespread basis: one-third of the population are over sixty and one in eight over seventy-five. In 2016 its public debt was more than double its GDP and Japan also faces low labour productivity compared to other nations within the OECD. Figures such as these show that governments will find it economically challenging to support the corresponding population demographics, even if many of the healthcare issues usually associated with the elderly are diminished.

However, unlike driverless cars it's difficult to conceive of arguments in favour of legislation to prevent extended lifespans. If all nations achieved equilibrium in economy, technology and demographics there would be far fewer issues, but the gap between developed and developing nations is wide enough to deem that unlikely for many decades.

Discussions around quality of life for the elderly will presumably become more prominent as the age group gains as a proportion of the electorate. There are already various types of companion robots for those living alone, shaped like anything from cats to bears to anthropomorphic designs such as the French Buddy and German Care-O-bot, the latter to my mind resembling a giant, mobile chess piece.

3) Artificial intelligence

I've already looked at international attitudes to the expansion and development of AI but if there's one thing most reports discuss it is the loss of jobs to even semi-intelligent machines. Even if there is a lower proportion of younger people, there will still be a need to keep the populace engaged, constructive or otherwise.

Surveys suggest that far from reducing working hours, information technology has caused employees in developed nations to spend more time outside work still working. For example, over half of all American and British employees now check their work email while on holiday. Therefore will governments be able to fund and organise replacement activities for an obsolete workforce, involving for example life-long learning and job sharing?

The old adage about idle hands rings true and unlike during the Great Depression, the sophistication of modern technology doesn't allow for commissioning of large-scale infrastructure projects utilising an unskilled labour pool. Granted, AI will generate new jobs in novel specialisms, but these will be a drop in the ocean compared to the roles lost. So far, the internet and home computing have created work, frequently in areas largely unpredicted by futurists, but it seems doubtful the trend will continue once heuristic machines and the 'internet of things' become commonplace.

So is it possible that governments will interfere with the implementation of cutting-edge technology in order to preserve the status quo, at least until the impending paradigm shift becomes manageable? I could include other examples, but many are developments that are more likely to incur the annoyance of certain industries rather than governments or societies as a whole. One of the prominent examples used for the upcoming Internet of Things is the smart fridge, which would presumably reduce grocery wastage - and therefore lower sales - via its cataloguing of use-by dates.

Also, if people can buy cheap (or dare I mention pirated?) plans for 3D printing at home, they won't have to repeatedly pay for physical goods, plus in some cases their delivery costs. Current designs that are available to print items for use around the home and garage range from soap dishes to measuring cups, flower vases to car windscreen ice scrapers. Therefore it's obvious that a lot of companies producing less sophisticated household goods are in for a sticky future as 3D printers become ubiquitous.

If these examples prove anything, it's that scientific advances cannot be treated in isolation when they have the potential for direct implementation in the real world. It's also difficult to predict how a technology developed for a single purpose can end up being co-opted into wholly other sectors, as happened with ferrofluids, designed in the 1960s to pump rocket fuel and now used in kinetic sculptures and toys. I've discussed the problems of attempting to predict upcoming technology and its future implementation, and as such suggest that even if an area of technological progress follows some sort of predictable development, the wider society that encapsulates it may not be ready for its implementation.

It may not be future shock per se, but there are vested interests who like things just the way they are - certain technology may be just too good for the public. Need I mention how much the fossil fuel industry has spent denying man-made climate change? Or could it be time to consider Occam's razor again?

Wednesday, 21 February 2018

Teslas in space: trivialising the final frontier

Earlier this month Elon Musk's SpaceX achieved great kudos thanks to the maiden flight of the Falcon Heavy rocket and recovery of two of the three first stage boosters. Although it has the fourth highest payload capacity in the history of spaceflight, the test did not include satellites or ballast but the unlikely shape of Musk's own $100,000 Tesla Roadster, complete with dummy astronaut. Perhaps unsurprisingly, the unique payload of this largely successful mission has led to it being labelled everything from a pioneering achievement to a foolish publicity stunt. So what's the truth behind it?

I discussed the near-future development of private spaceflight back in 2012 and if there's one thing that the programmes mentioned therein have in common, it's that they have all since been delayed. Rocket technology has proved trickier than the current crop of entrepreneurs envisaged, with Elon Musk himself giving the Falcon Heavy a fifty-fifty chance of success. As it was, two of the core booster's engines failed to fire before touchdown, leading it to crash into the sea. Musk admitted that due to safety concerns this design will never - as originally intended - be used to launch crews into space. But a successful first flight for such a large vehicle had the potential to bring enormous kudos - translate that to customers - at the expense of his lagging rivals.

It could be argued that with such a high probability of getting egg on his face, Musk was right to choose a joke payload, albeit an expensive one, as opposed to boring ballast or a (presumably heavily-insured) set of commercial satellites. Even so, some critics have argued that there is enough man-made junk floating around the solar system without adding the Tesla, never mind the slight risk of a crash-landing on Mars. The latter might seem of little import, but there's presumably the risk of microbial contamination - it's thought some bacteria could survive atmospheric entry - and as yet we're far from certain whether Martian microbes might exist in places sheltered from the ultraviolet flux.

However, researchers have run computer simulations and if anything, Earth stands a far greater chance of being the Tesla's target, albeit millions of years in the future. Indeed, Venus is the next most likely, with Mars a poor third. That's if the car doesn't fall apart long before then due to the radiation, temperature variations and micrometeoroid impacts: at a velocity of around 70,000km/h, even dust grains can behave like bullets, and there are plenty of natural rock fragments whipping around the solar system.
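The 'dust grains as bullets' claim follows directly from the standard kinetic energy formula, KE = ½mv². In this sketch the one-milligram grain mass is my own illustrative assumption, with typical figures for a 9mm handgun round as the comparison:

```python
# Kinetic energy of an impacting dust grain versus a handgun bullet.
def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    """KE = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

grain_ke = kinetic_energy(1e-6, 70_000 * 1000 / 3600)  # assumed 1 mg grain at 70,000 km/h
bullet_ke = kinetic_energy(0.008, 360)                 # typical 8 g 9mm bullet at 360 m/s
print(f"dust grain: {grain_ke:.0f} J, 9mm bullet: {bullet_ke:.0f} J")
```

A single milligram of dust at interplanetary speed carries energy within a factor of a few of a pistol bullet, which is why micrometeoroid shielding is taken so seriously.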

Musk has said that his low-cost, private alternative to state-led missions is intended to spur competitors into developing similarly reusable launch vehicles, bearing in mind that fossil fuel-powered rockets are likely to be the only way into space for some time to come. Talking of government-controlled space programmes, NASA has long since decided to concentrate on research and development and leave much of the day-to-day operations, such as cargo runs to the International Space Station, to commercial outfits. In other words, Elon Musk is only touting for business much like any other corporation. His customers already include the communications company Arabsat and the United States Air Force, so interest in the new rocket is clearly building.

As to whether Musk should have fired a $100,000 car on a one-way trip (thanks to orbital mechanics, it's not strictly speaking one-way of course but let's face it, he's never going to get it back) it also comes down to a matter of taste, when you consider the environmental and economic crises facing humanity and the planet in general. The reusability of the Falcon Heavy design does assuage the ecological aspect, but only slightly; rockets are a pretty primitive form of transport with some hefty pollutant statistics. Unfortunately, they currently hold a monopoly on space travel for the foreseeable future - I wonder if Virgin Galactic passengers could be encouraged to buy carbon credits?

A rather smaller rocket also launched into the headlines last month in the form of the US-New Zealand Rocket Lab's Electron vehicle. Cheekily called 'Still Testing', this second - and first successful - flight of the two-stage Electron paves the way for New Zealand-based launches of small satellites at comparatively low cost. This particular mission launched several commercial satellites plus the controversial 'Humanity Star', a reflective one-metre geodesic sphere that has been likened to both a disco ball and 'glittery space garbage'. Set to decay and burn up after nine months, Rocket Lab's founder Peter Beck intended it to generate a sense of perspective among the wider public but it has instead instigated a lot of negative commentary from astronomers, environmentalists and people who enjoy getting annoyed about almost anything.

Again, all publicity might seem like good publicity, but it goes to show that many people like their space technology serious and on the level, not frivolous or containing airy gestures (or should that be vacuous ones, space being space and all?) Even this individual rocket's name goes against tradition, which usually comes down to either Greco-Roman machismo or dull acronyms such as NASA's new SLS. In addition, to the unaided eye the cosmos appears to be largely pristine and pure, lacking the visual noise that commercialism bombards us with down here on Earth. Therefore the Humanity Star appears a bit tacky and is unlikely to supply the inspiration that Beck intended, a symbol that is somewhat too puny for its lofty purpose.

An older example of an out-and-out publicity stunt at the edge of space is Felix Baumgartner's record-breaking freefall jump back in October 2012. The Red Bull Stratos mission claimed to be a serious technology test (of, for example, the reefed parachute design) as well as a medical experiment on the effects of supersonic travel on a human body outside a vehicle, but ultimately it appeared to be an opportunity to fulfil, at least approximately, the company slogan 'Red Bull gives you wings'.

It could be argued that the jump aided research into escaping from damaged spacecraft, but even my limited understanding of the physics involved suggests an enormous difference between Baumgartner's slow, helium-led ascent and the velocity of both newly-launched rockets and deorbiting spacecraft. The mission also claimed to be at the 'edge of space' but at thirty-nine kilometres above the Earth, the altitude was far below the nominal one hundred kilometre boundary known as the Kármán line. As is so often the case in advertising, why adhere to the facts when hyperbole will help to sell your product instead? Although the jump broke a fifty-two year old free-fall altitude record, it has since been beaten in much quieter fashion by Google's Senior Vice President of Knowledge, no less. In October 2014 Dr. Alan Eustace undertook a slightly higher self-funded jump that was devoid of publicity, suggesting that far from being a technological milestone, these jumps are more akin to climbing Mount Everest: once the pioneer has been successful, the mission becomes relatively routine.

With a cynical eye it would be very easy to claim that these three missions are the result of over-inflated egos and crass commercialism. The practical issue of unnecessary space junk, combined with the uneasy impression that the universe is now available as a billboard for selling stuff, have soured these projects for many. Several space stations have already utilised food tie-ins while in 1999 Coca Cola investigated projecting advertising onto the moon, only to find the lasers required would be too powerful to be allowed (perhaps they should have contacted Dr Evil?)

In 1993 the US Government banned 'obtrusive' advertising in space, but this hasn't stopped companies in other nations from planning such stunts. A Japanese soft drink manufacturer announced in 2014 that it wanted to land a capsule of its powdered Pocari Sweat beverage (sounds delightful) on the moon, the launch vehicle being none other than a SpaceX Falcon rocket. With NASA's increasing reliance on private companies, is it only a matter of time before the final frontier becomes a mere extension of the noisy, polluted, consumer goods-obsessed environment we call civilisation? Frankly, we've made a pig's ear of our planet, so how about we don't make profit margins our number one concern in outer space too?

Tuesday, 13 February 2018

Back to nature: why saving other species could save mankind

Humanity has come a long way from reliance on biologically-derived materials such as wood, bone, antler and fur. Yet this doesn't mean that organic materials have been replaced or in many respects surpassed by wholly artificial ones. There are of course new carbon-based materials such as 3D graphene and carbyne that may prove to be the 'ultimate' materials when it comes to properties such as strength, but the history of the past century has shown how natural substances can inspire research too.

Perhaps the most obvious example of this is the hook and loop fastener best known by the trademark Velcro, which is essentially a copy of the burr design on Arctium (burdock) plants. Considering that taxonomists disagree wildly on the global totals of current plant, animal and fungi species - many claiming that less than 20% have been scientifically classified - it seems apparent that nature has plenty more surprises up her sleeve.

Spider silk has long been recognised as an incredibly strong material for its weight, with that produced by many species being up to five times as strong as the same weight of steel. The silk of the Madagascan Darwin's bark spider (Caerostris darwini) is ten times stronger than Kevlar, suggesting that bullet-proof clothing manufacturers could do well by investigating it. However, a discovery by an engineering team at Portsmouth University in the UK makes even this seem humdrum: the teeth of limpets are potentially so strong - thanks to a mineral called goethite - that artificial versions of them could be used in high-performance situations, even aircraft components.

In addition to their use in construction, natural substances may prove useful in the development of new pharmaceuticals. I've previously discussed animal defence mechanisms such as that of the bombardier beetle and how small, barely noticed critters such as the peripatus deserve far more investigation. Of course the problem has been that size and aesthetics directly correlate with public attention and newsworthiness, meaning that the likes of the giant panda are used as poster species despite offering little in the way of practical advance for science and technology.

I'm not of course suggesting that species should be judged on the merits of their usefulness to humanity, but that we could probably gain a lot of practical usage from much greater study of the less well known flora and fauna still 'out there'. The resilience of tardigrades is becoming fairly well known, but there are no doubt other seemingly insignificant species with even more astonishing properties. Hydra for example are small, tentacled animals that live in fresh water; thanks to being composed mostly of stem cells they appear to have life cycles that just keep going. There has also been limited research on the 'immortal' jellyfish Turritopsis dohrnii; this is surprising, given that advances in gene splicing technology such as CRISPR-Cas9 and TALEN might lead to important medical breakthroughs, not just glow-in-the-dark pets.

In addition, the race to generate new antibiotics to replace those ineffective against 'superbugs' would suggest any short-cuts that can be taken should be taken. I remember watching a 2006 British murder mystery programme in which people were killed during a hunt for rare South American seeds containing anti-malarial properties. This may be pure fiction, but considering that artemisinin-resistant 'supermalaria' is now on the horizon, the script was somewhat prescient.

The idea behind all this is simple: delving into an existing complex chemical compound is far easier than trying to generate a purely synthetic one from scratch. This is why it is important to conserve small and insignificant species, not just the pandas, elephants and rhinos. Who's to say that a breakthrough medicine or construction material isn't already in existence, just hiding around the corner (or rather, in the genome) of some overlooked species of animal, plant or fungus?

With superbug-beating pharmaceuticals and climate mitigation technology a priority, we're shooting ourselves in the foot if we let an increasing number of unconsidered species become extinct. As I discussed last month, all sorts of organisms are now in serious trouble, from amphibian populations worldwide, via North American snakes and bats, to the mighty kauri trees of New Zealand. Just saving a few specimens of doomed species in freezers or formalin is unlikely to be enough: shouldn't we endeavour to minimise species loss for many reasons; and if we must have an economic motive, what about their potential benefit to mankind? Not for nothing has nature been deemed 'the master crafts(person) of molecules' and we lose volumes in that library at our own expense.