
Monday 25 January 2021

Ignorance is bliss: why admitting lack of knowledge could be good for science

"We just don't know" might be one of the best phrases in support of the scientific method ever written. But unfortunately it carries an inherent danger: if a STEM professional - or indeed an amateur scientist/citizen scientist - uses the term, it can be used by those wishing to disavow the subject under discussion. Even adding "- yet" to the end of it won't necessarily improve matters; we humans have an unfortunate tendency to rely on gut instinct rather than rational analysis for our world model, hence - well, just about any man-made problem you care to name, now or throughout history.

Even though trust in scientists and the real-world application of their work may have taken an upswing thanks to some rapid vaccine development during the current pandemic, there are many areas of scientifically-gleaned knowledge that are still as unpopular as ever. Incidentally, I wonder whether, if it wasn't for much stricter laws in most countries today, we would have seen far more of the quackery that arose during the 1918 Spanish flu pandemic. During that period low-tech 'cures' included gas inhalation, enemas and blood-letting, the first about as safe as last year's suggestion to drink bleach. I've seen very little about alternative cures, no doubt involving crystals, holy water or good old-fashioned prayer, but then I probably don't mix in those sorts of circles (and certainly don't have that type of online cookie profile). But while legislation might have prevented alternative pandemic treatments from being advertised as legitimate and effective, it hasn't helped other areas of science that suffer from widespread hostility.

Partly this is due to the concept - at least in liberal democracies - of free speech and the idea that every thesis must surely have an antithesis worthy of discussion. Spherical planets not your bag, baby? Why not join the Flat Earth Society. It's easy to be glib about this sort of thing, but there are plenty of more serious examples of anti-scientific thinking that show no sign of abating. The key element that disparate groups opposing science seem to have in common is simple: it all comes down to where science disagrees with the world picture they learnt as a child. In most cases this can be reduced even further to just two words: religious doctrine.

This is where a humble approach to cutting-edge research comes in. Humility has rarely been a key characteristic of fictional scientists; Hollywood, for example, has often depicted (usually male) scientists as somewhere on a crude line between power-crazed megalomaniacs and naive, misguided innocents. The more sensational printed volumes and TV documentaries communicating scientific research to a popular audience likewise frequently eschew ambiguities or dead-ends in favour of a this-is-how-it-is approach. Only, quite often, it isn't how it works at all. Doubts and negative results are not merely a by-product of science; they are a fundamental component. Only by discarding failures can the search for an answer to an hypothesis (or, if you prefer the description of the brilliant-yet-humble physicist Richard Feynman, a guess) be narrowed down.

There are plenty of examples where even the most accomplished of scientists have admitted they don't know the answer to something in their area of expertise, such as Sir Isaac Newton being unable to resolve the ultimate cause of gravity. As it was, it took over two centuries for another genius - Albert Einstein - to figure it out. Despite all the research undertaken over the past century or so, the old adage remains as true as ever: good science creates as many new questions as it answers. Key issues today that are unlikely to gain resolution in the next few years - although never say never - include the nature of dark energy (and possibly likewise dark/non-baryonic matter) and the ultimate theory behind quantum mechanics.

Of course, these questions, fascinating though they are, hold little appeal to most people; they are just too esoteric and far removed from everyday existence to be bothered about. So what areas of scientific knowledge or research do non-scientists worry about? As mentioned above, usually it is something that involves faith. This can be broken down into several factors:

  1. Disagreement with a key religious text
  2. Implication that humans lack a non-corporeal element, such as an immortal soul
  3. Removal of mankind as a central component or focal point for the universe 

These obviously relate to some areas of science - from a layman's viewpoint - far more than others. Most non-specialists, even religious fundamentalists, don't appear to have an issue with atomic theory and the periodic table. Instead, cosmology and evolutionary biology are the disciplines likely to raise their ire. Neither is in any sense complete; the number of questions still being asked is far greater than the number of answers so far gleaned from research. The former has yet to understand what 96% of the universe is composed of, while the latter is still piecing together the details of the origin and development of life on our planet, from primordial slime up to Donald Trump (so possibly more of a sideways move, then).

Herein lies the issue: if scientists claim they are 'certain' about the cause of a particular phenomenon or feature of reality, but further research confirms a different theory, then non-scientists can legitimately ask why the new idea should be any more final than the previous one. In addition, the word 'theory' is prone to misinterpretation, implying it is only an idea rather than an hypothesis (guess, if you like) that hasn't yet failed any tests thrown at it, be they practical experiments, digital simulations or mathematical constructions. Bill Bryson's best-selling A Short History of Nearly Everything is an example of how science can be done a disservice by material meant to promote it, in that the book treats science as if it were an ever-expanding body of knowledge rather than a collection of methods used to explore answerable questions about life, the universe, and of course, everything.

Perhaps one answer to all this would be for popular science journalism, from books written by professional scientists to short news items, to include elements related to what is not yet known. The simplistic approach that avoids the failures only serves to strengthen the opinion that experts are arrogant believers in their own personal doctrines, as inflexible and uncompromising as holy writ. 

Unfortunately, in efforts to be both concise and easy to comprehend, much science communication appears to render the discipline in this manner, avoiding dissension and doubt. In addition, the often wonderful - and yet to be resolved - subtleties of research are neglected. For example, the majority of specialists agree that birds are descended from theropod (i.e. carnivorous) dinosaurs, and yet the primary growth axis on the forelimbs of the two groups differs. This issue has not been satisfactorily answered, but the vast collection of evidence, both from fossils and experimentation, supports descent from theropods as the most plausible solution to this particular phylogenetic puzzle. Further research, especially in embryology, may one day find a more complete solution.

Ultimately then, science education would probably benefit from acknowledging the boundaries of uncertainty, where they exist. This may help allay fears that the discipline wants to impose absolutes about everything; in most areas (the second law of thermodynamics excepted) we are still in the early stages of understanding. This doesn't mean that the Earth may be flat or only six thousand years old, but it does mean that science usually works in small steps, not giant paradigm shifts that offer the final say on an aspect of reality. After all, if scientists already knew everything about a subject, there wouldn't be any need for further research. What a boring world that would be!

Tuesday 17 March 2020

Printing ourselves into a corner? Mankind and additive manufacturing

One technology that has seemingly come out of nowhere in recent years is the 3D printer. More correctly called additive manufacturing, it has taken only a few years to go from the building of early industrial models to a thriving consumer market - unlike, say, the long gestation period between the invention of the video cassette recorder and the availability of affordable domestic models.

Some years ago I mentioned the similarities between the iPad and Star Trek: The Next Generation's PADD, with only several decades separating the real-world item from its science fiction equivalent. Today's 3D printers are not so much a primitive precursor of the USS Enterprise-D's replicator as a paradigm shift away in terms of their profound limitations. And yet they still have capabilities that would have seemed incredibly futuristic when I was a child. As an aside, devices such as 3D printers and tablets show just how flexible and adaptable we humans are. Although my generation would have considered them pure sci-fi, today's children regularly use them in schools and even at home, and regard the pocket calculators and digital watches of my childhood in the same way as I looked at steam engines.

But whilst they can't yet produce an instant cup of Earl Grey tea, additive manufacturing tools are now being tested to create organic, even biological, components. Bioprinting promises custom-made organs and replacement tissue in the next few decades, meaning that organ rejection and immune system suppression could become a thing of the past. Other naturally-occurring substances such as ice crystals are also being replicated, in this case for realistic testing of how aircraft wings can be designed to minimise problems caused by icing. All in all, the technology seems to find a home in practically every sector of our society and our lives.

Even our remotest of outposts such as the International Space Station are benefiting from the use of additive manufacturing in cutting-edge research as well as the more humdrum role of creating replacement parts - saving the great expense of having to ship components into space. I wouldn't be surprised if polar and underwater research bases are also planning to use 3D printers for these purposes, as well as for fabricating structures in hostile environments. The European Space Agency has even been looking into how to construct a lunar base using 3D printing, with tests involving Italian volcanic rock as a substitute for lunar regolith.

However, even such promising, paradigm-shifting technologies as additive manufacturing can have their negative aspects. In this particular case there are some obvious examples, such as home-printed handguns (originally with very short lifespans, but with the development of 3D-printed projectiles instead of conventional ammunition, that is changing). There are also subtler but more profound issues that arise from the technology, including how reliance on these systems can lead to over-confidence and the loss of ingenuity. It's easy to see the hubris behind such monumental disasters as the sinking of the Titanic, but the dangers of potentially ubiquitous 3D printing technology are more elusive.

During the Apollo 13 mission in 1970, astronauts and engineers on the ground developed a way to connect the CSM's lithium hydroxide canisters to the LM's air scrubbers, literally a case of fitting a square peg into a round hole. If today's equivalents had to rely solely on a 3D printer - with its power consumption making it a less than viable option - they could very well be stuck. Might reliance on a virtual catalogue of components that can be manufactured at the push of a button sap the creativity vital to the next generation of space explorers?

I know young people who don't have some of the skills that my generation deemed fairly essential, such as map reading and basic arithmetic. But deeper than this, creative thinking is as important as analytical rigour and mathematics to the STEM disciplines. Great physicists such as Einstein and Richard Feynman described how many new ideas in science come from daydreaming and guesswork, not from sticking to robot-like algorithmic processes. Could it be that by using unintelligent machines in so many aspects of our lives we are starting to think more like them, not vice versa?

I've previously touched on how consumerism may be decreasing our intelligence in general, but in this case might such wonder devices as 3D printers be turning us into drones, reducing our ability to problem-solve in a crisis? Yes, they are a brave new world - and bioprinting may prove to be a revolution in medicine - but we need to maintain good, old-fashioned ingenuity; what we in New Zealand call the 'Number 8 wire mentality'. Otherwise, our species risks falling into the trap that there is a wonder device for every occasion - when in actual fact the most sophisticated object in the known universe rests firmly inside our heads.

Sunday 24 February 2019

Core solidification and the Cambrian explosion: did one beget the other?

Let's face it, we all find it easier to live our lives with the help of patterns. Whether it's a daily routine or consultation of an astrology column (insert expletive of choice here) - or even us amateur astronomers guiding our telescopes via the constellations - our continued existence relies on patterns. After all, if we didn't innately recognise our mother's face or differentiate harmless creatures from the shape of a predator, we wouldn't last long. So it shouldn't be any surprise that scientists also rely on patterns to investigate the complexities of creation.

Richard Feynman once said that a scientific hypothesis starts with a guess, which should perhaps be taken with a pinch of salt. But nonetheless scientists like to use patterns when considering explanations for phenomena; at a first glance, this technique matches the principle of parsimony, or Occam's Razor, i.e. the simplest explanation is usually the correct one - excluding quantum mechanics, of course!

An example in which a potential pattern was widely publicised prior to confirmation via hard data was that of periodic mass extinction, the idea being that a single cause might be behind the five greatest extinction events. Four years after Luis Alvarez's team suggested that an asteroid impact (later linked to the Chicxulub crater) caused the Cretaceous-Paleogene extinction 66 million years ago, palaeontologists David Raup and Jack Sepkoski published a 1984 paper hypothesising extinctions at regular intervals due to extraterrestrial impacts.

This necessitated the existence of an object that could cause a periodic gravitational perturbation, in order for asteroids and comets to be diverted into the inner solar system. The new hypothesis was that we live in a binary star system, with a dwarf companion star in a highly elliptical, 26 million-year orbit. This companion would be responsible for the perturbation when at perihelion (i.e. its closest approach to the sun).
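As a rough back-of-the-envelope check - my own sketch, not part of the original hypothesis - Kepler's third law gives a feel for just how distant such a companion would have to be. The figures below assume a combined Sun-plus-companion mass of roughly one solar mass and are only approximate.

```python
# Rough estimate of the orbit size implied by a 26-million-year period,
# using Kepler's third law in solar units: a^3 (AU) ~ M_total (solar masses) * T^2 (years).
# Assumes a combined Sun-plus-dwarf-companion mass of about one solar mass.

AU_PER_LIGHT_YEAR = 63_241  # approximate number of astronomical units in a light year

period_years = 26e6
semi_major_axis_au = period_years ** (2 / 3)          # roughly 88,000 AU
semi_major_axis_ly = semi_major_axis_au / AU_PER_LIGHT_YEAR

print(f"Semi-major axis: about {semi_major_axis_au:,.0f} AU "
      f"({semi_major_axis_ly:.1f} light years)")
# Around 1.4 light years - a sizeable fraction of the distance to the nearest
# stars, so such a companion would be only loosely bound to the sun.
```

An orbit of that size would place the companion in the outer reaches of the Oort cloud, which helps explain both how it could in principle perturb comets and why it would take dedicated telescope surveys to settle the question either way.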

What's interesting is that despite the lack of evidence, the hypothesis was widely publicised in popular science media, with the death-dealing star being appropriately named Nemesis after the Greek goddess of retribution. After all, the diversification of mammals was a direct result of the K-T extinction and so of no small importance to our species.

Unfortunately, further research has shown that mass extinctions don't fall into a neat 26 million-year cycle. In addition, orbiting and ground-based telescopes now have the ability to detect Nemesis and yet have failed to do so. It appears that the hypothesis has reached a dead end; our local corner of the universe probably just isn't as tidy as we would like it to be.

Now another hypothesis has appeared that might seem to belong in a similar category of neat pattern matching taking precedence over solid evidence. Bearing in mind the importance of the subject under scrutiny - the origin of complex life - are researchers jumping the gun in order to gain kudos if proven correct? A report on 565 million year-old minerals from Quebec, Canada, suggests that at that time the Earth's magnetic field was less than ten percent of its present strength. This is considerably lower than an earlier estimate of forty percent. Also, the magnetic poles appear to have reversed far more frequently during this period than they have since.

As this is directly related to the composition of the Earth's core, it has led to speculation that the inner core was then in the final stage of solidification. This would have caused increased movement in the outer liquid, iron-rich core, and thus the rapid generation of a much stronger magnetic field. In turn, the larger the dipole intensity of the magnetic field, the fewer high-energy particles - both cosmic rays and particles from our own sun - reach the Earth's surface. What is particularly interesting about this time is that it is just (i.e. about twenty million years) prior to the so-called Cambrian explosion, following three billion years or so of only microbial life. So were these geophysical changes responsible for a paradigm shift in evolution? Before accepting that, we would need to confirm the accuracy of this apparently neat match.

It's well known that some forms of bacteria can survive in much higher radiation environments than larger-scale life forms like us; extremophiles such as Deinococcus radiodurans have even been found thriving inside nuclear reactors. Therefore it would seem obvious that more complex organisms couldn't evolve until the magnetic field was fairly strong. But until circa 430 million years ago there was no life on land (there is now evidence that fungi may have been the first organisms to survive in this harsh environment). If all life was therefore in the sea, wouldn't the deep ocean have provided the necessary radiation protection for early plants and animals?

By 600 million years ago the atmospheric oxygen content was only about ten percent of today's value; clearly, those conditions would not have been much use to pretty much any of the air-breathing animals we know to have existed. In addition, the Ediacaran assemblage, albeit somewhat different from most subsequent higher animals, arose no later than this time - with chemical evidence suggesting its development stretched back a further 100 million years. Therefore the Canadian magnetic mineral evidence seems to be too late for the core solidification and higher magnetic field generation to have given the kick-start to a more sophisticated biota.

In addition, we shouldn't forget that it is the ozone layer that acts as an ultraviolet shield; UVB is just as dangerous to many organisms, including near-surface marine life, as cosmic rays and high-energy solar particles. High-altitude ozone is thought to have reached its current density by 600 million years ago, with oxygen produced by blue-green algae (cyanobacteria) as its primary source. O2 levels also increased at this time, perhaps driven by climate change at the end of a global glaciation.

Although the "Snowball Earth" hypothesis - that at least half of all ocean water was frozen solid during three or four periods of glaciation - is still controversial, there is something of a correlation in time between the geophysical evidence and the emergence of the Ediacaran fauna. As to the cause of this glacial period, it is thought to have been a concatenation of circumstances, with emergent plate tectonics as a primary factor.

How to conclude? Well, we would all like to find neat, obvious solutions, especially to key questions about our own origin. Unfortunately, the hypothesis based on the magnetic mineral evidence appears to selectively ignore the evolution of the Ediacaran life forms and the development of the ozone layer. The correlation between the end of "Snowball Earth" and the Ediacaran biota evolution is on slightly firmer ground, but the period is so long ago that even dating deposits cannot be accurate except to the nearest million years or so.

It's certainly a fascinating topic, so let's hope that one day the evidence will be solid enough for us to finally understand how and when life took on the complexity we take for granted. Meanwhile, I would take any speculation based on new evidence with a Feynman-esque pinch of salt; the universe frequently fails to match the nice, neat, parcels of explanations we would like it to. Isn't that one of the factors that makes science so interesting in the first place?

Tuesday 14 May 2013

What, how and why? Are there 3 stages to science?

Not being philosophically inclined I was recently surprised to find myself constructing an armchair thesis: it had suddenly dawned on me that there might be three broad phases or stages to the development of scientific ideas. I'm fairly certain I haven't read about anything along similar lines, so let me explain, safe in the knowledge that if it's a load of fetid dingo's kidneys, it's entirely of my own doing.

Stage 1

Stage one is the 'what' phase: simply stated, it is about naming and categorising natural phenomena, a delineation of cause and effect. In a sense, it is about finding rational explanations for things and events at the expense of superstition and mysticism.  In addition, it utilises the principle of parsimony, otherwise known as Occam's (or Ockham's) Razor: that the simplest explanation is usually correct. 

Although there were a few clear moments of stage one in Ancient Greece - Eratosthenes' attempt to measure the size of the Earth using Euclidean geometry being a prime example - it seems to have taken off in earnest with Galileo. Although his work is frequently mythologised (I side with the rolling-weights brigade rather than the dropping-objects-from-the-Leaning-Tower-of-Pisa one), Galileo most likely devised both actual and thought experiments to test fundamental findings, such as the separate effects of air resistance and gravity.

Of course, Galileo was primarily interested in physics, but the other areas of science followed soon after. Systematic biology came to the fore in such practical work as the anatomical investigations of William Harvey - pioneer in the understanding of blood circulation - and the glass bead microscopes of Antonie van Leeuwenhoek. The work of the latter, interestingly enough, was largely aimed at understanding how small-scale structure in edible substances created flavours. It's also worth thinking about how this research expanded horizons: after all, no-one had ever seen such miniature marvels as bacteria before. I wonder how difficult the engravers of illustrated volumes found it, working from sketches and verbal descriptions of sights they had never seen themselves? But then again, no-one has ever directly imaged a quark either…

Talking of biology, we shouldn't ignore Carl Linnaeus, the Swedish scientist who started the cataloguing methodology in use today. New Zealand physicist Ernest Rutherford may have disparagingly referred to all branches of science other than physics as mere stamp collecting, but quite apart from the wild inaccuracy of his statement, it seems obvious that without agreed standards of basic definitions there is no bedrock for more sophisticated research.

The repetitive, largely practical aspect of this phase in such disciplines as geology and taxonomy meant that largely untrained amateurs could make major contributions, such as the multitude of Victorian parsons (whose ranks Charles Darwin very nearly joined) who worked on the quantity-over-quality principle in collecting and cataloguing immense amounts of data. Of course, Darwin went far beyond phase one, but his work built on the evaluation of evolutionary ideas (try saying that three times fast) that numerous predecessors had discussed, from the Ancient Greeks to John Ray in the late Seventeenth Century.

This isn't to say that stage one science will be finished any time soon. The Human Genome Project is a good example of a principally descriptive project that generated many surprises, not least that it is proving more difficult than predicted to utilise the results in practical applications. Although in the BBC television series The Kingdom of Plants David Attenborough mentioned that the Royal Botanic Gardens at Kew contains 90% of known plant species, there are still plenty of remote regions - not to mention the oceans - yet to yield all their secrets to systematic scientific exploration. In addition to the biota yet to be described in scientific records, the existing catalogues are in the process of major reorganisation. For example, the multitude of duplicate plant names is currently being addressed by taxonomic experts, a task which has so far uncovered 600,000 superfluous designations. It isn't just plants either: a recent example was the announcement that DNA evidence suggests there is probably only a single species of giant squid rather than seven. It may sound tedious and repetitive, but without comprehensive labelling and description of the natural world, it would be impossible to progress to the next stage.

Stage 2

Who was the first person to move beyond cataloguing nature to in-depth analysis? We'll probably never know, but bearing in mind that some of the Ionian philosophers and Alexandrian Greeks performed practical experiments, it may well have been one of them.

By looking to explore why phenomena occur and events unfold the way they do, our species took a step beyond description to evaluation. If art is holding a mirror up to nature, then could the second phase be explained as holding a magnifying glass up to nature, reducing a phenomenon to an approximation, and explaining how that approximation works?

For example, Newton took Galileo and Kepler's astronomical work and ran with it, producing his Law of Universal Gravitation. The 'how' in this case is the inverse-square law of attraction, which explains how bodies orbit their common centre of mass. However, Newton was unable to delineate what caused the force to act across vast expanses of empty space, an explanation that had to wait for stage three.
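For reference, the law in its modern notation (Newton himself expressed it geometrically, and the value of the constant G was only pinned down over a century later via Cavendish's experiment):

```latex
F \;=\; G\,\frac{m_1 m_2}{r^2}
```

Here F is the attractive force between two masses m_1 and m_2 separated by a distance r, and G is the gravitational constant. The equation describes how the force behaves without saying anything about why it exists - exactly the gap Newton himself acknowledged.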

In contrast to the smug, self-satisfied attitude of scientists at the beginning of the Twentieth Century, the techniques of modern science suggest that there is a feedback cycle in which knowing which questions to ask is at least as important as gaining answers, the adage in this case being 'good experiments generate new questions'. Having said that, some of the largest and most expensive contemporary experiments, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Large Hadron Collider (LHC), have each been principally designed to confirm a single hypothesis.

As recent evidence has shown, even some of the fundamentals of nature, including dark matter and dark energy, are only just being recognised. Therefore science is a long way from recognising all first principles, let alone understanding them. Closer to home, that most complex of known objects, the human brain, still holds a lot of secrets, and probably will continue to do so for some time to come.
Though microelectronics in general and computers in particular have allowed the execution of experiments in such fields as quantum teleportation, considered close to impossible by the finest minds only half a century ago, there are several reasons why computer processing power is getting closer to a theoretical maximum using current manufacturing techniques and materials. Therefore the near future may see a slowing down in the sorts of leading edge experimental science that has been achieved in recent decades. But how much progress has been made in phase three science?

Stage 3

This is more difficult to define than the other two phases and can easily veer into philosophy, a discipline that gets a poor press from many professional scientists. Physicist Richard Feynman, for example, is supposed to have disparaged it as 'about as useful to scientists as ornithology is to birds'. Despite this - and the probability that there are as many philosophies of science as there are philosophers - it's easy to see that the cutting edge of science, particularly theoretical physics, generates as much discussion over its validity as any work of art. If you've read one of the myriad critiques of superstring theory, for example, then you will know that it can be viewed as a series of intellectual patterns (accompanied by diabolical equations) that may never be experimentally confirmed. In that case, is string theory really just a collection of philosophical hypotheses, unproven by experiment or observation and likely to remain so? The minuteness of the scale (an underwhelming description if ever there was one) makes the prospect of directly recording strings themselves - as opposed to their effects - highly unlikely.

If that is the case, then just where can you draw the line between science and philosophy? Of course, one of the fundamental tenets of a valid hypothesis is that it makes testable predictions that no other hypothesis can account for. But with over a century of theories that increasingly fail to follow common sense or match everyday experience, perhaps this is a sign of approaching maturity in science, as we finally advance beyond the crude limitations of our biological inheritance and its limited senses. Surely one key result of this is that the boundaries between new ideas promulgated by scientists and the thoughts of armchair philosophers will become increasingly blurred? Or is that just fighting talk?

Whereas scientists engaged in phase two investigations seek to find ever more accurate approximations for phenomena, phase three includes the search for why one theory is thought to be correct over another. A prominent example may help elucidate. Further to Galileo in phase one and Newton in phase two, Einstein's General Relativity, which explains the cause of gravity via the curvature of spacetime, is clearly an example of phase three. Of course, contemporary physicists would argue that Einstein's equations are already known to lack finality due to their incompatibility with quantum mechanics. Herein lies the rub!

One problem that has caused dissension amongst many scientists is a possibly even more 'ultimate' question: why is the universe finely tuned enough for life - and more than that, intelligent life - to exist? The potential answers cover the entire gamut of human thought, from the conscious design principle supported by some religiously-minded scientists, to the invocation of the laws of probability in a multiverse hypothesis, requiring an immense number of universes each with different fundamental constants (and therefore including a lucky few capable of producing life). But the obvious issue here is: wouldn't Occam's Razor suggest the former is more likely than the latter? As Astronomer Royal Sir Martin Rees states, this is veering into metaphysical territory, which, except for scientists with religious convictions, is an area usually avoided like the plague. However, it may eventually become possible to run computer models that simulate the creation of multiple universes and so, as bizarre as it seems, go some way towards creating a workable theory out of something that to most people is still a purely philosophical notion. Talk about counting angels on a pinhead!

I can't say I'm entirely convinced by my own theory of three stages to science, but it's been interesting to see how the history and practice of the discipline can be fitted into it. After all, as stated earlier, no-one has ever observed a quark; in the first days of their formulation, quarks were sometimes seen as purely mathematical objects anyway. So if you're doubtful I don't blame you, but never say never...