Why we think the weather affects how a cricket ball swings ... when it doesn't

Swing and a miss. Anthony Devlin/PA

Following the rollercoaster 2015 Ashes series, which saw England defeat Australia 3-2, the two teams are set to meet again in a series of one-day games – weather permitting, that is. It’s been a cloudy and humid summer in much of the UK, and if you believe folklore, that might have been affecting the games.

In the last Test match at The Oval, England captain Alastair Cook, unusually for that ground, elected to field first after winning the toss. The Australians went on to amass nearly 500 runs in their first innings and England lost the game. Yet no commentator questioned the wisdom of Cook’s decision, despite it backfiring in spectacular fashion, because so many believe that cloudy, humid conditions favour swing bowling (where the ball curves in the air after the bowler releases it and before it hits the ground).

At the beginning of the Oval Test Match, television and press pundits were talking about the conditions favouring the bowlers. Cook himself explicitly mentioned the overcast conditions as being a factor in his decision to bowl first in an interview with Channel 5.

The power of the weather

Where does this almost unanimous belief in the power of the weather come from? In part, it comes from potent legends such as that of Australian bowler Bob Massie, who famously took 16 wickets on his Test debut at Lord’s, on a day when conditions were described as “Perfect … humid, the air was heavy and the clouds were oyster in hue.” But is there any systematic evidence for the phenomenon beyond this type of compelling anecdote?

It turns out that there is actually rather a lot of scientific evidence that draws on aerodynamic experiments, often using wind tunnels with variable atmospheric conditions. The result? None of it finds support for the idea that humid overcast conditions affect how much the ball swings.

The first scientific study of cricket ball swing was published as long ago as 1955 and a growing body of research has periodically been reported in mainstream media outlets. There is no reason to think that an increasingly data-driven, professional cricket community is unaware of the evidence. So what accounts for the limpet-like stickiness of this roundly debunked theory?

Perceptions are not what they seem

Part of the reason might simply be lack of information, understanding or the possession of outright misinformation. The idea that humid air is thicker than dry air seems to provide a plausible common-sense explanation for the swing effect. In fact – and counter-intuitively – humid air is less dense than dry air.

A growing body of evidence on “motivated cognition” explains why even when faced with clear scientific evidence, people may not alter their opinions to match the information. Motivated cognition essentially suggests that individuals unconsciously process information in order to derive conclusions that suit their goals or preferences.

Research on motivated cognition has shown how people with different ideologies do not come to the same conclusions on controversial scientific issues such as global warming or vaccination, given the same information. In my own work, I have shown this to be the case for attitudes to prenatal genetic testing.

On the ropes. From www.shutterstock.com

Why would people want to believe that cloudy days are better for swing bowling? Simply, this is the accepted view of the entire cricket community. To believe otherwise would be to lose credibility with one’s in-group.

Imagine the reaction if Alastair Cook had batted first at the Oval on a cloudy, humid day and England had gone on to lose the match. Motivated cognition would suggest that in such cases where the predicted conditions are not accompanied by curving deliveries, cognitive work in the form of ad hoc explanations will be carried out to preserve the initial belief. Hence we saw commentators during the final Ashes test match suggesting that the bowlers didn’t get it quite right on the first morning or that the Australians batted well enough to neutralise the swing.

Paradoxically, the more expert knowledge of cricket someone has, the more these ad hoc rationalisations will convince both themselves and others, based on their deep knowledge of other aspects of the game.

Alternative explanations

Psychological research on the persistence of misinformation suggests that false beliefs are difficult to correct without a plausible causal account to replace the erroneous one. There is little if anything written or discussed in this regard in the media that could capture the imagination of those currently in thrall to the accepted narrative. In reality, there are several hypotheses that appear eminently plausible and that could be advanced as alternative explanations.

The most obvious of these is that if a bowler believes that overcast conditions are conducive to swing, then they will bowl so as to impart maximum swing when those conditions prevail. In fact this is exactly what Bob Massie said in a radio interview after his spectacular 1972 performance at Lord’s:

Once I woke up and looked out of the window and saw the greyness there, I knew it was going to be a day that if I, you know, bowled fairly well, I should get wickets because it was one of those tailor-made days for swing bowling.

Another possibility is that selectors, believing the conditions will favour swing bowling, are more likely to pick the squad’s swing bowlers in the final team on the morning of a match – and that captains will then deploy this type of bowler more often during the periods of play when conditions appear favourable.

Is there even anything to explain?

Full swing. PitchVision, CC BY-ND

But isn’t this all jumping a little ahead? What systematic evidence do we have that shows that there is more swing on display under these conditions (irrespective of why it may be)? To the best of this writer’s knowledge, there is none. There are only the powerful anecdotes previously mentioned and the firm conviction from those who play and watch the game that the phenomenon is real. Is it therefore possible that there really isn’t any observable connection between weather and swing but that psychological illusions are at play?

Nobel prize-winning psychologist Daniel Kahneman and his colleague Amos Tversky showed that the way people think about probability and come to judgements based on evidence is subject to biases that make us rather poor intuitive scientists. The availability heuristic means that vivid, memorable events – such as Massie’s Test debut – spring to mind easily and so feel more common than they really are.

Confirmation bias means that we seek out, or retrieve from memory, examples of events that are consistent with our prior beliefs and ignore information that is inconsistent. Together, these two well-documented cognitive biases mean that we are far more likely to recall the instances when the ball swung strongly under cloudy skies than when it behaved in the same way under the blazing sun.

How to win more often

What should we conclude from all this? If I were England’s Director of Cricket, Andrew Strauss, I would want to know at minimum whether there is any truth to the basic contention that weather affects swing, in order to maximise the advantage gained from the England captain winning the toss.

The England and Wales Cricket Board has access to the necessary data, from TV coverage and Hawk-Eye technology, to work this out. If data and science can make the next series an even greater triumph for England than the last, why not embrace it?


Six amazing sights that look even better from the International Space Station

Hurricane Arthur photographed by ESA astronaut Alexander Gerst. ESA/NASA

Imagine seeing the lights of cities spreading around the Nile Delta and then in less than an hour gazing down on Mount Everest. The astronauts on the International Space Station (ISS) are among the lucky few who will have this humbling, once-in-a-lifetime experience of seeing the beauty of Earth from space.

The ISS doesn’t just offer spectacular and countless views of the natural and man-made landscapes of our planet. It also immerses its residents into the Earth’s space environment and reveals how dynamic its atmosphere is, from its lower layers to its protective magnetic shield, constantly swept by the solar wind.

The best views are seen from the Cupola, an observation deck module attached to the ISS in 2010 and comprising seven windows. So, what are the amazing sights that you can see from the space station?

1. Storms and lightning

When the ISS orbits over a sea of thunderclouds, it’s not rare for astronauts to witness an impressive amount of lightning. What is unusual, however, is seeing lightning sprites, which were observed on August 10th by astronauts aboard the space station.

ISS astronauts spotted a sprite (the red jellyfish-like structure on the right of the image) appearing above thunderclouds on August 10, 2015. NASA

Sprites are electrical discharges, similar to lightning. However, instead of occurring in the lower layers of Earth’s atmosphere, these very fast discharges occur much higher up – their red colour comes from excited nitrogen at that altitude – and are as such difficult to observe from the ground.

2. Sunrises and sunsets

Sunset over the Indian Ocean. NASA/ESA/G Bacon

With the ISS orbiting the Earth every 90 minutes, astronauts can see the Sun rise and set around 16 times every 24 hours. The dramatic views from the station display a rainbow-like horizon as the Sun appears and disappears beyond it.
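
A quick back-of-the-envelope check confirms the figure: 24 hours is 1,440 minutes, and 1,440 ÷ 90 minutes per orbit gives 16 orbits a day – each one bringing a sunrise and a sunset.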

Swiftly flow the days

The changes in colour are due to the angle of the solar rays and their scattering in the Earth’s atmosphere. While similar jaw-dropping views can be seen from Earth, seeing our home planet lit up by the rising Sun certainly adds to the intensity of the picture.

3. Stars and the Milky Way

Amazing sightings of distant astronomical objects as seen from the space shuttle

From the ground, atmospheric conditions and light pollution affect our ability to see stars and other celestial bodies. As light travels through layers of hot and cold air, the bending of its rays renders a flickering image of these distant objects, while atmospheric particles such as dust prevent us from seeing fainter objects such as nebulae and galaxies. The near-absence of atmosphere at the orbiting altitude of the ISS allows the residents of the space station to see the stars, the Milky Way and other astronomical features with much greater clarity than is possible on Earth.

4. Meteor showers

The disintegration of a Perseid meteor photographed in August 2011 from the ISS. NASA

Astronauts aboard the ISS can also witness the disintegration of meteoroids in the Earth’s atmosphere. These small bodies are fragments detached from celestial bodies such as asteroids and comets. As they enter the Earth’s atmosphere at great speed, the heat generated by their interaction with the air rapidly destroys them. Whereas the chance of seeing them from the ground is very much weather dependent, being on the ISS guarantees the best seats to watch these shooting stars streaking across our planet’s sky.

5. Auroras

Also known as the northern and southern lights, auroras are created when solar storms – large magnetised clouds of energetic particles launched from the sun – or strong solar wind interact with the Earth’s magnetic shield. Upon reaching the Earth, these solar streams energise particles within the planet’s magnetic shield.

Time lapses showing the ISS travelling through auroras

When they enter the upper layers of the Earth’s atmosphere, these energetic particles excite the nitrogen and oxygen atoms present at those altitudes. As the atoms return from their excited state, they emit light whose colour indicates the amount of energy they absorbed. This typically produces green and red, ribbon-like curtains.

6. Cosmic rays

Galactic cosmic rays aren’t really a phenomenon you can see. These energetic sub-atomic particles come from intense astronomical sources such as exploding stars or black holes. If they pass into the body they can damage tissue and break DNA, causing various diseases over time.

Most cosmic rays do not penetrate the Earth’s thick atmosphere. Since the ISS sits outside this protected zone, its astronauts are much more likely to be struck by these particles. Astronauts regularly see flashes of light when they close their eyes, which are thought to be caused by cosmic rays interacting with the parts of the body that play a role in vision, such as the optic nerve or the visual centres of the brain.

Solar storms, which have a strong magnetic structure, act as a shield against cosmic rays. A solar storm passing by the Earth can be indirectly witnessed by astronauts aboard the ISS via a drop in the count of cosmic rays, also known as the “Forbush decrease”. What a sensation it must be to “feel” a storm passing by the Earth’s system.


Overthinking could be driving creativity in people with neurotic disorders

Constantly lost in thought? You may want to make the most of it. Radharani, Shutterstock

People who suffer from neuroticism – a personality trait characterised by anxiety, fear and negative thoughts – are extremely tuned in to looking for threats. For that reason, you might expect them to perform well in jobs requiring vigilance: stunt pilot, aviator, bomb disposal expert. Yet the evidence suggests they are actually more suited to creative jobs.

Exactly what drives neuroticism, and the creativity it is associated with, is not known. But researchers have now come up with a theory which suggests it could be down to the fact that people who score highly on neuroticism tests, meaning they are prone to anxiety or depression, tend to do a lot of thinking – often at the expense of concentrating on the task at hand.

Past, present and future

The hypothesis, which is yet to be experimentally verified, is an extension of what we already know. People who have neurotic traits typically look for things to worry about (a mechanism dubbed “self-generated thinking”). For example, people who get depressed are consumed by such self-generated negative thoughts that they forget what they are supposed to be doing. In other words, they are not very tuned in to the “here and now”, which is pretty important if you need somebody to concentrate on defusing a bomb.

What the new research helps to do is explain the underlying brain mechanisms that interfere with “on the job” thinking. A certain amount of brain arousal is great for concentration, but too much interferes with the clear thinking you need when performing stunts, flying planes or disposing of bombs.

Isaac Newton is sometimes described as a neurotic. Bonhams/wikimedia

So where does the creativity come in? The authors argue that people who engage in self-generated thinking are creative because they are not rooted in reality – they are away with the fairies. Indeed, they may resist attempts to get them to concentrate on reality whilst they focus on their own thoughts. It is hardly a surprise, then, that their ideas can be new, whacky and original.

So while people scoring high on neuroticism may struggle with a lot of stress, they can still have a successful working life. They may actually be able to find creative solutions to problems that didn’t exist in the first place, and in the process come up with some pretty useful and imaginative stuff – rather like Billy Liar, who escaped his tedious existence by conjuring up some fairly exciting daydreams.

Remaining questions

We know that people who are clinically depressed spend an extraordinary amount of time living in the past. We know that people diagnosed with chronic worry (Generalised Anxiety Disorder) spend an extraordinary amount of time living in the future. The strength of the study is that it pulls together what is already known about people who spend a lot of time engaged in distorted thinking, some of which can be labelled as creative.

The authors argue that this creative flair applies specifically to problem solving, as they believe rumination and worry improve such skills. However, this is questionable as there is actually evidence that people who are depressed or worry are not very good at problem solving at all. Indeed, one of the interventions recommended for both conditions is Problem Solving Therapy. To adequately solve problems you need to be approaching reality and its problems, not avoiding them through aimless thinking. The new study falls short by not discussing this.

Anxiety and depression can be a lonely place. hikrcn

The authors also argue that psychological interventions such as meditation and mindfulness – which are thought to dampen some of these heightened responses by grounding people in the “here and now” – may do more harm than good. The jury is still out, but there is enough evidence available suggesting the benefits of mindfulness for people who are depressed and anxious with limited side effects.

Neuroticism, by its very nature, alerts you to past and future danger, and some individuals can make good use of that – our caveman ancestors came equipped with primitive brain parts that allowed them to predict threats. But even if anxious or depressed people are able to come up with some great ideas, they are surely far more likely to contribute to society in the long run if they can find relief from their suffering.


With silicon pushed to its limits, what will power the next electronics revolution?

rbulmahn, CC BY

The semiconducting silicon chip launched the revolution of electronics and computerisation that has made life in the opening years of the 21st century scarcely recognisable from the start of the last. Silicon integrated circuits (IC) underpin practically everything we take for granted now in our interconnected, digital world: controlling the systems we use and allowing us to access and share information at will.

The rate of progress since the first transistor in 1947 has been enormous, with the number of transistors on a single chip growing from a few thousand in the earliest microprocessors to more than two billion today. Moore’s law – the observation that the transistor count on a chip doubles roughly every two years – still holds true 50 years after it was proposed.
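
As a rough sanity check on those numbers, consider the sketch below (the 1971 Intel 4004 and its roughly 2,300 transistors are an assumed baseline for illustration, not a figure from this article): a doubling every two years, compounded over 44 years, lands at the same order of magnitude as today’s biggest chips.

```python
# Back-of-the-envelope Moore's law check. The Intel 4004 baseline
# (~2,300 transistors in 1971) is an assumption for illustration,
# not a figure taken from the article.

def moores_law(count_start, year_start, year_end, doubling_years=2):
    """Project a transistor count forward with a fixed doubling period."""
    doublings = (year_end - year_start) / doubling_years
    return count_start * 2 ** doublings

projected = moores_law(2_300, 1971, 2015)
print(f"{projected:.2e}")  # ~9.6e+09, the same order as today's largest chips
```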

Moore’s law still holds true after 50 years. shigeru23, CC BY-SA

Nevertheless, silicon electronics faces a challenge: the latest circuit features measure just 7nm across – somewhere between a red blood cell (7,500nm) and a single strand of DNA (2.5nm). The size of an individual silicon atom (around 0.2nm) would be a hard physical limit – a circuit one atom wide – but silicon’s behaviour becomes unstable and difficult to control before that point.

Without the ability to shrink ICs further, silicon cannot continue to produce the gains it has so far. Meeting this challenge may require rethinking how we manufacture devices, or even finding an alternative to silicon itself.

Speed, heat, and light

To understand the challenge, we must look at why silicon became the material of choice for electronics. It has many points in its favour: it is abundant and relatively easy to process, it has good physical properties, and it possesses a stable native oxide (SiO2) which happens to be a good insulator. But it also has several drawbacks.

For example, a great advantage of combining more and more transistors into a single chip is that it enables an IC to process information faster. But this speed boost depends critically on how easily electrons are able to move within the semiconductor material. This is known as electron mobility, and while electrons in silicon are quite mobile, they are much more so in other semiconductor materials such as gallium arsenide, indium arsenide, and indium antimonide.

The useful conductive properties of semiconductors don’t just concern the movement of electrons, however, but also the movement of what are called electron holes – the gaps left behind in the lattice of electrons circling around the nucleus after electrons have been pushed out.

Modern ICs use a technique called complementary metal-oxide semiconductor (CMOS) which uses a pair of transistors, one using electrons and the other electron holes. But electron hole mobility in silicon is very poor, and this is a barrier to higher performance – so much so that for several years manufacturers have had to boost it by including germanium with the silicon.

Silicon’s second problem is that performance degrades badly at high temperatures. Modern ICs with billions of transistors generate considerable heat, which is why a lot of effort goes into cooling them – think of the fans and heatsinks strapped to a typical desktop computer processor. Alternative semiconductors such as gallium nitride (GaN) and silicon carbide (SiC) cope much better at higher temperatures, which means they can be run faster and have begun to replace silicon in critical high-power applications such as amplifiers.

Lastly, silicon is very poor at transmitting light. While lasers, LEDs and other photonic devices are commonplace today, they use alternative semiconductor compounds to silicon. As a result two distinct industries have evolved, silicon for electronics and compound semiconductors for photonics. This situation has existed for years, but now there is a big push to combine electronics and photonics on a single chip. For the manufacturers, that’s quite a problem.

Semiconductor lasers, where alternatives to silicon such as germanium have already found a role. 彭家杰, CC BY-SA

New materials for the future

Of the many materials under investigation as partners for silicon to improve its electronic performance, perhaps three have promise in the short term.

The first concerns silicon’s poor electron hole mobility. A small amount of germanium is already added to improve this, but using large amounts or even a move to all-germanium transistors would be better still. Germanium was the first material used for semiconductor devices, so really this is a “back to the future” move. But re-aligning the established industry around germanium would be quite a problem for manufacturers.

The second concerns metal oxides. Silicon dioxide served as the insulator within transistors for many years, but with miniaturisation this layer has become so thin that it has begun to lose its insulating properties, leading to unreliable transistors. Despite a move to hafnium dioxide (HfO2) as a replacement insulator, the search is on for alternatives with even better insulating properties.

Most interesting, perhaps, is the use of so-called III-V compound semiconductors, particularly those containing indium such as indium arsenide and indium antimonide. These semiconductors have electron mobility up to 50 times higher than silicon. When combined with germanium-rich transistors, this approach could provide a major speed increase.

Yet all is not as simple as it seems. Silicon, germanium, oxides and the III-V materials are crystalline structures that depend on the integrity of the crystal for their properties. We cannot simply throw them together with silicon and get the best of both. Dealing with this problem, crystal lattice mismatch, is the major ongoing technological challenge.

Different flavours of silicon

Despite its limitations, silicon electronics has proved adaptable, able to be fashioned into reliable, mass market devices available at minimal cost. So despite headlines about the “end of silicon” or the spectacular (and sometimes rather unrealistic) promise of alternative materials, silicon is still king and, backed by a huge and extremely well-developed global industry, will not be deposed in our lifetime.

Instead progress in electronics will come from improving silicon by integrating other materials. Companies like IBM and Intel and university labs worldwide have poured time and effort into this challenge, and the results are promising: a hybrid approach that blends III-V materials, silicon and germanium could reach the market within a few years. Compound semiconductors have already found important uses in lasers, LED lighting/displays and solar panels where silicon simply cannot compete. More advanced compounds will be needed as electronic devices become progressively smaller and lower powered and also for high-power electronics where their characteristics are a significant improvement upon silicon’s capabilities.

The future of electronics is bright, and it’s still going to be largely based on silicon – but now that silicon comes in many different flavours.


Solved: the mystery of why it's impossible to pull apart interleaved phone books

No glue, only friction. Danny Nicholson/Flickr, CC BY-NC-ND

People, trucks and even military tanks have tried and failed to pull apart two phone books lying face up with their pages interleaved, like a shuffled deck of cards. While physicists have long known that this must be due to enormous frictional forces, exactly how these forces are generated has been an enigma – until now.

A team of physicists from France and Canada has discovered that it is the layout of the books coupled with the act of pulling that is producing the force.

The power of approximation

Finding an approximate solution to a complex problem is an essential skill in science (and in life). Often we are faced with questions that we can’t answer exactly, but sometimes good enough is, well, good enough. Enrico Fermi, one of the greatest physicists of the 20th century, gave his name to such “Fermi questions”, as he was famous for encouraging this skill in his students.

Here’s one example: “How many piano tuners are there in Chicago?”. I have no idea, and I’m not sure Fermi knew either. But by estimating the population of Chicago, the fraction that might play the piano, and how often a piano needs tuning, you can come up with a pretty good guess, without diving into the phone book (it’s probably closer to 100 than to 1,000).
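
Here is one way the estimate might run, written out as a minimal sketch. Every figure in it is an assumed round number chosen for illustration – the point is the chain of reasoning, not the inputs.

```python
# A Fermi estimate of piano tuners in Chicago. Every figure is an
# assumed round number; the method, not the inputs, is the point.

population = 3_000_000           # assumed population of Chicago
people_per_household = 2.5       # assumed household size
piano_share = 1 / 20             # assumed share of households with a piano
tunings_per_piano_year = 1       # assumed: each piano tuned about once a year

tunings_per_day = 4              # assumed daily workload of one tuner
working_days_per_year = 250      # assumed working days

pianos = population / people_per_household * piano_share
tunings_needed = pianos * tunings_per_piano_year
tuners = tunings_needed / (tunings_per_day * working_days_per_year)
print(round(tuners))             # ~60: closer to 100 than to 1,000
```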

Doing these “back-of-an-envelope” calculations is usually the first step in approaching a scientific question. Sometimes that is as far as you need to go. Sometimes it tells us that the question is worth investigating more to find the exact answer.

Not even Brian Blessed can do it.

This is exactly what the team investigating the friction of phone books did. The back-of-the-envelope answer is friction between the pages. However, assuming the friction is proportional to the number of pages drastically underestimates the total force generated, which seems to rise exponentially with the number of pages. Previous attempts to improve this simple model – by including the effects of gravity and air pressure pushing the pages of the books together – have all failed to explain the result.

Surprisingly simple

So, when the back-of-the-envelope calculation fails, things get serious. In this case, a traction instrument (think of the opposite of a vice) was brought out and used to pull books apart while measuring the force required to do so. But not just any books: rigorously prepared test books with specific numbers of pages, built from paper sheets of exact dimensions, interleaved to high precision.

Data in hand, a mathematical model was put together, and it turned out to be driven by a surprisingly simple fact. The pages of each book are separated by the interleaving and end up “spreading out”, lying at a slight angle from the spine. When the books are pulled away from each other, the pages want to move back closer together and end up squeezing the interleaved pages from the other book. And gripping something tightly greatly increases the friction.
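
A toy calculation, sketched below, shows why this geometry matters. Treat the pull as capstan-like: because each page lies at a slight angle, the tension itself supplies the normal force, so friction compounds from page to page instead of merely adding up. All of the numbers are assumptions picked for illustration – this is not the researchers’ published model.

```python
# Toy model: friction that merely adds up versus friction that
# compounds because the pull itself squeezes the pages together.
# All values are illustrative assumptions, not the published model.

MU = 0.3            # assumed paper-on-paper friction coefficient
PAGE_WEIGHT = 0.05  # N, assumed gravity-driven normal force per page
THETA = 0.01        # rad, assumed tilt of each interleaved page

def linear_pull(pages):
    # Naive estimate: total friction proportional to the page count.
    return MU * PAGE_WEIGHT * pages

def amplified_pull(pages, base_grip=1.0):
    # Each page deflects the tension slightly, so the tension presses
    # the stack together and the required force grows geometrically,
    # much like a rope wound around a post (the capstan effect).
    return base_grip * (1 + MU * THETA) ** pages

for n in (100, 500, 1000, 2000):
    print(f"{n:5d} pages: linear {linear_pull(n):6.1f} N, "
          f"amplified {amplified_pull(n):8.1f} N")
```

In this toy setup the naive estimate grows from 1.5 N to 30 N between 100 and 2,000 pages, while the compounding version grows from about 1.3 N to about 400 N – modest numbers, but the exponential shape is the point.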

Just impossible.

As an example, imagine a person with long hair in a swimming pool. While they float underwater, their hair can spread out – much like the pages of the books are spread out by the interleaving. If our volunteer then swims off, their hair naturally moves back together, following the head that is pulling it along. The pages of our books also want to move back together behind the thing pulling them (the spine of the book), but instead just squeeze more tightly on the pages of the other book, which are in the way. Pulling harder on the books only increases the friction.

This is an example of the geometrical amplification of friction: how the layout of the books produces forces far beyond what is expected. Knots are another example – looping a rope around itself greatly increases the friction, resulting in a secure grip. The authors point out the recent resurgence of interest in this kind of problem and in the general field of tribology, the study of surfaces in relative motion.

This is being driven by the need to understand the structure and behaviour of new micro- and nano-engineered materials, which have an impact on many aspects of life, from medical applications to solar cells. Interleaved carbon nanotubes as the material of the future, anyone?


Why banning the mammoth ivory trade would be a huge mistake

Would a ban on mammoth ivory endanger or save the elephant? Pixabay

There is a widely held belief that the only way we can protect globally endangered species being poached for the international wildlife trade is to ban that trade completely. This is a dangerous misconception, and one that will speed up extinction rather than prevent it.

Adrian Lister, a mammoth expert from University College London, recently suggested that mammoths should be listed under the Convention on International Trade in Endangered Species to stop elephant ivory being laundered as legal mammoth ivory. He argued that the mammoth trade is encouraging the poaching of elephants by keeping up the demand for ivory.

This is madness. Mammoths and mammoth ivory are not rare: an estimated 10 million mammoths remain entombed within the permafrost of the Arctic tundra. And in any case, a ban on mammoth ivory would not stop the trade; it would simply drive it underground and attract the attention of organised crime groups. For example, in my own research I found that prices for illegally caught whale meat rose very quickly when enforcement efforts intensified, and this in turn led to the trade being controlled by dedicated “professional” criminals.

In the same way, a ban on mammoth ivory would drive up prices and lead to many mammoth sites being excavated clandestinely, without any associated scientific endeavour to garner knowledge and understanding of these great beasts. In fact, the current situation encourages collaboration between collectors and academics over new finds, to the benefit of scientific research.

A ban would not save the elephant either. In fact it would do the opposite and probably hasten its extinction in the wild. Although record levels of funding are now being invested in enforcement and anti-poaching measures to tackle the crime, many species such as the rhino remain on the path to extinction in the wild quite simply because bans aren’t working.

Woolly mammoth model at the Royal BC Museum in Victoria (Canada). FunkMonk/wikimedia, CC BY-SA

Around the world, incentives to poach elephants and rhinos are increasing due to rising prices and growing relative poverty between areas of supply and centres of demand. While trade bans can curtail supply, they do not seem to have reduced demand in any measurable way. Indeed, high levels of protection can actually stimulate demand for a species, due to something called the anthropogenic Allee effect.

Economic theory and research can explain why this happens and why we need to urgently reconsider our reliance on global trade bans. Where demand is not very sensitive to price changes and a ban is strongly enforced, prices for illegal wildlife products will rise steeply but have little overall effect on supply and consumption. This is especially true where organised criminal networks can circumvent police and customs – a relatively easy trick in countries mired in corruption.

The need for bold moves

In this situation we need to look beyond regulation and consider bold strategies that actually make economic sense. In particular we need policies that drive prices down and reduce the pressure on wild populations. To do this we should consider introducing sustainable off-take mechanisms such as regulated trade, ranching and wildlife farming. Where the new source of supply is a close substitute – as mammoth ivory is for elephant ivory – these mechanisms will cause prices to fall and pressure on wild populations to ease.

Cross sectioned mammoth tusk. Cropbot/wikipedia, CC BY-SA

We have seen this happen successfully with crocodilian species, where farmed animals have largely taken over the market. And recent economic research in Canada shows that the sale of mammoth ivory into the ivory business in Asia has actually led to lower prices for elephant ivory, saving thousands of elephants.

Basic economics tells us that when one introduces a substitute, especially a very close substitute, the price of the existing product will fall. A recent analysis linked with empirical data estimates that the 84 tonnes of Russian mammoth ivory exported to Asia each year, on average, between 2010 and 2012 reduced the poaching of wild elephants from a potential 85,000 per year to around 34,000 per year, primarily by reducing elephant ivory prices by about $100 per kilogram.
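
The mechanism is easy to sketch with a toy linear market model, as below. None of the parameter values come from the analysis cited above – they are assumptions chosen purely to show the direction of the effect: substitute supply pushes the price down, and with it the incentive to poach.

```python
# Toy market model of ivory substitution. All parameters are assumed,
# illustrative values, not figures from the analysis cited above.
#
# Demand:   price = a - b * total_quantity  (linear, downward-sloping)
# Poachers: quantity_poached = c * price    (supply rises with price)
# Mammoth ivory adds `substitute` tonnes of close-substitute supply.

def market(a, b, c, substitute):
    # Solve a - b * (c * price + substitute) = price for the price.
    price = (a - b * substitute) / (1 + b * c)
    poached = c * price
    return price, poached

a, b, c = 1200.0, 2.0, 0.08  # assumed demand and supply parameters

p0, q0 = market(a, b, c, substitute=0.0)
p1, q1 = market(a, b, c, substitute=84.0)
print(f"no substitute:   ${p0:.0f}/kg, {q0:.1f} tonnes poached")
print(f"84 t of mammoth: ${p1:.0f}/kg, {q1:.1f} tonnes poached")
```

With these made-up parameters the substitute knocks roughly $145 per kilogram off the price and cuts poaching by about 14%; the cited analysis estimates different magnitudes, but the direction is the same.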

The policy implication is simple: the mammoth ivory trade should be legal and sustainably managed rather than banned. This will help save both the living elephant and the extinct mammoth.


Airshows are risky – that's why we like them – but they also have a strong safety record

As seen on 24-hour television news. Daniel Leal-Olivas/PA

Whether aircraft are used for travel or for acrobatic displays, it will never be possible to aviate entirely without risk. Airshows are manifestations of our liking for what Anthony Giddens calls deliberately cultivated risk – this excites and sustains those who participate and those who watch.

Deliberately cultivated risk is an outlet for untapped energies and repressed emotions. That is the upside. The downside is that such activities occasionally lead to death and injury.

This was the case at the Shoreham Air Display, where a wayward Hawker Hunter two-seat trainer aeroplane ploughed into a road, killing at least 11 people. Nevertheless, air displays enjoy a remarkably good safety record: the Shoreham crash was the first major loss of life among spectators or the public at a UK airshow since Farnborough, 63 years ago, when a prototype De Havilland 110 fighter aircraft disintegrated, its engines scything through the watching crowds, killing 29 and injuring a further 63 spectators.

The Hawker Hunter that crashed at Shoreham. Paul Jarrett/EPA

The media view

This generally excellent safety record has been glossed over in much of the reporting of the tragedy. De-contextualised reporting creates an impression of chronic mishap and horror. Photographs of the moment of impact have been repeatedly used, the same jet-explodes-in-orange-fireball footage endlessly replayed across the voracious 24-hour news channels.

Several times an hour, eye-witnesses are wheeled out to give their accounts, despite the fact that research frequently shows such accounts to be highly inaccurate. And commentators have been breaking the golden rule of air crash investigation by indulging in speculation as to the causes.

Within 48 hours of the crash, headlines included: “I drove through fireball and went to work” (The Sun); “Jet crash fireball forces air show safety rethink” (The Times); “Air shows should be over the sea, it should not have been over that road” (The Daily Telegraph); “Why did they have to die?” (Daily Mirror); “Footie pals killed in jet crash carnage” (Daily Star); “How many more fireball dead?” (Daily Mail).

I appreciate why the media is so interested in the story, but there comes a point where legitimate curiosity mutates into self-interested voyeurism. Of course, it could be argued that the public gets the press it deserves – that the press merely reflects public morality and behaviour. Consider the following item from the August 24 edition of Metro:

Steve Barry, assistant chief constable of Sussex Police, warned passers-by [on the A27, site of the Shoreham crash] against taking selfies. In a statement, Sussex Police said: ‘It is not our place to dictate what may or may not be published anywhere, but from a personal perspective I would ask people to consider the feelings of those who have lost loved ones in this incident and, indeed, who may still not have heard from them and are seeking information’.

I have no doubt that Barry’s appeal for decency and restraint will be ignored by some.

If we look at aviation through the prism of an expectation of total safety, we are going to be disappointed. As academics such as Charles Perrow remind us, aviation is an inherently risky activity. Aircraft, especially military aircraft, are “dense” – they contain myriad components packed tightly together.

This density makes unanticipated and uncontrollable interactions between components more likely. Add in an element of deliberately cultivated risk, such as flying a single-engined, five-decade-old airframe around the sky in proximity to heavily used transport routes, and there is a window for mishap, error and potential disaster.

So what is to be done? Two things. First, there must be what Professor Brian Toft calls active learning: aviation is comparatively safe precisely because it learns from its mistakes, applying the lessons of incidents, accidents and near-misses. Whatever can be learned from Shoreham must be, and will be, acted upon. Second, the industry must engage the public in a conversation about safety. Aviation, in all its forms, can never be 100% safe, and it is in the interests of both the industry and the public to better manage expectations.


Obesity drug may be on the horizon after study pinpoints genetic mechanism

Help may be on the way. Alan Cleaver/Flickr, CC BY-SA

Nearly half of all Europeans are genetically predisposed to obesity. The condition is a worldwide epidemic affecting more than half a billion people and rising every year in most countries.

Despite this, we know little about the genetic origin of the condition and have no good medical treatment for it other than bariatric surgery. But now a genetic study seems to have cracked the mystery – raising hopes for more efficient treatment.

The global obesity crisis is often blamed on an increasingly sedentary lifestyle and poor eating habits. However, studies have shown that 70-80% of the differences in body fat between people are due to their genes (a measure known as heritability).

The first large-scale genetic studies for obesity were launched in 2007, after the initial mapping of the human genome. And one gene, dubbed FTO, made the headlines by popping its head above the other 20,000 genes in the pack. For the past eight years, despite finding nearly 100 other genes linked to obesity, FTO and the area around it have remained the top signals. But scientists around the world have struggled to understand how the gene works and whether it really is behind obesity.

A switch for fat burning?

The new study, published in the New England Journal of Medicine by an international team of researchers, took seven years and cost more than a million dollars. It is based on a wide range of studies of hundreds of patients, cell types and laboratory mice. The researchers also mined a rich public resource of databases for gene expression as well as the heritable changes in gene expression (epigenetics), demonstrating just how complex this field has become.

It turns out that the FTO gene doesn’t do much directly – it influences other nearby genes which cause the changes via regions in the DNA called enhancers and repressors. These can change the precursors of adult fat cells while they are still developing. All fat cells originally come from our bone marrow along with cartilage and bone cells and they pass through different stages as they become fat cells.

Could a simple pill replace invasive and expensive obesity treatments using adjustable gastric bands? JohnnyMrNinja/wikimedia, CC BY-SA

We know from recent research that there are different types of fat cells in humans: the most common is white, then some beige and a few brown, each storing and burning fat differently. Obese people have a greater proportion of white cells, which store fat rather than burning it off (and get larger as a result). The susceptibility gene variant that the study uncovered makes people produce less brown and beige fat, although the natural effects are quite small.

The team went on to block this pathway using a gene-editing tool called CRISPR and found the effects on cells in culture and in lab mice were actually substantial, with five- to seven-fold increases in the animals’ ability to burn fat. In fact, blocking the pathway made the animals 50% thinner.

The implications of this work are that after ten years of knowing about thousands of disease-related genes, we finally have the tools to crack the underlying mechanisms. By understanding how the body changes unhealthy white fat cells to healthy beige cells, we can start to develop new treatments for obesity.

This work also emphasises that the billions of dollars spent on the Human Genome Project and its spinoffs such as ENCODE and Epigenetics Roadmap have not been wasted. But we have redefined the parameters of success.

We now know that identifying genes for the most common diseases is actually pretty useless for prediction or diagnosis. Knowing all 100 identified obesity genes explains less than 5% of the genetic effect in an individual. Emerging fields such as epigenetics, metabolomics and microbiomics – or the old-fashioned way of looking at the health of your parents – are much better for personalised medicine.

But if you want to understand how to design a badly needed drug for obesity, gene-based studies like this are the key. Full-scale research could start on drugs that increase the relative production of beige and brown fat. Hopefully, trials could be underway in humans within a decade.


Shoreham crash will bring safety changes, but airshows are here to stay

Hawker Hunter WV372, the aircraft that crashed at Shoreham. Guy Gratton, Author provided

The tragic Shoreham Airshow crash has turned the spotlight on the safety of airshows, after a 57-year-old Hawker Hunter T7 failed to pull out of an aerobatic manoeuvre. With 11, and perhaps up to 20, people killed, this should prompt us to re-examine the industry in detail.

But let’s first knock down a few issues not worth debating. First, several million people each year attend airshows in Britain alone, and many more worldwide. These are very popular spectator events, and that won’t change: the age of the airshow is not over.

Second, the fact that this aeroplane – an obsolete British jet fighter designed by Sydney Camm, who was also responsible for the famous World War II-era Hawker Hurricane and for the Hawker Siddeley Harrier, the world’s first vertical take-off and landing aeroplane – was 57 years old is irrelevant to its safety. The regulations governing the continued airworthiness of aeroplanes are incredibly strict, and tens of thousands of pounds will have been spent annually on this single aeroplane to ensure its fitness for flight.

A wingtip from the wreckage is removed from the road. REUTERS/Luke MacGregor

Third, whatever we may wish, we cannot eliminate the risk of accidents at airshows. While accident rates should be as low as possible, they cannot reasonably be zero, and I am quite certain that all airshow pilots and most airshow attendees recognise this. Accidents such as that which killed Gulf War veteran Trevor Roche in 2012, or former deputy chief of defence staff Sir Kenneth Hayr in 2001 were tragic – but those killed were participants who fully understood the risks.

What happened at Shoreham, however, where the jet failed to pull out of a dive and crashed onto a nearby busy road filled with traffic, is unacceptable.

Counting the cost

All aviation regulations take a four-tiered approach to safety: all third parties must be protected from death or injury as far as possible; consenting participants, such as airline passengers or airshow audiences, must be protected as far as possible, but not at the expense of protecting the general public; pilots and other aircrew should be protected from injury; and aircraft and structures should be protected from damage.

The crash at Shoreham caused the deaths of people who were not participants in the show, and so had not made any sort of informed decision as to the risks involved. Everybody in the aviation industry recognises that should never happen, and because of that there will be at least two separate investigations, already underway.

The first will be by the Air Accidents Investigation Branch (AAIB), a public body set up to investigate accidents “with a view to the preservation of life and the avoidance of accidents in the future”. There will be a team at Shoreham collecting evidence, aiming to produce a report that identifies the main causes behind the accident and, more importantly, recommends how to prevent a similar accident occurring in the future.

Once the wreckage has been collected, this team will relocate to its base at Farnborough. There will be a great deal of evidence to sift through, although the age and size of the Hunter mean there’s no flight data recorder (“black box”), so the investigation will mostly involve photographic evidence, eyewitness accounts and wreckage analysis.

The second investigation will be by the Civil Aviation Authority (CAA), the UK statutory body responsible for regulating all aviation in UK airspace. The CAA has many years’ experience of regulating airshows, much of which stems from the tragedy at the Farnborough Airshow in September 1952, when the prototype de Havilland 110 (which would become the Sea Vixen) broke up, killing the pilot, John Derry, his flight test observer, Anthony Richards, and 29 spectators.

Following Farnborough, rules were put in place that defined a strict “display line” away from spectators so that they would not be harmed by a crashing aeroplane. The faster an aircraft is, the further the display line must be from the official spectators, although non-paying spectators lining roads around airfields are a concern. A highly experienced flying display director is also required, usually supported by an expert flying control committee to oversee and approve the display. Those pilots flying in the display must hold individual display authorisations to do so.

This has proved itself a robust system: until Shoreham there had been no loss of life among spectators or the general public at a British airshow since John Derry’s crash in 1952. But clearly on this occasion something failed, allowing a high-energy aircraft to point toward a busy public road, with fatal results. The CAA has already imposed immediate restrictions, grounding all Hawker Hunters and prohibiting high-energy aerobatics at airshows. This is a knee-jerk reaction, although perhaps a politically necessary one, and should be revised once the investigation reports.

It is normal that rules found to have failed will be re-evaluated and improved, but politicians and the public have added to the clamour. Neither the AAIB nor CAA will cut corners with their investigations – it’s important to do them right, and those seeking quick answers will probably be disappointed.

The investigation of this awful tragedy – which should never have happened – will result in revisions to the rules that oversee how airshows are conducted. It’s far too early to say with any certainty what those changes will be, but this won’t be brushed under the carpet. What’s equally sure is that the saying often repeated by aviators – that all safety regulations were originally written in blood – remains sadly true.


Privacy watchdog takes first step against those undermining right to be forgotten

It's not erasing the past, just making memories fuzzier. chalkboard by sergign/shutterstock.com

The UK’s data privacy watchdog has waded into the debate over the enforcement of the right to be forgotten in Europe.

The Information Commissioner’s Office issued a notice to Google to remove from its search results newspaper articles that discussed details from older articles that had themselves been subject to a successful right to be forgotten request.

The new reports included, wholly unnecessarily, the name of the person who had requested that Google remove reports of a ten-year-old shoplifting conviction from search results. Google agreed with this right to be forgotten request and de-linked the contemporary reports of the conviction, but then refused to do the same to new articles that carried the same details. Essentially, Google had granted the subject’s request for privacy, and then allowed it to be reversed via the back door.

The ICO’s action highlights the attitude of the press, which tries to draw as much attention to stories related to the right to be forgotten and their subjects as possible, generating new coverage that throws up details of the very events those making right to be forgotten requests are seeking to have buried.

There is no expectation of anonymity for people convicted of even minor crimes in the UK, something the press takes advantage of: such as the regional newspaper which tweeted a picture of the woman convicted of shoplifting a sex toy. However, after a criminal conviction is spent, the facts of the crime are deemed “irrelevant information” in the technical sense of the UK Data Protection Act.

The arrival of the right to be forgotten, or more accurately the right to have online search results de-linked, as made explicit by the EU Court of Justice in 2014, does not entail retroactive censorship of newspaper reports from the time of the original event. But the limited cases published by Google so far suggest that such requests have normally been granted, except where there was a strong public interest.

Stirring up a censorship storm

It’s clear Google does not like the right to be forgotten, and it has from early on sent notifications to publishers of de-listed links in the hope they will cry “censorship”. Certainly BBC journalist Robert Peston felt “cast into oblivion” because his blog no longer appeared in search results for one particular commenter’s name.

It’s not clear that such notifications are required at all: the European Court of Justice judgment didn’t call for them, and the publishers are neither the subject (they are not the person involved) nor the controller (Google, in this case) of the de-listed link. Experts and even the ICO have hinted that Google’s efforts to publicise the very details it is supposed to be minimising might be viewed as a privacy breach, or as unfair processing, with regard to those making right to be forgotten requests.

The Barry Gibb effect

De-listing notifications achieve something similar to the Streisand effect, where publicity around a request for privacy leads to exactly the opposite result. I’ve previously called the attempt to stir up publisher unrest the Barry Gibb effect, because it goes so well with Streisand. So well, maybe it oughta be illegal.

Some publishers are happy to dance to Google’s tune, accumulating and publishing these notifications in their own lists of de-listed links. Presumably this is intended to be seen as a bold move against censorship – the more accurate “List of things we once published that are now considered to contain irrelevant information about somebody” doesn’t sound as appealing.

In June 2015, even the BBC joined in, and comments still show that readers find salacious value in such a list.

Upholding the spirit and letter of the law

While some reporters laugh at the idea of deleting links to articles about links, this misses the point. The ICO has not previously challenged the reporting of stories relating to the right to be forgotten, or lists of delisted links – even when these appear to subvert the spirit of data protection. But by naming the individual involved in these new reports, the de-listed story is brought straight back to the top of search results for the person in question. This is a much more direct subversion of the spirit of the law.

Google refused the subject’s request that it de-list nine search results repeating the old story, name and all, claiming they were relevant to journalistic reporting of the right to be forgotten. The ICO judgement weighed the arguments carefully over ten pages before finding for the complainant in its resulting enforcement notice.

The ICO dealt with 120 such complaints in the past year, but this appears to be the only one where a Google refusal led to an enforcement notice.

The decision against Google is a significant step. However, its scope is narrow as it concerns stories that unwisely repeat personally identifying information, and again it only leads to de-listing results from searches of a particular name. It remains to be seen whether other more subtle forms of subversion aimed at the right to be forgotten will continue to be tolerated.


Water, water, everywhere – where to drink in the solar system

A frozen lake of water-ice on the floor of a 35 km wide impact crater on Mars. Copyright ESA/DLR/FU Berlin (G. Neukum)

Science fiction movies about aliens threatening the Earth routinely ascribe them the motive of coming here to steal our resources, most often our water. This is ill thought-out, as water is actually extremely common. Any civilisation coming to our solar system in need of water (either to drink or to make rocket fuel) would be foolish to plunge all the way inwards to the Earth, from where they’d have to haul their booty back against the pull of the sun’s gravity.

Until recently, we believed that the Earth was the only body in the solar system with water in liquid form. While it is true that the Earth is the only place where liquid water is stable at the surface, there is ice almost everywhere. Many scientists also infer that liquid water may exist beneath the surfaces of several bodies.

But where in the solar system are we likely to find it and in what form? Could we ever get to it and, if so, would we be able to drink it?

Comets and the Kuiper belt

If you are interested in finding places where extraterrestrial microbial life might occur, then you should look for liquid water, or at least “warm” ice within a few degrees of melting. Such places are widespread, if you are prepared to look below the surfaces of cold bodies or around the edges of patches of permanent shade on hot bodies.

Frozen water can be found everywhere in the Solar System, from the Oort Cloud to Mercury (except on Venus). NASA / JPL-Caltech

Furthest from the sun is the Oort Cloud, the region where most comets spend most of their time, some 10,000 times further from the sun than the Earth is. They are mostly water-ice, with traces of various carbon and nitrogen compounds. Because of these impurities you wouldn’t want to drink comet water neat, but there are probably about five Earth-masses of water out there. We can’t be sure, because only the comets that stray close to the sun can be studied directly.

That’s mostly water jetting off the nucleus of comet 67P/Churyumov-Gerasimenko on 30 July 2015 as the comet drew closer to the Sun. ESA/Rosetta/NAVCAM

Most comets are less than about 10 km across, and out in the Oort Cloud they are separated by vast distances, so if you wanted to harvest a lot of water it might be worth travelling inwards as far as the Kuiper Belt about forty times further from the sun than the Earth is.

Could Pluto hide liquid water far beneath its surface? NASA

Here there are bodies up to just over 2000 km in diameter, like Pluto. These are mostly water-ice surrounding rocky cores, but ices made of more volatile substances may coat their surfaces. A few may even have oceans of liquid water tens or hundreds of kilometres below their surfaces.

The giant planets

Neptune, Uranus, Saturn and Jupiter are the giants of the solar system. Deep inside, and confined by very high pressure, each of these is believed to contain several Earth-masses of water, sandwiched between its rocky core and its outer layers of hydrogen and helium gas.

Europa, a 3130 km diameter moon of Jupiter. There is almost certainly a global ocean of salty water between the surface ice and the rocky interior. NASA/JPL-Caltech/SETI Institute

There is no feasible way to get at that water, but the giant planets each have numerous moons that are made mostly of ice. Far from the sun, this ice contains methane, ammonia and carbon monoxide as well as water. At the distance of Jupiter, however, only five times further from the sun than the Earth, it was too hot when the moons formed for the more volatile ices to condense, resulting in relatively pure water-ice.

There is compelling evidence that several icy moons have internal oceans. The best places to look for life are where the ocean overlies warm rock. This may be the case inside Europa (Jupiter) and Enceladus (Saturn), but chemical reactions with the rock would make the liquid water salty, so not good to drink.

The rocky planets

Closer to the sun, Mars, Earth, Venus and Mercury are in a region that was too hot for ice to condense when the solar system was forming. Consequently the planets are mostly rock, which can condense at higher temperatures than ice. The only water on the rocky planets was either trapped inside minerals and then sweated out from the interior, or was added at the surface by impacting comets.

Mars probably once had at least as much water, in proportion to its rock, as the Earth has, but it is a smaller body with weaker gravity and no magnetic field, which allowed most of its water to be lost to space. Water certainly flowed on Mars’s surface in the past, however, and there are intriguing signs of water seeping downslope to form gulleys even today – though to survive as a liquid it would probably have to be very salty indeed.

We know for sure that there is water-ice in the polar caps too, but neither setting seems hospitable to life. However, if you took the right kind of terrestrial microbes to the right places on Mars, they might be able to scratch a living. What we are less sure of is whether microbes have already made the trip between planets, hitch-hiking on meteorites.

Whereas Mars is too cold, Venus has been too hot for liquid water for most of its history. However, there are water droplets high in its atmosphere. These are not worth collecting as a resource, and they are a very long shot as a means of supporting microscopic airborne life.

Mercury’s north polar region. The yellow areas are in permanent shadow. NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington/National Astronomy and Ionosphere Center, Arecibo Observatory

The last place you might expect to find water is Mercury, because it is mostly far too hot. However, there are craters near the poles onto whose floors the sun never shines. The presence of water-ice in these regions, delivered by impacting comets, has been demonstrated by several techniques and cannot be doubted.

Similarly “cold-trapped” water-ice has also been found inside polar craters on the Moon. This may be one of the first solar system resources that we, rather than visiting aliens, exploit as we leave our home world and make our way into space.


Despite Ashley Madison furore, our view of infidelity has not always been fixed

Caught. Captblack76/shutterstock.com

When in 2010 I interviewed Noel Biderman, founder of infidelity website Ashley Madison, he said: “It’s easy to vilify me. But I’m not doing anything wrong. I didn’t invent infidelity.” He had a point, though at the time the moral outrage generated by the site suggested that Biderman had not only invented adultery, but all the evil on the internet too.

Five years on, his website – and attitude – have spectacularly backfired following a hack that has exposed members’ personal details and corporate emails. The outrage value of unprincipled web businesses has certainly dwindled – and within the internet’s wild west of trolling, pornography, cyberbullying, celebrity promotion, ungrammatical communication and hook-up apps, Ashley Madison seems positively tame. Who cares about some largely North American adulterers and their kinks? Arranging an affair through a dating site is pretty vanilla compared to a lot of what goes on. And mainstream dating sites like OkCupid and Match are perfectly good for cheaters too.

And yet Ashley Madison has never stopped being deeply contentious. Failed attempts to float on the New York and London stock exchanges suggested moral recoil on the part of bankers, a group hardly known for their disdain of smut. And so it fell to a group calling themselves Impact Team to reveal the site’s secrets with the moralising zeal of the righteous. Or the wronged – it’s suggested that the hackers had assistance from a disgruntled ex-Ashley Madison employee.

“Time’s up!” the hackers announced when Ashley Madison and its two sister companies remained in business after a warning. “We have explained the fraud, deceit, and stupidity of ALM and their members. Now everyone gets to see their data.” Data, the new private parts, was duly exposed, with women told: “Chances are your man signed up on the world’s biggest affair site, but never had one. He just tried to. If that distinction matters.”

Marital infidelity brings people – including, apparently, hackers – to the very highest pitches of moral indignation, even today, in a world where teenage daughters and sons may well make contributions to amateur pornography websites. So outrageous is the idea of being cheated on – and so staunchly moral the outrage it provokes – that adultery would seem a universal, timeless evil. But a look at 20th-century history, at least in Britain, suggests that infidelity was not always the worst thing that could happen to a marriage.

In fact, as leading social and cultural historian Professor Claire Langhamer makes clear, perceptions of the wrongness of affairs are linked to changes in attitudes to relationships in the post-war period. The more marriage became tethered to love, with sex its crowning glory, the more fidelity mattered. At the same time, the arrival of the contraceptive pill in the 1960s and no-fault divorce led to a more sexually-oriented, exploratory approach to relationships. Yet as Langhamer argues, even as attitudes grew more permissive, with experimentation before and during marriage becoming more common, attitudes towards infidelity hardened.

So does the tsunami of personal and marital nightmares unleashed by the data from a site like Ashley Madison being made public mean that modern relationships are too close, or endowed with too much importance? Would it be better for cheaters and their spouses if relationships were more economic and pragmatic, and less territorial and sexualised? Perhaps.

It might also be better if we saw a renewal of the art of discretion – itself a kind of pragmatism in a digitised age where commercial promises of security can be so quickly overturned. Here the hackers of Ashley Madison make a good point: the site said all its user information was deleted – and it wasn’t.

Looking back to mid-20th-century Britain, a female volunteer from the sociological Mass Observation project summed up the central, and perhaps distinctly British, role of keeping schtum rather than open censure (or open admission) when she said:

I would never have foreseen … that I would be involved in a significant number of extra-marital affairs or that they would prove part of the life experience of most (not all) of my family and friends … Such relationships were still spoken about in a whisper, behind closed doors, shocking. Yet my own family was quite considerably rattled by a quasi-affair of my father’s: muttered about, hinted about, never pronounced openly.

Adultery is not likely to stop because people say it’s bad. Internet dating sites must learn to guarantee that private actions are “never pronounced openly” – in failing to do so, Ashley Madison has got its comeuppance. As for its customers’ best-laid plans, I’ll leave that to you to judge.


Windows 95 turns 20 – and new ways of interacting show up desktop's age

Windows 95 and DOS6: actual museum pieces. m01229, CC BY

The arrival of Microsoft Windows 95 on August 24 1995 brought about a desktop PC boom. With an easier and more intuitive graphical user interface than previous versions, it appealed to more than just business, and Bill Gates’ stated aim of a PC on every desk and in every home was set in motion. This was a time of 320MB hard drives, 8MB of RAM and 15-inch CRT monitors. For most home users, the internet had only just arrived.

Windows 95 introduced the start menu, opened via a button in the bottom-left corner of the desktop. It gave users a central point of entry into menus from which to choose commands and applications, and its simplicity made commonly used documents and applications easy to find. All subsequent versions of Windows have kept this menu, with the notable exception of Windows 8 – a change that prompted an enormous backlash.

We take these intuitive graphical interfaces for granted today, but earlier operating systems such as DOS and CP/M allowed the user to interact using only typed text commands. The groundwork for change was laid in the 1960s and 1970s, with Ivan Sutherland’s Sketchpad and its lightpen-controlled CRT display, Douglas Engelbart’s development of the computer mouse, and the Xerox PARC research team’s creation of the WIMP (windows, icons, menus, pointer) paradigm – the combination of mouse pointer, windows and icons that remains standard to this day. By the early 1980s, Apple had developed graphical operating systems for its Lisa (released 1983) and Macintosh (1984) computers, and Microsoft had released Windows (1985).

DOS: these were not good old days. Krzysztof Burghardt

Imagining a desktop

All these interfaces rely on the central idea of the desktop, a comprehensible metaphor for a computer. We work with information in files and organise them in folders, remove unwanted information to the trash can, and note something of interest with a bookmark.

Metaphors are useful: they enable users to grasp concepts faster, but they rely on the metaphor remaining comprehensible to the user and useful for the designer and programmer putting it into effect – without stretching it beyond belief. The icons used to represent functions look similar to familiar objects in the workplace, and so the metaphor is readily understandable.

Breaking windows

But 20 years after Windows 95, the world has changed. We have smartphones and smart televisions, and we use the internet for practically everything. Touchscreens are now arguably more ubiquitous than the classic mouse-driven interface, and screen resolution is so high that individual pixels can be difficult to see. We still have Windows, but things are changing. Indeed, they need to change.

The desktop metaphor has been the metaphor of choice for so long, and this ubiquity has helped computers find a place within households as a common, familiar tool rather than as specialist, computerised equipment. But is it still appropriate? After all, few of us sit in an office today with paper-strewn desks; books are read on a tablet or phone rather than as hard copies; printing emails is discouraged; most of us type our own letters and write our own emails; files are electronic, not physical; we search the internet for information rather than flick through reference books; and increasingly the categorisation and organisation of data has taken second place to granular search.

Mouse-driven interfaces rely on a single point of input, but we’re increasingly seeing touch-based interfaces that accept swipes, touches and shakes in various combinations. We are moving away from the dictatorship of the mouse pointer. Dual-finger scrolling and pinch-to-zoom are emerging as new metaphors – natural user interfaces (NUIs) rather than graphical user interfaces.
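The machinery behind such gestures is simpler than it might appear. Pinch-to-zoom, for instance, only requires tracking two contact points and turning the change in their separation into a scale factor. Here is a minimal sketch in Python – purely illustrative, since real touch frameworks compute this for you:

```python
import math

def pinch_zoom_scale(start_a, start_b, now_a, now_b):
    """Scale factor implied by a two-finger pinch gesture: the ratio of the
    fingers' current separation to their separation when the gesture began."""
    def distance(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])
    return distance(now_a, now_b) / distance(start_a, start_b)

# Fingers 100 px apart at the start and 150 px apart now: zoom in by 1.5x.
print(pinch_zoom_scale((0, 0), (100, 0), (0, 0), (150, 0)))  # 1.5
```

Notice that nothing here resembles a pointer, a window or an icon: the input is the geometry of the hand itself, which is precisely what makes these interfaces feel “natural”.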

What do the next 20 years hold?

It’s hard to tell, but one thing is certain: interfaces will make use of more human senses to display information and to control the computer. They will become more transparent, more intuitive and less built around items such as boxes, arrows and icons. Human gestures will be more commonplace. And such interfaces will be incorporated into technology throughout the world, through virtual and augmented reality.

These interfaces will look and feel more natural. Some suitable devices already exist, such as the ShiverPad, which applies shear forces to a touch surface to give it a frictional feel, or Geomagic’s Touch X (formerly the Sensable Phantom Desktop), which delivers three-dimensional forces to make virtual 3D objects feel solid.

Airborne haptics are another promising technology, creating tactile interfaces in mid-air. Using ultrasound, users can feel acoustic radiation fields that emanate from devices, without needing to touch any physical surface. Videogame manufacturers have led the way with such natural interfaces: the Microsoft Kinect and HoloLens allow users to control an interface with body gestures, or with their eyes through head-mounted displays.

Once interaction with a computer or device can be commanded through natural gestures, movements of the body or spoken commands, the windows-based metaphor of computer interaction begins to look dated – as old, in fact, as it is.


Here's why the Greenwich Prime Meridian is actually in the wrong place

Out of line. Sameer Walzade/flickr, CC BY-NC-ND

If you’ve ever been to the Royal Observatory in Greenwich, London, it might come as a shock to learn that the Prime Meridian line located there is in the wrong place. In fact, it’s out by about 100 metres.

Since the late 19th century, the Greenwich Meridian has been the line at which most maps mark 0° longitude, the starting point for measuring geographical coordinates in an east-west direction. But we now know that the line, a physical representation of which is visited by thousands of tourists every year, should more precisely be 0.001472° (or 102.5 m) further east.
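As a sanity check on those figures, you can convert the angular offset into a ground distance yourself: an east-west displacement is simply the longitude difference (in radians) multiplied by the Earth’s radius and by the cosine of the latitude. Here is a minimal sketch in Python, assuming a spherical Earth with the WGS84 equatorial radius of 6,378,137 m and a Greenwich latitude of roughly 51.4769°N – both figures are my own inputs for illustration, not from the original report:

```python
import math

# Assumed constants (not from the article): the WGS84 equatorial radius
# and the approximate latitude of the Royal Observatory, Greenwich.
EARTH_RADIUS_M = 6_378_137.0
GREENWICH_LAT_DEG = 51.4769

def east_west_distance_m(delta_lon_deg: float, lat_deg: float) -> float:
    """Ground distance spanned by a longitude offset at a given latitude,
    treating the Earth as a sphere (fine for a back-of-envelope check)."""
    parallel_radius = EARTH_RADIUS_M * math.cos(math.radians(lat_deg))
    return parallel_radius * math.radians(delta_lon_deg)

print(round(east_west_distance_m(0.001472, GREENWICH_LAT_DEG), 1))  # ~102.1
```

Treating the Earth as a sphere is exactly the simplification the article goes on to describe, but over a span of 100 m it barely matters: the result lands within half a metre of the quoted 102.5 m.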

How did the Victorian astronomers who created the Meridian get their calculations wrong? It comes down to the fact that the Earth is not a perfect sphere. In order to determine the precise angle at which to position the line as it ran through the Greenwich Observatory, its creators used early instruments that were aimed vertically at what are called “clock-stars” in the night sky. These are the brighter stars, whose positions have been observed over long periods of time and can be used as reference points in the sky.

To find the exact vertical direction (a line pointing at the precise centre of the Earth’s mass) the Observatory’s astronomers first found the exact horizontal direction (at 90° to the vertical) by looking at the surface of a pool of mercury in a basin.

Pointing at the stars. Andres Rueda/Wikimedia Commons, CC BY

This method, however, assumed that the gravitational force shaping the horizontal surface of the mercury was uniform and pointed straight down, towards the centre of the Earth’s mass. But because the Earth is not perfectly round, and local gravitational forces vary with the terrain, the surface of the mercury at Greenwich was not precisely horizontal relative to the centre of the Earth’s mass. As a result, the vertical line to the stars – and therefore the meridian line on the ground – was slightly skewed.

Today, we have the significant advantage of access to the satellite-based Global Positioning System (GPS), which does not rely on the Earth’s varying gravitational field and locates the centre of the planet’s mass far more accurately. This has enabled scientists to determine the true vertical direction, and in doing so has produced a new meridian slightly to the east of the old one. And because the Earth isn’t a perfect sphere, it was impossible simply to shift the old line across and keep an accurate coordinate system – a new meridian had to be defined instead.

Does it matter?

So what are the implications of this apparent inaccuracy, particularly given that it is the location from where every place on Earth is measured and from which all clocks are ultimately set? Fortunately, the answer is none, really.

We must remember that the position of the Prime Meridian is actually rather arbitrary and could theoretically be located anywhere. Its location through Greenwich was agreed at the International Meridian Conference of 1884 because it was the most popular candidate. Before this point, roughly ten other prime meridians were also in use, including ones through Paris and Cadiz.

Because all important scientific measurements are today made using GPS and not the original location of the Greenwich Meridian, the impact of the error is actually minimal. Arguably, the main issues are confined to the Royal Observatory itself and how it plans to address the discrepancy at the tourist site. There is certainly an argument for a new marker at the “true” Prime Meridian 102 m to the east (although being set in one of London’s heavily regulated Royal Parks might make this somewhat problematic).

And where should we envisage the true Prime Meridian? Certainly, the new location is the more accurate line and the one that will be used in the future. But we shouldn’t forget the groundbreaking work that was conducted by scientists in centuries past with only limited tools. The fact that the two lines are just 100 m apart is testament to their hard work and ingenuity, and so disregarding the old line would be disrespectful. The old line will remain a historic and scientific curiosity, while the new one will allow for ever more accurate navigation within the Earth’s terrestrial, oceanic and atmospheric system.
