Fossils help to reveal the true colours of extinct mammals for the first time

Jay Matternes/Wikimedia Commons

The animal kingdom is full of colour. Animals use it for camouflage, to advertise themselves and even for protection. But we haven’t been paying as much attention to what colours now-extinct mammals might have had – until now.

By matching samples of organic material to their chemical make-up, we have been able to determine the colour of extinct bats. Our research, published in PNAS, has the potential to work out the colours of many other extinct organisms.

Fossils usually only leave us information about the harder parts of an animal such as bones and shells. Occasionally, however, soft tissues, such as feathers, skin or hair are left behind.

Palaeontologists have previously discovered dark, organic residues in fossils that for decades were thought to be remnants of decaying bacteria from the surface of the dead bodies. However, in 2008 it was suggested that these little bacteria-like structures were in fact preserved melanosomes, the special sub-units of a cell that carry the pigment melanin. This is the primary source of pigment for feathers, hair and skin across the animal kingdom.

Palaeontology in black and white. Yale

Looking at a fossilised feather from the Cretaceous period (roughly 105m years old) with an alternating black and white pattern revealed that the microscopic structures were only present in the black bands. If these structures were bacteria as originally thought, they would have covered the entire feather. The fact that the structures were missing from the white areas, which would lack pigment, suggested the organic matter was actually melanosomes. What’s more, the structures were aligned along the fine branches of the feather (barbs and barbules), another characteristic feature of melanosomes.

Colour clues

Different melanosomes have different shapes. Of the two main types, reddish brown pheomelanosomes are shaped like tiny little meatballs (500 nanometres in diameter). Black eumelanosomes, meanwhile, are shaped like little narrow sausages and are about twice the size at one micrometre in length.

Subsequent studies have used these facts to reconstruct colour patterns of dinosaurs, with the shape of melanosomes found in different places of a fossil indicating its pigment colour and even iridescence. But until now, little work has been done to characterise the chemistry of the pigment in these fossil melanosomes and there is little evidence to prove that the melanosome shape actually reflects the original colour in fossils.
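The size-and-shape rule described above lends itself to a simple decision procedure. The sketch below is purely illustrative: the function name and the exact thresholds are my own, derived from the rough dimensions quoted earlier, not taken from the published studies.

```python
# Illustrative sketch: guessing pigment type from melanosome geometry.
# Thresholds are rough values based on the sizes quoted above
# (pheomelanosomes: ~500 nm spheres; eumelanosomes: ~1 um rods).

def classify_melanosome(length_nm: float, width_nm: float) -> str:
    """Return a pigment guess for a melanosome of the given dimensions."""
    aspect_ratio = length_nm / width_nm
    if aspect_ratio > 2 and length_nm >= 800:
        return "eumelanosome (black)"            # elongated "sausage"
    if aspect_ratio < 1.5 and 300 <= length_nm <= 700:
        return "pheomelanosome (reddish brown)"  # spherical "meatball"
    return "ambiguous"

print(classify_melanosome(1000, 300))  # elongated rod -> black
print(classify_melanosome(500, 450))   # small sphere -> reddish brown
```

In practice, of course, researchers measure many melanosomes per sample under an electron microscope and combine the shape statistics with chemical evidence.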

Bacteria or colour carriers? Jakob Vinther

Using a combination of techniques, we have been able to describe melanin and melanosomes in animals ranging from fish to birds to squids, and for the first time, frogs, tadpoles and mammals. We looked at the shape of the melanosomes under a scanning electron microscope. We also analysed the molecules directly associated with these structures and found that their chemical signature resembled modern melanin samples. However, there were also some clear differences.

We speculated that perhaps the melanin had changed its chemical composition over millions of years buried in the ground under high pressure and temperature. In order to test this, we subjected melanin to even higher pressures and temperatures to replicate within 24 hours the conditions it would have experienced over millions of years. The chemical signature from our cooked melanin then looked more similar to the fossils.

Furthermore, we found that we could quantify the difference between red and black melanin in both fresh and fossil samples. This meant we could test the idea that melanosome shape correlated to chemical colour in the skin of the now fossilised animal – and we found that it did.

Secret in the bones. A. Vogel, Senckenberg Institution, Messel Research

Most excitingly, this also meant that we could for the first time determine the colour of long-extinct mammals just by studying their fossils. We looked at two fossilised bat species from Messel in Germany that lived in the Eocene epoch (around 49m years ago). Based on the small spherical melanosomes – which are indicative of pheomelanosomes – and the chemical signature associated with the related pigment, we were able to infer that these bats originally sported a reddish brown coat. This means they did not look much different from modern bats.

The study of fossil melanin and other pigments is a blooming research area. Knowing something about fossilised creatures' original colours will not only make Jurassic Park sequels more realistic, but will also inform us about the whole ecology of dinosaurs and other extinct animals.

The Conversation

It's not just Facebook that goes down: the cloud isn't as robust as we think

Josemaria Toscano/shutterstock.com

The computing cloud we have created supports much of our day-to-day office and leisure activity, from office email to online shopping and sharing holiday photos. Even health, social care and government functions are moving towards digital delivery over the internet.

However, we should be wary that as we become more dependent on it, the cracks will show. The systems are often a patchwork of interconnected services provided by various companies and industry partnerships. A failure of one can lead to a failure in others.

For example, Skype recently went down for almost an entire day, while Facebook was down for more than an hour – the second time in a week – meaning that many sites that rely on Facebook accounts for authentication locked users out too.

Losing Facebook is an annoyance, but interruptions to major health and social care services or energy supply management systems can lead to real damage to the economy and people’s lives.

A few weeks ago Google’s data centres in Belgium (europe-west1-b) lost power after the local power grid was struck by lightning four times. While most servers were protected by battery backup and redundant storage, there was still an estimated 0.000001% loss of disk space – which for Google’s huge data stores meant a few gigabytes of data.

The lesson is not to trust cloud providers to store and provide backups for your data: your backups need backups too. It also shows our dependence on power supply systems which, being long runs of conductive metal, are more prone to lightning strikes than you might imagine.
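A quick back-of-the-envelope check shows how those two figures fit together (my own arithmetic, with “a few gigabytes” assumed to be about 3GB; the numbers are not Google’s):

```python
# If losing 0.000001% of stored data corresponds to "a few gigabytes",
# how much data does that imply was stored in total?

lost_fraction = 0.000001 / 100   # 0.000001 per cent, as a fraction (1e-8)
lost_gb = 3.0                    # "a few gigabytes" (assumed)

total_gb = lost_gb / lost_fraction
total_pb = total_gb / 1e6        # 1 petabyte = 1,000,000 gigabytes

print(f"Implied total storage: ~{total_pb:.0f} petabytes")  # ~300 petabytes
```

Even a vanishingly small loss fraction is a real loss when the denominator is measured in hundreds of petabytes.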

Facebook response graph, showing outage. Bill Buchanan

When the lights go out

Former US secretary of defence, William Cohen, recently outlined how the US power grid was vulnerable to a large-scale outage: “The possibility of a terrorist attack on the nation’s power grid — an assault that would cause coast-to-coast chaos,” he said, “is a very real one.”

As a former electrical engineer, I understand well the need for a safe and robust power supply, and that control systems can fail. It’s not uncommon to have alternative or redundant power supplies for important equipment. Single points of failure are accidents waiting to happen. Back-up your backup.

The electrical supply grid will try to provide alternative power whenever any part of it fails. The power supply system needs to be built with redundancy in case of problems, and monitoring and control systems that can respond to failures and keep the electricity supply balanced.

Cohen fears a major power outage could lead to civil unrest. Janet Napolitano, former Department of Homeland Security secretary, said a cyber-attack on the power grid was a case of “when,” not “if”. And former senior CIA analyst Peter Vincent Pry went so far as to say that an attack on the US electrical power supply network could “take the lives of every nine out of ten Americans”. The damage that an electromagnetic pulse (EMP) could cause, such as from a nuclear weapon air-burst, is well known. But many now think the complex and interconnected nature of industrial control systems, known as SCADA, could be the major risk.

An example of the potential problem is the north-east US blackout of August 14 2003, which affected 508 generating units at 265 separate power plants, cutting off power to 45m people in eight US states and 10m people in Ontario. It was caused by a software flaw in an alarm system in an Ohio control room which failed to warn operators about an overload, leading to a domino effect of failures. It took two days to restore power.

As the world becomes increasingly internet-dependent, we have created a network that provides redundant routes to carry traffic from point to point, but electrical supply failures can still take out core routing systems.

Control systems - the weakest link

Often it’s the less obvious elements of infrastructure that are most open to attack. For example, air conditioning failures in data centres can cause overheating sufficient to melt equipment, especially the tape drives used to store vast amounts of data. This could affect anything from banking transactions worth billions to the routing of traffic around a busy city or an emergency services call centre.

As we become more dependent on data and data-processing, so we are more vulnerable to their loss. Safety critical systems are built with failsafe control mechanisms, but those mechanisms can also be attacked and compromised.

The cloud we have created and upon which we increasingly depend is not as hardy as we think. The internet itself, and the way we use it, is not as distributed as it was designed to be. We still rely too heavily on key physical locations where data and network interconnections are concentrated, creating unacceptable points of failure that could lead to a domino-effect collapse. The DNS infrastructure is a particular weak point: just 13 root server addresses worldwide act as the master list for the internet’s address book, although each address is served by many replicated machines.

I don’t think governments have fully thought this through. Without power, without internet connectivity, there is no cloud. And without the cloud we have big problems.


Mars: why contamination and planetary protection are key to any search for life

The dark streaks on Mars' hills will be a good place to look for life. Credit: NASA/JPL/University of Arizona

It has been over 400 years since Galileo put humankind in its right place in the solar system. By looking at how Jupiter’s moons revolve about the gas giant, he came to the conclusion that Earth was not at the centre but one of many planets revolving around the sun. Similarly, recent evidence that water is likely to flow on Mars means facing the idea that Earth is not the only planet in the solar system to harbour life.

While Galileo’s heliocentric views were met by fierce opposition, finding life on Mars would today spark an unprecedented global scientific revolution on Earth. The immediate (and sensible) response will be a likely boost to the exploration of the red planet. But how should we go about it in an ethical and scientifically considered way – without bringing biological contamination from Earth to the unspoilt environment of Mars?

Where there’s water, there could be life. NASA’s recent discovery of salty traces, thought to come from seasonal water flows, means the race is now on to see actual water flowing on the surface. The salty traces were seen by the Mars Reconnaissance Orbiter – a satellite overlooking the surface of the planet – so were from off-site observations. Current ground missions, including the Opportunity and Curiosity rovers, have so far found no evidence of liquid water on the surface, so future ground missions will now certainly focus on looking for water and testing for the presence of microbial life harboured by liquid water.

Artist’s concept of Mars Reconnaissance Orbiter. NASA/JPL

NASA’s plans for a manned mission, part of the Journey to Mars programme, could start as early as the 2030s. These could directly confirm, or reject, the possibility of a Martian biosphere within our lifetime. But it may be more difficult than it sounds.

Surviving the extreme

Back in the 1970s, experiments carried out by the Viking landers looked for signatures of biological activity in dust samples from the Martian surface. These famously led to tantalising positive results that were later disproved, so any new evidence of life on Mars will have to be thoroughly scrutinised.

The new evidence suggests liquid briny water can exist at temperatures as low as -23°C. This raises important questions about whether biochemical processes can take place in such exotic environments. One possibility could be Martian extremophile organisms, ones that are hardy enough to survive the most extreme environments and could withstand the harsh conditions of the red planet. This might motivate testing for subtler “proto” life forms – organisms similar to viruses, enzymes and prions – similar to those that may have existed on Earth before bacteria and archaea.

Plans will certainly include integrated tests, for example using lab-on-a-chip devices, to search for signature biochemical substances. But perhaps most importantly, newly devised tests will have to consider the effect that native Martian conditions, such as chemistry, radiation levels and temperature, could have on the biochemistry of any lifeforms.

New technologies should be adapted to test for life in areas of Mars of special interest. In fact, the National Academy of Sciences of the USA and the European Space Sciences Committee have already produced a report foreseeing potential “special regions” of interest apart from sources of briny water, including methane-rich areas, shallow ice-rich deposits and subsurface cavities such as caves.

Terraforming and contamination

But we need to proceed with care. Mars is a pristine environment and we would need to take into account the potential fragility of Martian life. Earth extremophiles could, in principle, accidentally make the whole journey to Mars as microscopic stowaways and survive on the Martian surface. This could already be the case with current land missions such as the Opportunity and Curiosity rovers, which might be deemed unfit to travel to biologically promising areas due to the hazard of microbial contamination from Earth.

With its thin atmosphere and plummeting temperatures, Mars is a very inhospitable environment for humans. However, the existence of water could open up opportunities for terraforming, a process to modify a planet to have Earth-like conditions. Air and soil humidity are key factors for plant growth and human sustenance and attempts to create a more hospitable environment could start with small, artificially enclosed areas of Earth-like soil pockets immersed in Earth-like atmospheres. Building such structures would pose several engineering challenges to ensure a protective shield against radiation, and to prevent leaks.

Colonisation plans would have to include extensive tests on the viability of organisms from Earth within the extreme Martian environment – for example, their resistance to lower gravity and higher radiation levels. However, there are subtler ramifications that might arise from constrained genetic and ecological diversity, such as genetic disorders caused by inbreeding.

The prospect of a potential biochemical and ecological clash between Earth and Martian organisms would be the most complex problem so far seen by biologists. Introducing alien species to an indigenous environment could lead to significant adverse effects on the stability of the ecosystem and much like conservation work on Earth, we would have to address the issue of planetary protection.

Incoming organisms might also be susceptible to pathogenic infections from native lifeforms, something we would need to mitigate and plan for.

Beyond Galileo

In a famous letter to Kepler, Galileo complained that scholars sceptical of his celestial observations would not even look through his telescope, thus “shutting their eyes to the light of truth”. Sadly, Galileo’s work supporting heliocentrism was eventually banned and the man himself subjected to house arrest by the Inquisition.

This time around nobody will look away. The consistent progress made over the past decades to understand Mars is a signature of a much more cooperative and ideologically open society.

Much like the light that bounced off Jupiter’s moons and came through Galileo’s telescope, the images captured by the Mars Reconnaissance Orbiter have already started unveiling new and exciting information. The truth about Martian life is out there, and it is just a matter of time before we go and find it.


Why it hurts to see others suffer: pain and empathy linked in the brain

Study suggests the ability to experience pain may be the key to having empathy for others in pain. www.shutterstock.com

The human brain processes the experience of empathy – the ability to understand another person’s pain – in a similar way to the experience of physical pain. This was the finding of a paper that specifically investigated the kind of empathy people feel when they see others in pain – but it could apply to other forms of empathy too. The results raise a number of intriguing questions, such as whether painkillers or brain damage could actually reduce our ability to feel empathy.

The researchers used a complicated experimental set-up, which included functional magnetic resonance imaging (fMRI), a technique that measures blood flow changes in the brain. However, brain imaging alone can’t prove a link between pain and “pain empathy”. This is because the same brain areas are activated in each case, partly because there is a lot of overlap generally between the brain areas used for feelings and emotion. Another factor is that fMRI is not a direct measure of brain activity – the blood flow it measures is instead something we infer to accompany brain activity.

Brain waves: using fMRI as one of their tools, scientists have tracked how empathy is processed in the brain. www.shutterstock.com

The authors therefore took a new approach. They investigated whether the way a drug changes how the brain processes pain and empathy for those in pain can be used to understand the similarities and differences between these two experiences.

The study is based on two experiments with a total of about 150 participants – an unusually large number for this kind of study. The financial expense and general inconvenience of running fMRI studies means scientists usually involve only 20 or 30 people.

The painkiller trick

All the participants in the study were given a tablet that they were told was an approved, highly effective, expensive, over-the-counter painkiller (to ensure it had the maximum chance of working). In fact, none of the participants received a real painkiller: the tablet was a placebo. The resulting effect, called “placebo analgesia”, has been shown to be highly effective at reducing the amount of pain one perceives. The authors wanted to know whether it also affected how pain and pain empathy are processed in the brain.

A second group of people were also given the placebo painkiller, and 15 minutes later a second tablet – a drug that reverses the action of a painkiller. However, the participants were told this tablet would enhance the action of the painkiller, so they weren’t expecting it to counteract the first. The authors wanted to know whether “placebo analgesia” could be reversed in the same way real painkillers can.

After waiting for the placebo painkiller to “take effect”, and checking that it had “worked” in all people, participants underwent various experiments. These involved receiving a short painful electrical shock to the back of the hand (the strength of this had previously been matched for differences in individual levels of pain threshold – we’ll call this self pain) and watching a picture of someone they had earlier met receive the painful stimulus (we’ll call this pain empathy).

Participants were then split into two groups: some received a real and painful shock (or watched someone receive it), while others received a painless stimulus. The painless stimulus was administered in the same way as the electrical stimulus, but at a lower current.

Participants were asked to rate the amount of pain they felt during self pain and were asked to rate the level of unpleasantness they felt while watching another person receive pain (pain empathy). And they also underwent fMRI during self pain and pain empathy.

The results?

In the first experiment with the one tablet only (placebo painkiller), 53 people received real pain and 49 people received (pretend) pain stimuli. The placebo painkiller reduced the amount of pain the participants reported feeling and also reduced the amount of unpleasantness they reported feeling while watching someone else experience pain. At the same time, the fMRI scan revealed that the network of regions that usually process pain showed a reduction in activity for placebo (pretend) pain compared to real pain.

In the second experiment, 50 participants took an additional tablet: 25 received the real drug that reverses the action of a painkiller, and the other 25 a placebo. The real drug was found to reverse the effects of the placebo analgesia on self pain and on pain empathy, each by a similar amount. This confirms that the effect of the “pretend” painkiller can be reversed in the same way that a real painkiller can.

Placebo or reality? We can feel other people’s pain. www.shutterstock.com

This means that empathy for pain is likely to be processed very similarly (in the brain) to first-hand pain. We can infer that this is because both self pain and pain empathy are changed in the same way by the painkiller-reversing drug, and because placebo analgesia also reduces pain empathy in the same way as it reduces pain. The fMRI results add further evidence that this is indeed what is going on.

Exploring empathy further

This is therefore consistent with the theory that empathy for pain occurs as a result of simulating another person’s feelings within one’s own brain. It also provides further evidence that the feelings of pain and pain empathy occur as a result of similar processes within the brain.

Further, patients who have damage and/or disease in the parts of the brain that fall within this network of pain-processing areas often experience a reduced ability to feel empathy for pain. This suggests that the ability to feel pain is necessary in order to experience empathy for pain.

Going forward, the research could be useful to explore empathy in other contexts. For example, the researchers suggest addressing the question of whether the pain from other events – for example social rejection – is processed in a similar way. This study certainly provides a new angle to investigate the feelings of pain and empathy – namely by manipulating two experiences to see if they are processed in similar ways.

Another suggestion is that taking painkillers may decrease one’s feeling of empathy for pain – but that topic needs further research. A way to do this could be to compare the results of this study using placebo painkillers with a similar design using real painkillers.


Can digging up 100-year-old bodies help crack unsolved murders?

Wikimedia Commons

Imagine the untold misery caused by telling the wrong family that their loved one is dead while another family is left in blissful ignorance. That’s why accurately identifying bodies is of paramount importance.

Identification is usually based on simple criteria. Visual recognition or distinctive tattoos are often enough. But as time passes and the body deteriorates, these methods become less reliable or impossible. This will certainly be the case for the bodies alleged to be those of Russia’s Crown Prince Alexei and Grand Duchess Maria, which are due to be re-examined in an attempt to determine if they are the real royals killed during the Russian Revolution.

Forensic analysis has developed apace in the last century, and DNA technology in particular has opened ways of analysing bodies that were previously unthought of even relatively recently. Such technology is increasingly being applied to cases from the past, and the media are always quick to report stories where high profile mysteries are finally “solved” using modern forensics. The cynic would note that some cases (I’m particularly thinking of the “Jack the Ripper” murders) have been “definitively solved” several times with different outcomes.

DNA evidence

So what can forensic science actually bring to these old cases? Certainly DNA can often be extracted from the body, typically from teeth and bones. But a DNA profile isn’t just a printout of who someone is. It has to be compared to a known profile. It’s unlikely that we still have a hairbrush or toothbrush from Crown Prince Alexei, but if we have a known sample of DNA from a relative (such as bloodstains on a uniform from his great-grandfather Emperor Alexander II) then familial similarities can be used.

Our knowledge of DNA can do more than just identify someone. An old DNA sample can reveal any genetic diseases a subject may have been prone to. Similarly, advances in technology allow us to look at the chemical composition of bones and determine what kinds of things a person ate, and so where in the world they probably came from.

Analysis of pollen from the sinuses can tell us about what plants were around the person. Carbon dating may tell us how old someone actually is, although – as with most forensic techniques – only a range of dates can be given rather than a definitive answer. All of this information can help us work out whose body is (or isn’t) being examined.

Richard III: picking out the detail. Darren Staples/Reuters

But we also shouldn’t forget the simpler techniques. Just looking at a body may yield information depending on how well it has been preserved. Old or perimortem (from the time of death) injuries or bone deformities may be apparent. The shape of the skull and teeth may point to gender and ethnicity. CT scanning can show us inside the body without having to open it, helpful when dissection, which is an invasive and destructive process, is not an option.

This battery of tests can tell us an awful lot about how someone lived, how they died, and who they may have been. The most publicised example of this in the UK was the discovery of King Richard III, whose remains were identified by my colleagues at the University of Leicester last year.

It’s not just the body itself that the forensic investigators can examine. If someone is buried, what is the grave like – deep or shallow? What does the soil tell us? If the body is in a shroud, how was that made and of what? The possibilities are only limited by the imagination of the investigators.

Unsolved mysteries

But before we get carried away, we must bear in mind that few things in this field are completely certain. By way of example, you’d think that attempts to carbon date the Shroud of Turin – the cloth claimed to have covered Jesus’s dead body – would allow us to finally decide whether or not it really dates to Biblical times. But when the results came out suggesting not, an argument arose as to whether the sample tested was from the original weave or part of a medieval repair.

As time passes, the possibility of deterioration, contamination, alteration or outright fraud of a sample increases. The more people who handle a body, the more foreign DNA can be introduced. Time changes the body and changes the environment.

Finally, there is always the issue of interpretation. For example, was Palestinian leader Yasser Arafat poisoned with radiation or not? Different interpretations of the test results can lead to different conclusions. It was originally suggested that Arafat had been poisoned with polonium-210, and an exhumation of his body produced samples showing unusually high levels of this element, but later analysis suggested that this was environmental in nature.

So can modern forensic science reveal secrets from the past? Yes, but not necessarily as definitively as excited headlines may wish us to believe.


Ad industry may gripe about adblockers, but they broke the contract – not us

madpixblue/shutterstock.com

The latest version of Apple’s operating system for phones and tablets, iOS9, allows the installation of adblocking software that removes advertising, analytics and tracking within Apple’s Safari browser. While Apple’s smartphone market share is only around 14% worldwide, this has prompted another outpouring from the mobile and web advertising industry on the effects of adblockers, and discussion as to whether a “free” web can exist without adverts.

It’s not a straightforward question: advertising executives and publishers complain that ads fund “free” content and that adblockers break this contract. Defenders of adblocking point out that the techniques used to serve ads are underhand and that the ads themselves are intrusive. Who is right?

Why we use adblockers

There are good reasons for using adblockers. People are usually prompted to do so by online advertising techniques that they find intrusive. These include pop-ups, pop-unders, blinking ads, being forced to watch videos before getting to the content, and ads that contravene the Acceptable Ads Manifesto.

Adverts and trackers can be loaded from multiple third-party websites, inserted into the web page by advertising networks rather than by the site’s publishers. While this saves publishers the hassle of finding advertisers and negotiating rates, it means they often have little say over what ads appear, which can lead to ads that are irrelevant, dubious, even offensive. The additional load on the browser from connecting to multiple sites at once also drains battery and bandwidth and slows down the page load – all for something we don’t want and which scours our devices to collect information about us for further use.

The UK’s Internet Advertising Bureau (IAB) believes that 15% of British adults use adblockers. The IAB study found that people blocked adverts because they were intrusive (73%), ugly or annoying (55%), slowed down web browsers (54%), were irrelevant (46%), or over privacy concerns (31%). What this suggests is that users don’t reject advertising per se, but intrusive advertising specifically.

Advertising, ethics and the web

The advertising industry argues that adblockers undermine the revenue model for publishers that relies upon behaviourally targeted advertising. They claim adblockers stifle start-ups that are dependent on advertising as a means of generating revenue. The theory goes that without advertising revenue all that’s left is subscription services, something which generally only large corporations are good at building.

While there is some truth to this, the argument assumes that digital start-ups (whether this be an app, a new social media service, or a news website) have access to a large user base from which to generate ad revenue. But of course this isn’t the case when firms are only just getting going. Start-ups rely on investment to grow and be self-sustaining: only then can advertising assist.

It is reasonable to argue that content has to be paid for. We might try to ignore the adverts that subsidise printed newspapers and magazines, but we cannot remove them. However, in respect of mobile devices – which have now become the primary means through which the world gets online – we must also consider the data plan that we pay for as part of our mobile phone contract. The firm behind one mobile adblocker, Shine, estimates that depending on where we live, ads can use up 10-50% of a user’s data allowance.
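To make that concrete, here is a rough illustration of what an ad share of that size costs a user each month. The plan size and price are assumptions for the sake of the example, not Shine’s figures:

```python
# What a 10-50% ad share of a mobile data plan costs the user per month.

plan_gb = 4.0       # assumed monthly data allowance
plan_cost = 20.0    # assumed monthly price, in pounds

for ad_share in (0.10, 0.50):
    ad_gb = plan_gb * ad_share
    ad_cost = plan_cost * ad_share
    print(f"{ad_share:.0%} ad share: {ad_gb:.1f} GB of data, "
          f"£{ad_cost:.2f} of the bill")
```

On these assumed numbers, the user is paying between £2 and £10 a month simply to download advertising they never asked for.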

Annoying mobile ads make for unhappy phones and users. ronbennetts, CC BY-ND

Browsers as consent mechanisms

So the case for mobile is different, in that ads represent a direct cost to the user. Europeans living in EU member states have the right to refuse to be tracked by third parties. This comes under Article 5(3) of the EU ePrivacy Directive, which in 2012 was altered so that people have to be asked upfront whether they consent to cookies.

The aim of this was to shift third-party cookies from being opt-out to being opt-in. The ad industry argued that people’s web browser settings were sufficient to indicate consent to interest-based advertising and tracking – but of course, many people do not know how to alter browser settings. Seen in this way, adblockers are a means of expressing (or rather, denying) consent – something made clear by the need to find and install an adblocking programme or browser extension.

The problem with the implied contract of advertising-for-content is that it is opaque and built upon questionable terms. It’s disingenuous to blame people for using adblockers: we accept adverts in magazines, newspapers and cinemas and on radio, billboards and television. The good ones make us smile. The best we fondly remember. We mostly stick to the deal that we get content free or at reduced cost in exchange for being exposed to ads.

But the growth of adblocking demonstrates that parts of the advertising industry have overstepped the mark with their creepy tracking mechanisms and deliberately confusing or irritating formats. The ad industry broke the contract, not us. How does anyone think that irritating people is the way forward? Which brand, large or small, would want to be associated with annoying their customers?

The growing number of people using desktop and mobile adblockers leaves the online advertising industry two options: fight web users and ad-blocking firms by lobbying for legal change or protection, or the more interesting route of trying to create a model that works for everyone. Rather than fighting the tide, advertising and publishing need to find a way to swim with it.

The Conversation

The Martian: a perfect balance of scientific accuracy and gripping fiction

Matt Damon is feeling lonely on Mars. 20th Century Fox

“I’m going to have to science the shit out of this,” says astronaut Mark Watney, played by Matt Damon, after being stranded on Mars. That pretty much sums up the tone in Ridley Scott’s new film The Martian, adapted from Andy Weir’s novel, which appears in cinemas this week. Many have already commended the movie for its scientific rigour and Scott has said himself that it is as “accurate as we can possibly get it”.

So does the movie live up to expectations? Well, the mission design and the hardware are based on actual NASA capabilities and an existing plan to get humans to Mars known as Mars Direct. However, there are parts that are less scientifically accurate. But what the story lacks in scientific rigour, it makes up for with great fiction that could inspire new interest in science.

Growing food in space

The main challenge for Watney is to find a way to grow food on the planet in order to stay alive for the four years until NASA’s next planned mission to Mars. While this has of course never been done in real life, it is not entirely unrealistic. In August 2015 astronauts on the International Space Station (ISS) ate lettuce that they had grown in space. This was the first time that humankind had grown and eaten food away from home.

In these so-called “VEGGIE” experiments the crew had been provided with everything they needed: soil, seeds, and lamps tuned to the specific requirements of the plants. In The Martian, however, Watney had none of this specially prepared equipment and, crucially, no soil.

The vegetable production system aboard the ISS. NASA/wikipedia

The technical term for loose material covering rock is regolith, which includes the soil that we all know on Earth. Even regolith on Mars is familiar: we have been studying its properties since the 1970s, starting with the Viking missions. NASA’s Phoenix lander found evidence that the regolith contains minerals crucial for growing plants and is slightly alkaline – suitable for a range of crops, including asparagus and green beans.

Potatoes are normally grown in acidic soil, both because this suppresses pathogens such as common scab and because alkaline soils reduce potato yields. Our hero could easily account for this in his calculations of the number of plants required to grow enough food for a set number of days.

But Martian regolith may also contain perchlorates, which are harmful to the human body – though, somewhat ironically, they are used as markers for the presence of water. Watney needs additional water for his crops and sets about making it by combining oxygen with hydrogen. To get the hydrogen, he catalyses the decomposition of a rocket fuel known as hydrazine, in a somewhat dangerous experiment that would be even more dangerous in real life, as you’d end up with some toxic leftovers.

But exciting new results suggest that water sometimes runs openly on the surface of Mars. So in reality, it would have been safer for Watney to just go and extract it from the regolith itself.

We are seeing the early days of growing food in space. Eventually, if humans are to start living for extended periods on the moon, and eventually Mars, we need to be able to do experiments generating raw materials directly on their surfaces. There are already ideas to test our ability to grow food on the moon in small canisters, including basil and turnips.

Stormy weather

What plunges Watney into peril in the movie is a mission aborted in strong winds. Here on Earth, we use the Beaufort scale to measure wind strength: gale-force winds have speeds of up to 74km per hour. To get a sense of what that’s like, imagine putting your head out of a car window while moving at 50 miles per hour. Then try to imagine what it would be like at 100 miles per hour, as experienced by Watney and his fellow astronauts on Mars.

On Earth this would be a devastating storm, but not on Mars. The pressure that you feel on your skin when out on a windy day is known as the dynamic pressure. It depends not only on how fast the air is moving, but also on its density. In gale-force winds on Earth this pressure is about 250 Pascals, and the force it exerts on an average person is about one-third of their body weight. This is why you have trouble walking about in gale-force winds.

But the Martian atmosphere has just 1% of the density of Earth’s, so the dynamic pressure is much smaller. Even in Watney’s storm the force on a human being would be tiny – less than one-tenth of a person’s weight under Mars’ gravity. The storm that Watney and his crew encounter would feel like little more than a gentle breeze – not the devastating tempest shown in the film.
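The comparison above can be checked with the dynamic pressure formula q = ½ρv². This is only a back-of-envelope sketch: the air densities and wind speeds below are round illustrative values, not mission data.

```python
# Dynamic pressure q = 1/2 * rho * v^2, in Pascals.
def dynamic_pressure(density, speed):
    return 0.5 * density * speed ** 2

q_earth = dynamic_pressure(1.2, 20.5)   # Earth gale: ~1.2 kg/m^3, 74 km/h ≈ 20.5 m/s
q_mars = dynamic_pressure(0.02, 45.0)   # Mars storm: ~1% of that density, 100 mph ≈ 45 m/s

print(round(q_earth), round(q_mars))    # roughly 250 Pa on Earth vs about 20 Pa on Mars
```

A tenfold drop in pressure despite the higher wind speed: the thin atmosphere simply cannot push very hard.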

Dust storms of 2001 observed on Mars by Mars Global Surveyor. NASA/JPL/Malin Space Science Systems

Despite this, the wind and the sound it produces does actually have an important function in the film – it creates tension and allows us to empathise with Watney and feel his fear.

Finding Pathfinder

Even though this is a work of fiction, as a follower of Mars exploration I felt a tingle of excitement as Watney recovered the Mars Pathfinder, buried under a huge pile of dust. Just as NASA follows Watney’s exploits using imaging from orbit in the film, space scientists have been monitoring the landing sites of Mars spacecraft, including Pathfinder.

Measurements at the landing site of the Mars lander Phoenix have shown that dust settles out of the atmosphere at a rate of about 0.1–1 thousandth of a millimetre per Martian day. Over the 18 years Pathfinder has been on Mars, that only amounts to between about 1mm and 10mm of accumulated dust. So, in reality, Watney wouldn’t really have needed to do much digging at all. But this dramatic unearthing of Pathfinder pulls at the heart-strings of our exploration of Mars.
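The arithmetic can be checked in a couple of lines, assuming Pathfinder’s 1997 landing and the Phoenix-derived settling rates quoted above (the sol length is known; everything else is order-of-magnitude).

```python
# Back-of-envelope estimate of dust build-up on Pathfinder.
SOL_IN_EARTH_DAYS = 1.0275             # a Martian day is slightly longer than ours
years_on_mars = 2015 - 1997            # Pathfinder landed in July 1997
sols = years_on_mars * 365.25 / SOL_IN_EARTH_DAYS

low_mm = sols * 0.0001                 # 0.1 thousandth of a millimetre per sol
high_mm = sols * 0.001                 # 1 thousandth of a millimetre per sol
print(f"{low_mm:.1f} to {high_mm:.1f} mm of dust")   # well under a centimetre
```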

Pathfinder’s landing site imaged by Mars Reconnaissance Orbiter. NASA/JPL/University of Arizona

All too often in science fiction the characters are placed in impossible situations from which they can only escape by resorting to a kind of scientific deus ex machina. This is certainly not so in The Martian, in which the story has a logically and physically possible resolution.

The Martian is one of an increasing number of Hollywood films that explore the human soul and spirit of humanity while still grounded in science. Another example is how Christopher Nolan and Kip Thorne used Einstein’s theory of General Relativity to tremendous effect in Interstellar. However, The Martian uses science in a different way. It shows what it is to be a scientist. It shows Watney building scientific arguments, doing calculations, facing the outcome of making an error in reasoning – his answers aren’t in the back of the book. This engages audiences with compelling science.

One could easily be critical of the science shown in fiction. But in a push to reflect “real science” in the cinema we shouldn’t surrender strong narratives for the sake of scientific accuracy. To do so denies us the opportunity to tell stories and to show science in action and in unfamiliar settings.


NASA: streaks of salt on Mars may mean flowing water, and new hopes of life

There's finally evidence that salty water could be behind the mysterious ephemeral dark streaks on Mars. NASA/JPL-Caltech/UArizona

Salty streaks have been discovered on Mars, which could be a sign that salty water is seeping seasonally to the surface. Scientists have previously observed dark streaks (see image above) on the planet’s slopes, thought to result from seeps of water wetting surface dust. Salts left behind in these streaks as the water dried up are the best evidence for this yet. The discovery is important – not least because it raises the tantalising prospect of a viable habitat for microbial life on Mars.

I have lost track of how many times water has been “discovered” on Mars. In this case, the researchers have detected hydrated salts rather than salty water itself. But the results, published in Nature Geoscience, are an important step to finding actual, liquid water. So how close are we? Let’s take a look at what we know so far and where the new findings fit in.

Ice versus liquid water

Back in the 18th century, William Herschel suggested that Mars’s polar caps, which even a small telescope can detect, were made of ice or snow – but he had no proof. It wasn’t until the 1950s that data from telescopes fitted with spectrometers, which analyse reflected sunlight, was interpreted as showing frozen water (water-ice). However, the first spacecraft to Mars found this difficult to confirm, as the water-ice is in most places covered by carbon dioxide ice.

Part of Nirgal Vallis, a valley on Mars first seen on this image by Mariner 9 in 1972. This image is 120km from side to side. NASA

In the 1970s attention turned to the much juicier topic of liquid water on Mars, with the discovery by Mariner 9 of ancient river channels that must have been carved by flowing water. These channel systems were evidently very ancient (billions of years old), so although they showed an abundance of liquid water in the past they had no bearing on the occurrence of water at the present time.

‘Canals’ on Mars drawn by Percival Lowell in 1896. Percival Lowell/wikipedia

Gullies and droplets

Things became more interesting in 2000, with the announcement that high-resolution images from the Mars Orbiter Camera on board Mars Global Surveyor showed gullies several metres deep and hundreds of metres long running down the internal slopes of craters.

It was suggested that they were carved by water that had escaped from underground storage. Such small, sharp features had to be young – perhaps thousands of years old – but annual changes were soon noticed in a few gullies, suggesting that some are still active today.

Gullies inside a crater in Noachis Terra, 47 degrees south. NASA/JPL/Malin Space Science Systems

Are gullies really evidence of flowing water? Some probably are, but there are other explanations such as dry rock avalanches or slabs of frozen carbon dioxide scooting downhill. Some gullies start near the tops of sand dunes where an underground reservoir of water is very improbable.

In 2008 the lander Phoenix actually saw water on Mars. When it scraped away at the dirt, it found water-ice a few centimetres down but, more excitingly, droplets that could hardly be anything other than water were seen to form on the lander’s legs. It was suggested that the water had condensed around wind-blown grains of calcium perchlorate, a salt mineral whose properties enable it to scavenge water from the air and then dissolve it. Moreover, whereas pure water would freeze at the local temperature at the time (between -10°C and -80°C), water containing enough dissolved salts could stay liquid.

Water droplets on the leg of the Phoenix lander in 2008. Arrow points to the relevant leg. NASA/JPL-Caltech/University of Arizona/Max Planck Institute

Water seeps?

In 2011 a new phenomenon was recognised in high-resolution images from the Mars Reconnaissance Orbiter: “recurrent slope lineae”, or RSLs – dark downhill streaks that come and go with the seasons (which last about twice as long as seasons on Earth).

They are usually between 0.5m and 5m wide, and not much more than 100m long. These could mark avalanches of dry dust, but the favoured explanation – which the new NASA findings also support – has always been that water is seeping from the ground and wetting the surface enough to darken it, though without flowing in sufficient volume to erode a gully.

Artificial perspective view of the streaks. NASA/JPL/University of Arizona

What is most noteworthy about the new research is that it is the first determination of the composition of the streaks. The researchers used an instrument called CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) on board the orbiter to analyse the light reflected off the surface of these streaks. In this way, they could show that the streaks contain salts, most likely magnesium perchlorate, magnesium chlorate and sodium perchlorate. These salts have antifreeze properties that would keep water flowing in the cold temperatures, which tallies with what Phoenix suggested in 2008.

There are no signs that liquid water was present when the NASA measurements were made. Scientists will surely keep looking in the same spot in the hope of finding the features that would indicate liquid water instead of those indicative of salts left behind after the water has dried up. However, few can doubt that the salts were put there by flowing water.

Importantly, with liquid water comes the prospect of life on Mars. The researchers cannily conclude by pointing out that in the most arid parts of Earth’s Atacama desert the only source of water for microbes is what they can get from salts dissolved in water. If it can happen on Earth, maybe it can happen on Mars too.


In the future, your internet connection could come through your lightbulb

mightyohm, CC BY-SA

The tungsten lightbulb has served well over the century or so since it was introduced, but its days are numbered with the arrival of LED lighting, which consumes a tenth of the power of incandescent bulbs and has a lifespan 30 times longer. Potential uses of LEDs are not limited to illumination: smart lighting products are emerging that offer various additional features, including linking your laptop or smartphone to the internet. Move over Wi-Fi, Li-Fi is here.

Wireless communication with visible light is, in fact, not a new idea. Everyone knows about using smoke signals on a desert island to try to capture attention. Perhaps less well known is that in the time of Napoleon much of Europe was covered with optical telegraphs, otherwise known as the semaphore.

The photophone, with speech carried over reflected light. Amédée Guillemin

Alexander Graham Bell, inventor of the telephone, actually regarded the photophone as his most important invention, a device that used a mirror to relay the vibrations caused by speech over a beam of light.

In the same way that interrupting (modulating) a plume of smoke can break it into parts that form an SOS message in Morse code, so visible light communications – Li-Fi – rapidly modulates the intensity of a light to encode data as binary zeros and ones. But this doesn’t mean that Li-Fi transceivers will flicker; the modulation will be too fast for the eye to see.
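The modulation idea can be sketched in a few lines of code. Below is a minimal on-off keying scheme, one of the simplest modulations a visible light link could use; the intensity values and the noise offset are invented for illustration.

```python
# Minimal on-off keying (OOK) sketch: each bit becomes a light-intensity
# sample (1 = LED on, 0 = LED off); the receiver simply thresholds.
def encode(bits):
    return [1.0 if b else 0.0 for b in bits]        # idealised intensity levels

def decode(samples, threshold=0.5):
    return [1 if s > threshold else 0 for s in samples]

message = [0, 1, 1, 0, 1]                           # binary zeros and ones
received = [s + 0.1 for s in encode(message)]       # a little ambient light
assert decode(received) == message                  # recovered despite the offset
```

Real Li-Fi systems use far more sophisticated modulation, but the principle is the same: intensity changes, far too fast for the eye, carry the data.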

Wi-Fi vs Li-Fi

The enormous and growing user demand for wireless data is placing huge pressure on existing Wi-Fi technology, which uses the radio and microwave frequency spectrum. With the exponential growth of mobile devices, by 2019 more than ten billion devices are expected to exchange around 35 quintillion (35×10¹⁸) bytes of information each month. This won’t be possible using existing wireless technology due to frequency congestion and electromagnetic interference. The problem is most acutely felt in public spaces in urban areas, where many users try to share the limited capacity available from Wi-Fi transmitters or mobile phone network cell towers.

A fundamental communications principle is that the maximum data transfer possible scales with the electromagnetic frequency bandwidth available. The radio frequency spectrum is heavily used and regulated, and there just isn’t enough additional space to satisfy the growth in demand. So Li-Fi has the potential to replace radio and microwave frequency Wi-Fi.
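That scaling principle is the Shannon–Hartley theorem, C = B·log₂(1 + SNR). A minimal sketch of the headroom argument follows; the channel widths (a 20MHz radio channel versus roughly 300THz of visible spectrum) and the signal-to-noise ratio are illustrative assumptions, not measurements.

```python
import math

# Shannon-Hartley: maximum capacity C = B * log2(1 + SNR) scales with bandwidth B.
def capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

wifi_channel = capacity_bps(20e6, 1000)      # a 20 MHz radio channel at ~30 dB SNR
light_channel = capacity_bps(300e12, 1000)   # ~300 THz of visible light, same SNR

print(light_channel / wifi_channel)          # ~15 million times more headroom
```

At equal SNR the ratio is simply the bandwidth ratio, which is why the sheer width of the visible spectrum matters so much.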

Light frequencies on the electromagnetic spectrum are underused, while the bands to either side are congested. Philip Ronan, CC BY-SA

The visible light spectrum has huge, unused and unregulated capacity for communications. The light from LEDs can be modulated very quickly: data rates as high as 3.5Gb/s using a single blue LED or 1.7Gb/s with white light have been demonstrated by researchers in our EPSRC-funded Ultra-Parallel Visible Light Communications programme.

Unlike Wi-Fi transmitters, optical communications are well-confined inside the walls of a room. This confinement might seem to be a limitation for Li-Fi, but it offers the key advantage that it is very secure: if the curtains are drawn then nobody outside the room can eavesdrop. An array of light sources in the ceiling could send different signals to different users. The transmitter power can be localised, more efficiently used and won’t interfere with adjacent Li-Fi sources. Indeed the lack of radio frequency interference is another advantage over Wi-Fi. Visible light communications is intrinsically safe, and could end the need for travellers to switch devices to flight mode.

A further advantage of Li-Fi is that it can piggyback on the existing power lines that supply LED lighting, so little new infrastructure is needed.

How a Li-Fi network would work. Boston University

Lightening the burden of the internet of things

The internet of things is an ambitious vision of a hyper-connected world of objects autonomously communicating with each other. For example, your fridge might inform your smartphone that you have run out of milk, and even order it for you. Sensors in your car will directly alert you through your smartphone that your tyres are too worn or have low pressure.

Given the number of “things” that can be fitted with sensors and controllers then network-enabled and connected, the bandwidth needed for all these devices to communicate is vast. Industry monitor Gartner predicts that 25 billion such devices will be connected by 2020, but given that most of this information needs only to be transferred a short distance, Li-Fi is an attractive – and perhaps the only – solution to making this a reality.

Several companies are already offering products for visible light communications. The Li-1st from PureLiFi, based in Edinburgh, offers a simple plug-and-play solution for secure wireless point-to-point internet access with a capacity of 11.5 Mbps – comparable to first generation Wi-Fi. Another is Oledcomm from France, which exploits the safe, non-radio frequency nature of Li-Fi with installations in hospitals.

There are still many technological challenges to tackle but already the first steps have been taken to make Li-Fi a reality. In the future your light switch will turn on much more than just illumination.


Simpler, smaller, cheaper? Alternatives to Britain's new nuclear power plant

na0905/flickr, CC BY-SA

Britain appears to finally be on the way to building its first new nuclear power station for 20 years. The chancellor of the exchequer, George Osborne, recently announced a £2 billion loan guarantee linked to the development of the Hinkley Point C power plant, signalling that the final decision to build cannot be far behind. But the plans from French firm EDF have drawn criticism from an array of experts and commentators for being too expensive and relying on an as yet unproven technology that is already being redesigned.

Although the basic principles of nuclear energy are relatively simple, the specific designs of different reactors can vary considerably. The two other companies hoping to build new nuclear plants in the UK, for example, each favour alternatives to EDF’s model. So are we in danger of backing the wrong technology with the current plans for Hinkley Point?

Nuclear reactors generate heat from uranium using a reaction known as fission. This is a process where atomic nuclei split into two fragments, releasing energy in the form of heat. Fission of one atom also releases several neutrons that can spark the same process in neighbouring atoms, leading to a chain reaction throughout the uranium fuel within the reactor core. The chain reaction can be slowed or stopped by inserting control rods into the core to absorb the excess neutrons.

The heat from the reaction is used to create steam, which generates electricity via a turbine. The heat is carried away from the core by a coolant substance, which can also be used as a moderator to slow down the neutrons and increase the chances that they induce fission in other fuel atoms (although some designs use separate moderators).
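The chain-reaction arithmetic above can be sketched as a toy model: each fission yields a few neutrons, some fraction is lost or absorbed before causing new fissions, and control rods push the multiplication factor below one. All numbers here are invented for illustration, not real reactor data.

```python
# Toy chain-reaction model: the population after each generation is the
# previous population multiplied by the factor k.
def neutron_population(k, generations, start=1000):
    n = start
    for _ in range(generations):
        n *= k
    return n

neutrons_per_fission = 2.5
lost_without_rods = 0.55           # leakage and capture (assumed)
lost_with_rods = 0.70              # extra absorption by the control rods (assumed)

k_no_rods = neutrons_per_fission * (1 - lost_without_rods)   # 1.125: supercritical
k_rods = neutrons_per_fission * (1 - lost_with_rods)         # 0.75: subcritical

print(neutron_population(k_no_rods, 10))   # ~3250: the chain reaction grows
print(neutron_population(k_rods, 10))      # ~56: the reaction dies away
```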

Overdue, over-budget, over-engineered

The reactor EDF wants to use at Hinkley Point C is a type of pressurised water reactor (PWR) that uses water as both the moderator and coolant. The specific design is known as a European pressurised reactor (EPR) and evolved from earlier French models with innovations such as a concrete-ceramic core catcher to prevent the molten core of the reactor escaping in the case of a meltdown. If built, it will deliver 3.2GW of electrical power, roughly equivalent to 7% of the UK’s electricity.

Power stations featuring this enhanced EPR design are being built in France, Finland and China, but none are yet online and the first two are billions of pounds over budget and years overdue. The Chinese projects are only delayed by around two years, perhaps due to experience gained in the European projects.

The predicted cost of Hinkley Point C has steadily risen from early estimates of £14bn–£16bn to £24.5bn. The complexity of the project is enormous, due to what many believe to be an over-engineered design. There are also reported issues with the manufacture of the EPR’s reactor pressure vessel, associated with anomalies in the composition of its steel.

Proven technology in Japan. Toach japan/Wikimedia Commons, CC BY-SA

Simpler reactor

EDF has admitted that Hinkley Point C will not start operating in 2023 as originally predicted. As a result, the first new nuclear plant to come online in the UK may actually be an entirely different type: the advanced boiling water reactor (ABWR), a proven Japanese design from Hitachi-GE that has been used in nuclear power stations since the 1990s.

This reactor is simpler than a PWR because the water is allowed to boil in the reactor, creating steam directly; in PWRs, by contrast, the water in the core is kept under pressure to prevent boiling, and a second stage is required to create the steam. The ABWR is also self-compensating, meaning it can maintain a stable temperature simply through normal operation: the hotter it gets, the more steam it produces, leaving less water to slow neutrons down. The reaction therefore slows, diminishing the amount of heat again.
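This self-compensation is a negative feedback loop, which a toy simulation can illustrate. Every temperature, coefficient and power figure below is invented for illustration; the real ABWR is of course far more complex.

```python
# Toy negative-feedback loop: power output falls as the core heats up
# (more steam, fewer slowed neutrons), so the temperature settles on its own.
def next_temp(temp, p0=100.0, coeff=2.0, t_ref=280.0, removal=60.0, step=0.1):
    power = p0 - coeff * (temp - t_ref)     # hotter -> more steam -> less power
    return temp + step * (power - removal)  # heat in minus heat carried away

temp = 350.0                 # start the core running too hot
for _ in range(60):
    temp = next_temp(temp)
print(round(temp))           # settles at 300, whatever the starting point
```

Because power falls whenever temperature rises above the equilibrium point, any disturbance decays away rather than growing: the defining property of a self-compensating system.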

On top of this, the ABWR has advantages from a manufacturing point of view. It has a modular design (it is built in sections assembled in factories rather than in one big piece), so its construction is more straightforward and therefore cheaper. This means the electricity price the government will need to guarantee to the plant’s operator, Horizon, is likely to be lower than the £92.50/MWh agreed with EDF for Hinkley Point C.

New generation

Looking further into the future, the NuGen proposal, backed by Toshiba, to bring the Westinghouse AP1000 design to the UK is another promising prospect. This advanced passive 1GW reactor is actually a PWR, but it is highly simplified compared with the EPR, with far fewer components and so far fewer things that could go wrong. It also employs a large number of passive safety features that work even without an external power source: natural processes such as gravity-driven flow and convection drive the circulation of coolant.

Unfortunately, the rather blinkered focus of the government on delivering the Hinkley Point project without recognising what is coming in the near future is a significant point of weakness for UK nuclear energy policy. An approach that gave greater recognition to the potential of other designs could avoid future embarrassment, as well as saving money for the taxpayer and energy bill payer.


Understanding the hidden dimensions of modern physics through the arts

Can the arts be a bridge to other worlds? Daniel Parks, CC BY-NC

Sometimes, the hardest job for a theoretical physicist is telling the story. The work in this field can be conducted entirely in the abstract, leaving outsiders (and the odd insider) bewildered, but there might be some assistance in the visualisation techniques developed by certain artists and writers. Cutting-edge theories are often motivated by aesthetics and simplicity, after all, and so the idea of a synergy between artists and scientists does not seem all that far-fetched. One clear example where the combination can work comes in the exploration and understanding of extra dimensions.

You may have heard that scientists often talk of these “other worlds”, but (hopefully) your everyday reality takes place in three dimensions of space and one dimension of time. Physicists marry these dimensions together because of Albert Einstein’s special theory of relativity, which enables us to describe a point in (1+3) dimensions of (time+space) with four coordinates: (t, x, y, z). But from an abstract point of view, it makes a lot of sense to ask: why just four? In fact, many theories in physics can easily be formulated without being too specific about the number of dimensions. We can call it 1+D instead and open up the possibility of more than three spatial dimensions: (t, x, y, z, …).

But that’s where it gets hard, of course. Extra spatial dimensions are very hard to imagine, and even the scientists working with them (such as myself) have a hard time visualising them. Now, this in itself is not proof that they do not exist. We also find it hard to imagine infinities, for instance, and superpositions of quantum mechanical states, but both these concepts are seen in nature.

Hidden truths

Physicists have of course come up with tests that could reveal the existence of other dimensions, but the trouble is that so far the results imply we’re happily bumping along with just the ones we’re all very familiar with.

But before concluding that this invalidates the whole discussion of extra dimensions, there are ways around this result. We already knew that any new dimensions would have to be very different from the ones we experience – otherwise we would be able to see them – and in much the same way they may not show up in tests based on force laws. They may be very small, for instance, folded away so as to be invisible to us. Size and energy are inversely related in particle theories, so the smaller the dimensions, the less likely it is that we will be able to probe them directly.

A popular illustration of how this works is an ant on a piece of rope. From far away the rope seems one-dimensional, but only when you zoom in can you see that in the ant’s world the surface it sits on is really 2D.

Our ability to perceive even our own dimensions can be flawed. Alexa Meade

This limited ability to perceive dimensions even in our familiar world can be seen in the work of artist Alexa Meade, who paints 3D installations and renders them 2D to our primitive eyes. And to start visualising extra dimensions instead, scientists may also take inspiration from the arts.

Slicing

A good starting point is to turn the question on its head: in his 1884 novella Flatland, EA Abbott wrote about creatures living in fewer dimensions, not more. The creatures of his 2D world experienced 3D objects as cross-sections passing through their plane. An illustration from the book appears below.

Flatland, A Romance of Many Dimensions, EA Abbott

In exactly the same way we may use a computer to show what a cross section of a 4D image would look like in 3D, or in 2D. A 4D cube (a hypercube) may for instance be represented with this slicing method:

Thomas Banchoff, Brown University

Interestingly, both are slices of the same object, but in the top set of images the slicing was started on a corner, and in the second one with a square.

EA Abbott wrote about slicing, but there may have been another way in which his flat creatures observed the 3D sphere. If a 3D sun had shone over it, a shadow would have been cast over the plane: this defines the linear perspective method. It has its foundations in ancient Greece, and modern artists still follow the techniques developed by the Renaissance architect Filippo Brunelleschi, perhaps most famous for building the gigantic dome of Florence cathedral. Jean-François Colonna has some great examples of extra-dimensional objects created using the perspective method, which have all the trappings of abstract art.


Light and shade

In the same way that the light from our sun casts shadows on 2D surfaces, that is, in parallel lines, we may consider a hypothetical 4D sun which casts a shadow of a 4D object onto our 3D world. This is hard to visualise, but easy to program on a computer. A hypercube would then look like this:

from Wolfram Mathworld

This is called a Schlegel diagram. Perhaps it is not immediately obvious how this is related to a shadow, but considering its contour lines may help.
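The shadow construction really is easy to program. Below is a minimal sketch: a hypothetical “light” sits at distance d along the fourth (w) axis, and each vertex of the hypercube is scaled by its distance from it. This is an illustration of the idea, not the actual code behind the diagrams referenced above.

```python
from itertools import product

# Perspective "shadow" of a 4D hypercube in 3D: scale each point by its
# distance from a light placed at w = d, just as a sun casts a 2D shadow
# of a 3D object.
def project_4d_to_3d(vertex, d=3.0):
    x, y, z, w = vertex
    s = 1.0 / (d - w)                # points further from the light shrink
    return (x * s, y * s, z * s)

vertices_4d = list(product((-1.0, 1.0), repeat=4))   # the hypercube's 16 corners
shadow = [project_4d_to_3d(v) for v in vertices_4d]

# Corners with w = +1 land on a large outer cube and those with w = -1 on a
# smaller cube nested inside it: the cube-within-a-cube Schlegel picture.
print(len(shadow))   # 16
```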

If you can imagine adding one dimension, you can imagine adding several. Descriptions of string theory, for example, only make sense when formulated in as many as 11 dimensions. And though the result may grow in complexity with the number of dimensions, the techniques looked at here are not limited to 4D.

Difficulties with visualising physical theories have never proved a valid basis for rejecting them, but they have been an obstacle to understanding. The techniques developed for visualising extra dimensions are a good example of how physicists may borrow and extrapolate techniques developed in the arts world, and how interdisciplinary collaborations may be beneficial to both fields.


Here's how to make the Hajj safer – by better understanding crowd psychology

Muhammad Hamed/Reuters

The crowd crush at the annual Hajj pilgrimage in Saudi Arabia has claimed the lives of more than 700 people and injured at least 850 more. Sadly this is not the first such tragedy to affect the event. The Hajj attracts millions of pilgrims from across the world every year and involves several complex rituals, which means it is always a potentially dangerous event.

In recent years, great efforts have been made to ensure the safety of pilgrims: according to Saudi government sources, more than £200bn has been spent since 1992 on redesigning the infrastructure of the Hajj, which involves events at several sites in and around the city of Mecca. One key way that organisers plan for the safety of crowd events such as the Hajj, as well as parades, carnivals and sporting competitions, is by using computer simulations to model large groups of people.

Two crowds

In a recent systematic review of computer models we drew upon the social identity approach, which suggests a distinction between physical crowds (where people are simply in one place) and psychological crowds (where people in a physical crowd share a common self-definition – a social identity).

A group of people at an event may all see themselves and each other as Muslims, Manchester United supporters, or music lovers, for example. This shared identity affects the behaviour of the crowd and is therefore essential for understanding and predicting crowd movements, including flow and congestion.

Crowd control Muhammad Hamed/Reuters

Recent research has shown that a shared sense of group identity may make psychological crowds easier for their members to cope with, even when tightly packed or very slow moving, because people feel safe within the group. But when several psychological crowds occupy the same physical space, they can inadvertently limit one another's movement.

In a recent (unpublished) study we found that people in one psychological crowd walk more closely together, walk more slowly and walk further distances to stay together than people who are just in physical crowds. Those outside the psychological crowd did not try to walk through it but instead walked around it.

Despite the importance of shared identity to understanding psychological crowds, computer modellers have so far either neglected crowd psychology in their models or treated crowds simply as a mass of identical individuals. Where groups are included within the crowd, the approach has been to model small groups of two to five people. But modellers have assumed that all crowds are simply physical crowds.

As a result, these simulations cannot adequately predict the behaviour of psychological crowds. A key issue, as mentioned, is that a physical crowd may contain more than one psychological crowd – for example, Sunni and Shia. So assuming that the crowd is simply made up of individuals who behave like particles or billiard balls in a mass doesn’t account for a number of features of crowds such as that at the Hajj.
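The difference can be illustrated with a toy simulation. The sketch below is entirely illustrative: the one-dimensional update rule, the `cohesion` parameter and all of the numbers are invented for this example and are not drawn from any published crowd model. Identity-blind agents simply head for the exit like billiard balls, while group-aware agents are also pulled toward the centre of their own group, so members of each psychological crowd end up moving more closely together.

```python
import random

def step(agents, cohesion):
    """One update: each agent moves toward its goal; if cohesion > 0 it is
    also pulled toward the centroid of its own group, a crude stand-in
    for shared social identity."""
    new = []
    for x, group, goal in agents:
        pull = 0.0
        if cohesion > 0:
            members = [a[0] for a in agents if a[1] == group]
            pull = cohesion * (sum(members) / len(members) - x)
        new.append((x + 0.1 * (goal - x) + pull, group, goal))
    return new

def spread(agents, group):
    """Distance between the front and back member of one group."""
    xs = [x for x, g, _ in agents if g == group]
    return max(xs) - min(xs)

random.seed(1)
# Two interleaved groups (e.g. two pilgrim parties) heading for an exit at x = 10.
start = [(random.uniform(0.0, 5.0), i % 2, 10.0) for i in range(20)]

plain, cohesive = list(start), list(start)
for _ in range(20):
    plain = step(plain, cohesion=0.0)         # identical-particle crowd
    cohesive = step(cohesive, cohesion=0.05)  # group-aware crowd

# With cohesion modelled, members of each group stay closer together.
print(spread(plain, 0), spread(cohesive, 0))
```

Real crowd simulators are of course far richer, with 2D geometry, collision avoidance and routing, but the principle is the same: adding a shared-identity term changes the flows and congestion the model predicts.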

Modelling the Hajj

This has several important implications for existing simulations of the event. For example, they can’t predict how different groups favour different locations in the Hajj, such as the Shia preference for worshipping in the open. This is particularly important due to the variety of rituals the pilgrimage involves. A model that treats the crowd as a homogeneous entity also can’t explain how large groups of people will try to stick together within a moving crowd, separating themselves from other groups and creating mass contraflows.

We should be wary of relying exclusively on computer models, however: they cannot give absolute predictions or guarantee safety. We also need to monitor the density and flow of crowds in real time, so that dangerous conditions can be spotted before disaster strikes. But by combining computer modelling with crowd psychology, we can better understand crowd behaviour and develop simulations that make events safer, hopefully avoiding tragedies such as the one we have just witnessed at the Hajj.
