Boldly going into space for 1,000 days presents a series of health risks

Padalka might be keeping fit but we simply don't know what effect repeated space travel can have on our bodies. NASA/wikimedia

Russian cosmonaut Gennady Padalka, the commander of the current crew on board the International Space Station, has broken the record for the longest time spent in space with 803 days. Padalka, who is to return to Earth in September, has previously said he would like to try for 1,000 days on a future mission.

However, space travel significantly alters our bodies. While we don’t know exactly what the cumulative effect of several long journeys to space is, Padalka is at risk of developing a range of health problems – including back problems, osteoporosis (brittle bones), cancer and damage to the nervous system.

Thank gravity for big guns

Living on the Earth’s surface, gravity constantly pulls our bodies downwards, keeping us firmly on the ground. Our muscles have to contract continuously to stand up against this gravitational pull or to lift objects. It causes us to get slightly shorter during waking hours. Gravity also pulls our blood down into our legs and our hearts have to work hard to pump oxygen-rich blood to our brains.

But our bodies are adapted to these conditions. In space, the lack of gravity has profound effects on the human body – and these effects are amplified the longer we stay in the low-gravity environment in space (known as microgravity). While in microgravity, astronauts will typically see their muscles waste away, their bones lose mineral density and their blood reduce in volume.

A thousand days on the ISS could have a high cost. NASA

In space, our muscles waste away simply because they are not used. Reductions in muscle size have been reported after as little as two weeks of exposure to microgravity during space shuttle missions – and significant reductions are experienced after long-duration missions of around six months to the ISS.

The muscles in our legs and torso are the most affected since they are not used as much on the ISS as they are on Earth. Astronauts don’t need to walk or stand upright against gravity. The muscular changes experienced as a result of space flight are very similar to those seen as our bodies age.

This damage occurs despite the fact that astronauts take part in up to two hours of exercise on the ISS each day, leading to significant issues once back on Earth. As a result, astronauts typically have to undergo a rigorous rehabilitation programme to help them stand up straight again.

Taller in space

Travelling to space also affects astronauts’ skeletons significantly. As gravity is not pulling them downwards, their spines lengthen by as much as a few inches over the course of a long-duration mission to the ISS. This increase in the length of the spine is due to an increased volume of fluid in the spinal discs – fluid which is normally squeezed out over the course of the day when we are on Earth.

Do you feel taller? (Mercury astronauts in simulated weightless flight in 1958.) NASA

This effect, combined with the changes to muscle control, makes the spine less stable – leading to lower-back pain both during and after space flight. In fact, astronauts are at much greater risk of a slipped disc within the first year after they return from space.

On Earth, every time we take a step or land after a jump, our bones – especially in our legs – are loaded as a result of gravity. This helps our bones maintain an appropriate density of minerals (including calcium). Since astronauts’ bones are not loaded in this way in space, bone mineral density reduces. The exception to this is the bones of the upper body (the arms, for instance) which are used more in space and can show a slight increase in bone mineral density.

This loss of bone density leads to the bones becoming brittle, similar to people on Earth who develop osteoporosis. Research has predicted that only 50% of the lost bone density will be restored after nine months back on Earth.

Another problem that occurs with a lack of gravity is that the heart does not need to work as hard to pump blood to the brain. Astronauts’ bodies adapt to this by reducing the volume of blood in the body. The effects are not noticed while in space. However, when returning to Earth, astronauts’ blood is suddenly pulled back down towards their feet which leads to the brain not receiving enough oxygen-rich blood. This can lead to dizziness and astronauts are often seen to faint.

Padalka’s verdict

In general, astronauts’ bodies age at an accelerated rate in space, causing significant challenges upon return to Earth. Padalka, who has been on a number of space missions since 1998, will no doubt have experienced many changes to his body. Every time he returned to Earth, his body will have recovered to some extent following a period of intense rehabilitation, but not everything will have returned to normal. The muscles that support the spine, in particular, are known to recover poorly, even six months after a period of disuse.

In addition to facing osteoporosis and back pain, Padalka will also be at greater risk of developing cancers and damaging his central nervous system as a result of prolonged exposure to radiation in space.


Do rats dream of the future?

Do rats dream of electric treats? Starsandspirals, CC BY-SA

Rodents, one might guess, live in the present – seeking out the best rewards they can scurry to. Indeed, the Scottish poet Robert Burns encapsulated this in his poem “To a Mouse”, with the lines:

Still, thou art blest compar’d wi’ me!
The present only toucheth thee:
But Och! I backward cast my e’e,
On prospects drear!
An’ forward, tho’ I canna see,
I guess an’ fear!

Legend has it that Burns wrote the poem after turning a mouse out of its home when ploughing his fields. He felt pity for it, but also envied the mouse for its inability to worry about what the future might bring. However, it seems Burns may have been wrong. New research published by our research team in eLife indicates that rodents do in fact appear to simulate the future, and they do so during sleep/rest periods.

We have known since the 1970s that neurons, called “place cells”, in a brain area called the hippocampus form an organised map of space through their spatially localised patterns of activity. Because each cell is active in a different part of a space, the population of activity from these cells provides a sort of “you are here on the map” signal to the rest of the brain connected to the hippocampus. Place cells are typically recorded in rats, but similar patterns have been observed in humans.

One dogma is that place cells can only form a map during active physical travel through a space. However, we wondered whether this assumption might be wrong. This was because a recent study found that humans with hippocampal damage struggled to imagine future scenarios. When asked to imagine lying on a beach in a tropical bay, for example, the patients described having great difficulty creating a coherent scene in their mind’s eye. We speculated that if place cells not only map space during physical exploration, but also during mental exploration of a future space, this might underlie why the patients were unable to imagine fictitious places. The patients' place cells were damaged making them unable to mentally construct imagined places.

To test this hypothesis, we placed rats on a straight track with a T-junction ahead while recording place cells from their hippocampus. Access to the junction – as well as the left- and right-hand arms beyond it – was prevented by a transparent barrier. One of the arms had food at the end, while the other side was empty. After observing the food, the rats were put in a sleep chamber for an hour. Finally, the barrier was removed and the rats were returned to the track and allowed to run across the junction and on to the arms.

Brain activity and dreams come together in Inception. Diraen, CC BY

During the rest period, the data showed that the place cells that would later provide an internal map of the food arm were active. Cells representing the empty arm were not activated in this way. More specifically, the map was sequentially activated consistent with trajectories leading to and from the food – what we refer to as “pre-play”. This indicates that the hippocampus was simulating or preparing future paths leading to a desired goal.

So if rats are able to simulate future scenarios when resting in a sleep chamber, does this mean that the rats were dreaming of running to the food during the rest period? The truth is we don’t know. We only know humans dream because we can speak to them about their inner experiences after they wake.

We also don’t know whether the activity recorded in our experiment comes from specific periods of sleep in which dreams in humans tend to be reported (for example in REM sleep). However, the idea that such activity patterns in the hippocampus might underlie the content of dreams has been speculated on before and this is thought to have influenced the recent film Inception.

Dreaming – and rats (22:05 minutes in)

In the future it may be possible to relate the activity of place cells recorded in humans to dream content. However, technical challenges of recording enough cells make this difficult. A more tractable project for future work is to establish whether or not the pre-play is behaviourally important.

We found that the greater the interest each rat showed in the unobtainable food, the more pre-play it expressed in its hippocampus. Currently this is based on evidence from just four rats. Future work with larger numbers of rats, and with manipulations of the possible routes to a goal, would help.

Whether rats dream at all remains unclear, but what is clear is that they are capable of processing relevant futures, yet to happen, during their periods of rest. Thus, rats may be more similar to us humans in their capacity to wonder about what the future holds.


The rise and demise of a super-armoured "monster worm" from ancient China

Scary-looking creature but at least it doesn't bite. Credit: Jie Yang

It was partly bald, partly covered in hair and had 15 pairs of legs, 72 spines and two antennae. It’s no wonder that worm-like creatures like Collinsium ciliosum are also known as “Hairy Collins’ Monsters”. The animal, discovered in China, lived over half a billion years ago, during the Cambrian period.

This heavily armoured creature is one of the first early animals to have developed an external skeleton specialised for self defence. It adds to a growing number of weird and wonderful fossils from this dynamic period, unravelling the mysteries of how life on Earth came to be.

Armed to the (non-existent) teeth

Collinsium was discovered in Xiaoshiba – an exceptionally well-preserved fossil site in south China. It’s part of the animal group Lobopodia: worm-like creatures with legs. Lobopodians have existed from the Cambrian (ocean-dwelling) right up to the modern day, with examples such as velvet worms (land-dwelling).

The modern-day cousin of Collinsium – a velvet worm from Ecuador. Geoff Gallice/Flickr, CC BY

Collinsium had two antenna-like features on its head, six pairs of bristly legs and, at the rear, nine pairs of legs with tiny claws. Although Collinsium had numerous pairs of legs, they were most likely not for walking. The bristly legs were used to collect tiny floating particles of food suspended in the surrounding waters, whereas the clawed legs, with ring-like segments, helped with climbing and anchoring it on the ocean floor.

Because the flexible appendages at the front end of Collinsium filtered food, it had a very basic mouth structure. This meant that it had very few oral features, for instance, no teeth. The front section was also covered in tiny hairs, unlike the back end, which was comparatively bald – with the exception of a few small clusters of hairs around glands, known as papillae.

But perhaps the most incredible feature of Collinsium was its spines. These protective structures could be seen running along the creature’s back in rows, a total of 72 spines. Although lobopodians commonly have spines, most have significantly fewer than Collinsium. An example of this is Hallucigenia, which had two spines for each pair of legs, whereas Collinsium had up to five. Hallucigenia also differs from Collinsium in its feeding habits – Hallucigenia didn’t have the bristly, suspension-feeding limbs, as seen on Collinsium. Researchers have referred to Collinsium as “Hallucigenia on steroids”.

Reconstruction of the armoured worm. Credit: Javier Ortega-Hernández

It also appears that the spines on Collinsium, although positioned in rows, could move individually. This meant that it could point each of its spines in different directions – an excellent defensive feature when it comes to protecting yourself from predators.

Too specialised to survive

This specialised mode of life, with extensive protective spines and distinctive limbs, came to an end for lobopodians by the middle Cambrian. It’s believed that Collinsium, alongside similar lobopodians, fitted into a palaeoecological niche during the Cambrian explosion – thriving at a time when ecological and environmental conditions were optimum for the success of this particular creature.

The extinction of Collinsium could therefore have been a result of changes to its local ecology or environment (for example, alterations to the food chain).

Unfortunately, since it’s difficult to fossilise the soft tissues of lobopodia, it is possible that palaeontologists will only ever be able to study a handful of specimens from sites of exceptional preservation, such as Xiaoshiba. Sadly, this is a common problem in palaeontology, but this certainly doesn’t stop incredible discoveries, such as Collinsium, enhancing our understanding of Earth’s dynamic history.


How oversized atoms could help shrink "lab-on-a-chip" devices

Lab-on-a-chip microfluidic devices can manipulate liquids at ever smaller scales. Atdr gs/Wikimedia Commons, CC BY-SA

“Lab-on-a-chip” devices – which can carry out several laboratory functions on a single, micro-sized chip – are the result of a quiet scientific revolution over the past few years. For example, they enable doctors to make complex diagnoses instantly from a single drop of blood.

In the future, shrinking such devices to extremely small sizes, comparable to the liquid molecules themselves, will be a huge challenge; success will depend on our ability to understand how fluids behave under extreme confinement. In a recent study published in the journal Nature Communications, we came up with a new way of unveiling how fluids behave in such “superconfinement” using lumpy particles known as colloids to act as oversized atoms.

Milky rainbows

Atoms are tiny, tiny things. So small, you would not be able to see them under an optical microscope. But what if you could blow up the atoms in size? This is precisely what colloids do: they act as grossly oversized atoms. The technique can replicate many processes in liquids at atomic scales – something that is key for further developing lab-on-a-chip devices.

Colloids are all over the place, even in the milk you probably just poured in your tea. Milk is a water-based mixture, containing sugars, fats and proteins among other stuff. Many of these components aggregate into small lumps about a thousand times smaller than a millimetre in size. Such lumps are what we call colloidal particles.

Would you like some colloidal solution with that? Laura D'Alessandro/Flickr, CC BY-SA

In fact, milk is a very good example of the power that colloids can have in science. By mixing milk and water in a tray and shining light through with a flashlight one can recreate the effect behind the amazing colours you see in sunsets. In both cases, the sunset effect boils down to how light interacts with particles in a fluid.

In the atmosphere, light is scattered by the atoms and molecules, giving the sky its striking colours. However, the small size of the atoms means that you can only see the effect over relatively long distances of many kilometres. With milk, however, this effect is blown-up by the size of the colloidal particles, so you can see a glorious miniature sunset using just a torch and a tray!

Sunset in a jar – T Mantilla

But what exactly are colloids? Colloids are any kind of particles that are small and light enough not to settle immediately if you disperse them in a fluid – such as air or water – but not too small so that they dissolve in that fluid. Colloidal particles can range from 1 nanometre (that’s a millionth of a millimetre) to 1 micrometre in size (1,000th of a millimetre) and can be made of many different components.

Clever colloids

Back in the laboratory, we used a colloidal mix of spherical particles and polymer strands to understand how fluids behave in extremely small channels, such as a drop of water in a nano-fluidic chip device. The size of the particles in our mix is about 200 nanometres, so they fit nicely into our colloidal particle classification.

To give you an idea of how blown-up these atoms are, a water molecule, which is about 0.25 nanometres in diameter, is a mere speck next to the gigantic 200-nanometre colloid. The smart thing about this colloidal mix is that the polymer strands are able to squeeze between the spherical particles, sort of elbowing them out. This effect eventually results in the creation of a two-phase mixture, very similar to having oil separated from water. Crucially, the size of the colloidal “molecules” in our “liquids” is not too small compared to the size of a micro-channel, so we are able to use them as blown-up atoms to study a variety of phenomena in extreme confinement in micro-channels.
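As a rough back-of-the-envelope comparison, for illustration only: 200 nanometres divided by 0.25 nanometres is a factor of about 800, so each colloidal “atom” is roughly 800 times wider than a water molecule – and, scaling by the cube of that ratio, around 500 million times larger by volume.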

These microscopy images show a time sequence of jets and drops forming in superconfined colloidal fluids. Author provided

By changing the size of the channels, we were able to reveal in detail how a fluid interacts with the boundaries encasing it. We then used this understanding to control the formation of drops and jets only a few hundred times larger than the size of a colloidal particle. Crucially, the size of the colloidal particles made it possible to observe the fluid dynamics under such extreme confinement in all its glory using nothing but direct optical techniques such as confocal microscopy – something that would have been impossible to do with a common liquid such as water.

So where to now?

The fluid structures that we have identified in the lab can be very useful in applications that go beyond our colloidal mixtures. For example, simple changes in the channel size can be used to create very small liquid droplets, which in turn can be used for lab-on-a-chip applications acting as drug carriers or miniature beakers for chemical reactions.

But the ability to control drops can also potentially be used to guide the self-assembly of specifically shaped particles – a sort of “colloidal bricks” – that could be used to produce more complex structures such as micro robots, which could, for instance, be used in large swarms to explore environments that are too small for larger robots. It could also help develop micro-based materials, such as high-grade micro-emulsions, which can be used, for example, in cleaning products.

Such applications are not restricted to our colloidal liquids, but are open to many types of liquids, including water and oils, as long as they are contained in very small channels. Using knowledge from one system to understand another is not particular to colloids; it is an underpinning principle of how physics works to make sense of the world around us – and unveiling such generality is perhaps one of the most beautiful aspects of it.


NHS care.data still leaks like a sinking ship, but ministers set sail regardless

'We're not sinking, we're just naturally low in the water.' boat by Roberto Castillo/shutterstock.com

NHS chiefs are pressing ahead with an IT programme that will share identifiable patient records and GP data for uses including medical research, despite it being red-flagged as “unachievable” by a watchdog.

The NHS England care.data programme is among the major projects given the worst rating by the Cabinet Office’s Major Projects Authority review, alongside other NHS projects including the Health and Social Care Network and NHS Choices.

The care.data programme was put on hold in February 2014 following a torrent of criticism which prompted a House of Commons select committee inquiry. Concerns included security and informed consent, the sale of data to commercial companies including insurers and “information intermediaries”, false claims that anonymity could be guaranteed and a complete lack of clarity on the scope and purpose of the project.

In fact the programme resembles a textbook example of the failures and problems that have bedevilled many government IT infrastructure projects. It was flagged red in the previous review and even now – years after the project began – the report remarks that the business case is still “in the progress of being developed”.

However, ministers are pressing ahead. Communications with patients from Blackburn with Darwen, the first clinical commissioning group to be selected for a trial, recently announced that care.data would be starting “at the end of June”. Three more areas are due to join this year, with the rest of England to follow after a satisfactory evaluation.

Re-arranging deckchairs

All of this would suggest that the many problems with care.data have been addressed – unfortunately not. Much has happened, very little of substance has changed, and most problems remain. The programme’s leader, Tim Kelsey, still thinks it was all just a communication problem, and that the benefits have been undersold.

One of the more visible changes is the creation of the National Information Board (NIB) within the Department of Health, focused on applying the benefits of data and information technology to the NHS. The somewhat overreaching sound of its name suggests that perhaps health minister Jeremy Hunt and Kelsey – chair of the NIB and the care.data Programme Board, and NHS England’s national director of patients and information – know something we don’t about the government’s data-sharing agenda. Yet the NIB’s data plan for the next five years barely acknowledges the many failures of the programme’s original plan.

Medical data might be safer strung on a lanyard than in a database. comedynose, CC BY

Transparency and oversight

The most promising step forward was the appointment of Dame Fiona Caldicott as the national data guardian in November 2014. Caldicott is highly respected in the field of medical data ethics, and her report in December raised 52 questions on care.data that needed answering.

We don’t know if they’ve been answered satisfactorily, because answers were drafted for a programme board meeting earlier this year and have not been made public – nor even shared with the care.data Advisory Group. This complete failure of transparency (never mind its promise to share papers and minutes) is one reason to hold little confidence in care.data or those running it.

Consent and information

The lack of informed consent for patients about what would be done with their data was the main reason given for putting the programme on hold. But this still hasn’t been fixed.

Some 700,000 people thought they had opted out of any sharing of their data for any non-clinical purposes. But the Health and Social Care Information Centre, which provides data and statistics on the NHS and under whose remit care.data falls, told parliament that these people would therefore miss out on some preventative, clinical screenings – contrary to assurances. And while this opt-out was promised by Hunt in 2013, HSCIC have indicated that they still don’t know how to enact it, and it has yet to be given any legal basis.

The same lack of legal basis applies to Caldicott’s role as national data guardian (now expected to begin in 2016 by NIB), the promised sanctions for abuse or misuse of health data, and the legal safeguards on data sharing promised following the 2014 public consultation.

Security and privacy

It looked as if, following the response to the Partridge Report on HSCIC data sharing, the approach to privacy and security issues relating to sharing with commercial organisations would improve. In practice, however, medical data is still shared with analytics firms, intermediaries and data brokers like Experian. Even proposals to restrict third parties’ access to data to secure facilities (similar to those for census data), which would alleviate many privacy concerns about misuse of highly sensitive individual-level personal data, are being watered down.

The debate on responsible use of medical data has evolved over the last few years, leading to the Nuffield Bioethics report on the use of healthcare data. Yet despite all the greater understanding we’ve gained, those cheerleading for large-scale commercial exploitation, including Kelsey and minister for life sciences George Freeman, haven’t changed their tune in the slightest. For example, they still advocate sharing genome data without acknowledging the privacy risks.

It’s the complete absence of any political will to divert the ship from this dangerous course that’s perhaps the biggest worry of all. Organisations well-versed in the issues such as medConfidential have suggested constructive solutions to salvage something from the care.data debacle; it seems no one in the Department of Health or NHS England is listening.


Men and women could use different cells to process pain

If only I could shut off my microglia right now. Todd/Flickr

We have known for some time that there are sex differences when it comes to experiencing pain, with women showing a higher sensitivity to painful events compared to men. While we don’t really understand why this is, it seems likely that both biological and psycho-social factors are involved. However, a new study published in Nature Neuroscience suggests that there may be a sex difference in the immune cells involved in the processing of pain signals. The results show that it is time to stop ignoring sex differences in research.

The researchers looked at the immune responses of male and female mice, and found that different immune cells seemed to signal pain. They found that for male mice, microglia, which serve to defend the brain and spinal cord, were important in signalling pain. However, this did not seem to be the case for female mice. Instead white blood cells known as T cells seemed to signal pain.

While we need to be cautious about translating these results to humans, the authors conclude by asking whether we should start thinking about different ways of managing chronic pain in men and women. For example, could drugs be developed that target these different pain pathways and be used in a sex-specific way?

Stop ignoring sex in research

The study adds to a growing body of evidence showing that sex differences are relevant for health. We know that there are important differences in how males and females respond across a range of health conditions. In an era of personalised pain medicine, this raises general questions as to what works best, and for whom.

There is a lot at stake. If a pain response is found in males, it does not automatically mean it will be found in females, and vice versa. Similarly, if a treatment is found to be less effective in one sex, it does not mean it is ineffective for all. Looking at it in this context, have some approaches that might have worked for females been dismissed too early because they were only tested on males?

Researchers are underestimating the need to look at sex differences in animal research. Rama/wikimedia commons, CC BY-SA

Part of the problem is how we do clinical-health research. Historically, women were systematically excluded from clinical trials. Although researchers are getting better at recruiting both men and women, progress is slow. Sometimes these differences are actually viewed as “nuisance variables” to be statistically controlled for.

Unless you go looking for sex differences, how will you know whether they exist and are important? We need to encourage a change in research practice, which means designing studies to allow this to happen. In the US, the main health funding agency, the NIH, now requires researchers to consider the potential effect of sex/gender within their studies. Some medical journals, such as The Lancet and The Journal of the American College of Cardiology, include instructions to authors to consider, and report, sex differences. Interestingly, it is not yet standard practice to see sex differences considered in systematic reviews of treatment efficacy studies for pain.

Since there may be differences in male and female health, we can no longer generalise or ignore sex. Unless there is a good reason not to, males and females should be recruited into research, and sex differences considered.

The study is an important wake-up call, as there is still some way to go before such comparisons become a mainstream part of clinical-health research investigation and reporting practice.


Scientists discover fundamental property of light – 150 years after Maxwell

Scientists have shed light on light. Taras Mykytyuk/Flickr, CC BY-SA

Light plays a vital role in our everyday lives and technologies based on light are all around us. So we might expect that our understanding of light is pretty settled. But scientists have just uncovered a new fundamental property of light that gives new insight into the 150-year-old classical theory of electromagnetism and which could lead to applications manipulating light at the nanoscale.

It is unusual for a pure-theory physics paper to make it into the journal Science. So when one does, it’s worth a closer look. In the new study, researchers bring together one of physics’ most venerable sets of equations – those of James Clerk Maxwell’s famous theory of light – with one of the hot topics in modern solid-state physics: the quantum spin Hall effect and topological insulators.

To understand what the fuss is about, let’s first consider the behaviour of electrons in the quantum spin Hall effect. Electrons possess an intrinsic spin as if they were tiny spinning-tops, constantly rotating about their axis. This spin is a quantum-mechanical property, however, and special rules apply – the electron has only two options open to it: it can either spin clockwise or anticlockwise (conventionally called spin-up or spin-down), but the magnitude of the spin is always fixed.

In certain materials, the spin of the electron can have a big effect on the way electrons move. This effect is called “spin-orbit coupling” and we can get an idea of how it works with a footballing analogy. By hitting a free kick with spin, a footballer can make the ball deviate to the left or the right as it travels through the air. The direction of the movement depends on which way the ball is spinning.

Bend it like Beckham. Ronnie Macdonald/Flickr, CC BY-SA

Spin-orbit coupling causes electrons to experience an analogous spin-dependent deflection as they travel, although the effect arises not from the Magnus effect as in the case for the football, but from electric fields within the material.

A normal electrical current consists of an equal mixture of moving spin-up and spin-down electrons. Due to the spin-orbit effect, spin-up electrons will be deflected one way, while spin-down electrons will be deflected the other. Eventually the deflected electrons will reach the edges of the material and be able to travel no further. The spin-orbit coupling thus leads to an accumulation of electrons with different spins on opposite sides of the sample.

This effect is known as the classical spin Hall effect, and quantum mechanics adds a dramatic twist on top. The quantum-mechanical wave nature of the travelling electrons organises them into neat channels along the edges of the sample. In the bulk of the material, there is no net spin. But at each edge, there form exactly two electron-carrying channels, one for spin-up electrons and one for spin-down. These edge channels possess a further remarkable property: the electrons that move in them are impervious to the disorder and imperfections that usually cause resistance and energy loss.

This precise ordering of the electrons into spin-separated, perfectly conducting channels is known as the quantum spin Hall effect, which is a classic example of a “topological insulator”– a material that is an electrical insulator on the inside but that can conduct electricity on its surface. Such materials represent a fundamentally distinct organisation of matter and promise much in the way of spintronic applications. Read heads of hard drives based on this technology are currently used in industry.

Beginning to see the light

Now, the new study suggests that the seeds of this seemingly exotic quantum spin Hall effect are actually all around us. And it is not to electrons that we should look to find them, but rather to light itself.

In modern physics, matter can be described either as a wave or a particle. In Maxwell’s theory, light is an electromagnetic wave. This means it travels as a synchronised oscillation of electric and magnetic fields. By considering the way in which these fields rotate as the wave propagates, the researchers were able to define a property of the wave, the “transverse spin”, that plays the role of the electron spin in the quantum spin Hall effect.

In a homogeneous medium, like air, this spin is exactly zero. However, at the interface between two media (air and gold, for example), the character of the waves changes dramatically and a transverse spin develops. Furthermore, the direction of this spin is precisely locked to the direction of travel of the light wave at the interface. Thus, when viewed in the correct way, we see that the basic topological ingredients of the quantum spin Hall effect that we know for electrons are shared by light waves.

This is important because there has been an array of high-profile experiments demonstrating coupling between the spin of light and its direction of propagation at surfaces. This new work gives an integrative interpretation of these experiments as revealing light’s intrinsic quantum spin Hall effect. It also points to a certain universality in the behaviour of waves at surfaces, be they quantum-mechanical electron waves or Maxwell’s classical waves of light.

Harnessing the spin-orbit effect will open new possibilities for controlling light at the nanoscale. Optical connections, for example, are seen as a way of increasing computer performance, and in this context, the spin-orbit effect could be used to rapidly reroute optical signals based on their spin. With applications proposed in optical communications, metrology, and quantum information processing, it will be interesting to see how the impact of this new twist on an old theory unfolds.


We've just started work on the technology to power a Star Trek-style replicator

Machine to make anything Shutterstock

Who has never dreamt of having a machine that can materialise any object we need out of thin air at the push of a button? Such machines only exist in the minds of science fiction enthusiasts and the film industry. The most obvious example is the “replicator” that Star Trek characters routinely use to generate a diverse range of objects, helping them escape from even the most impossible of plotlines.

However, scientists might have found a way to build such a dream-like machine. The trick will be to exploit the ever-famous E=mc² equation, known as Einstein’s energy-matter equivalence principle. This equation tells us that mass (the amount of matter a body is made of) is just another form of energy. This means it should be possible to take some mass and directly convert it into pure energy.

This phenomenon is supported by a wealth of experimental evidence. For instance, it provides the energy that keeps atomic nuclei together. If you “weigh” the nucleus of an atom, you will find that it is slightly lighter than the sum of its components. The missing mass is converted into energy, which holds everything together. So far so good, but the equals sign in the equation tells us something even more exciting. We can, in principle, take pure energy and materialise it into mass.
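To get a sense of the scales involved, here is a simple worked example, for illustration only: E = mc² = (0.001 kg) × (3 × 10⁸ m/s)² ≈ 9 × 10¹³ joules. In other words, completely converting just one gram of matter into energy would release roughly the equivalent of 20 kilotons of TNT – on the order of an early atomic bomb – which is why even the tiny mass differences inside atomic nuclei correspond to enormous energies.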

Vacuum – not so empty

How might that be possible? In order to grasp this idea, we need to change our concept of pure vacuum. Classically, vacuum is nothing but a completely empty (and rather boring) region of space. Quantum mechanics instead tells us that vacuum is an extremely busy region of space, where ultra-tiny particles come into existence for extremely short periods of time (shorter than 10⁻²¹ seconds, or a thousandth of a billionth of a billionth of a second).

The particles are quickly annihilated when they collide with a corresponding (anti)particle made from antimatter. Together, these particles and antiparticles, usually referred to as “virtual particles” because they exist for such short periods of time, are a direct consequence of Heisenberg’s Uncertainty Principle.

Now, imagine sending a super-intense laser beam (which is pure electromagnetic energy) into a vacuum. If the laser is intense enough, it could rip these virtual particles away from their antiparticles to such a distance that they will not collide and annihilate. This means you have sent energy into a void region and end up with some real particles with mass.

From the void Shutterstock

There’s only one drawback: you would need to send enough energy to separate the virtual particle-antiparticle pair before they would naturally annihilate each other (remember the 10⁻²¹ seconds?). This appears to be a Herculean task, but recent developments in laser technology are now giving us the opportunity to do so.

Lasers are now able to produce bursts of light that last for tiny periods of time, periods comparable to the time it takes an electron to perform one revolution around the nucleus in the atom. They can also be focused on a region of space smaller than the width of a human hair. To bring things into a bit more perspective, these laser bursts are thousands and thousands of times more powerful than the whole UK electrical grid (although they require relatively small amounts of energy) and billions and billions of times more intense than solar irradiation on Earth.

Ramping up the power

Scientists are notoriously never satisfied, however, and are pushing this limit even further. A major European project is now building the most powerful laser ever generated, the Extreme Light Infrastructure (ELI). This unprecedented project will result, in the next few years, in the creation of a laser system that provides beams with a power of 10 PW (10,000,000,000,000,000 watts). That’s 10 times more powerful than existing state-of-the-art laser facilities.

Theoretical calculations indicate that such a laser is able to “produce” a handful of particles out of a pure vacuum and provide the first experimental evidence that energy can be directly transformed into tangible matter.

We might still be a long way from producing a polished finished object from vacuum, but the first step is now being taken. Once the wheel is set in motion, it will only be a matter of time before a replicator becomes an essential appliance in every household. The only remaining problem will be what to do with the anti-objects that will unavoidably be generated alongside the requested ones.


Galaxy survey to probe why the universe is accelerating

Understanding how galaxies are arranged could be the key to figuring out what causes the expansion of the universe. ESA/Hubble, NASA and S. Smartt (Queen's University Belfast), CC BY

We know that our universe is expanding at an accelerating rate, but what causes this growth remains a mystery. The most likely explanation is that a strange force dubbed “dark energy” is driving it. Now a new astronomical instrument, called the Physics of the Accelerating Universe Camera (PAUCam), will look for answers by mapping the universe in an innovative way.

The camera, which will record the positions of around 50,000 galaxies at once, could also shed light on what dark matter is and how the cosmos evolved.

In the 1990s, astronomers studying exploding stars – supernovae – in galaxies far away discovered that the universe’s expansion was accelerating. This came as a surprise, as scientists at the time thought it was slowing down. With no obvious solution at hand, scientists argued that there must be some sort of mysterious force – dark energy – pulling the universe apart.

Timeline of the universe, assuming a cosmological constant. Coldcreation/wikimedia, CC BY-SA

Fast forward about two decades and we still don’t know what dark energy – thought to make up 71% of all the energy in the universe – actually is. One theory says it can be explained by the “cosmological constant”, a term Einstein introduced into his theory of gravity and later abandoned, which is a measure of the energy density of the vacuum of space. Another argues that it is caused by enigmatic scalar fields, which can vary in time and space. Some scientists even believe that a weird “energy fluid” that fills space could be driving the expansion.

Mapping the sky

Of course, the only way to find out is through observation. After spending six years under design and construction by a consortium of Spanish research institutions, PAUCam was successfully tested out for the first time this month – seeing “first light” on the 4.2 metre William Herschel Telescope on La Palma in the Canary Islands.

Using the information captured by PAUCam, an international team, including researchers from Durham University’s Institute for Computational Cosmology, is being set up to build a unique map of how galaxies are arranged in the universe.

Such a map will contain detailed new information about the basic numbers which govern the fate of the universe and its expansion, and about how the galaxies themselves were made. The map will reveal the extent of structures in the distribution of galaxies. These structures grow due to gravity – if the expansion of the universe is speeding up, then it is harder for gravity to pull matter together in order to build these structures. Knowing the strength of gravity and measuring the size of structures in the galaxy distribution can therefore help us deduce the expansion history of the universe.

Astronomers can map the positions of galaxies on the sky by taking images or photographs. These are projected positions and so do not tell us the distance to a galaxy from the Earth. A galaxy could appear to be very faint because it is at a large distance from us or simply because it is nearby, but is intrinsically faint with few bright stars.

Traditionally, astronomers have used spectroscopy to measure the distance to a galaxy. This technique works by capturing the light from the galaxy and spreading it out into a spectrum according to its wavelengths. In this way, they can investigate the pattern of lines emitted by the different elements in the stars that make up the galaxy. The further away the galaxy is, the more the expansion of the universe shifts these lines to appear at longer wavelengths and lower frequencies than they would appear in a laboratory here on Earth. The size of this so-called “redshift” therefore gives the distance to the galaxy.
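As a rough, simplified illustration of how this works (not a figure from the PAUCam survey itself): the redshift z is defined by 1 + z = (observed wavelength) ÷ (emitted wavelength), so a spectral line emitted at 500 nanometres but observed at 550 nanometres has z = 0.1. For relatively nearby galaxies the distance is then approximately cz/H₀; taking the Hubble constant H₀ ≈ 70 km/s per megaparsec gives (0.1 × 300,000) ÷ 70 ≈ 430 megaparsecs, or roughly 1.4 billion light years.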

Early surveys of galaxy positions painstakingly measured such spectra one galaxy at a time, pointing the telescope at each galaxy in turn. Modern surveys can now record up to a few thousand galaxy spectra in a single exposure.

The camera has been tested using the William Herschel Telescope. wikimedia commons, CC BY-SA

PAUCam will revolutionise survey astronomy by measuring the distances to the tens of thousands of galaxies it can see each time it looks at the sky. It does this by taking 40 photographs or images using special filters that isolate a portion of the light emitted by a galaxy. This allows a quick spectrum to be built up for each galaxy at a fraction of the traditional cost. This spectrum also acts like DNA for each galaxy, encoding information about how many stars it contains and how quickly new stars are being added.

Looking for answers

My team here at Durham will build computer models of the evolution of the universe, which aim to describe how structures like galaxies have developed over 13.7 billion years of cosmic history. The cosmologist’s universe is mostly made up of an unknown substance called dark matter, with a small amount of “normal matter”.

PAUCam will allow cosmologists to test their models for building galaxies by measuring the lumpiness of the galaxy distribution in the new map. This is important because it tells us about the distribution of the dark matter, which we cannot see directly.

We know from previous observations that galaxy clusters contain dark matter. By counting the number of galaxies in a cluster, astronomers can estimate the total amount of (visible) matter in the cluster. By also measuring the velocities of the galaxies, they find that some are moving so fast that they should escape the gravitational pull of the cluster. The reason they don’t is that huge amounts of invisible dark matter increase the gravitational pull. If the galaxies are very clustered – or their distribution is lumpy – then the computer simulations show that the galaxies live inside more massive dark matter structures.

PAUCam will allow us to learn more about an effect called gravitational lensing, in which the mass in the universe bends the light from distant galaxies, causing their images to appear distorted. Scientists can study the distortions to calculate how massive the patch of the universe really is – including the dark matter. This is one of the key probes of dark energy that is planned for the European Space Agency’s Euclid mission, which is scheduled for launch in 2020.

The lensing distortion depends on the lumpiness of the dark matter, which in turn is determined by how fast the universe is expanding. If the universe expands at a fast rate, then it is harder for gravity to pull structures together to make bigger ones. PAUCam will help us to disentangle the signal from gravitational lensing from simple alignments between the orientations of galaxies which develop as they form.

A galaxy survey like PAUCam has never been attempted on this scale before. The resulting map will be a unique resource to help us learn more about how galaxies are made and why the expansion of the universe seems to be speeding up. We hope to have the answer once the PAUCam survey is finished by around 2020.


Government must invest in skills and police resources to tackle cybercrime

There aren't enough skilled investigators to tackle the cybersecurity problem. polygraphus/shutterstock.com

It is estimated that the cost of cybercrime to the UK economy is around £27 billion per year, around 2% of national GDP. Some experts suggest this is too small, excluding as it does important vectors of cybercrime such as malware.

Computer security firm Norton estimates that more than 12.5m people in the UK fall victim to cybercriminals every year – 34,246 cases each day – with an average loss of £144 each. Again, this is probably an underestimation when one considers that many people will be victims of hacks or malware without ever knowing, and so they go unreported.

A global study conducted by the UN Office on Drugs and Crime reported rates of cybercrime – including hacking leading to theft and fraud – of up to 17%, significantly higher than the rates of less than 5% for their conventional equivalents.

Fighting cybercrime is by no means easy. The wide range of technologies and vectors of attack available to cybercriminals and the cross-border nature of these crimes make investigating them difficult. The fragile nature of digital evidence complicates matters further: skilled cybercriminals can erase the tracks and traces they leave behind them. And the intrusive nature of investigating cybercrimes – which typically requires removing computer equipment for analysis – raises privacy issues that make digital forensics an even more complicated task.

Policing cybercrime in the UK

In the context of UK policing, the National Police Chiefs’ Council (formerly ACPO) Core Investigative Doctrine provides a strategic framework and good practice guidelines for forensic investigation of e-crimes. Since 2011, the UK government has adopted a centralised approach as part of its National Cyber Security Programme, with the National Cyber Crime Unit (NCCU), part of the UK National Crime Agency, the central focus for tackling cybercrime in partnership with government agencies such as GCHQ and the Home Office.

The government has committed £650m to the cybersecurity programme to improve the nation’s cyber-defences and resilience. But considering that around 60% of this is to go to GCHQ for intelligence activities, this leaves only £260m for investigation and law enforcement – a figure that does not compare favourably to the estimated cost (£27 billion) of the crimes the NCCU is to investigate.

According to the commissioner of the City of London Police, Adrian Leppard, there are 800 specialist internet crime officers, yet it’s expected that a quarter of them will lose their jobs due to budget cuts in the next two years. Again, considering Norton’s estimate of 34,246 individuals falling victim to cybercrime every day in Britain, the remaining 600 investigators would need to address 57 cases each day of the year – a mission impossible.
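The arithmetic behind these figures is straightforward: Norton’s 12.5m victims a year works out at 12,500,000 ÷ 365 ≈ 34,246 cases a day, and spreading those across the remaining 600 specialist officers gives 34,246 ÷ 600 ≈ 57 cases per officer per day. Similarly, if 60% of the £650m programme goes to GCHQ, only 0.4 × £650m = £260m remains for investigation and law enforcement.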

Skills needed

So the imbalance between the capabilities of organised e-crime groups and the limited capacities of law enforcement agencies is not something that the UK can resolve in the near future. However, some solutions may narrow the gap and confine criminals’ opportunities.

Most obvious is how few university courses there are at undergraduate and especially at postgraduate level in cybersecurity and e-crime forensics that could train the skilled investigators required. Tackling the threat of organised criminals working in cybercrime over the long term requires knowledgeable experts to profile, track, detect, and ultimately provide the information that can lead to their arrest.

At a recent TechUK event, attendees argued that the lack of prosecutions under the Computer Misuse Act in the 25 years since it was introduced suggests the law is not fit for purpose – and the skills required to bring a prosecution under it are at the moment in short supply.

While the lion’s share of resources goes to GCHQ, the targets of its intelligence are not necessarily the criminal gangs of interest to the police. More resources for police agencies are necessary to bring investigative capacities up to the same level of the gangs they’re investigating.

GCHQ has reported that 80% of cyber-attacks can be prevented through better education and awareness among users. Developing regional hubs to promote cybersecurity training and education among general users would be key.

The fact that the self-styled Anonymous “hacktivists” whose attacks on PayPal cost the firm £3.5m were sentenced to only seven and 18 months might suggest that cybercrimes are sentenced lightly. A better understanding among judges and juries of the serious implications of cybercrimes, and greater punishments and fines for financial crimes, could help make cybercrime less rewarding to criminals.


For sporting greats, knowing when to quit is the hardest challenge of all

Nadal in training for Wimbledon EPA

When the men’s seedings for Wimbledon were published, they contained something that was both telling and inevitable. Rafael Nadal, winner of the tournament in 2008 and 2010, was ranked just 10th. No one could doubt that the Spaniard is one of the most formidable players ever to have held a tennis racquet. His 14 grand-slam wins stand second equal with Pete Sampras in the all-time men’s rankings behind Roger Federer’s 17. But at the age of just 29, the growing sense is that his best years are behind him.

To draw an analogy with another sport, it reminds me of the famous quote from the great Liverpool manager Bill Shankly: “Some people believe football is a matter of life and death. I am very disappointed with that attitude. I can assure you it is much, much more important than that".

Burnley days: Clarke Carlisle Anna Gowthorpe/PA

How true his words have been for many who have tried to walk away from the game. Take the sad case of Clarke Carlisle, the former Blackpool and Burnley defender, who finished playing in 2013 and went on to chair the Professional Footballers’ Association. The following December, suffering from depression, he attempted to take his own life by stepping out in front of a lorry.

Carlisle is far from alone. Many high-profile sportspeople face profound psychological struggles at the conclusion of their careers. Retirement comes much earlier than in other professions, where not so long ago people didn’t retire at all. In most sports, even relatively late retirements such as footballer Sir Stanley Matthews at 50 and cricketer Brian Close at 55 are of a bygone age. Nowadays it is rare for the “oldies” in any sport to play in the same draw as the best in the world.

Gravity catches up

The longevity of a player’s career is largely determined by the physical demands of the sport, of course. Rugby players can lengthen their careers by carefully limiting the number of games played each season. But for most, by the time they reach their mid-30s the repeated collisions have taken their toll. They are likely to be outperformed by younger players, who inevitably recover more quickly.

Endurance sports such as running and rowing demand such heavy training that early retirement is a wise long-term health decision. More than 25 years of training have left marathon-runner Paula Radcliffe with a chronic foot injury, for example. Each mile for Paula is around 450 foot strikes – and during heavy training periods she runs more than 100 miles a week for months on end. She will carry that injury for the rest of her life.
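On those figures, the cumulative load is easy to picture: 450 foot strikes per mile across 100 miles a week is roughly 45,000 impacts a week, or well over half a million across a typical 12-week training block – a rough back-of-the-envelope calculation, but it shows why such injuries accumulate.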

Regardless of whether you are forced out by injury, however, retirement from sport is rarely simple. There are no clear statistics on what proportion of players make the choice to end their career, but in all cases the implications are the same – to withdraw from an activity which has given day-to-day life meaning and structure from childhood is incredibly difficult.

Not surprisingly, the early research linked retirement from sport with the emotional grieving process experienced by people who have received a terminal diagnosis. It argued that you could apply the famous Kübler-Ross stages of grief from 1969 in the same way: denial, anger, bargaining, depression and finally acceptance. This has been criticised by many researchers and applied practitioners, but the stages do help us to understand the emotional complexity of the experience.

For many, the big hurdles to overcome are psycho-social – the extent to which the performer viewed themselves as a performer, the loss of their “sporting identity”, the structure the sport gave them and the social contact with people with whom they have shared a very significant part of their emotional lives. Such people often feel irreplaceable, at least in the short term. And to make all this worse, by withdrawing from regular training, players are not getting their neurochemical “fix” of endorphins, dopamine and serotonin. The world seems a bleaker, less exciting and more stressful place as a result.

When to go

The public spectacle of players coping with the end of their careers can be painful. Putting his personal life to one side, watching Tiger Woods’ current struggle undermines the memories of when he dominated the world of golf. His body has been battered into submission and he needs to stop.

There are always a lot of retirements after a major game – partly because goals have been achieved, but also because there is a “pause” to reflect on how much commitment and sacrifice is required for the next peak. There is often a sense of relief that it’s over. The question I pose to athletes at this stage is: are you ready for this? To approach that, I begin with a simple decisional balance of the push and pull factors. This can be extremely revealing, as it clarifies in the performer’s mind their motives and how much commitment is required.

Bill Shankly retiring in 1974. PA

This is exactly the sort of thought process that Rafael Nadal should now begin. His knees have been his Achilles heel for almost a decade and he has to take regular time out for treatment. Despite his recent victory on the grass at the Mercedes Cup in Germany, he is on borrowed time. Retiring at 29, having achieved everything he set out to achieve, would certainly not be a failure. The same could be said of Roger Federer, who is seeded second for the tournament a few weeks short of his 34th birthday. If he announced that Wimbledon 2015 was his final appearance, the crowds would flock to salute his achievement and wish him well as he began the next chapter of his life.

And so back to Bill Shankly, a man who, to the surprise of no-one, coped very poorly with retirement. In the years after he stepped down from Liverpool in 1974 at the age of 60, he regularly turned up to watch his team train through the fence at the Melwood ground. It is a stark reminder that, for many, planning for retirement from a sporting role needs to look beyond sport. The worst thing you can have is a daily reminder that you are no longer doing what you did best.


Miniaturisation will lead to 'smart spaces' and blur the line between on and offline

A computer-on-a-stick is the start, but they'll get smaller and smarter yet. Lenovo

Lenovo, the Chinese firm that bought up IBM’s cast-off PC business, has announced a miniaturised computer not much larger than a smartphone, which can be connected to any screen via an HDMI connection.

Advances in electronic component manufacturing and integration have resulted in the large-scale miniaturisation of computer systems. This has enabled the latest system-in-package and system-on-a-chip approaches, where the processor and other functionality usually provided by many microchips can be incorporated into a single silicon chip package.

Lenovo’s Ideacenter Stick 300 runs Windows 8 or Linux, is powered by a micro-USB connector and comes fitted with a new Intel Bay Trail CPU, 2GB RAM, 32GB flash storage, an SD card reader, Wi-Fi – even speakers.

Lenovo isn’t the first to shrink the PC down to pocket size. Intel’s Compute Stick is another dongle-sized computer with similar specs released this year.

Intel’s Compute Stick is another effort to shrink the PC to pocket size. Intel

The Raspberry Pi, now upgraded to its second major release, was probably the first to provide the functionality of a desktop or laptop computer on a credit-card-sized electronic board. Over five million Raspberry Pi computers have been sold since its launch in 2012.

Google has used its stripped-down Chrome OS based on its Chrome browser to reduce a Chromebook (Chrome OS-powered laptop) down to the Chromebit. While the Chromebit is no larger than a USB memory stick, it’s markedly less powerful than Intel’s offering, as it is powered by the Rockchip RK3288, an ARM processor, which makes it comparable in power to a smartphone.

Google’s Chromebit, in more colours than black. Katie Roberts-Hoffman/Google

There are other stick-sized computers running low-power ARM processors capable of running Android, such as Cotton Candy or Google Chromecast. These plug into a digital television to play video directly to the TV or from internet streaming services such as Netflix – but not much else.

The appeal of small

Computers this small are attractive to many organisations, such as schools and universities, which need to equip functional computer laboratories at minimum cost while taking up as little space as possible. Low-power devices also keep electricity costs down.

A typical desktop computer uses about 65-250 watts (plus 20-40 watts for an LCD monitor) – considerably more than a typical PC-on-a-stick at about 10 watts. There are also obvious business uses, such as digital signage and advertising when connected to screens or projectors.
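To put those wattages in perspective, here is a rough, back-of-envelope calculation of what they mean over a year of use. The eight-hours-a-day usage pattern and the 15p-per-kWh electricity price are assumptions chosen purely for illustration, not figures from any manufacturer or supplier.

```python
# Back-of-envelope energy comparison for the figures quoted above.
# The usage pattern (8 hours/day) and tariff (0.15 GBP/kWh) are
# illustrative assumptions only.
HOURS_PER_DAY = 8
DAYS_PER_YEAR = 365
PRICE_PER_KWH = 0.15  # GBP, assumed

def annual_usage(watts):
    """Return (kWh used per year, cost per year in GBP) for a device drawing `watts`."""
    kwh = watts * HOURS_PER_DAY * DAYS_PER_YEAR / 1000
    return kwh, kwh * PRICE_PER_KWH

for name, watts in [("desktop PC plus monitor", 250 + 30), ("PC-on-a-stick", 10)]:
    kwh, cost = annual_usage(watts)
    print(f"{name}: about {kwh:.0f} kWh and {cost:.0f} GBP per year")
```

On those assumptions a stick computer costs a few pounds a year to run, against over a hundred for a high-end desktop and monitor; multiplied across a computer laboratory, the saving is substantial.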

This new round of computer miniaturisation marks a third wave of computerisation. First there were room-sized computers shared between many users – the mainframe era. These time-sharing systems gradually disappeared as computers were miniaturised, replaced by the one-computer-per-user model of the personal computer, or PC, era. Today one person may have many computers – desktop and laptop PCs, smartphones, compute sticks – accessible anywhere and everywhere. Known as ubiquitous or pervasive computing, this is the third wave.

A smart, mobile future

As computing devices grow smaller, the aim is for them to become more connected and more integrated into our environment, with the technology fading into our surroundings until only the user interface remains perceptible to users. Ubiquitous computing is an emerging discipline that brings computing into our living environments, makes those environments sensitive to us and has them adapt to our needs. By enriching an environment with appropriate interconnected computing devices, that environment can sense changes and support decisions that benefit its users.

There is growing interest in these smart spaces, which use miniaturised computing technologies to support our daily lives more effectively: smart offices, classrooms and homes that allow computers to monitor and control what is happening in the environment.

Apple’s HomeKit and Google’s Nest are a start in this direction, providing the hardware and software to allow home automation. A smart home that monitors temperature and movement could allow the elderly to remain self-sufficient and independent in their own homes, for example, and voice-activated devices could help with everyday tasks such as ordering the shopping. A smart office could remind staff of upcoming meetings, turn the lights on and off, or control heating and cooling efficiently. A smart hospital ward could monitor patients and warn doctors and nurses of potential problems or human error.

The European Commission’s Smart Anything Everywhere vision drives research and development in this area, covering evolution and disruptive innovation across the field of computing. From the Internet of Things, smart cities and smart spaces down to nano-electronics, the applications and benefits of ever-greater miniaturisation of computers are endless.


Europol tasked with online search-and-destroy mission to combat Islamic State

Terrorism has moved online, and policing must follow. ISIS by GongTo/Shutterstock.com

Europol has set up a Europe-wide unit to search and remove social media accounts run by or linked to the terrorist group Islamic State (IS) in an effort to tackle the growing threat of unopposed jihadi propaganda online.

The specialist team will be modelled on the UK’s Counter-Terrorism Internet Referral Unit (CTIRU), a joint Scotland Yard and Home Office unit, and will aim to take down IS-affiliated sites within two hours while providing information to other counter-terrorist investigators.

IS has so far demonstrated its effective use of social media for propaganda. IS members living across northern Syria and north-western Iraq use their personal social media accounts to spread their message worldwide, and this decentralised approach has proven hard to tackle.

It is estimated that more than 25,000 foreign fighters have joined the group in this region, their daily messages reaching a global audience in various languages. These social media accounts have been used to recruit foreign fighters, encourage women to travel to the region to become jihadi brides and to encourage families from around the world to join IS.

It’s this growing number of citizens flowing into Syria and Iraq that has led Europol’s Director, Rob Wainwright, to warn of the problems faced by European police forces trying to monitor terrorists' online communications. Tackling the propaganda is made more difficult by the fact that suspects in Syria and Iraq are effectively out of reach.

Use of the more hidden, harder-to-reach areas of the web – the dark web – and encrypted communications make it harder still. Wainwright has added his voice to others in law enforcement who have warned tech firms to consider the impact of sophisticated encryption on investigations.

On Twitter alone, Wainwright believes IS has up to 50,000 accounts, tweeting up to 100,000 messages a day. A study by researchers at the Brookings Institution put the number of accounts as high as 90,000.

Rita Katz of the SITE Intelligence Group has also highlighted the difficulty intelligence agencies and police face in monitoring social media and encrypted electronic communications. IS circumvents the blocking of its accounts by using multiple back-up accounts, urging followers to follow up to six accounts tweeting the same message. Katz believes that IS on Twitter is a real threat: a launch pad for recruitment, for encouraging lone-wolf attacks, and for sending dangerous messages to every corner of the world.

Social media is a major source of recruitment propaganda. flags by Steve Allen/shutterstock.com

Europe’s response

The use of the internet to facilitate radicalisation and terrorism was recognised by the Council of the European Union in March 2015. From that decision has emerged the Europol Internet Referral Unit, tasked with co-ordinating and sharing information about terrorist and extremist online content.

This builds on Europol’s Check the Web initiative from 2007. But while that initiative had success early on in its existence with child abuse and human trafficking investigations, it has had limited success in tackling terrorism, especially since the Snowden revelations in 2013, and so has struggled to counter IS. This may reflect the difficulty investigators face in securing co-operation from telecoms providers and ISPs in order to access the details of suspected terrorists. Telecoms firms adopt attitudes that often reflect their customers’ concerns over privacy.

Such concerns about the growth of a surveillance society and the need to protect individuals’ right to privacy have grown since the release of documents by former US National Security Agency (NSA) contractor Edward Snowden, which revealed that the NSA and its UK counterpart GCHQ had conducted surveillance beyond their lawful powers.

An advantage of Europol taking the lead in monitoring IS is that privacy and data protection rights are deeply embedded in EU law. These rights apply to Europol too, since it became a legal EU body under the Treaty of Lisbon in 2009. This provides an important chain of accountability, with direct scrutiny by the European Court of Justice (ECJ) possible.

The ECJ has recently shown that it is prepared to be ruthless in protecting privacy and data protection rights, in a case in which it found the EU’s 2006 Data Retention Directive itself invalid. The ECJ held that legislation must lay down clear and precise rules governing the scope and application of surveillance measures, as well as imposing minimum safeguards to prevent the misuse of data.

This would also apply to Europol’s terrorist monitoring unit, and with the right safeguards in place Europol is likely to find it easier to win the co-operation of telecoms firms and ISPs, which in turn will make it a more effective unit. It is still a difficult task, of course, but it’s a step in the right direction.


How the parrot got its chat (and its dance moves)

Who's a clever boy then? D Coetzee/Flickr, CC BY-SA

Many animals – including seals, dolphins and bats – are able to communicate vocally. However, parrots are among a select few that can spontaneously imitate members of another species. A study has now pinpointed the region in the brain that may be allowing this to happen – the region that is also involved in controlling movement. The finding could perhaps also explain the fact that parrots, just like humans, can talk and dance.

We know that birds that can sing, including parrots, have distinct centres in their brain supporting vocalisations, called the “cores”. But, exclusively in parrots, around these there are outer rings, or “shells”. Surrounding this is a third region supporting movement. This is an older pathway that is shared by vertebrates. To find out more about what the unique shell system actually does, the research team analysed the expression of genes in these pathways in nine different species of parrot. They focused on ten genes that we know to be more active in the song regions of birds' brains compared to other parts of the brain.

They found that parrots, when compared to other birds, have a complex pattern of specialised gene expression in all three of these brain regions. That means that most of the vocal learning specific to parrots, such as imitation, must be taking place in the shell region and in the part of the brain that controls movement. This is surprising, as previous work had assumed that only the dedicated core system would be involved in vocal learning and that the shells had nothing to do with talking.

My own research has shown that it is the connections between brain regions controlling cognitive and motor skills that support language in humans.

The researchers also examined songbirds and hummingbirds and found that the shell regions were indeed unique to the parrots. However, they said future research would have to clarify the exact mechanisms involved in imitating.

Imitation game

That this shell system is observed in so many species of parrot – including the kea, the most ancient parrot species known – suggests that these vocalisation abilities evolved around 29m years ago. For comparison, that is more or less the time when humans’ ancestors are believed to have branched off from other primates.

The researchers hypothesise that this shell structure evolved after the core system for singing in birds was duplicated in the brain, with the shell centre developing new functions such as mimicking. So studying the shell structure in parrots could help us identify other mysterious duplications that could have led to certain brain functions in humans.

Might be hard to believe but parrots have a lot going on upstairs. Courtesy of Jonathan E. Lee, Duke University

Only parrots, humans and certain types of songbird can mimic other species. The fact that species as different as birds and humans share this behaviour is a clear example of “convergent evolution,” in which two species independently evolve structures supporting similar behaviours.

Imitation requires significant brain power and complex, specialised processes. For example, acoustic information must be represented, its organisation decoded and finally the sound reproduced. The complex specialisation of the core, shell and motor systems in parrots supports these processes, enabling these species to couple auditory information from the environment with the fine-grained behaviours necessary to reproduce those sounds. There is currently no evidence suggesting that parrots have any special kind of articulators for producing spoken language. Rather, their brains seem to be doing the extra work.

Let’s dance

Interestingly, the authors also note that humans and parrots belong to another select set of animals: those that synchronise their body movements to the rhythm of a beat while listening to music. That is, unlike almost every other animal in the world, parrots and humans spontaneously dance (strangely enough, that group also includes elephants, which have demonstrated an ability to move along with music).

In parrots, such dancing is associated with the non-vocal motor regions surrounding the shell – which supports the possibility of a general capacity for learning regularities in the sounds they hear and coupling them with behaviour.

The study is a big step forward in our effort to understand what makes parrots so different from other birds. Indeed, the researchers themselves say they were surprised that the brain structures they discovered had gone unrecognised for so long.


How computers are learning to make human software work more efficiently

Need a computer doctor? Dial 100110011001 agsandrew

Computer scientists have a history of borrowing ideas from nature, such as evolution. When it comes to optimising computer programs, a very interesting evolutionary-based approach has emerged over the past five or six years that could bring incalculable benefits to industry and eventually consumers. We call it genetic improvement.

Genetic improvement involves writing an automated “programmer” that manipulates the source code of a piece of software through trial and error, with a view to making it work more efficiently. This might include swapping lines of code around, deleting lines and inserting new ones – very much like a human programmer. Each manipulation is then tested against some quality measure to determine whether the new version of the code is an improvement over the old one. It is about taking large software systems and altering them slightly to achieve better results.
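To make the idea concrete, here is a minimal, purely illustrative sketch of that trial-and-error loop in Python. The toy program, its test cases and the three mutation operators are inventions for this example – real genetic-improvement systems work on far larger codebases, with far richer edit operations and quality measures.

```python
import random

# A toy "program" represented as a list of source lines. The redundant
# second loop makes it do more work than necessary - an easy target.
ORIGINAL = [
    "def total(xs):",
    "    s = 0",
    "    for x in xs:",
    "        s += x",
    "    for x in xs:",   # redundant pass over the data
    "        s += 0",
    "    return s",
]

TESTS = [([1, 2, 3], 6), ([], 0), ([10, -4], 6)]

def fitness(lines):
    """Score a candidate: tests passed first, then fewer lines as a crude
    proxy for 'does less work'. Broken candidates score (0, 0)."""
    env = {}
    try:
        exec("\n".join(lines), env)
        passed = sum(env["total"](inp) == out for inp, out in TESTS)
    except Exception:
        return (0, 0)
    return (passed, -len(lines))

def mutate(lines):
    """Apply one random edit - delete, duplicate or swap a line - much as
    a (very naive) automated programmer might."""
    new = list(lines)
    op = random.choice(["delete", "duplicate", "swap"])
    i = random.randrange(1, len(new))  # never touch the 'def' line
    if op == "delete" and len(new) > 2:
        del new[i]
    elif op == "duplicate":
        new.insert(i, new[i])
    else:
        j = random.randrange(1, len(new))
        new[i], new[j] = new[j], new[i]
    return new

best = ORIGINAL
for _ in range(2000):                        # trial and error
    candidate = mutate(best)
    if fitness(candidate) >= fitness(best):  # keep changes that are no worse
        best = candidate

print("\n".join(best))
print("score (tests passed, -lines):", fitness(best))
```

Run it and the redundant second loop tends to be deleted within a few hundred iterations, leaving a shorter program that still passes every test – a miniature version of altering a system slightly to achieve better results.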

The benefits

These interventions can bring a variety of benefits in the realm of what programmers describe as the functional properties of a piece of software. They might improve how fast a program runs, for instance, or remove bugs. They can also be used to help transplant old software to new hardware.  

The potential does not stop there. Because genetic improvement operates on source code, it can also improve the so-called non-functional properties. These include all the features that are not concerned purely with the input-output behaviour of programs, such as the amount of bandwidth or energy the software consumes. These are often particularly tricky for a human programmer to deal with, given the already challenging problem of building correctly functioning software in the first place.

We have seen a few examples of genetic improvement beginning to be recognised in recent years – albeit still within universities for the moment. A good early one dates from 2009, when an automated “programmer” built by the University of New Mexico and the University of Virginia fixed 55 out of 105 bugs in various kinds of software, ranging from a media player to a Tetris game. For this it won $5,000 (£3,173) and a Gold Humie Award, which is given for achievements produced by genetic and evolutionary computation.

In the past year, UCL in London has overseen two research projects that demonstrate the field’s potential (full disclosure: both have involved co-author William Langdon). The first involved a genetic-improvement program that took a large, complex piece of software with more than 50,000 lines of code and made it run 70 times faster.

The second carried out the first automated wholesale transplant of one piece of software into a larger one by taking a linguistic translator called Babel and inserting it into an instant-messaging system called Pidgin.

Nature and computers

To understand the scale of the opportunity, you have to appreciate that software is a unique engineering material. In other areas of engineering, such as electrical and mechanical engineering, you might build a computational model before you build the final product, since the model allows you to push your understanding and test a particular design. Software, by contrast, is its own model: a computational model of software is still a computer program. It is a true representation of the final product, which maximises your ability to optimise it with an automated programmer.

Thank you, Mr Darwin Everett Historical

As we mentioned at the beginning, there is a rich tradition of computer scientists borrowing ideas from nature. Nature inspired genetic algorithms, for example, which crunch through the millions of possible answers to a real-life problem with many variables to come up with the best one. Examples include anything from devising a wholesale road distribution network to fine-tuning the design of an engine.
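For readers who have not met one before, here is a deliberately small sketch of a genetic algorithm in Python. The problem it solves – maximise the number of 1s in a 30-bit string – is a textbook stand-in; a real application would encode route choices or engine parameters in the genome instead, but selection, crossover and mutation work in the same way.

```python
import random

# Toy problem ("OneMax"): find the 30-bit string containing the most 1s.
GENOME_LEN, POP_SIZE, GENERATIONS = 30, 40, 60

def fitness(genome):
    return sum(genome)                       # count the 1s

def pick(population):
    """Tournament selection: the fitter of two random individuals breeds."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)    # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population = [mutate(crossover(pick(population), pick(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "genome:", "".join(map(str, best)))
```

After a few dozen generations the population typically converges on, or very close to, the all-ones string; pointing the same machinery at a different problem is largely a matter of swapping in a different fitness function.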

Though the evolution metaphor has become something of a millstone in this context, as discussed here, genetic algorithms have had a number of successes, producing results that are comparable with, or even better than, those achieved by humans.

Evolution also inspired genetic programming, which attempts to build programs from scratch using small sets of instructions. It is limited, however. One of the many criticisms levelled at it is that it cannot even evolve the sort of program that would typically be expected of a first-year undergraduate, and it will therefore not scale up to the huge software systems that are the backbone of large multinationals.

This makes genetic improvement a particularly interesting deviation from this discipline. Instead of trying to rewrite the whole program from scratch, it succeeds by making small numbers of tiny changes. It doesn’t even have to confine itself to genetic improvement as such. The Babel/Pidgin example showed that it can extend to transplanting a piece of software into a program in a similar way to how surgeons transplant body organs from donors to recipients. This is a reminder that the overall goal is automated software engineering. Whatever nature can teach us when it comes to developing this fascinating new field, we should grab it with both hands.
