Even educated fleas do it ... but is animal sex spicier than we thought?

Fly me to the moon. Nednapa

There’s an idea circulating that humans are the only animal to experience sexual pleasure; that we approach sex in a way that is distinct from others. As with many questions about sex, this exposes some interesting facts about the way we discuss the subject.

On one level, the question of whether humans and nonhumans experience sex in the same way is fairly simply dismissed: how would we know? We cannot know how a nonhuman experiences anything – they can’t be asked. Sex as an experiential phenomenon for nonhumans is, quite simply, inaccessible. Science is obliged to propose questions that are answerable, and “how does a leopard slug experience sex?” is, at time of writing, about as unanswerable as they get.

Having said that, we can make educated guesses about whether sex is pleasurable for other species. Sex would be a very strange thing to seek if it didn’t bring some form of pleasure. It increases risk of disease, it wastes energy, it can seriously increase the likelihood of something bigger coming along and eating you (seriously, check out leopard-slug reproduction).

There’s no reason why an animal should seek sex unless they enjoy it. It is often proposed that an inherent “drive to reproduce” explains nonhuman sexual activity, but that is not an alternative here: if animals possess an instinct to reproduce, it needs to function somehow – and pleasure is a fairly basic motivator. The hypothesis that all sexually reproducing species experience sexual pleasure is, in itself, quite reasonable – as would be the hypothesis that animals find eating pleasurable.

Peak performance

This hypothesis about sex has been tested. Since the word “pleasure” is quite vague, scientists have tended to focus on orgasms. Because orgasm is a particularly intense form of sexual pleasure for many people, the logic has been that if non-humans experience orgasm, they are almost certainly experiencing pleasure.

Given that we are most familiar with human orgasms, scientists have unsurprisingly looked for behavioural and physical correlates of what we sometimes experience – shuddering, muscular rigidity, a cessation of movement, vocalisation, changes of facial expression, ejaculation. None of these are guaranteed, and consequently we should not expect them necessarily to be associated with sex in other species. But using this method, most commonly to study non-human primates, the animals perhaps most likely to display responses similar to humans, scientists have detected orgasm in many different species including macaques, orangutans, gorillas and chimpanzees.

In fact, very few primatologists doubt that non-human primates experience orgasm – at least, male non-human primates. There is debate as to whether female primates (including humans) experience sexual pleasure in the same way male primates do, which raises some fairly important questions about how Western culture views female sexual agency. But some detailed studies of the stump-tailed macaque have suggested that females of this species, at least, demonstrate a capacity for orgasm.

‘The post-coital twig’ Funny Solution Studio

One size fits all

Narrowing the totality of the “experience of sexual pleasure” down to the moment of orgasm is problematic, though. It is the result of the pioneering work of Masters and Johnson dating from 1966. They focused sexual pleasure on orgasm by proposing a four-stage biomedical framework of excitement, plateau, orgasm and resolution. Despite much criticism, it entered intellectual and public consciousness as a description of “normal” sex, involving genitals and aimed at producing orgasms.

But while this may describe sex for many, it excludes an awful lot of people. A brief survey of the various things that humans get up to quickly indicates that sex isn’t necessarily focused on orgasm or genitals. Focusing sex on genitals and orgasm only makes sense if we assume that the central function of sex is reproduction – exactly the same assumption that seems to lie behind scientific enquiries into sexual pleasure in other species.

Various cultures maintain that sex is not connected to conception, though – most famously the Trobriand Islanders of the South Pacific. New reproductive technologies have meanwhile separated sex and reproduction: it is no longer necessary for people to have sex in order to conceive. This shouldn’t come as much of a surprise, given that people have more sex than they have children. The yoking of sex to reproduction to the exclusion of pleasure can be traced to the Victorian era, and is the consequence of all sorts of exciting historico-political processes that would take a whole separate article to explain, but it seeped into all aspects of Western culture, including science.

‘Mind where you’re putting that trumpet’ RCKM594

Not to suggest that sex isn’t involved in reproduction. The gamete exchange that is necessary for conception to occur is, in general, the result of some form of contact between bodies. But when people say that “humans are the only species to have sex for pleasure” they are really saying that “humans are the only species that has non-reproductive sex”.

In fact, sex may well serve a number of other functions. Sex may bond animals together, or it may cement a dominance hierarchy – as in bonobos, for example, one of humans’ closest relatives. These functions may be extremely important, especially for social animals, and would likely only be feasible if sex were in itself a source of pleasure.

There is also no shortage of examples where non-human sex has nothing to do with reproduction at all. Females of many species mate with males when they are non-fertile (marmosets for example). And same-sex sexual behaviour, which is definitionally non-reproductive, occurs in every vertebrate species in which it has been looked for, along with some non-vertebrates (bedbugs, for example, or fruit flies).

This evidence alone should lead us to expect that many animals experience sexual pleasure in much the same way that humans do – that the pleasure involved in sex leads many animals to seek it in non-reproductive contexts, and that this aspect of sexuality is not as unique as humans may like to think. This insight is surely vital to understanding sex in other species, not to mention all other aspects of their behaviour too.


Building blocks of life found among organic compounds on Comet 67P – what Philae discoveries mean

The building blocks of life are lurking on the dark and barren surface of Comet 67P. ESA/Rosetta/NAVCAM, CC BY-SA

Scientists analysing the latest data from Comet 67P Churyumov-Gerasimenko have discovered molecules that can form sugars and amino acids, which are the building blocks of life as we know it. While this is a long, long way from finding life itself, the data shows that the organic compounds that eventually gave rise to organisms here on Earth existed in the early solar system.

The results are published as two independent papers in the journal Science, based on data from two different instruments on comet lander Philae. One comes from the German-led Cometary Sampling and Composition (COSAC) team and one from the UK-led Ptolemy team.

The data finally sheds light on questions that the European Space Agency posed 22 years ago. One of the declared goals of the Rosetta mission when it was approved in 1993 was to determine the composition of volatile compounds in the cometary nucleus. And now we have the answer, or at least, an answer: the compounds are a mixture of many different molecules. Water, carbon monoxide (CO) and carbon dioxide (CO2) are all present – this is not too surprising, given that these molecules have been detected many times before around comets. But both COSAC and Ptolemy have found a very wide range of additional compounds, which is going to take a little effort to interpret.

New images show Philae’s landing spots on comet when bouncing around and taking measurements. ESA/ROSETTA/NAVCAM/SONC/DLR

At this stage, I should declare an interest: I am a co-investigator on the Ptolemy team – but not an author on the paper. But the principal investigator of Ptolemy, and first author on the paper, is my husband Ian Wright.

Having made this clear, I hope that readers will trust that I am not going to launch into a major diatribe against one set of data, or a paean of praise about the other. What I am going to do is look at the conclusions that the two teams have reached – because, although they made similar measurements at similar times, they have interpreted their data somewhat differently. This is not a criticism of the scientists; it is a reflection of the complexity of the data and the difficulties of disentangling mass spectra.

Deciphering the data

What are the two instruments? And, perhaps more to the point, what exactly did they analyse? Both COSAC and Ptolemy can operate either as gas chromatographs or mass spectrometers. In mass spectrometry mode, they can identify chemicals in vaporised compounds by stripping electrons from the molecules and measuring the mass and charge of the resulting ions (the mass-to-charge ratio, m/z). In gas-chromatography mode they separate a mixture on the basis of how long it takes each component to travel through a very long and thin column to an ionisation chamber and detector.

Either way, the result is a mass spectrum, showing how the mixture of compounds separated out into its individual components on the basis of the molecular mass relative to charge (m/z).

Unfortunately, the job doesn’t end there. If it were that simple, then organic chemists would be out of a job very quickly. Large molecules break down into smaller molecules, with characteristic fragmentation patterns depending on the bonds present in the original molecule. Ethane (C2H6), for example, has an m/z of 30, which was seen in the spectra. So the peak might be from ethane, or it might be from a bigger molecule which has broken down in the ionisation chamber to give ethane, plus other stuff.

Then again, it might be from CH2O, which is formaldehyde. Or it might be from the breakdown of polyoxymethylene. Or it might be from almost any one of the other 46 species which have an m/z of 30. Figuring out what it is exactly is a tough job and the main reason why I gave up organic chemistry after only a year – far too many compounds to study.
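To see why a single peak is so ambiguous, it helps to add up the nominal (integer) masses of a few plausible candidates. The short Python sketch below is purely illustrative – it is not the COSAC or Ptolemy analysis software, and the candidate formulas are just examples:

```python
# Illustrative only: many different formulas share the same nominal mass,
# so a single peak at m/z = 30 cannot identify a compound on its own.
NOMINAL_MASS = {"H": 1, "C": 12, "N": 14, "O": 16, "S": 32}

def nominal_mass(formula):
    """Sum the nominal (integer) masses for a formula given as {element: count}."""
    return sum(NOMINAL_MASS[element] * count for element, count in formula.items())

# A few candidates for a singly charged ion at m/z = 30.
candidates = {
    "ethane C2H6":       {"C": 2, "H": 6},
    "formaldehyde CH2O": {"C": 1, "H": 2, "O": 1},
    "nitric oxide NO":   {"N": 1, "O": 1},
}

for name, formula in candidates.items():
    print(f"{name:20s} nominal mass = {nominal_mass(formula)}")
# Every line prints 30, which is why the whole fragmentation pattern,
# not any single peak, has to be matched against candidate compounds.
```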

Of course, the teams didn’t identify every single peak in isolation; they considered the series of peaks which come from fragmentation. This helps a bit, in that there are now many more combinations of compounds and fragments of compounds which can be matched.

So where does this leave us? Actually, with an embarrassment of riches. Have the teams come to the same conclusions? Sort of. They both detected compounds which are important in the pathway to producing sugars – which go on to form the “backbone” of DNA. They also both note the very low number of sulphur-bearing species, which is interesting given the abundance of sulphur in the solar system, and the ease with which it can become integrated into organic compounds.

The COSAC team suggests that nitrogen-bearing species could be relatively abundant, whilst Ptolemy found fewer of them. This is important because nitrogen is an essential element for life: it is a fundamental part of amino acids, the building blocks of proteins, and of the bases at the core of DNA. Conversely, the Ptolemy team has found lots of CO2, whilst COSAC hasn’t detected much.

These differences are probably related to sampling location: COSAC ingested material from the bottom of Philae, while Ptolemy sniffed at the top. Did Ptolemy breathe in cometary gases, whilst COSAC choked on the dust kicked up during the brief touchdown? If so, then the experiments have delivered wonderfully complementary sets of data.

Most importantly, both of those sets of data show that the ingredients for life were present in a body which formed in the earliest stages of solar system history. Comets act as messengers, delivering water and dust throughout the solar system – now we have learnt for certain that the ingredients for life have been sown far and wide through the 4.567 billion years of solar system history. The challenge now is to discover where else it might have taken root.

What else is certain is that both teams are keeping fingers crossed that the Philae-Rosetta communications link stabilises, so that they can get on with their analyses. This is just the start.


Disclosure

Monica Grady is a co-investigator on the Ptolemy team, and wife of Professor Ian Wright, the Principal Investigator, but is not an author on the Science paper discussed in the article, and neither was she involved in its preparation. She receives funding from the STFC and is a Trustee of Lunar Mission One.

Aircraft debris looks like it's from MH370 – now can we find the rest?

Small it may be, but so far it's the only part of MH370 that's been found. Raymond Wae Tion/EPA

It appears that the debris washed ashore on Reunion, an island east of Madagascar, may be from the missing Malaysia Airlines Flight MH370 which disappeared in March 2014, believed lost at sea somewhere to the west of Australia.

Reunion lies 500km east of Madagascar near the island of Mauritius, around 4,000km from the area where search efforts for the missing aircraft have been concentrated. That’s a huge distance to travel, even in the 500 or so days it has been since the crash. Is this possible from an oceanographic perspective?

Certainly, it’s possible that aircraft debris – which is built to be relatively lightweight, otherwise it would be difficult for the aircraft to fly – can float quite close to the surface. The near-surface ocean currents in the region are mainly driven by broad wind patterns.

In the southern Indian Ocean the average long-term near-surface circulation is anticlockwise, and so material that enters the ocean southwest of Australia would be carried by the West Australian Current northwards, towards the equator. There it could join the South Equatorial Current moving westwards until it joined the Mozambique Current travelling along the African coast and past Reunion. The distance travelled – roughly 5,000-6,000km in 15 months or so – gives an average speed of about 15cm/s. This is quite a reasonable value for the currents in the upper levels of the ocean water column where such debris might find itself suspended.

Currents map the debris' possible route across the ocean. Michael Pidwirny
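That average-speed figure is easy to check. The snippet below is just back-of-the-envelope arithmetic with assumed round numbers – it is not an ocean circulation model:

```python
# Rough check of the average drift speed quoted above.
# Assumed figures: roughly 5,500 km covered in roughly 15 months.
distance_km = 5_500
months = 15

seconds = months * 30.4 * 24 * 3600           # ~15 months expressed in seconds
speed_cm_per_s = distance_km * 100_000 / seconds

print(f"average drift speed ~ {speed_cm_per_s:.0f} cm/s")   # prints ~14 cm/s
```

Anything in that region is consistent with the 15cm/s quoted above.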

Using ocean currents to look back into the past

Even if it is part of Flight MH370, does this provide us with any new information that could help investigators pinpoint the crash site location? Perhaps “pinpoint” is too strong a word. But it’s certainly possible to use numerical ocean circulation models to trace the route the debris might have taken and suggest new regions of the ocean to search.

Such simulations come with certain assumptions. Just as a weather forecast becomes less accurate the further ahead we try to predict, attempting to re-create a journey like this through simulation becomes less certain the further back into the past we have to travel.

This is because the circulation transporting the debris would be driven by winds, and we are talking about winds over a remote part of the ocean. Consequently any uncertainty in the wind field used in the model will introduce uncertainty into the model simulations of the ocean currents, which would in turn create potentially large variability in any prediction of the crash site from which the currents carried the debris.

This is not to say that it’s not worth doing – it may very well indicate which part of the ocean the search should focus on, and perhaps even more importantly rule out areas not worth visiting. If one piece of wreckage has emerged, then there may be more to come, which could add accuracy to the prediction model.

Ultimately, when searching an ocean area as huge as the Indian and Southern oceans, any help is better than nothing at all.


Here's why scientists haven't invented an impossible space engine – despite what you may have read

Shutterstock

What if I told you that recent experiments have revealed a revolutionary new method of propulsion that threatens to overthrow the laws of physics as we know them? That its inventor claims it could allow us to travel to the Moon in four hours without the use of fuel? What if I then told you we cannot explain exactly how it works and, in fact, there are some very good reasons why it shouldn’t work at all? I wouldn’t blame you for being sceptical.

The somewhat fantastical EMDrive (short for Electromagnetic Drive) recently returned to the public eye after an academic claimed to have recorded the drive producing measurable thrust. The experiments from Professor Martin Tajmar’s group at the Dresden University of Technology have spawned numerous overexcited headlines making claims that – let’s be very clear here – are not supported by the science.

The idea for the EMDrive was first proposed by Roger Shawyer in 1999 but, tellingly, he has only recently published any work on it in a peer-reviewed scientific journal, and a rather obscure one at that. Shawyer claims his device works by bouncing microwaves around inside a conical cavity. According to him, the taper of the cavity creates a change in the group velocity of the microwaves as they move from one end to the other, which leads to an unbalanced force, which then translates into a thrust. If it worked, the EMDrive would be a propulsion method unlike any other, requiring no propellant to produce thrust.

Fundamental problems

There is, of course, a flaw in this idea. The design instantly violates the principle of conservation of momentum. This states that the total momentum (mass × velocity) of the objects in a closed system must remain the same, and it is closely linked to Newton’s third law. Essentially, for an object to accelerate in one direction, it must push something else in the opposite direction with an equal force. In the case of engines, this usually means firing out particles (such as propellant) or radiation.

The EMDrive is designed to be a closed system that doesn’t emit any particles or radiation. It cannot possibly generate any thrust without breaking some seriously fundamental laws of physics. To put it bluntly, it’s like trying to pull yourself up by your shoelaces and hoping you’ll levitate.
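A toy calculation makes the point. This is not a model of the EMDrive itself – just a momentum-conservation sketch with made-up numbers:

```python
# A craft starting at rest has zero total momentum, and that total must stay zero.
# Ejecting a small mass backwards at some speed gives the rest of the craft a
# forward speed; ejecting nothing gives it nothing.

def velocity_gained(craft_mass, ejected_mass, exhaust_speed):
    """Forward speed of the craft (m/s) after ejecting mass at the given speed."""
    if ejected_mass == 0:
        return 0.0                      # nothing thrown out, nothing gained
    return ejected_mass * exhaust_speed / (craft_mass - ejected_mass)

M = 1000.0   # craft mass in kg (made-up figure)
print(velocity_gained(M, ejected_mass=1.0, exhaust_speed=3000.0))  # ~3 m/s
print(velocity_gained(M, ejected_mass=0.0, exhaust_speed=3000.0))  # 0.0 – the closed-cavity case
```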

From Earth to the Moon in four hours? Still impossible. Shutterstock

Nonetheless, a few open-minded experimental groups have built prototype EMDrives and all seem to see it generate some form of thrust. This has led to a lot of excitement. Maybe the laws of physics as we know them are wrong?

Eagleworks, a NASA-based group, built a prototype and last year reported 30-50 micronewtons of thrust that could not be explained by any conventional theory. This work was not peer-reviewed. Now, Tajmar’s group in Dresden say they have built a new version of the EMDrive and detected 20 micronewtons of thrust. This is a much smaller value, but still significant if it really is generated by some new principle.

Experimental problems

Straightaway, there are problems with this experiment. The abstract states: “Our test campaign cannot confirm or refute the claims of the EMDrive.” Then, a careful reading of the paper reveals this observation: “The control experiment actually gave the biggest thrust … We were really puzzled by this large thrust from our control experiment where we expected to measure zero.”

Yes, the control experiment designed not to generate any thrust still measures a thrust. Then there’s the peculiar gradual way the thrust seems to turn on and off that looks suspiciously like a thermal effect, and then there are acknowledged heating problems. All this leads to the conclusion stated in the paper that “such a set-up does not seem to be able to adequately measure precise thrusts.” Similar problems were seen by the Eagleworks group, with thrust also mysteriously appearing in their control test.

Taken together, these results strongly suggest that the measured signatures of thrust are subtle experimental errors. Possible sources include thermal effects, problems with magnetic shielding or even a non-uniform gravitational field in the laboratory leading to erroneous force measurements. As a comparison, the force measured in this latest experiment is roughly comparable to the gravitational attraction between two average-sized people (100kg) standing about 15cm apart. It is an extremely small force.
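For an order-of-magnitude check of that comparison, here is Newton's law of gravitation with the two people treated as point masses – a crude simplification, but good enough for a sense of scale:

```python
# Gravitational attraction between two 100kg point masses 15cm apart.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
m1 = m2 = 100.0          # kg
r = 0.15                 # m

force_newtons = G * m1 * m2 / r**2
print(f"{force_newtons * 1e6:.0f} micronewtons")   # prints ~30 micronewtons
```

That is the same ballpark as the 20-50 micronewton signals reported by the two groups.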

That the experiments detect a measurable thrust is undeniable. Where the thrust comes from, whether it is real or erroneous, is inconclusive. That the experiments in any way confirm the EMDrive works is a falsehood. This was noted by Tajmar himself, who told the International Business Times: “I believe there is no real news here yet.”

The experimental scientists involved have done their jobs to the best of their ability, having tested a hypothesis – albeit a spectacularly unlikely one – and reported their results. These scientists aren’t actually claiming to have invented a warp drive or to have broken the laws of physics. All they’re saying at the moment is that they’ve found something odd and unexplained that might be something new but is likely an experimental artefact that needs further study. The panoply of clickbait headlines and poorly researched articles on the topic is doing something of a disservice to their scientific integrity by claiming otherwise.


With so much vested in satellites, solar storms could bring life to a standstill

It looks mean from this close, but it's still damaging when it reaches Earth. Solar Dynamics Observatory/NASA

Satellites are essential to modern life. So essential, in fact, that plans have been drawn up on how to cope with a situation in which we could no longer rely on them. A UK government document entitled the Space Weather Preparedness Strategy may sound strange, but when so much of modern communications, transport and the financial system relies on satellites, you can imagine why one would want a Plan B in place.

The reality is that we depend on satellites in more ways than we realise. The concept of the communications satellite was popularised in a 1945 letter to Wireless World written by science fiction writer and inventor Arthur C Clarke – and since then satellite services have grown into an industry worth US$100 billion a year.

This highlights the extent to which satellite services pervade modern life. A fleet of several hundred communications satellites encircles our planet in geosynchronous Earth-orbit, with hundreds more at lower altitudes. Rapid satellite communications enable the global markets underpinning our economy, and the emergency and defence services that keep society safe. Satellites provide GPS global navigation services for transport on land, sea and in the air. Modern agriculture, manufacturing and logistics chains, that supply virtually everything you consume – from the milk in your coffee to the screen you’re reading this on – rely on information provided by satellites.

But you’d be forgiven for never noticing some of the subtle influences of satellite technology on your life. After all, who’d have thought that some trains use GPS data to control which doors open at platforms of different lengths? Or that banks use the high-precision timing of satellite navigation systems to time-stamp their financial transactions?

It’s busy up there. ESA

Worst case scenario

We could survive without satellites, but their influence and benefits are so widespread that it would require concerted effort and massive investment to do so. This has led some to consider the risks satellites face, and what to do about them.

One threat is the impact of “space weather”. This can be solar flares – powerful bursts of radiation – or explosions of high-speed, high-energy protons ejected from the sun which scythe their way through near-Earth space. During periods of disturbed space weather, the region circling the Earth’s equator, the Van Allen radiation belt, swells with greater numbers of high-energy subatomic charged particles.

These can disrupt satellite operations by depositing electrical charge within the on-board electronics, triggering phantom commands or overloading and damaging sensitive components. The effects of space weather on the Earth’s upper atmosphere also disrupt radio signals transmitted by navigation satellites, potentially introducing positioning errors or, in more severe cases, rendering them unusable.

These are not theoretical hazards: in recent decades, solar storms have caused outages for a number of satellite services – and a handful of satellites have been lost altogether. These were costly events – satellite operator losses have run into hundreds of millions of dollars. The wider social and economic impact was relatively limited, but even so it’s unclear how our growing amount of space infrastructure would fare against the more extreme space weather that we might face.

When space weather becomes a hurricane

The largest solar storm on record was the Carrington event in September 1859, named after the British astronomer who observed it. Of course there were no Victorian satellites to suffer the consequences, but the telegraph systems of the time were crippled as electrical currents induced in the copper wires interfered with signals, electrocuted operators and set telegraph paper alight. The geomagnetic storm it triggered was so intense that the northern lights, usually a polar phenomenon, were observed as far south as the Bahamas.

Statistical analysis of this and other severe solar storms suggests that we can expect an event of this magnitude once every few hundred years – it’s a question of “when” rather than “if”. A 2007 study estimated a Carrington event today would cause US$30 billion in losses for satellite operators and threaten vital infrastructure in space and here on the ground. It’s a risk taken sufficiently seriously that it appears on the UK National Risk Register and has led the government to draw up its preparedness programme.

Types of space weather and what they affect. BIS/Crown Copyright

We’ve only been aware of the potential damage space weather can bring for around 150 years – and have only begun to rely on technologies vulnerable to it for 50 years or so – and our understanding of it is still poor and warrants further research. It’s not surprising that engineers and insurers have invested so much effort in trying to identify the risks and recommending ways to mitigate the effects. These include building redundant electronic systems to survive overloads, signal amplifiers to cut through increased interference, super-capacitors to soak up excess electrical charges that could damage electrical or communications grids, and high-precision alternatives to satellite data such as GPS that can be used for periods when satellites cannot be contacted.

As the space weather hazard becomes better understood, it will be possible for satellite manufacturers to design and build their spacecraft to withstand most space weather. But it’s likely that improved engineering standards will be required to ensure that critical satellite systems continue to function through extreme events. In the meantime, satellite service users that need to operate during extreme space weather should plan to find ways around the outages they’d experience.


Folding graphene like origami may allow us to wear sensors in our skin

Scientists have figured out how to make this...with graphene. McEuen Group, Cornell University

Materials scientists have found a way to apply the ancient art of kirigami – a way of building complex structures by cutting and folding paper – to the wonder material graphene. The experiment shows that ripples in a graphene sheet can increase the bending stiffness of the material significantly more than expected – a discovery that could lead to new types of sensors, stretchable electrodes or tools for use in nanoscale robotics.

Graphene is a single layer of graphite, a naturally occurring mineral with a layered structure. The material, first produced in the lab in 2003, has impressive electrical, thermal and mechanical properties, which makes it potentially useful in applications ranging from new electronic devices to additives in paints and plastics.

The promising material is made up of carbon atoms structured in a series of interconnecting hexagons, similar to chicken wire. It is made by pulling apart the layers in graphite in what scientists call a “top-down” approach (where we take something big and make it smaller). This can be done using adhesive tape, chemical reagents, or sheer force such as that generated in a kitchen blender or mixer. Although this sounds quite simple, it is not suitable for producing large sheets of graphene.

To do this, a “bottom-up” approach is needed, where graphene is assembled by decomposing a carbon-containing molecule such as methane over a hot metal surface, typically copper. This is the technique the researchers in the new study used to produce a sheet of graphene that they then could manipulate using a version of kirigami.

Paper and graphene versions of kirigami. Graphene image taken using a transmission white-light optical microscope. McEuen Group, Cornell University

Art meets science

Kirigami (“kiri” means cut and “gami” means paper) is a type of origami (“ori” means fold) which has been practised for centuries to produce beautifully complex shapes and patterns. Lots of us have probably tried our hand at these techniques as children, making snowflakes out of scrap paper.

The researchers used gold pads as “handles” to crumple a graphene sheet like paper – in a process that is entirely reversible. Like paper, the graphene folds and crumples but does not noticeably stretch.

A gold pad (the dark square, which measures a few tens of microns) is being pushed by a micromanipulator and is attached to a graphene spring

By using a sophisticated measuring technique, where an infrared laser is used to apply pressure to the gold pad on the graphene film, it is possible to measure the level of displacement of the film. This displacement can then be used to work out the elastic properties of the graphene sheet. Wrinkling of the graphene sheet improves its mechanical properties, similar to how a crumpled sheet of paper is more rigid than a flat one.
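The basic idea behind that measurement is simple enough to sketch: divide the applied force by the displacement it produces to get an effective stiffness. The numbers below are placeholders, not data from the Cornell study:

```python
# Hypothetical force-displacement readings for a graphene "spring".
forces_nN = [2.0, 4.0, 6.0]          # optically applied forces, in nanonewtons
displacements_um = [1.1, 2.0, 3.1]   # measured pad displacements, in micrometres

def effective_stiffness(forces, displacements):
    """Average spring constant k = F/x over a series of measurements."""
    ratios = [f / x for f, x in zip(forces, displacements)]
    return sum(ratios) / len(ratios)

k = effective_stiffness(forces_nN, displacements_um)
print(f"effective stiffness ~ {k:.2f} nN per micrometre")
```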

In fact it was such mechanical similarities that enabled the researchers to translate ideas directly from paper models to graphene devices. Using photolithography, a method of transferring geometric shapes on a mask to a surface, as the “cutting scissors”, the team showed that it is possible to create a series of springs, hinges and stretchable graphene transistors.

Remotely-operated robotics?

The research has opened the door to manipulating two-dimensional graphene sheets to create new material structures with movable parts. The possibility of stretchable transistors is extremely interesting as there is a growing demand to develop flexible and even wearable electronics.

When stretching a material you would normally expect the electrical resistance to change. In the stretchable transistors developed in the new study, a graphene spring is sandwiched between a source and a drain electrode. When stretched to over twice its original size, no noticeable change in electrical properties was detected. Repeated stretching and un-stretching had little effect either.

Working in a water and soap solution, large sheets of graphene can be dramatically crumpled like soft paper, and return to their original shape

This ability to maintain graphene’s electrical properties is down to its lattice structure, which does not undergo much change during the stretching of the spring. It even proved possible to take the kirigami devices to the next level, moving or folding the graphene without using direct contact. For example, by replacing the gold pads with a ferromagnetic material, such as iron, the sheets could be manipulated in a magnetic field, creating complex motions such as twists. The technique could be used to create devices that respond to light, magnetic fields or temperature.

The concept of manipulating two-dimensional materials to generate more complex structures on the macro-, micro- and nanoscale is genuinely exciting. Being able to create new metamaterials, engineered to have properties not usually found in natural materials, could open the door to many new types of tools. Possibilities include new sensors, stretchable electrodes that could be used in robotics or nanomanipulators, tiny machines that can move things around with nanometer precision.

Stretchable electrodes would allow highly conformable or flexible electronics and sensors to be incorporated into synthetic skin or flesh, such as in robots or artificial limbs, while retaining full functionality. We could even visualise such flexible electrodes and sensors being used in wearable electronics incorporated into clothing for realtime personal health monitoring – the ultimate personal healthcare.


Auto industry must tackle its software problems to stop hacks as cars go online

Not what anyone wants to see while driving. Bill Buchanan, Author provided

Many companies producing software employ people as penetration testers, whose job it is to find security holes before others with less pure motives get a chance. This is especially common in the finance sector, but following the recent demonstration of a drive-by hack on a Jeep, and parent company Fiat Chrysler’s huge recall of 1.4m vehicles for security testing, perhaps it’s time the auto industry followed the finance sector’s lead.

The growing number of software vulnerabilities discovered in cars has led to calls for the US Federal Trade Commission and National Highway Traffic Safety Administration to impose security standards on manufacturers for software in their cars. Cars are likely to require a software security rating so consumers can judge how hack-proof they are.

In the past, cars have generally avoided any form of network connectivity, but now consumers want internet access to stream music or use apps such as maps. If a car has a public IP address then, just as with any computer or device attached to the internet, a malicious intruder can potentially connect to and hijack it – just as the Jeep hack demonstrated.

Andy Davis, a researcher from NCC Group, has shown that it may be possible to create a fake digital radio (DAB) station in order to download malicious data to a car when it tries to connect. While the Jeep hack was performed on a running car, the NCC Group researchers demonstrated that an off-road vehicle could be compromised, including taking control of steering and brakes. As the malicious data was distributed through a broadcast radio signal, it could even result in a nightmare situation where many cars could be compromised and controlled at the same time. More details on how the hack works will be revealed at the Black Hat conference this summer.

Tuning into the wrong station could give you more than you bargained for. Bill Buchanan, Author provided

More devices, more bugs, more problems

In the last few weeks Ford has recalled 433,000 of this year’s Focus, C-MAX and Escape models because of a software bug which leaves drivers unable to switch off their engine, even when the ignition key is removed. Recently, it was shown that BMW cars would respond to commands sent to open their doors and lower their windows – hardly the height of security. The firm had to issue a security patch for more than 2m BMW, Mini and Rolls-Royce vehicles.

As more and more software appears in cars, the problems of patching them will grow. Our desktop and laptop computers can be set to auto-update, but with embedded systems it’s not so easy. The next wave of the internet, the internet of things where billions of devices will be network-connected, will evidently bring a whole lot more security problems in terms of finding and fixing bugs – on many more devices than just cars.

Crowdsourcing debugging

Some companies take this seriously, while others try to distance themselves from flaws in their products. Google runs a Vulnerability Reward Program with rewards ranging from US$100 to US$20,000. For example, Google will pay a reward of US$20,000 for any exploit that allows the remote takeover of a Google account.

Google even has a Hall of Fame, for which it awards points for the number of bugs found, their severity, how recent they are, and whether the bounty recipient gives their reward to charity – Nils Juenemann is currently in top place. Google also awards grants of up to US$3,133.70 as part of its Vulnerability Research Grants scheme.

Microsoft and Facebook also operate Bug Bounty schemes to encourage digging out bugs in their own internet software, with a minimum bounty of US$5,000. But while these companies actively seek people to improve software by fixing bugs, companies such as Starbucks and Fiat Chrysler take a negative approach to those who find bugs in their products, unhelpfully describing such efforts as criminal activity.

Change of approach needed

I don’t mean to alarm, but software is one of the most unreliable things we have. Imagine if you were in the fast lane of the motorway when a blue screen appeared on your dashboard saying:

Error 1805: This car has encountered a serious error and will now shut down and reboot

It would be back at the dealer in no time. We have put up with bugs for decades. We can’t trust these embedded software systems to be bug-free, yet they’re increasingly appearing in safety-critical systems such as speeding one-tonne vehicles. When was the last time your microprocessor suffered a hardware breakdown? Compare this to the last time Microsoft Word crashed and you can see it’s not the hardware’s fault. This is generally because software suffers from sloppy design, implementation and testing. So while a word processor crash is annoying, a car crash – potentially in both senses of the word – is clearly much worse.

Car owners of the future will need to be a lot more savvy about keeping their vehicles updated. Consider that you are on the motorway one evening and the car informs you:

You have a critical update for your braking system, please select YES or NO to install the update. A reboot of the car is not required, and the update will be installed automatically from your Wi-Fi enabled vehicle

Would you answer YES or NO? If you choose NO, you don’t trust the software; if you choose YES you are entrusting it to execute without problems while driving at speed along a motorway. Neither of these are good places to be.

The auto industry has a long way to go to prove that it grasps the risks posed by network-enabled vehicles, and then to tackle them with our safety as the overriding priority. An independent safety rating for cars would provide some incentive for manufacturers to get this right. As for penetration testers, the industry may find that bug bounty schemes can help do this difficult work for them for less money than it costs in fines and recalls when undiscovered bugs make it into products on the market.


Windows 10: Microsoft's universal system for an increasingly mobile world

Windows 10, a bit of the new, a bit of the old. Microsoft

With Windows 10, Microsoft is trying to turn the tide against the proliferation of operating systems across desktops, servers, tablets and smartphones by creating a single operating system that will run on them all.

Currently the world’s billions of Windows users are spread across its older versions, with Windows XP, released in 2001, still boasting almost the same installed base of users (around 12% market share) as the two-year-old Windows 8.1 (at 13%). The bulk of Windows users (61%) are still using Windows 7, released in 2009. And that’s not to mention the various incompatible Windows versions designed for tablets or smartphones.

Trying to consolidate different versions isn’t a new idea, although it’s much easier said than done. Recent versions of Apple’s OS X operating system for desktops and laptops have drawn inspiration from iOS, designed for iPad and iPhone, while Canonical, the company behind the Ubuntu Linux distribution, has also produced a version for phones.

However, with Windows 10, Microsoft is taking the idea to its logical conclusion, producing not just a single OS for all devices, but a framework for apps that run on all of them, making the move between devices seamless.

One app to rule them all

If we believe the Microsoft marketing machine, this will be the start of the era of Windows universal apps. There are many clever things in Windows 10, such as the integration of the digital assistant Cortana, but universal apps are what really excites me. This will allow developers to write code once and deploy it to all the different devices Windows 10 supports. It’s not quite as easy as Microsoft would have us believe, though: there would still need to be some code that’s written specifically for each type of device; only some of it would be shared.
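The shape of that arrangement is easy to sketch. The example below is not Microsoft's actual Universal Windows Platform API – it is just an illustration, in Python, of a shared core with thin device-specific layers on top:

```python
# Illustration of the "write the core once, share most of it" pattern.
class SharedNotesApp:
    """Core app logic, identical on every device family."""
    def __init__(self):
        self.notes = []

    def add_note(self, text):
        self.notes.append(text)

class DesktopUI:
    """Device-specific layer: dense, mouse-driven presentation."""
    def render(self, app):
        return f"[desktop] {len(app.notes)} notes in a three-pane window"

class PhoneUI:
    """Device-specific layer: touch-friendly single column."""
    def render(self, app):
        return f"[phone] {len(app.notes)} notes in a single scrolling list"

app = SharedNotesApp()                 # the shared code is written once...
app.add_note("buy milk")
for ui in (DesktopUI(), PhoneUI()):    # ...only the thin UI layer changes per device
    print(ui.render(app))
```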

This is exciting because Microsoft is hoping to entice developers and bridge the “app gap” on Windows devices. As of May 2015, the Google Play Store has 1.5m apps and the Apple App Store has 1.4m, while the Windows Phone Store has a mere 340,000. Applications, and therefore the developers available to create them, are key. Getting developers on board is the best way for Microsoft to make headway in the race to get their devices into our pockets.

Mixing the new and the old

I’ve spent some time with the technical and insider previews of Windows 10 for the desktop. The latest builds are speedy and show a lot of promise, so much so that every one of my Windows tablets and desktops is now signed up and awaiting the free upgrade. As predicted, it blends the traditional desktop experience of Windows 7 with the apps-based approach of Windows 8. It feels like a new desktop experience but is also familiar, an evolution rather than a revolution.

We’ve come a long way. Microsoft

Some of the key improvements are less headline-grabbing than a talking digital assistant like Cortana or the return of the start menu. A key market as personal PC sales decline is the enterprise, and under-the-hood changes in security have been a heavy focus for Microsoft to ensure businesses are open to upgrading from Windows 7. But other than the front-end “bells and whistles” there aren’t too many obvious internal changes.

This familiarity should entice those Windows 7 users still holding out, those who found the new Metro UI interface of Windows 8.1 too much of a culture shock. Gone are the two interfaces, now merged into a single mix of traditional start menu with start screen stuck on the side. Gone too is the charms bar (popup menu) that was so heavily reliant on touch.

In another new move Windows 10 is being given away as an upgrade for free. With successive Android, iOS, Linux and OS X updates now offered free I think it was inevitable that Microsoft would eventually go the same route.

Although Windows 10 for desktop is available now, we’ll have to wait until September for the mobile version and to experiment with universal apps. Of course it’ll be a bit longer still to see what impact a unified OS platform has, and whether Windows 10 is the fresh start Microsoft is banking on.


Hacktivists aren't terrorists – but US prosecutors make little distinction

For Lauri Love, being treated as a terrorist is no laughing matter. Lauri Love/Facebook

Activists who use technology to conduct political dissent – hacktivists – are increasingly threatened with investigation, prosecution and often disproportionately severe criminal sentences.

For example, in January 2015 self-proclaimed Anonymous spokesman Barrett Brown was sentenced to 63 months in prison for hacking-related activities including linking to leaked material online. Edward Snowden is currently exiled in Russia after leaking the global surveillance operations of the NSA and GCHQ.

Prosecutions of hacktivists intensified in 2013, when Andrew “weev” Auernheimer was sentenced to 41 months after exposing a vulnerability that affected 114,000 iPad users on AT&T’s service. Jeremy Hammond was sentenced to 10 years in federal prison after hacking and releasing documents about military subcontractor Stratfor. Aaron Swartz, who was facing a prison sentence of 25 years after hacking into JSTOR – a database of academic articles – committed suicide in January of that year. Chelsea Manning leaked secret military documents to Wikileaks and was sentenced to 35 years imprisonment in August.

Long arm of the law is getting longer

While these are US citizens subject to US laws and punishments, the Obama administration has recently indicated that it will also aggressively pursue hackers located overseas for alleged criminal activities.

So in July 2015, British hacktivist Lauri Love was re-arrested under a US warrant for violating the Computer Misuse Act. His case, like those mentioned above, illustrates the remarkable steps the US government will undertake in the pursuit and prosecution of hackers.

In 2013 the US District Court for New Jersey issued an indictment against Love, charging him with hacking into the US Missile Defense Agency, NASA, the Environmental Protection Agency and other government departments. The US Attorney’s Office for the Southern District of New York claims Love stole sensitive personal information, including the emails of Federal Reserve employees.

The leaked Federal Reserve emails may have been part of Operation Last Resort, an Anonymous project to avenge the death of Swartz, which they linked to prosecutorial harassment and the over-zealous enforcement of outdated computer crime laws. Like all major Anonymous operations, Operation Last Resort was a visual spectacle, including hijacking an MIT website to put up a Swartz tribute, releasing the names and contact information of 4,000 banking executives, and hacking the US Sentencing Commission website.

Are hackers terrorists?

Like Hammond, Manning and Snowden before him, Love is accused of hacking into government agencies and leaking information in an effort to make federal agencies more transparent.

Love faces extradition to the US, even though a British police investigation failed to turn up any incriminating evidence. The Crown Prosecution Service acknowledged it didn’t have enough evidence to prosecute and Love was released from bail in 2014.

The impending threat of US extradition is powerful enough to have kept Wikileaks publisher, Julian Assange, holed-up in Ecuador’s London Embassy for three years – and it is not difficult to understand why. Extradition law is generally reserved for serious criminal suspects such as those accused of terrorism.

Consider some of the individuals who have been extradited from the UK to the US: Abdel Abdel Bar and Khalid Abdulrahman al-Fawwaz, wanted in connection with the 1998 terrorist bombing of US Embassies in East Africa; KGB spy Shabtai Kalmanovich; al-Qaeda operative Syed Fahad Hashmi; and Christopher Tappin, accused of selling weapons parts to Iran.

Blurring the political and the criminal

Gary McKinnon spent 10 years facing extradition and a possible life sentence in the US. John Stillwell/PA

So it’s ironic that while Obama recently noted that the US criminal justice system “isn’t as smart as it should be”, his government pursues a policy that seems to blur the differences between – and the sanctions against – hackers, terrorists, spies and political activists.

There have been successful challenges against extradition orders aimed at those accused of hacking offences, such as Gary McKinnon, who spent 10 years facing extradition before Home Secretary Theresa May rejected US demands. But Love’s case also offers a window into the anti-democratic operation of state power. The scale of US government response to hacktivism is disproportionate.

Love is accused of attempting to reveal secret facets of the military and financial-industrial complexes so that they might be held accountable. If, as it is alleged, his activities were associated with Operation Last Resort, they were part of a broader digital civil disobedience action involving a form of cyber-squatting on two federal websites as a coordinated protest against the persecution of a fellow hacktivist. Were this activity to have been conducted in the offline world – sit-ins, placard-waving protests, even obtaining and leaking information to journalists – the punishments would not be nearly as severe.

That Love has been doggedly pursued by the US, a year after being released in the UK, reveals that the apparatus of state power is increasingly aimed at criminalising dissent as it is conducted online.


Animal research: varying standards are leading to bad science

How an animal is treated can actually affect research results. Understanding Animal Research/Flickr, CC BY-SA

Scientific research sometimes requires the use of animals. It’s a fact. And as long as that is the case, we need to do everything in our power to minimise the distress for laboratory animals. This is not just for the sake of the animals, but also for the sake of science itself. We know that the quality of life of an animal can actually affect its physiology and, thereby, the research data.

But unfortunately, the standards of animal care vary greatly across countries and even across research institutes. The time has come to overhaul this system and replace it with globally enforced rules.

Necessary evil

There are a lot of misconceptions about animal research, for instance what it is used for. Across the EU and in a number of other countries (including India, Israel, Norway and New Zealand), it is actually illegal to use animals to test cosmetics or household products. It is, however, allowed in medical research.

Animals are vital to medical research – they help us understand how drugs and genes function in our wonderfully complex bodies. By law, new compounds must be tested on animals before they can reach human trials. This is partly because humans are so genetically diverse and come from such a wide range of environments that we are not of much use in the initial phases of drug testing.

A lot of real breakthroughs in medical research would have been impossible without animals. Take the dogs in Emile Roux and Louis Pasteur’s research for example – they helped develop the human and canine rabies vaccines. Likewise, Frederick Banting and Charles Best’s work on diabetic dogs led to the discovery of insulin, arguably one of the most significant discoveries of the 20th century.

In reality, larger animals such as dogs play an increasingly minor role in animal research; more than 84% of studies are now conducted using mice, rats, and flies.

A small proportion of animal experiments today use dogs or other larger animals. Understanding Animal Research/Flickr

Thankfully animal research is highly policed in countries such as Australia, the USA, and the UK. Some institutes follow the rules and maintain the highest possible standards. For example, animal research in Australia is legally bound to follow the so-called “three R’s” – Reduction (of animal numbers), Refinement (to minimise distress) and Replacement (with non-animal models). They are also required to conduct ethical and humane research as described in The Australian Code for the Care and Use of Animals for Scientific Purposes.

Unfortunately, there are some instances, even in highly policed countries, where research doesn’t follow the guidelines, with some institutes and labs slipping through the cracks. Bad practices are much less likely to happen in countries where governing bodies review research proposals and conduct regular inspections. But they still happen.

Guidelines and policing are completely up to individual governments, which can be uninformed or lacking in funding. So what about countries with less stringent rules, such as Italy? Animal-based scientific research is common in Italy. But researchers feel that the occasional threats of institute inspections will almost never result in a real inspection.

In an ideal world, researchers should undergo extensive training to develop a keen eye for any kind of distress and to guarantee a high quality of life of the animals in their charge. In many countries this training, if it exists, doesn’t actually occur when new researchers join an institute. Animal facilities vary wildly in quality, and as such, both the quality of life of the animals, as well as the data itself, is compromised.

Bad animal practices, bad science?

Research has shown that data at the behavioural, cellular, and biochemical levels can be completely different depending on whether rats had access to enrichments (such as toys to play with, tunnels to run through, and things to climb). This can affect things like gene expression, hormones and the cell-signalling molecules called cytokines.

One study showed that mice born in an enriched environment developed more neurons in part of the brain. Another suggested that the progression of neurological disorders changed with environmental variation.

Monitoring environmental enrichments would both markedly improve the lives of research animals and also preserve data quality. Without such procedures, conflicting animal data is wasting both time and research funds. International collaborations often experience this, and research can drag on for years trying to sift through the muddy waters to find solid data.

Animal research in industry is actually easier to regulate because parent companies can set rules for all subdivisions to follow, regardless of the host country. Plus the number of labs is typically small enough to enable strict monitoring.

But what about academia – why haven’t we already done something about this? In some cases researchers are simply not trained properly. In others, overworked and often underpaid scientists are just trying to survive in an increasingly competitive research environment. Time wasted trying to change the established setup could mean loss of data, loss of papers, and thus compromised job security.

But we need to do something, and the shock tactics of animal rights activists are certainly not the best way of tackling this. Instead, changes need to be made at the level of government and science policy. There needs to be better training, and better monitoring of every single facility with international guidelines that are actually enforced. In an ideal world, researchers would come together with regulatory bodies and government representatives, agree on global standards, and stick to them.

With time and a lot of determination, it may be possible to achieve worldwide collaboration on such a project, and both the animals and the data will be better for it.


Our lip-reading technology promises to make hearing aids more human

It's written on your face. Shutterstock

Hearing aids can be lifelines for people with hearing loss. But their limitations can mean that, in particularly noisy environments, users cannot exploit the best of the existing technology. Most new hearing aid designs just make small improvements to microphones, power efficiency and noise filtering. We propose an entirely new approach.

My colleagues and I are working as part of a multi-disciplinary team led by Stirling University, which includes a psychologist and a clinical scientist and is supported by a hearing aid manufacturer. Our aim is to develop an audio-visual hearing aid for the 21st century, taking inspiration from the way that the human body naturally deals with noisy situations, something often known as the cocktail party effect.

Imagine a scenario such as a very busy party with lots of noise, music and people talking. Despite this overwhelming environment, a person with full hearing is often able to pick out and listen to the voice of someone next to them. This is something that people with hearing aids often find extremely challenging. In fact, in really busy environments many deaf people may prefer to remove their hearing aids altogether.

Dealing with noisy environments

The answer to why it is so difficult for hearing aids to deal with these situations is complicated. It’s partly down to the limitations of directional microphones, of inadequate noise cancelling, and of the loss of information about where sound is coming from. But the reason why deaf people can often “hear” better in overwhelming environments like this can be partly explained by lip-reading.

Lip-reading is known to enable individuals with hearing-loss to better understand speech. We all lip-read to a greater or lesser extent, but in people with hearing loss it can become a vital skill. Yet it’s a component of communication that existing hearing aids simply ignore.

No more smiling and nodding. Shutterstock

Our vision is for an ear or body-worn hearing aid linked to a small wearable camera, which could be mounted in a pair of ordinary glasses, jewellery or perhaps even worn as a discreet badge. The device would process the camera’s video stream to isolate relevant information about lip movement.

This data can be used by the hearing aid in several ways. On a simple level, if it knows someone is speaking it could apply some general background noise-reduction filtering. It could identify the direction the voice is coming from and focus a directional microphone accordingly.

Significantly, it could also use the lip movement information to apply an appropriate filter for further noise reduction, just as our brains do naturally. Specifically, if the device can estimate what the speech is likely to be, then it can remove sound elements that don’t match this. For example, if loud music is playing, “reading” the lips of the target speaker would indicate to the device that it should remove this music because it does not match the expected sound.
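As a purely illustrative sketch of that last idea – not the team’s actual algorithm – here is roughly how a camera-derived “lips are moving” probability could steer a simple noise-reduction step. The magnitude spectrogram, the per-frame lip probability and the function name are all assumptions made for this example:

import numpy as np

def visually_gated_enhancement(mag_spec, lip_speech_prob, floor=0.1):
    # mag_spec: (frames, bins) magnitude spectrogram from the hearing aid microphone
    # lip_speech_prob: (frames,) camera-derived probability that the target's lips are moving
    # Both inputs, and this function itself, are illustrative assumptions.

    # 1. Estimate the background noise from frames where the lips appear still.
    still = lip_speech_prob < 0.2
    noise = mag_spec[still].mean(axis=0) if still.any() else mag_spec.mean(axis=0)

    # 2. Spectral subtraction: strip out whatever matches the noise estimate.
    cleaned = np.maximum(mag_spec - noise, floor * mag_spec)

    # 3. Trust the cleaned signal when the lips indicate speech; otherwise
    #    just attenuate gently rather than boosting background chatter.
    weight = lip_speech_prob[:, None]
    return weight * cleaned + (1.0 - weight) * floor * mag_spec

In a real device the lip probability would come from the video-processing stage described above, and the enhancement would run alongside directional microphones rather than replacing them.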

What are the challenges?

There are multiple challenges to ensuring a hearing aid like this can work practically in the real world, involving the same problems that human lip-readers face. It has to be able to deal with multiple speakers at once and sound that isn’t in front of it. And, generally, people do not simply stand motionless in front of the listener, but instead tend to move, turn their heads, cover their faces or show their emotions visually. They may also be interrupted or have someone else walk in front of them.

To overcome this, our solution is again to consider how humans function. How much lip-reading we do depends on the circumstances: the noisier it is, the more we tend to look at people’s lips. So a system that exclusively lip-reads would not be very useful when it comes to real conversations in real environments. We plan to integrate our approach with the other, non-camera approaches that hearing aids presently use, including noise cancelling and directional microphones.

Our aim is to produce an aesthetically designed system that improves users' ability to understand what someone is saying in a range of environments, potentially with less listening effort. This would help solve the real problems faced by those with hearing loss, including their low uptake of available technology, by delivering a freely available, next-generation hearing device prototype, inspired by the way we naturally think, hear and see.


Are robot surgeons in the operating theatre as safe as they could be?

Today's Da Vinci sticks to surgery rather than the wide interests of its namesake. Intuitive Surgical

A study has revealed that robotic surgery was involved in 144 deaths and 1,391 injuries in the US during a 14-year period. While this may seem a cause for concern, these numbers are very small indeed considering that 1.7m operations were carried out over the same period.

Of course there is always the possibility of complications occurring in surgery, with or without robot involvement. But as robotic procedures become more common, health service users have a reason to wonder what these machines do and how complications can occur, so that we can try to prevent them in the future.

First, are robots really conducting surgical procedures? A more accurate term is robot-assisted surgery. This is a form of keyhole surgery, where the surgeon performs operations using small incisions, through which a fibre-optic camera and instruments pass. Keyhole surgery is better for patients as it is less invasive, but can be difficult for surgeons as the long instruments can be awkward to handle.

In robot-assisted surgery, the robot holds the instruments while the surgeon sits at a console, remotely controlling the robot’s arms. This gives the surgeon more precise control than in conventional keyhole surgery: with the extra robotic arms to assist, the surgeon can control the camera and an additional instrument at the same time, something that would otherwise require an assistant surgeon.

The business end of the Da Vinci robotic device. Intuitive Surgical

While there are robotic devices that cater to specific needs such as assisting surgeons with catheter or vascular control, inserting spinal implants or joint replacement procedures, there are few multi-purpose devices. The market leading device is the Da Vinci, manufactured by Intuitive Surgical, sales of which have rapidly risen despite the latest model’s £1.7m price tag and annual maintenance costs of £150,000. Between 2007 and 2011, the number of Da Vinci robots in use in the US increased from 800 to 1,400, while the number worldwide reached 2,300 in 2011. There are around 50 in the UK.

Robot-assisted surgery is primarily used in urology, but is expanding into gynaecology, ear, nose and throat, colorectal, cardiology, and paediatric surgery. NHS England is currently reviewing the evidence for robot-assisted surgery in order to develop rules for where and for what operations robot-assisted surgery can be used.

Two well-designed and conducted studies comparing keyhole and robot-assisted surgery for treatment of prostate cancer found that robot-assisted surgery offered quality-of-life benefits for patients, in terms of higher rates of continence and sexual function. However there is a lack of high-quality studies for other types of surgery.

Robots not always best team players

While studies of robot-assisted surgery tend to focus on the role of the surgeon, the surgeon does not work alone – safe and effective surgery depends on a team. What is known as the surgeon’s first assistant is often a trainee surgeon, or a nurse or operating practitioner who has undertaken specialist training. There is a scrub practitioner, responsible for passing instruments as needed, who is in turn supported by one or more circulator nurses, who gather additional instruments and supplies. There is also the anaesthetist, who will often be supported by a trainee or an operating department practitioner.

Even with robot assistants this team is still essential – what has changed is how they are arranged around the operating theatre. The surgeon normally sits at the console, a couple of metres away from the patient and the rest of the team. This matters because the close collaboration on which safe and effective surgery depends is affected by this arrangement.

When we interviewed surgical teams about their experience of robot-assisted surgery, they described difficulties in hearing the surgeon’s instructions. It’s sometimes unclear who the surgeon is speaking to, because when the surgeon’s head is in the console, there’s no opportunity to use non-verbal communication such as gestures and eye contact. This results in repetition of instructions and reduced coordination, potentially leading to a longer operation. It also becomes harder for the surgeon to guide the first assistant, so having an experienced first assistant is more important in robot-assisted surgery. The surgeon’s position at the console also affects their awareness of what is happening in the operating theatre. Theatre teams told us of having to quickly tell the surgeon to stop because the robot arms were about to hit the patient.

The theatre teams we spoke to had also developed strategies to overcome the problems they were experiencing. So alongside gathering more evidence on just how effective robot-assisted surgery is, we need to develop a better understanding of the changes it introduces to the teamwork of surgery, and to assess how surgeons and their teams can best make changes that will improve the safety of robot-assisted surgery.


Scientists at work: what's your poison? Tackling India's snakebite problem

Gerry meets Kaulback's pit viper, which could be one of the most lethal snakes in India. Inset picture: Wolfgang Wüster Author provided

Gerry climbs up to the veranda of our tribal longhouse with a snake bag held out in front of him. “Now don’t get too excited, but I’ve just caught a Kaulbacki,” he says, looking pleased but exhausted from a long hike and a six-metre climb up a tree. We gape, hardly able to believe that we have finally found this rare snake alive after four years of intensive searching.

Kaulback’s pit viper, first discovered in 1938 by British explorer and botanist Ronald Kaulback in northern Burma, is one of the largest pit vipers in Asia. On top of that, according to local reports, its bite is lethal. Despite being a co-author on the most recent paper on the species, I had never before seen a living specimen – few scientists have.

I can’t believe my luck at being present for this moment on my first trip to Arunachal Pradesh, a heavily forested state on the north-east frontier of India. This trip is, for me, the culmination of a long personal journey. During my teenage years in India, a family holiday to neighbouring Assam had triggered my curiosity about the natural world, until then only nurtured by tales of exploration and discovery.

Deadly denizens

While now considerably less remote, life on the edge of the jungle is still hazardous. The Nyishi tribals now cultivate rice in the valleys, but on steep hillsides still resort to traditional shifting cultivation – using an area for a while and then abandoning it. As they clear forest, they encounter snakes. And while not all are venomous, monocled cobras, king cobras, banded kraits and others are quite capable of killing people. Yet, of all these, it is the barta (a local name for Kaulback’s pit viper) that the locals fear the most.

Watch your step! Wolfgang Wüster, Author provided

Despite its reputation, our snake is actually not aggressive and rarely attempts to strike – and when it does, it is slow. Curiously, the locals identify the snake that Gerry has just caught as something they call taiji rather than barta, and are adamant that this is a different, less deadly, snake. We count scales and confirm that it is a male Kaulback’s pit viper.

As in many pit vipers, female Kaulback’s are bigger and may be more dangerous than males as they guard their eggs near the ground. When we release the male, it heads up a large tree at a leisurely pace; we lose sight of it at around eight metres, still climbing. It is hard to view this beautiful snake as a terrible threat to people. But for the time being we don’t actually know just how dangerous it is – analysis of its venom is needed to confirm that.

Regional anti-venom?

This expedition is part of a larger collaborative study to document the identity and distribution of dangerously venomous snakes and to study their venom, in order to reduce India’s death toll from snakebite, which stands at a massive 45,000 people each year.

Each species of snake has unique venom containing a cocktail of toxins, requiring specific antibodies to neutralise. Even a single species can cause paralysis, blood disorders and tissue damage in varying proportions in different parts of its range.

Snake spotting across the distant, jungle-cloaked hills in Arunachal Pradesh. Anita Malhotra, Author provided

The most effective treatment for snakebite, anti-venom, is still made in the same way as it was when invented in the 19th century: by extracting blood plasma containing antibodies from animals injected with diluted snake venom. It can therefore only effectively neutralise toxins present in the venom mixture used in the immunisation process. Yet all anti-venom in India is currently manufactured using venom obtained predominantly from a single licensed producer.

While several proposed snake farms in different parts of India may soon allow venom from all over India to be used to raise antibodies against a wider range of toxins, more work needs to be done to reduce the dosage, cost of treatment and occurrence of serious side-effects.

Furthermore, none of the “Big Four” (Russell’s viper, spectacled cobra, common krait and saw-scaled viper) against which Indian anti-venom is manufactured are found in Arunachal Pradesh. Exactly which species are the most deadly there, and how many snakebite deaths they cause, is unknown.

The spectacled cobra - one of the Big Four Kamalnv/wikimedia, CC BY-SA

Official records of snakebite are virtually non-existent and the tribes in the region are unlikely to go to hospital, even after a life-threatening bite. Emerging evidence from the southern states of India also suggests that the hump-nosed pit viper may be at least as medically significant as the saw-scaled vipers in the few states in which it occurs. This implies that producing regionally specific anti-venom, rather than a single one for all India, might be a better approach.

Permits and policies

Venom research is one of many activities needed to reduce the burden of snakebite – but this is currently hindered by lack of funding and the need for permits to catch snakes, which are strictly protected in India. What’s more, you need to contact each state individually for permits, magnifying the problem considerably.

It is also important to raise awareness of the problem in rural areas – and to train primary health care staff, equip them with vital equipment (such as ventilators) and provide them with direct links to snakebite experts.

A few grass-roots organisations around the country are working on this, but they are largely dependent on volunteers and charitable donations. In some areas of India, doctors specialising in treating snakebite have achieved success rates of 100%, but in a country as large and densely populated as India, it is difficult to see how the burden of snakebite can be reduced substantially without government sponsorship and proper funding.

In some states, the families of snakebite victims are able to claim up to 1 lakh rupees (about £1,000) as compensation. Ironically, just one year’s potential compensation bill would go a long way to achieving a long-term reduction in the financial cost of snakebite – let alone the cost in human misery and suffering.


Why we won't be moving to the new 'Earth-like' exoplanet any time soon

Pretty picture (artist's impression) but unlikely scenario. SETI institute

NASA’s announcement of the discovery of a new extrasolar planet that is the closest yet to an Earth 2.0 has been met with a lot of excitement. But the truth is that it is impossible to judge whether it is similar to Earth with the few parameters we have – it might just as well resemble Neptune, Venus or something entirely different.

The planet, Kepler-452b, was detected by the Kepler telescope, which looks for small dips in a star’s brightness as planets pass across its surface. It is a method that measures the planet’s size, but not its mass. Conditions on Kepler-452b are therefore entirely estimated from just two data points: the planet’s size and the radiation it receives from its star.
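For context, the transit method works because the fractional dip in starlight scales with the square of the planet-to-star radius ratio – a standard relation rather than anything specific to the Kepler team’s paper:

\[
\frac{\Delta F}{F} \approx \left(\frac{R_p}{R_\star}\right)^{2}
\]

For a planet 60% larger than Earth crossing a sun-like star (roughly 109 Earth radii across), that works out at about (1.6/109)² ≈ 0.02% of the starlight – detectable, but silent on what the planet weighs. Mass requires a separate measurement, such as the radial-velocity wobble it induces in its star.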

The habitable-zone myth

Kepler-452b was found to be 60% larger than the Earth. It orbits a sun-like star once every 384.84 days. As a result, the planet receives a similar amount of radiation to what we get from the sun – just 10% more. This puts Kepler-452b in the so-called “habitable zone”, a term that sounds excitingly promising for life but is actually misleading.

Looks familiar? Artist’s impression of the new exo-planet and its star NASA

The habitable zone is the region around a star where liquid water could exist on a suitable planet’s surface. The key word is “suitable”. A gas-planet like Neptune in the habitable zone would clearly not host oceans since it has no surface. The habitable zone is best considered as a way of narrowing down candidates for investigation in future missions.

Kepler-452b’s radius puts it on the brink of the divide between a rocky planet and a small Neptune. In the research paper that announced the discovery, the authors put the probability of the planet having a rocky surface at about 50%-60%, so it is by no means certain.

Unpredictable geology

Rocky planets like the Earth are made from iron, silicon, magnesium and carbon. While these ingredients are expected to be similar in other planetary systems, their relative quantities may be quite different. Variations would produce alternative planet interiors with a completely different geology.

For example, a planet made mostly out of carbon could have a mantle made of diamond, which would not move easily. This would bring plate tectonics to a screeching halt. Similarly, magnesium-rich planets may have thick crusts that are resilient to fracture. Both outcomes would limit the volcanic activity that is thought to be essential for sustaining a long-lasting atmosphere.

Venus and Earth. Similar but oh so different. wikimedia

If Kepler-452b nevertheless has a similar composition to Earth, we run into another problem: gravity. Based on an Earth-like density, Kepler-452b would be five times more massive than our planet.

This would correspond to a stronger gravitational pull, capable of drawing in a thick atmosphere to create a potential runaway greenhouse effect, which means that the planet’s temperature continues to climb. This could be especially problematic as the increasing energy from its ageing sun is likely to be heating up the surface. Any water present on the planet’s surface would then boil away, leaving a super-Venus, rather than a super-Earth.
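To see roughly where these figures come from – a back-of-the-envelope scaling, not a calculation from the discovery paper – note that at fixed, Earth-like density mass grows with the cube of the radius, while surface gravity scales as mass over radius squared:

\[
M \propto \rho R^{3} \;\Rightarrow\; M \approx 1.6^{3}\,M_\oplus \approx 4\,M_\oplus,
\qquad
g = \frac{GM}{R^{2}} \approx \frac{4}{1.6^{2}}\,g_\oplus \approx 1.6\,g_\oplus
\]

Rock also compresses under its own weight, so realistic mass-radius relations for rocky planets push the estimate up towards the roughly five Earth masses quoted above, with surface gravity closer to twice Earth’s – strong enough to hold on to the kind of thick atmosphere described here.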

No neighbours

Another problem is that Kepler-452b is alone. As far as we know, there are no other planets in the same system. This is an issue because it was most likely our giant gas planets that helped direct water to Earth.

At our position from the sun, the dust grains that came together to form the Earth were too warm to contain ice. Instead, they produced a dry planet that most likely had its water delivered later by icy meteorites. These icy bodies formed in the colder outer solar system and were kicked towards Earth by Jupiter’s huge gravitational tug. No Jupiter analogue for Kepler-452b might mean no water and therefore no recognisable life.

All these possibilities mean that even a planet exactly the same size as Earth, orbiting a star identical to our sun on an orbit that takes exactly one year might still be an utterly alien world. Conditions on a planet’s surface are dictated by a myriad of factors – including atmosphere, magnetic fields and planet interactions, which we currently have no way of measuring.

That does not mean that Kepler-452b is not a fantastic find. It has the longest year of any transiting planet of its size, holding the door open to finding more diverse planetary systems. However, whether these discoveries are truly like Earth is a question we cannot yet tackle.


UK satellite Twinkle will boost search for Earth-like exoplanets

Exoplanets: we know a little, but not a lot. Goddard Space Flight Centre

NASA’s recent discovery of 12 more exoplanets, including the most Earth-like yet, brings the number of exoplanets – those outside our solar system – discovered to nearly 2,000. It’s now thought that almost every star has a planetary system, with Earth just one of several billion planets in our galaxy alone.

Many of the exoplanets we’ve found are quite different to those in our solar system: “hot-Jupiters” are giant planets orbiting very close to their star, while “super-Earths” are rocky planets up to ten times the mass of Earth. The newly discovered Kepler-452b is the first exoplanet that is relatively similar to Earth in size and within the habitable zone around its star – not too hot and not too cold – that might be able to support life.

The Twinkle satellite, observing exoplanets twinkling far away. Twinkle/SSTL/UCL, Author provided

But really we know very little about these alien worlds beyond their mass, density and distance from their star. What are they made of? How did they form? What’s the weather like there? Our small and fast-track satellite observatory dedicated to studying exoplanets, Twinkle, aims to answer these questions.

It’s a huge challenge, since exoplanets are so far away. Most have been detected only indirectly – by a star’s dip in brightness as a planet passes in front of it, or by looking for a wobble in a star’s position caused by an orbiting planet’s gravitational tug. A very few have been imaged directly but, due to their enormous distance from Earth, they are no more than pinpricks of light.

However, even a tiny amount of light can reveal a huge amount of information. In recent years, we have pioneered techniques to extract information about exoplanets from starlight filtered through their atmospheres as they pass in front of their star.

NASA Solar Dynamics Observatory records transit of Venus across the sun, in the same way that exoplanets are detected.

It’s all in the waves

Spectroscopy allows us to split light – in this sense, the entire electromagnetic spectrum, not just the part visible to the human eye – into a “rainbow” of its constituent parts so it can be examined in detail. Molecules formed from the periodic table’s elements absorb specific wavelengths from the electromagnetic spectrum, leaving a unique pattern of lines, a bit like a barcode. By detecting and separating out these barcodes we can identify the tell-tale footprints of the molecules present, and therefore which gases the exoplanets’ atmospheres contain.
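As a toy illustration of this barcode-matching idea – the band centres below are approximate values included only for the example, and the function is an invention for this sketch, not part of any Twinkle pipeline:

# Approximate near-infrared absorption band centres, in micrometres (illustrative only).
BANDS = {
    "H2O": [1.4, 1.9, 2.7],
    "CO2": [2.7, 4.3],
    "CH4": [2.3, 3.3],
}

def read_barcode(observed_dips, tolerance=0.1):
    # Return the molecules whose bands line up with dips seen in the spectrum.
    matches = {}
    for molecule, centres in BANDS.items():
        hit = [c for c in centres if any(abs(c - d) <= tolerance for d in observed_dips)]
        if hit:
            matches[molecule] = hit
    return matches

# Dips at 1.4, 1.9 and 4.3 micrometres would point to water vapour and carbon dioxide.
print(read_barcode([1.4, 1.9, 4.3]))

The real analysis fits physical models of the whole spectrum rather than matching individual lines, but the principle – linking absorption features to specific gases – is the same.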

The composition of an exoplanet’s atmosphere can reveal whether a planet formed in its current orbit, or whether it migrated from a different part of its planetary system. The evolution, chemistry and physical processes driving an exoplanet’s atmosphere are strongly affected by the distance from its parent star. The loss of lighter molecules, impacts with other bodies such as comets or asteroids, volcanic activity, or even life can significantly alter the composition of primordial atmospheres. So a planet’s atmospheric composition traces its history, and gives an indication as to whether it might be habitable – or even host life.

Eyes in the sky

However, aside from the Hubble and Spitzer space telescopes, both nearing the end of their lives, there is currently a gap in facilities suitable for studying, rather than finding, exoplanets. Space missions such as ARIEL, a European candidate mission competing for launch in 2026, won’t be available for a decade or more. Upcoming general observatories, like the James Webb space telescope or the E-ELT, may have some of the capabilities needed, but the time available on these for exoplanet research will be limited.

This is what led us to develop Twinkle: a small, relatively low-cost (£50m) commercial mission dedicated to studying exoplanets. The Twinkle satellite will be built in the UK using a platform designed by Surrey Satellite Technology Ltd and instrumentation led by UCL.

How Twinkle’s instruments capture data from exoplanets. Twinkle/SSTL/UCL, Author provided

Putting a low-cost approach to work for science in space

From a vantage point in orbit 700km above the Earth, Twinkle will observe more than 100 planets orbiting distant stars, its instruments analysing light at visible and near-infrared wavelengths (from 0.5 to 5 micrometres). It will be able to detect a range of molecules including water vapour, carbon dioxide and exotic metallic compounds, as well as organic molecules such as methane, acetylene and ethane. It will also be sensitive to precursors to amino acids – the building blocks of life – such as ammonia and hydrogen cyanide.

By measuring the visible light reflected by an exoplanet and the infrared heat that it emits, Twinkle will work out the planet’s energy balance, its temperature and whether clouds are present or absent in the atmosphere. For very large planets orbiting very bright stars, Twinkle will even be able to obtain 2-D maps of temperature and clouds. With repeated observations over the five-year lifetime of the mission, this will tell us about climate and weather on those planets.
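The underlying energy-balance idea can be summarised by the standard equilibrium-temperature relation – a textbook expression, not a description of Twinkle’s actual data pipeline: the heat a planet radiates must balance the starlight it absorbs, so

\[
T_{\mathrm{eq}} = T_\star \sqrt{\frac{R_\star}{2a}}\,(1 - A)^{1/4}
\]

where T⋆ and R⋆ are the star’s temperature and radius, a is the orbital distance and A is the albedo, the fraction of starlight reflected. Measuring the reflected visible light constrains the albedo, while the emitted infrared constrains the temperature – which is how the two sets of observations together close the planet’s energy budget.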

As an independent endeavour funded through a mixture of private and public sources, Twinkle is pioneering a new model for astronomy missions. The spacecraft’s structure will be a platform developed for high-resolution Earth imaging, while the instruments will use off-the-shelf components and reuse existing software to bring down costs and increase reliability. With studies already underway, the instruments should be completed by the end of this year, the aim being to launch in 2019.

The consortium includes more than 15 UK research institutions and companies so far and continues to grow – hopefully kickstarting a new era of exoplanet science, while also demonstrating the feasibility of small, nimble and cost-effective science projects.
