Oracle vs Google case threatens foundations of software design

Copyright keeps appearing where it's not wanted. Christopher Dombres, CC BY

The Java programming language, which has just turned 20 years old, provides developers with a means to write code that is independent of the hardware it runs on: “write once, run anywhere”.

But, ironically, while Java was intended to make programmers' lives easier, the court case between Oracle, Java’s owner, and Google over Google’s use of Java as the basis of its Android mobile operating system may make things considerably more difficult.

Google adopted Java for Android apps, using its own, rewritten version of the Java run-time environment (the Java virtual machine or VM) called Dalvik. The Oracle vs Google court case centres on the use of Java in Android, particularly in relation to Application Programming Interface (API) calls.

An API is a standard set of interfaces that a developer can use to communicate with a useful piece of code – for example, to exchange input and output, access network connections, graphics hardware, hard disks, and so on. For developers, using an existing API means not having to reinvent the wheel by accessing ready-made code. For those creating APIs, making them publicly and freely accessible encourages developers to use them and create compatible software, which in turn makes it more attractive to end users.
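
To make that concrete, here is a minimal, hypothetical sketch (the names are invented for illustration, not taken from any real library) of the split between an API and the code behind it: the interface declares the calls a developer programs against, while the implementation is the ready-made code being reused.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch: the API is the set of declared calls (names,
// parameters, return types); the implementation is the reusable code
// sitting behind those declarations.
public class ApiSketch {

    // The "API": what callers see and program against.
    interface ImageStore {
        String save(byte[] imageData);   // store an image, get back an id
        byte[] load(String id);          // fetch a stored image by its id
    }

    // One possible implementation behind the API (here, just in memory).
    static class InMemoryImageStore implements ImageStore {
        private final Map<String, byte[]> images = new HashMap<>();

        public String save(byte[] imageData) {
            String id = UUID.randomUUID().toString();
            images.put(id, imageData);
            return id;
        }

        public byte[] load(String id) {
            return images.get(id);
        }
    }

    public static void main(String[] args) {
        // A developer reuses the ready-made code purely through the API calls.
        ImageStore store = new InMemoryImageStore();
        String id = store.save(new byte[] {1, 2, 3});
        System.out.println("stored " + store.load(id).length + " bytes under " + id);
    }
}
```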

For example, OpenGL and Microsoft’s DirectX are two APIs that provide a standardised interface for developers to access 3D graphics hardware, as used in videogames or modelling applications. Hardware manufacturers ensure their hardware is compatible with the API standard, the OpenGL Consortium and Microsoft update their APIs to ensure the latest hardware capabilities are addressed and games developers get a straightforward interface compatible with many different types of hardware, making it easier to create games.

Java RTE and Android ART Author provided

Fight for your right to API

Google designed Android so that Java developers could bring their code to Android by recreating (most of) the standard Java API calls used in the Java libraries and supported by the standard Java VM. The case revolves around whether doing this – by essentially re-creating the Java API rather than officially licensing it from Oracle – is a breach of copyright. If the court finds in favour of Oracle it will set a precedent that APIs are copyrightable, and so make developers' lives a lot more legally complex.

To be clear, the case doesn’t revolve around any claim that Google reused actual code belonging to Oracle, but that the code it produced mimicked what Oracle’s Java run-time environment was capable of.
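
The distinction is easier to see in code. The sketch below is purely illustrative (it is not Oracle's or Google's actual code): two classes expose an identical declaration, modelled loosely on a simple library call such as java.lang.Math.max, but each body is written independently. The dispute is about whether reproducing the declarations and their organisation, rather than the bodies, infringes copyright.

```java
// Illustrative only: identical declarations, independently written bodies.
public class ReimplementationSketch {

    static final class LibraryA {
        static int max(int a, int b) {
            return (a >= b) ? a : b;
        }
    }

    // An independently written implementation; callers cannot tell the
    // difference, because the declaration (name, parameters, return type)
    // is the same.
    static final class LibraryB {
        static int max(int a, int b) {
            if (a > b) {
                return a;
            }
            return b;
        }
    }

    public static void main(String[] args) {
        System.out.println(LibraryA.max(3, 7)); // 7
        System.out.println(LibraryB.max(3, 7)); // 7
    }
}
```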

The initial finding came in May 2012, when a US court agreed with Google's claim that its use of the APIs fell under fair use, and that Oracle's copyright was not infringed. Then in May 2014, the US Federal Circuit reversed part of the ruling in favour of Oracle, particularly on the issue of whether an API can be copyrighted. Now, at the US Supreme Court's request, the White House has weighed in on Oracle's side.

Can you ‘own’ an API?

For most in the industry, a ruling that it's possible to copyright an API would be a disaster. It would mean that many companies would have to pay extensive licence fees, and even face having to write their own APIs from scratch – even those needed to programmatically achieve only the simplest of things. If companies can prevent others from replicating their APIs through recourse to copyright law, then all third-party developers could be locked out. And if copyright extended to both the API call and the functionality behind it, a rival implementation would have to behave differently as well, otherwise it would count as a copy.

In the initial trial, District Judge William Alsup taught himself Java to understand the foundations of the language. He decided that to allow the copyrighting of Java's APIs would allow the copyrighting of an improbably broad range of generic (and therefore uncopyrightable) functions, such as interacting with window menus and interface controls. The Obama administration's intervention emphasises its belief that the case should be decided on whether Google had a right under fair use to use Oracle's APIs.

It’s like the PC all over again

Something like this has happened before. When IBM produced its original PC in 1981 (the IBM 5150), a key aspect was access to the system calls provided by the PC BIOS, which booted the computer and managed basic hardware such as keyboard, monitor, floppy disk drive and so on. Without access to the BIOS it wasn’t possible to create software for the computer.

One firm, Compaq, decided to reverse-engineer the BIOS calls to create its own, compatible version – hence the term "IBM PC compatible" became standard language for any machine, whether an IBM model or the third-party hardware from other manufacturers that subsequently blossomed, that could run the same software. IBM's monopoly on the PC market was opened up, and the PC market exploded into what we see today – would this have happened had IBM been able to copyright its system calls?

So 20 years after its birth, and thanks to the groundwork laid by its original creator, Sun Microsystems, Java has become one of the most popular programming languages in the world by being cross-platform and (mostly) open. But now that openness seems to have led it into a trap. The wrong decision in this case could have a massive impact on the industry, where even using a button on a window could require some kind of licence – and licence fees. For software developers, it's a horrible thought. Copyrighting APIs would lock many companies into complex agreements – and lock out many other developers from creating software for certain platforms.

For Google, there's no way of extracting Java from Android now; its runaway success is bringing Google a whole lot of problems. But as we go about building a world that runs on software, be assured that one way or another this ruling will have a massive effect on us all.


Early motor skills may affect language development

What does it look like I'm doing? I'm learning to talk! Ryan and Sarah Deeds/Flickr, CC BY-SA

Learning to sit up, crawl and walk are all major milestones in a child’s early development – and parents often record these actions in baby diaries, photographs and videos. Developing motor skills allows the child to become more independent. But our research, backing a number of other studies, has shown that it may also say something about the rate of a child’s cognitive development, such as talking.

It makes sense that the ability to move affects how children see, think about and talk about their physical and social environments. Indeed, over recent years, it has become increasingly clear that cognitive development is more closely related to the development of gross motor skills, such as crawling or walking, and fine motor skills, such as grasping and manipulating objects, than many have previously considered.

In fact, it has been suggested that rather than assessing motor and cognitive development separately, they should be viewed as two connected cogs within a large, complex system, each dependent on the other and working together to make small steps forward in development.

It is therefore vital that more research investigates the relationship between motor and cognitive development, rather than focusing on these as separate parts. This will not only be important for understanding typical development, but could also help to explain the difficulties that some children face when the connections in the system are disrupted.

Early links

Learning language is a very long process for infants. They have to go through a period of working out how to use their mouths to make sounds, such as blowing raspberries. Then there’s babbling. Then comes the first word. Finally, children are able to build sentences and, later, to hold conversations.

Research has shown that before each of these language milestones, there is usually a change in motor actions. An example is babbling, where an infant repeats the same sound over and over again (“bababa”). In the few weeks before babbling starts, infants show a lot of arm movements, such as banging, shaking or waving. What is interesting is that after they start babbling, infants stop doing these movements as much.

Look at those arms - he’ll babble any day now. Paul/Flickr, CC BY-SA

Why would these two activities be related? It might be that they are both letting infants see what happens when an action is repeated, so they get used to the sounds and feelings of their bodies. Infants are learning that something they do causes something else to happen. It is like learning that when you press a button, a light comes on.

There are lots of other examples of new motor and language skills appearing around the same time. The fact that the motor action and the language milestone are so close in time suggests that the two parts of the system are developing together.

Motor difficulties

Our own research has focused on what happens when infants have difficulties in developing motor skills in a typical way. One way that we have done this is by looking at the relationship between motor and cognitive skills in autism spectrum disorder (ASD).

Language and communication problems are key to a diagnosis of ASD, but children with ASD also often show some difficulties in motor skills. We carried out a study with 53 infants who had an older sibling with ASD. This increases the risk that they will develop ASD themselves.

Working with the British Autism Study of Infant Siblings (BASIS) at Birkbeck, we found that these infants had generally poorer motor skills at the age of seven months compared to infants who had an older sibling without ASD. Importantly, we showed that motor skills at seven months predicted the rate of language development in the group of infants who went on to develop ASD themselves. This suggests that poor early motor skills could be one factor affecting the development of language difficulties, and that this might be particularly relevant for those at risk of developing ASD.

We are also investigating cognitive skills in children with developmental coordination disorder (DCD), which is diagnosed on the basis of motor difficulties which have an impact on daily living. We hope that these studies will help us to better understand the relationships between motor and cognitive development.

An important point to remember in this discussion is that children naturally develop at different rates. An infant may start crawling at any point between five and 13 months and still be within the age range expected for crawling. Some infants do not crawl on hands and knees at all, but shuffle, creep or just start walking so that they can move around the room.

This means that parents should not be worried that their child is not going to be “clever” or is not developing well if he or she is not crawling early. Crawling is one way of solving a problem, such as reaching a toy on the other side of the room, but it is not the only way. As a child’s body grows and muscles get stronger, better ways of solving these problems develop.

One future question to investigate will be whether there are critical time periods in the development of these skills which cause some children to develop atypically. It will also be important to work out the different paths that motor and language skills can follow. To answer these questions, future research will need to study children over time, and study the two sets of skills together.


Sleep study raises hope for clinical treatment of racism, sexism and other biases

Sleep before you speak. Angel Arcones/Flickr, CC BY-SA

Imagine being able to erase the innermost prejudices you are most ashamed of by simply turning on a sound machine before going to bed. It may sound fantastical, but a new study has shown that our biases can indeed be counteracted while we sleep.

Of course, most of us would contend that we are not racist or sexist. But many studies have shown that our actions suggest otherwise. For example, when evaluating applications for a science laboratory position, male applicants were viewed by university science faculty members as more hireable, competent and deserving of a high salary than identically qualified female applicants.

These biases are not surprising. We are often overwhelmed with information that can reinforce race and gender stereotypes.

Implicit association

In a new study, researchers built on our rapidly developing understanding of the way recent memories become ingrained in our mind during sleep. This “consolidation” process takes an unstable new memory and makes it stronger, and more resistant to forgetting, possibly changing its nature in the process.

The researchers were interested in whether implicit gender or racial biases – views that we are not necessarily aware of – could be manipulated. In order to assess people’s biases, they used an implicit association test (IAT). This requires people to make two category judgements by pressing one of two buttons. In a test of gender bias, for example, participants might categorise female faces by pressing one button and male faces with another. They would also have to classify words into “science” and “art” categories using the same keys.

People who implicitly associate women with art and men with science should respond relatively slowly when asked to use the same key for female faces and science words, compared with female faces and art words. There is debate about exactly what this test measures, but it has proved to be a revealing measure of attitudes in a wide array of research areas.
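
At its core the comparison is a simple one between response times. The sketch below is a deliberately simplified illustration with made-up numbers (the published test uses a more elaborate scoring procedure): it just contrasts average response times when the key pairing matches the stereotype with when it reverses it.

```java
// Simplified, illustrative IAT-style comparison with hypothetical data.
public class IatSketch {

    static double mean(double[] responseTimes) {
        double sum = 0;
        for (double t : responseTimes) {
            sum += t;
        }
        return sum / responseTimes.length;
    }

    public static void main(String[] args) {
        // Trials where "female face" and "science word" share a key.
        double[] stereotypeInconsistentMs = {812, 790, 845, 901};
        // Trials where "female face" and "art word" share a key.
        double[] stereotypeConsistentMs = {655, 640, 702, 688};

        double slowdown = mean(stereotypeInconsistentMs) - mean(stereotypeConsistentMs);

        // A positive difference means the stereotype-inconsistent pairing was
        // answered more slowly, which the test reads as an implicit association.
        System.out.printf("Mean slowdown: %.0f ms%n", slowdown);
    }
}
```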

The researchers then tried to counter these biases by requiring the participants to make associations that reversed the stereotypes. For example, participants might be asked to identify only female faces that were paired with science words. These new associations were “tagged” in their memory by playing a particular sound when participants correctly identified the counterexamples. Another IAT showed weaker implicit biases after the interventions.

The experiment in pictures. P Huey/Science

But showing an immediate effect of an intervention is not very useful if the benefits are short-lived. Here’s where the study got really interesting. Participants were asked to have a nap in the lab, while electrodes recorded their brain activity. When deep sleep was observed, one of the sound cues from the association test was repeatedly played.

The idea here is that the sound can reactivate the memories of the recent events and facilitate their consolidation. In effect, the researchers have found a way of picking out particular memories and asking the brain to give them special treatment during consolidation. Similar replay effects in sleep have been found by this group and others using both sounds and odours, and curiously the cueing effect of the sound is more effective during sleep than when people are awake. In this case, the replay was again effective: bias as measured by the IAT after sleep for the cued intervention was less extreme than the bias for the uncued intervention.

How far from clinical practice?

There are of course many more questions one might ask about this type of research. No-one is suggesting that biases developed over many years are going to be eliminated using a short intervention and then giving the natural consolidation process a helping hand. For a start, it is unclear how long such replay effects might last. The research included a test of implicit bias one week after the intervention, but although there was some evidence that the sounds did have a benefit at that point, the evidence was relatively weak.

Another key question is whether training on positive associations and then testing using the IAT is a form of teaching to the test. It would be really useful to know if such bias effects could lead to altered explicit attitudes – those that we are conscious of – and real behaviour change. A recent large-scale study of racial bias interventions showed clear benefits on the IAT but no change in explicit attitudes. However that was when tested straight after the intervention.

The intriguing possibility that the current study raises is that consolidation may lead to more generalised benefits. During sleep, the storage of recent memories spreads to different parts of the brain, and this systems-level consolidation may change the nature of the memory. Sleep has been shown to promote a shift from implicit to explicit knowledge, and work in our lab has found that sleep may lead to the integration of new memories with existing knowledge. Possibly the shift in implicit attitudes is just the starting point for a chain of consolidation processes that can lead to improved explicit representations of gender and racial stereotypes, and even changes in actions or verbal behaviour.

One of the most fascinating aspects of the current study is that it enhances our understanding of the neural mechanisms involved in memory formation and consolidation. It also offers an intriguing new take on the way in which prejudices and stereotypes form, and how they might be malleable. The hope for the future is that our understanding of prejudice and bias may be further benefited by a more unified understanding of these two areas.

However, for this to one day work as a reliable treatment for racism, sexism or other bad habits, we need to know much more about the longevity and generality of changes in implicit attitudes.

Return of the 'snooper's charter' reflects a worldwide move towards greater surveillance

"I'm looking forward to the day all this needle-hunting is computerised, to be honest." Jean-François Millet

Returned to government with a majority and free of their coalition partners, the UK Conservative Party presses on with its signature policies, including curbs on immigration and banning legal highs – and a renewed effort to pass a “snooper’s charter” bill of increased surveillance powers.

As the Communications Data Bill, the proposed extension of the powers of UK security and intelligence services to track people’s use of the web and social media has already been repeatedly introduced to and rejected by parliament. Now under the title of the Investigatory Powers Bill, the snooper’s charter has returned again.

While there is little detail yet – perhaps because the government expects negotiation due to its small majority – it's clear the bill goes beyond the Conservatives' manifesto pledge to "maintain the ability of the authorities to intercept the content of suspects' communications". It not only enshrines snooping powers but also allows bulk surveillance of content, not just metadata, with a warrant. It also promises, ominously, to "address ongoing capability gaps".

It’s important to note that charters for snoopers do not enjoy a consensus among Tories. The party has always housed a libertarian wing which coexists uneasily with the more authoritarian element. However the party’s libertarians may concentrate their fire on the pledge to scrap the Human Rights Act and so the bill may find an easier ride as a quid pro quo. Having said that, Tory MP David Davis – who has fallen out with party chiefs in the past over privacy and surveillance issues – grumbled that the UK was moving in a different direction from its chief ally the US.

Davis' intervention and the forthcoming report on bulk surveillance from the UK's Independent Reviewer of Terrorism Legislation both pose the question of whether the UK is in or out of step with its international peers.

Global watchfulness on the rise

The Germans take privacy extremely seriously, and their Federal Intelligence Service the BND has reportedly limited the internet surveillance data it shares with the US National Security Agency. While this doesn’t affect telephone data, the BND has requested an official explanation for the need for internet data, which the Americans have refused to provide. This drastic step has been driven by public opinion, following revelations of BND spying on its own citizens while abroad.

Meanwhile in Brazil, where US spying on President Dilma Rousseff played very badly, a new internet freedom law has been presented as a digital Magna Carta. However, as various nations try to find a balance between security and liberty, Germany and Brazil are outliers; others still snoop to conquer.

The French, reeling from January’s Charlie Hebdo attacks, have passed a bill through their lower house which removes the need for a judge’s approval for intrusive surveillance. Although the bill predates the atrocity, it probably owes its overwhelming majority to it.

In Canada, the Telecommunications Transparency Project has just released a report claiming that its spooks’ telecoms surveillance is conducted without transparency or effective governance procedures, and that they have tried to insert back doors into encryption.

In Australia, legislation passed last year gave sweeping powers to monitor any device “connected” to a particular device with just a single warrant – of course, through the internet all devices are “connected”. The head of New Zealand’s Security Intelligence Service this month mused wistfully that she would rather like the same powers as her antipodean colleagues.

So the general direction of travel among technologically advanced democracies seems based on the belief that finding the needle in a haystack is made easier by maximising the amount of hay gathered. But just how useful this approach is remains unclear to say the least. "Fast bind, fast find," Shylock says to Jessica – but he still lost Jessica that night. Are there any prospects of putting a brake on this rush to pry?

No sign of a backlash

Outside Germany, there is little sign of public outrage. Research by Sören Preibusch found that though Edward Snowden’s revelations prompted an increased use of privacy-enhancing technologies and searches for privacy-related topics, behaviour soon reverted to the norm.

Davis’ remark about the reactions in the US provides another clue, however. While Congress wrestles with the renewal of the 2001 Patriot Act, rushed into law following the attacks of September 11 2001, a US court agreed with the American Civil Liberties Union that the NSA’s bulk data collection is not legal. In Europe, meanwhile, the Court of Justice of the European Union has been increasingly aggressive in defending data protection regulations, culminating in the “right to be forgotten” judgement against Google in 2014. In its current mood, it may start examining the surveillance issue too.

Counter-intuitively, it may be that while public opinion is neutral and politics pushes ever further in the direction of surveillance and the greater security that it is assumed this brings, it is unelected judges interpreting analogue laws bent to use in a digitally networked world that are most likely to apply the brakes.


Opportunity knocks for the Tories to boost gender equality in science

Inspiring role models can help more girls consider a career in science. woodleywonderworks/Flickr, CC BY-SA

It's no secret that there is a lack of women in science-related careers. And it's bad for the economy. While the Conservatives launched some good initiatives to address this problem in the last coalition government, their policies were disjointed and did not result in any significant progress. The party should now grasp the opportunity to tackle the problem properly – linking policies on education, career progression and childcare.

The UK punches above its weight when it comes to science. Despite this, we are missing out on a huge amount of talent, as 50% of the population is heavily under-represented in the discipline. The statistics are horrific: only 13% of all science, technology, engineering and mathematics (STEM) jobs in the UK are occupied by women despite equal gender representations at A level and undergraduate level for many STEM disciplines.

Across the whole of academia, women occupy only 17.5% of the top academic positions in the UK, which is below the average proportion for the EU. Several universities are falling well short of that already low benchmark. The situation is even worse in the UK’s natural sciences and engineering and technology, where only 7-9% of professors are women.

Recent research shows that a diverse scientific workforce is more creative and better performing than a homogeneous one. Such diverse organisations perform better financially, recruit from a wider talent pool, suffer lower staff turnover and increase creativity and problem solving capability.

Linking policies

There are three key areas that the next government will need to tackle to make progress on this.

Arguably the most important one is education. As highlighted by the former Conservative science minister Greg Clark, bringing the next generation on board is a key priority for UK science to prosper in the future. However, what the Conservatives missed is that we need to boost support for girls not only to consider an education in STEM subjects, but also to persist in the pursuit of a career.

One of the most important factors here is the low level of confidence among girls when it comes to science and maths, which has been recorded in a number of studies. Targeted action in schools is therefore needed to provide girls with inspirational role models and to boost their confidence, which is especially important in mixed gender schools.

Another crucial area is career progression. While some women may leave science for perfectly good reasons, there is no doubt that others leave because they don’t feel valued and think they’re not good enough. Research has shown that universities presented with two equally good CVs – one from a man and one from a woman – were more likely to want to hire the man. Most scientists are horrified to learn of their own personal bias; raising awareness of unconscious bias and providing training to employers and managers is a quick-fix way to help scientists get over such prejudice.

Marie Curie made it - against the odds. Tekniska museet/Flickr, CC BY-SA

The third area where the next government can have a big impact is in providing better support for scientists who take career breaks to accommodate caring responsibilities such as parental leave. Scientific careers are in many ways more accommodating of personal commitments than other demanding jobs. The next government should embrace this by supporting more affordable and on-site childcare for scientists. More funding opportunities should also be provided for parents returning to science after taking a few years out to raise a family.

But why stop there? The fact remains that women are still far more likely to take parental leave than men, despite the fact that many men would appreciate spending more time with their children. To really boost gender equality across all careers, the government should put policies in place to encourage more men to share parental leave.

Obstacles to success

The next government will need to work with universities, funding agencies and research institutes to ensure women are better supported through their career path. But getting to grips with the problem will not be easy. While we know that role models, mentoring, and personal development programmes all have positive impacts on women’s careers in science – particularly on junior women – implementation will not be straightforward.

For example, the extra demand that mentoring puts on the diminishing number of women that are senior scientists earns them no recognition in the established programme for assessing university performance (Research Excellence Framework). In short – if academics invest in mentoring the next generation, their research “credentials” suffer. But the new government has the power to shift this imbalance.

A subtle re-emphasis towards valuing mentorship and investment in the scientific workforce will help promote a more positive environment for all. One step in this direction is the implementation of the Athena SWAN awards – essentially badges of equality for university departments and research institutes. These awards encourage departments to set new standards for themselves to achieve in their promotion and support of equality and diversity. Mentoring is a key part of this. But we need an official way of recognising the contributions of those who invest in mentorship.

Perhaps the biggest challenge of all is overcoming the stereotypes and prejudices that are embedded in our culture and put women off science. The sooner we realise that women make equally good scientists as men, and that men make equally good parents as women, the easier it will be to change things.

While such cultural change can take time, having the right policies in place can certainly speed things up. The next government has the power to prevent financial and intellectual loss from the UK’s scientific community, but to achieve this they will need to properly connect policies on science, education and wider societal and welfare issues.


The dating jungle: how men and women see each other when online dating

What we may imagine isn't necessarily the truth. Dating by Shutterstock

In the world of online dating, nothing is as it seems. But that doesn’t stop many of us from leaping to the wrong conclusions about people. A recent paper presented at the Annual Conference of the International Communication Association and reported on in the press suggested that when evaluating photographs from online dating profiles, men and women judge enhanced and un-enhanced photos somewhat differently.

Enhanced photos – those in which a person has used makeup, hair styling, filters or post-editing – were rated by both men and women as being more attractive. But while women rated men in these photos as more trustworthy than in ordinary photos, the opposite was true for men: they rated women in enhanced photos as less trustworthy.

Attractive man: happy, successful. Trust by Shutterstock

One theory posits that “what is beautiful is good”, which means people tend to attribute other positive traits to attractive people. For example, we tend to think that attractive people are also happier and more successful in their careers. This appears to be the case with the attractiveness and trustworthiness ratings made by women, but not by men.

In general, when evaluating potential romantic partners, men and women similarly respond that they want a kind, trustworthy, loyal, and honest partner. Men and women, however, diverge when it comes to some other traits such as resource acquisition (the ability to obtain and provide resources, typically financial) and physical attractiveness.

According to evolutionary theory, men who have cheap, disposable gametes can maximise their reproductive success by pursuing multiple partners. Women, on the other hand, have to invest much more time in the gestation and rearing of offspring. As a consequence of our biology, the theory goes, women seek loyal partners who can provide resources for them and the potential child. Men, however, value physical attractiveness in a female because good looks (for example, facial symmetry or youthfulness) are the manifestation of healthy genes and serve as signs of fertility.

This added emphasis on the value of physical attractiveness in the eyes of men may explain why they would put less trust in the women in the enhanced photos. Because attractiveness is important but is masked in enhanced photographs, men ultimately have less desire to date those women. In the study, ratings of attractiveness predicted desire to date, but perceived trustworthiness was also a significant predictor.

Attractive woman: untrustworthy? Dating by Shutterstock

Evolutionary motivations are unconscious and operate without our explicit awareness. Despite social norms and the availability of contraceptives, evolutionary theorists believe that innate, instinctual drives to reproduce still govern our behaviour (though others believe this to be too simplistic).

The online dating game

Today, more couples are meeting online than ever before. Dating sites provide someone seeking a partner with a pool of available options. When completing a profile on an online dating site, people want to put their best face forward, but still accurately portray their true selves. It becomes a battle between one’s ideal self and one’s actual self. As a result, when clicking through online profiles, people also expect to be deceived to some degree.

Considering research related to evaluating potential partners, it seems we don't always know what we want either. People often enter a dating site with some thoughts about the kind of significant other they are seeking, but research shows that people are not actually very accurate when it comes to attraction. In one study, after recording the traits of their ideal partners, speed-daters agreed to go on dates with people who were very much unlike the ideal partner they had described.

In another study, researchers asked people to describe an ideal partner and then paired them with either an ideal person (matching the description provided) or a non-ideal person (who did not match the description provided by the participant). After viewing a written profile of a non-ideal match, few participants said they would be interested in dating that person. However, after meeting their match, those paired with non-ideal partners were as interested in dating their partner as those paired with ideal partners. Overall, people did not know they could be attracted to these originally non-ideal people.

Online dating is successful for many individuals seeking love. While research has shown that people deceive others in their profiles, perceived deception can be negatively received. People can deceive others by misrepresenting their physical appearance or their personal narrative. There are those who struggle with the image of themselves they wish to portray, while others are trying to sort through the lies.

And then there are those who view others' profiles thinking they know what they want, but in reality are attracted to someone quite different. So instead of judging all those books by their covers, it would probably be best for online daters to schedule some dates to meet potential partners in person. It could turn out to be an unexpected surprise.


Most people want it, but the UK isn't ready to legalise assisted dying

Demonstration in favour of legalising assisted dying in London, November 2014. David Holt, CC BY-SA

In the same week that the UK press reported the death of Jeffrey Spector, who travelled to Switzerland to die rather than face a life of pain and paralysis, the Scottish parliament rejected the general principles of the Assisted Suicide (Scotland) Bill by 82 votes to 36.

The bill sought to decriminalise assistance in the suicides of registered medical patients in Scotland aged 16 years and above with a terminal or life-shortening illness or progressive condition who experienced an unacceptable quality of life without prospect of improvement. It set out a complex procedure that lawful assisted suicides should follow.

Patrick Harvie MSP, who took charge of the Assisted Suicide (Scotland) Bill following the death of Margo Macdonald MSP, has pledged to continue the campaign.

Spector, a 54-year-old Lancastrian with an inoperable spinal tumour, had received assistance to end his own life at the Swiss Dignitas clinic.

Spector, who was accompanied in his final moments by his family, stated that the law prohibiting assisted suicide in England and Wales had pushed him to end his life earlier than he would otherwise have wished. In an interview with reporters, quoted in The Independent, he said:

I don’t want to take the chance of very high-risk surgery and find myself paralysed … If the law was changed then what difference if I had an operation? I could do it after. Rather than go late, I am jumping the gun.

Meanwhile, Lord Falconer has announced his intention to reintroduce an Assisting Dying Bill for England and Wales into the House of Lords in Westminster.

His Assisted Dying Bill, which would have permitted adult residents whose terminal illness was likely to cause death within six months to request lethal medication from doctors if a specific procedure were followed, ran out of time in the most recent parliamentary session.

Public support

While recent independent polls (commissioned by organisations in favour of permitting assisted suicide) show very high levels of public support for legalising some form of assisted suicide in Scotland (69% in favour) and Britain as a whole (82% in favour), the prospects for a change in the law are grim, particularly in Scotland.

While support for assisted suicide has more than doubled in the Scottish parliament in the four years since Margo Macdonald’s End of Life Assistance (Scotland) Bill, there still needs to be a considerable shift in political will before a future bill can succeed.

The rejection of Patrick Harvie's bill on principle shows that even a measure whose drafting and purpose are not criticised for "significant flaws" is unlikely to become law.

Things may look rosier in Westminster, since the recent Assisted Dying Bill passed the second reading stage at which the principle of a bill is debated and usually put to a vote. However, there was no vote on the principle of the bill at this stage, because supporters and opponents of the bill agreed that the issue deserved further debate and line-by-line scrutiny at the committee stage. So the fact that the Assisted Dying Bill made it to committee does not in this case show that peers are favourable to the legalisation of assisted suicide.

It is also very easy to kill legislation in committee. Parliament sets aside very little time for scrutiny of legislation that is not part of the government’s programme – such as Lord Falconer’s bill. If opponents table more amendments than there is time available to discuss them, a bill will fail. This is exactly what happened to the Assisted Dying Bill; few of the 175 tabled amendments were discussed over two days of debate. After the committee stage, there are two further stages (report and third reading), which also present opportunities to debate or amend a bill out of existence.

Even if an assisted suicide bill could be agreed in the House of Lords, it would then have to survive a near identical legislative process in the House of Commons. Let’s not forget that MPs, unlike peers, do not have the luxury of being unelected and may be nervous about supporting legal change on a controversial moral issue in the face of supremely well-organised opposition.

Moral case

Supporters of assisted suicide need to convince politicians and the public that legalisation will not endanger the lives of “vulnerable” people. The empirical evidence from jurisdictions where assisted dying is lawful can help show this. The challenge is to communicate key findings from this complex and incomplete data set in a political moment.

Tactically, it may be desirable to talk less about autonomy and more about equality. Individuals should be able to choose assisted suicide not because choice has supreme value, but because respecting others’ choices on how to live and die respects them as equals.

People who seek assisted suicide and the vulnerable who worry about the impact of assisted suicide want the same thing: for their life plans to be recognised as having equal moral worth.

Supporters of assisted suicide should take note that in the Tony Nicklinson case, the UK’s Supreme Court dropped a strong hint that restricting suicide assistance to the terminally ill may fail to show due respect for all individuals' right to private life as protected by article 8 of the European Convention on Human Rights. Supporters may therefore need to reconsider who would be eligible for an assisted death in their proposals for law reform.


Typo that caused air traffic control failure shows we need a better approach to programming

The higher they are, the further they have to fall. Ramil Sagum, CC BY

The causes of the National Air Traffic Services (NATS) flight control centre system failure in December 2014 that affected 65,000 passengers directly and up to 230,000 indirectly have been revealed in a recently published report.

The final report from the UK Civil Aviation Authority's Independent Inquiry Panel set up after the incident examines the cause of and response to the outage at the Swanwick control centre in Hampshire, one of two sites controlling UK airspace (the other is at Prestwick in Scotland). Safety is key, said the report. I agree. And safety was not compromised in any way. Bravo!

"Independent" is a relative term; after all, the panel includes Joseph Sultana, director of Eurocontrol's Network Management, and NATS's operations chief Martin Rolfe, as well as UK Civil Aviation Authority board member and director of safety and airspace regulation Mark Swan – all of whom have skin in the game. (Full disclosure: a panel member, Professor John McDermid, is a valued colleague of many years.)

For a thorough analysis, however, it’s essential to involve people who know the systems intimately. Anyone who has dealt with software knows that often the fastest way to find a fault in a computer program is to ask the programmer who wrote the code. And the NATS analysis and recovery involved the programmers too, Lockheed Martin engineers who built the system in the 1990s. This is one of two factors behind the “rapid fault detection and system restoration” during the incident on December 12.

The report investigates three things: the system outage, its cause and how the system was restored; NATS' operational response to the outage; and what this says about how well the findings and recommendations following the last major incident, a year earlier, had been implemented. I look only at the first here, but arguably the other two are more important in the end.

Cause and effect

In the NATS control system, real-time traffic data is fed into controller workstations by a system component called the System Flight Server (SFS). The SFS architecture is what is called “hot back-up”. There are two identical components (called “channels”) computing the same data at the same time. Only one is “live” in the running system. If this channel falls over, then the identical back-up becomes the live channel, so the first can be restored to operation while offline.

This works quite well to cope with hardware failures, but is no protection against faults in the system logic, as that logic is running identically on both channels. If a certain input causes the first channel to fall over, then it will cause the second to fall over in exactly the same way. This is what happened in December.
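
A minimal sketch, with entirely hypothetical names and data (this is not the NATS code), shows why the hot back-up cannot mask such a fault: both channels execute the same logic on the same input, so whatever trips the live channel trips the standby in exactly the same way.

```java
import java.util.List;

// Hypothetical sketch: duplicated hardware, identical (faulty) logic.
public class HotBackupSketch {

    // The same processing logic is deployed on both channels.
    static int countFlightPlans(List<String> flightPlans) {
        for (String plan : flightPlans) {
            if (plan.isEmpty()) {
                // A latent logic fault: an unexpected input is fatal.
                throw new IllegalStateException("malformed flight plan");
            }
        }
        return flightPlans.size();
    }

    public static void main(String[] args) {
        List<String> sameInput = List.of("BA123", "", "EZY456"); // fed to both channels

        for (String channel : List.of("live", "back-up")) {
            try {
                countFlightPlans(sameInput);
                System.out.println(channel + " channel OK");
            } catch (IllegalStateException e) {
                // Failing over does not help: the standby falls over identically.
                System.out.println(channel + " channel fell over: " + e.getMessage());
            }
        }
    }
}
```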

The report describes a "latent software fault" in code written in the 1990s. Workstations in active use by controllers and supervisors, either for control or observation, are called Atomic Functions (AF). Their number should be limited by the SFS software to a maximum of 193, but in fact the limit was set to 151, and the SFS fell over when the count reached 153.
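
The report's description suggests a fault of roughly the following shape. This is a hedged, much-simplified sketch, not the actual SFS code: the documented capacity is one number, the constant the check was coded against is a smaller one, so a perfectly legitimate configuration brings the server down.

```java
// Simplified, hypothetical sketch of a latent capacity-check fault.
public class CapacityCheckSketch {

    static final int DOCUMENTED_MAX_ATOMIC_FUNCTIONS = 193; // what the spec allows
    static final int CODED_LIMIT = 151;                     // what the code checks

    static void admitWorkstations(int atomicFunctionsInUse) {
        // The comparison should have used DOCUMENTED_MAX_ATOMIC_FUNCTIONS.
        if (atomicFunctionsInUse > CODED_LIMIT) {
            throw new IllegalStateException(
                "too many atomic functions: " + atomicFunctionsInUse);
        }
    }

    public static void main(String[] args) {
        admitWorkstations(150); // fine
        try {
            admitWorkstations(153); // within the documented 193, but the check trips
        } catch (IllegalStateException e) {
            System.out.println("SFS fell over: " + e.getMessage());
        }
    }
}
```

A simple check that the coded limit matches the documented maximum, or the kind of static analysis discussed below, would flag such a mismatch long before it bit in live operation.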

Deja vu

My first thought is that we’ve heard this before. As far back as 1997-98, evidence given to the House of Commons Select Committee on Environment, Transport and Regional Affairs reported that the NATS system, then under development, was having trouble scaling from 30 to 100 active workstations. But this recent event was much simpler than that – it’s the kind of fault you see often in first-year university programming classes and which students are trained to avoid through inspection and testing.

There are technical methods known as static analysis to avoid such faults – and static analysis of the 1990s was well able to detect them. But such thorough analysis may have been seen as an impossible task: it was reported in 1995 that the system exhibited 21,000 faults, of which 95% had been eliminated by 1997 (hurray!) – leaving 1,050 which hadn’t been (boo!). Not counting, of course, the fault which triggered the December outage. (I wonder how many more are lurking?)

How could an error not tolerated in undergraduate-level programming homework enter software developed by professionals over a decade at a cost approaching a billion pounds?

Changing methods

Practice has changed since the 1990s. Static analysis of code in critical systems is now regarded as necessary. So-called Correct by Construction (CbyC) techniques, in which how software works is defined in a specification and then developed through a process of refinement in such a way as demonstrably to avoid common sources of error, have proved their worth. NATS nowadays successfully uses key systems developed along CbyC principles, such as iFacts.

But change comes only gradually, and old habits are hard to leave behind. For example, Apple’s “goto fail” bug which surfaced in 2014 in many of its systems rendered void an internet security function essential for trust online – validating website authentication certificates. Yet it was caused by a simple syntax error – essentially a programming typo – that could and should have been caught by the most rudimentary static analysis.

Unlike the public enquiry and report undertaken by NATS, Apple has said little about either how the problem came about or the lessons learned – and the same goes for the developers of many other software packages that lie at the heart of the global computerised economy.


Is technology making your attention span shorter than a goldfish's?

Now then, where was I? Shutterstock

If you've ever found it hard to concentrate on one thing without stopping to check your emails or post to social media, you're not alone. The average human attention span – how long we can concentrate effectively on a single task – was recently reported by Microsoft to have dropped below the level attributed to goldfish.

This certainly plays to our fears about what the daily flood of social media and emails is doing to us, and to younger generations in particular. However, these figures may be misleading. For one thing, the report contains no real detail for either the goldfish or human attention span beyond the numbers on the web page Microsoft pulled them from.

More importantly, our minds are adaptive systems, constantly reorganising and refocusing our mental faculties to suit the environment. So the idea that our ability to pay attention may be changing in response to the modern, online world is neither surprising nor anything to necessarily worry about. However, there is an argument that we must take care to keep control of our attention in a world increasingly filled with distractions.

Attention is a phenomenally awkward thing to study and the manner in which it is tested enormously impacts on the results. This is one of the reasons attention is one of the most enduring and active research areas in psychology: more than 1,200 papers have been published on it just in the past 10 years.

But assuming the numbers in the report reflect some research – no matter what the method behind the data was – it’s still not reasonable to apply them to any situation other than the one in which they were generated. Applying them to all aspects of our lives, as the report implies we should do, is a huge stretch.

Published scientific research looking at the effect of modern technology on our cognitive abilities does show an effect on attention. But contrary to popular opinion, it shows attention spans have actually improved. For example, habitual video gamers have demonstrated better attentional abilities than non-players – and non-players who started playing video-games began to show the same improvements.

Brain training Shutterstock

There’s no reason why the modern world should necessarily diminish our mental faculties and no reason to fear them changing. Our cognitive abilities are constantly changing and even naturally vary across the day.

One of our projects at the Open University is currently collecting data on these daily cycles. We’ve developed a smartphone app that includes a measure of attention alongside four other cognitive tasks. By using the app across the day, you can participate in this research and chart these natural changes in your own performance. This can enable you to better plan your day and finally understand if you actually are a morning or evening person.

However, as interesting as possible variations in cognitive abilities are, a more pertinent question may be what or who is driving the changes in our environment. Happily, this question is much easier to answer. The Microsoft study is aimed at advertisers, not the general public, and calls on companies to use “more creative, and increasingly immersive ways to market themselves”.

The increasing number of distractions in our world is partly due to the new and ever-evolving ways in which advertisers can put their message in front of us – and the “increasingly immersive” techniques they’ll use once the message is there. Realising this helps us understand that our attention is a resource being fought over by advertisers.

The online world is increasingly made up of spaces where advertisers attempt to tempt us with their products. Similarly, public spaces are increasingly full of adverts that can play sound and video to further capture our attention. Escaping this advertising battleground is becoming one of the luxuries of the modern world. It's why paid-for executive lounges at airports are free from noisy, garish adverts and why the removal of adverts is a key selling point for paid-for apps.

Our mental abilities are changing, as they always have done in order to best serve our success in changing environments. But now, more than ever, our environment is made by those who either want our attention or want to sell access to it. It will certainly be interesting to see how our cognitive abilities adapt to meet this new challenge. However, as individuals we too must start valuing our attention as much as the advertisers do.


Your smartphone could be good for your mental health

Self-help Shutterstock

When it comes to mental health, technologies such as smartphones and social media networks are almost always discussed in terms of the dangers they pose. Alongside concerns expressed in the media, some experts believe that technology has a role in the rising rates of mental health problems. However, there is also evidence to suggest your smartphone could actually be good for your mental health.

The brain is a sensitive organ that reacts and adapts to stimulation. Researchers have looked into smartphone usage and the effects on the day-to-day plasticity of the human brain. They found that the finger movements used to control smartphones are enough to alter brain activity.

This ability of technology to change our brains has led to questions over whether screen-based activity is related to rising incidence of such conditions as attention deficit hyperactivity disorder (ADHD) or an increased risk of depression and insomnia. Technology has also been blamed for cyber-bullying, isolation, communication issues and reduced self-esteem, all of which can potentially lead to mental ill health.

Positive potential

However, focusing only on the negative experiences of some people ignores technology's potential as both a tool for treating mental health issues and for improving the quality of people's lives and promoting emotional well-being. For example, there are programmes for depression and phobias, designed to help lift people's moods, get them active and help them to overcome their difficulties. The programmes use guided self-help based on cognitive behavioural principles and have proven to be very effective.

Computer games have been used to provide therapy for adolescents. Because computer games are fun and can be used anonymously, they offer an alternative to traditional therapy. For example, a fantasy-themed role-playing game called SPARX has been found to be as effective as face-to-face therapy in clinical trials.

Researcher David Haniff has created apps aimed at lifting the mood of people suffering from depression by showing them pleasing pictures, video and audio, for example of their families. He has also developed a computer game that helps a person examine the triggers of their depression. Meanwhile, smartphone apps that play subliminal relaxing music in order to distract from the noise and worries of everyday living have been proven to be beneficial in reducing stress and anxiety.

Doctor on call Shutterstock

Technology can also provide greater access to mental health professionals through email, online chats or video calls. This enables individuals to work remotely and at their own pace, which can be particularly useful for those who are unable to regularly meet with a healthcare professional. Such an experience can be both empowering and enabling, encouraging the individual to take responsibility for their own mental well-being.

This kind of "telemedicine" has already found a role in child and adolescent mental health services in the form of online chats in family therapy, which can help to ensure each person has a chance to have their turn in the session. From our own practice experience, we have found young people who struggle to communicate during face-to-face sessions can be encouraged to text their therapist as an alternative way of expressing themselves, without the pressure of sitting opposite someone and making eye contact.

Conditions such as social anxiety can stop people seeking treatment in the first place. The use of telemedicine in this instance means people can begin combating their illness from the safety of their own home. It is also a good way to remind people about their appointments, thus improving attendance and reducing drop-out rates.

New routes to treatment

The internet in general can provide a gateway to asking for help, particularly for those who feel that stigma is attached to mental illness. Accessing information and watching videos about people with mental health issues, including high-profile personalities, helps to normalise conditions that are not otherwise talked about.

People can use technology to self-educate and improve access to low-intensity mental health services by providing chat rooms, blogs and information about mental health conditions. This can help to combat long waiting times by providing support earlier and improving the effectiveness of treatment.

More generally, access to the internet and use of media devices can also be a lifeline to the outside world. They allow people to connect in ways that were not previously possible, encouraging communication. With improved social networks, people may be less likely to need professional help, thus reducing the burden on overstretched services.

Research into the potential dangers of technology and its effect on the brain is important for understanding the causes of modern mental health issues. But technology also creates an opportunity for innovative ways to promote engagement and well-being for those with mental health problems. Let's embrace that.


Found: our 3m-year-old forebear who lived alongside 'Lucy'

Scientists get their teeth into A. deyiremeda fossils. Credit: Laura Dempsey

They call it Australopithecus deyiremeda. The name comes from a language spoken in the Afar region of Ethiopia and means “close relative”. This is a brand new and previously unsuspected species – discovered in Ethiopia – that lived at the same time as one of our potential ancestors: “Lucy” (A. afarensis).

There are few things more exciting to a palaeontologist than the discovery of a new species. Work will now begin to try to figure out exactly how this hominin relates to our own species. The discovery was made from a number of fossils, dating back to 3.5m-3.3m years ago. They comprise part of an upper jaw bone with some of the teeth as well as most of a lower jaw bone with a few of its teeth. There are also a couple of other fragments of jaws and teeth.

Walking or crawling?

The fossils raise many questions that are hard to answer. For instance we can’t know whether the hominin actually walked upright, as there are no bones apart from the skull bones available. The researchers, who report their findings in the journal Nature, have previously found fragments of a foot bone in the same area which dated to 3.4m years ago. The owner of this foot may not yet have completely left the trees.

As there are specimens of Lucy’s species A. afarensis not far away, it is likely that the two were contemporary. We know that Lucy and her kind were bipeds as we have their foot bones. There is also an amazingly preserved footprint trail in Laetoli, Tanzania.

Lucy stood tall at 1.1m. Matt Celeskey/Flickr, CC BY-SA

The foot of A. deyiremeda – if that earlier discovery of a partial foot does turn out to belong to this species – is quite different from Lucy’s.

There are other Australopiths around at the same time too. In Chad there is the enigmatic A. bahrelghazali, only a little older. In South Africa, there’s the Littlefoot skeleton from Sterkfontein, recently re-dated to 3.7m years ago. Some palaeoanthropologists want to interpret Littlefoot as a new species of Australopith A. prometheus, but others are reluctant to identify it as a new species yet.

But the case the discoverers of A. deyiremeda have put forward supports their fossils being a new species. There are numerous differences in the jaws and teeth between these remains and those of other species of Australopith. Further south in Kenya there is another hominin around at the same time, not only a different species, but one belonging to a wholly different genus – Kenyanthropus platyops. However A. deyiremeda is not like this either.

A palaeontologist’s puzzle

Exactly where A. deyiremeda fits in among our ancestors is, however, hard to know. We are Homo sapiens sapiens. Our genus, Homo, is the group we belong to along with our extinct cousins like Homo neanderthalensis and possible ancestors like Homo erectus. Our species is sapiens, meaning wise, and we add another sapiens on to our name to distinguish ourselves from the very earliest members of our species. But here is the thing: we are the only species in our genus – and, from an evolutionary perspective, that is not a healthy sign.

Up until today, the genus Australopithecus had six, maybe seven, species in it, depending on who you believe. Now that is an astonishingly successful genus as far as evolution goes. The oldest yet found is A. anamensis, which is more than 4m years old. The youngest is A. sediba, which is about 1.9m years old. That’s a span of more than two million years between these species. The reason so many species can emerge is that natural selection experiments with different adaptations and different ecological niches.

Upper jaw of A. deyiremeda Yohannes Haile-Selassie

The newly discovered A. deyiremeda comes from the earlier phase in Australopith evolution. Exactly how it relates to our own species is hard to know. However, many of the features of its jaws and teeth are seen in later hominins, particularly a group of flat-faced ape men called Paranthropus. These are not on our evolutionary line. The researchers also describe some similarities between A. deyiremeda and early Homo, but in the paper they also point out some important differences between the new discoveries and the earliest known member of our own genus, currently dated to 2.8m years old. So for the moment this is an open question.

But there is a last twist to the tale here. For a long time we believed that members of our own genus were the only tool makers in the hominin record. Now we know that’s not true. Recent reports have established that the oldest stone tools yet discovered date to 3.3m years ago – half a million years older than the earliest known member of genus Homo.

Tool making may even go back a little bit further. There are contested cut marks from stone tools on bones dated at 3.4m years ago at Dikika in Ethiopia. Guess which species are around at that time in East Africa? You guessed it: A. afarensis, K. platyops and A. deyiremeda. Up until today it was K. platyops that was the favoured candidate for this early tool maker, but today’s announcement of A. deyiremeda puts a new player in the game.

Thriving resale of dwindling IP addresses at last provides commercial reason to adopt IPv6

You can hold off for now, but IPv6, like change, is inevitable. IP by Grasko/shutterstock.com

The Internet Protocol (IP) has been phenomenally successful. From an experiment in the 1970s, it has evolved into an internet spanning the globe, connecting billions of users. IP underpins the enormous success that is the World Wide Web – and its ubiquity has led to the convergence of a wide range of technologies upon it, including digital phone calls made using Voice-over-IP (VoIP).

But IP isn’t just successful, it’s valuable too. IP requires computers attached to the network to have an IP address. The current version (IPv4) has a 32-bit addressing scheme, which provides a total of 2³² – around 4.3 billion – globally unique addresses. While that may seem like a lot, historically IP address space was given out in large blocks – in the early days up to 16m addresses at a time.
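
To put those numbers in context, here is a minimal back-of-the-envelope sketch (an illustration, not from the article) using Python’s standard ipaddress module; the 10.0.0.0/8 network is used purely as an example of an early, “Class A”-sized block of roughly 16m addresses.

```python
# A rough illustration of the scale of the IPv4 address space, using only the
# Python standard library. The 10.0.0.0/8 block is just an example of an
# early, "Class A"-sized allocation.
import ipaddress

total_ipv4 = 2 ** 32
print(f"Total IPv4 addresses: {total_ipv4:,}")  # 4,294,967,296 (about 4.3 billion)

legacy_block = ipaddress.ip_network("10.0.0.0/8")
print(f"Addresses in one /8 block: {legacy_block.num_addresses:,}")  # 16,777,216
```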

As available addresses disappear, their value grows, and a thriving market is developing around the resale of unused IP addresses. Recently the UK government’s Department for Work and Pensions sold unused addresses within the enormous space allocated to it to a Norwegian company, Altibox, for £600,000.

IPv4 addressing, a hierarchical address where each element is assigned as network or host bits. Indeterminate

Conserving a dwindling resource

The need to conserve IP addresses was realised as early as the 1990s. This led to the creation of a system of five regional internet registries (RIRs) under the global Internet Assigned Numbers Authority (IANA). The five RIRs requested blocks of 16m addresses at a time from IANA as needed, from which they would assign addresses for use by others. In February 2011 IANA allocated the last of its largest blocks, one to each RIR, and declared the IPv4 address space exhausted.

Allocated IPv4 address space over time, by RIR from http://ift.tt/196wgDb

Yet, reading this online as you are, clearly this has not brought about the end of the internet. But RIRs have become much more strict about assigning addresses. For example RIPE NCC, the European RIR, now provides at most only a block of around 1,000 addresses at a time. The development of network address translation (NAT) has hugely slowed the consumption of globally unique IP addresses, allowing an entire network of client computers, such as in your own home network, to connect to the internet while sharing a single globally unique IP. However, it’s clear a new approach is needed – and in the meantime a growing open market for IPv4 addresses is emerging.
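
As a rough sketch of how NAT conserves addresses (an illustration, not code from the article), the toy snippet below mimics a translation table: several hosts using private RFC 1918 addresses share one public address and are told apart only by their translated source ports. The addresses and port numbers are made-up examples.

```python
# Toy NAT table: many private hosts share one public IP address, so only a
# single globally unique address is consumed. Not a real NAT implementation.
import ipaddress

public_ip = ipaddress.ip_address("203.0.113.7")  # documentation-range example
lan_hosts = [ipaddress.ip_address(f"192.168.1.{i}") for i in (10, 11, 12)]

nat_table = {}      # (private address, private port) -> translated public port
next_port = 40000
for host in lan_hosts:
    assert host.is_private                 # RFC 1918 space is never routed globally
    nat_table[(host, 52000)] = next_port   # one hypothetical outbound connection each
    next_port += 1

for (src, sport), public_port in nat_table.items():
    print(f"{src}:{sport} appears on the internet as {public_ip}:{public_port}")
```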

Putting a value on IP

Microsoft made the first big purchase of IP addresses in March 2011, buying around 660,000 addresses from bankrupt Nortel for US$7.5m, or around US$11 each. This figure has remained fairly stable since, with one broker recently indicating variance from US$7 to US$13. These transfers are publicly viewable, at least under RIPE’s transfer policies in Europe.

We’re likely to see much more trading activity as the RIRs edge closer to exhausting their allocations – which could happen within weeks for the North American RIR, ARIN. Price fluctuations will depend on IPv4 demand, which in turn will depend on how quickly and painlessly its successor, IPv6, is deployed.

With 128-bit addressing, IPv6 can supply enough addresses for every networked device on the planet for the foreseeable future (around 340,000,000,000,000,000,000,000,000,000,000,000,000 – enough to give an address to every atom on the surface of the Earth many times over).
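
For a sense of that scale, the short sketch below (an illustration only) computes the size of the 128-bit address space and shows the compressed and fully expanded forms of an address taken from the reserved 2001:db8::/32 documentation prefix.

```python
# The scale of IPv6, and what a 128-bit address looks like when written down.
import ipaddress

total_ipv6 = 2 ** 128
print(f"Total IPv6 addresses: {total_ipv6:.3e}")  # roughly 3.403e+38

addr = ipaddress.ip_address("2001:db8::1")        # documentation-prefix example
print(addr.compressed)  # 2001:db8::1 – the shorthand most people would see
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001 – the full form
```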

IPv6 addressing, an enormous address space. Indeterminate

The IPv6 waiting game

The core IPv6 specification was published around 20 years ago, and today all major operating systems and routing platforms support it, yet without any urgency to move from IPv4, few have done so.

So what’s the hold-up? One problem is its incompatibility with IPv4: an IPv6-only device cannot communicate directly with an IPv4-only device; instead, a translation mechanism is required. Nor is there a deadline to drive the switch, as there was with the Y2K bug or the UK’s phased analogue-to-digital television switch-over.

The initial plan for global IPv6 deployment, as much as any plan existed, was to transition before IPv4 address space ran out. Instead, we’re now in the position of individual ISPs and organisations trying to work out how to implement IPv6 while sustaining their IPv4 operations.

Some larger content providers offer their services over both IPv4 and IPv6 – Google and Facebook, for example, have done so since World IPv6 Launch day in June 2012. Google’s public IPv6 stats show that around 6% of all its customer traffic is now IPv6, heading towards 10% within a year. Content delivery networks such as Akamai also offer IPv6, allowing their customers an easy way to enable IPv6 for their own services. Akamai’s own stats show it recently topped 1m hits per second via IPv6 worldwide. ISPs have also begun IPv6 roll-out, perhaps the biggest example being Comcast in the US.

Projected IPv4 address space run-out, by RIR from http://ift.tt/196wgDb

Inevitably, there will be a (perhaps very protracted) period of “dual stack” deployment, where both protocols are offered to cater for older devices. But this adds complexity and isn’t a long-term solution. Running IPv6-only networks, translating at the edge to an increasingly “legacy” IPv4 where necessary, is the most viable future model. And with providers such as Google, Facebook and Netflix already IPv6-enabled, a typical home network may already find that as much as 50% of its traffic could be carried natively over IPv6.
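
To illustrate what dual-stack behaviour looks like from a client’s point of view, here is a simplified sketch using Python’s standard socket module (with “example.com” as a placeholder host; real operating systems and browsers typically use the more refined “Happy Eyeballs” approach of racing connection attempts). It asks the resolver for both address families and prefers IPv6, falling back to IPv4 if the connection fails.

```python
# Simplified dual-stack client: try IPv6 addresses first, then fall back to IPv4.
import socket

def connect_dual_stack(host: str, port: int) -> socket.socket:
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # Prefer IPv6 results, keeping the resolver's ordering within each family.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_error = None
    for family, socktype, proto, _canonname, sockaddr in infos:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(sockaddr)
            return sock
        except OSError as err:
            last_error = err
    raise last_error if last_error else OSError("resolver returned no addresses")

conn = connect_dual_stack("example.com", 80)  # placeholder host for illustration
print("Connected over", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
conn.close()
```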

What difference will IPv6 make?

Simply put, IPv6 allows internet growth. It’s unfortunate that the general public will be blissfully unaware of it, despite standing to benefit from the new internet technologies and devices it will enable. No one should ever need to see or, heaven forbid, have to type in an IPv6 address. In that sense, IPv6 is set to become the unsung hero of the internet. With its vast globally unique address space, IPv6 allows every device on the internet to be directly addressable – no need for complexities such as NAT or limitations on running services from home computers.

For those in charge of networks or developing applications, IPv6 simplifies operation and design – reported to be a major driver for Comcast’s IPv6 deployment. Facebook has recently announced that its internal network traffic will soon be IPv6-only. These are big indicators of an IPv6 future. For application developers, the benefits are less readily realised until a significant portion of their customers can use IPv6, but the potential is there.

For the internet to meet the increasing demands of its connected users, and certainly for the much-touted “Internet of Things” of a multitude of internet-connected devices to be made possible, the move to IPv6 is essential.

How DNA is helping us fight back against pest invasions

Perfect swarm Juan Medina/EPA

They are the original globe trekkers. From spiders bunking along with humanity’s spread into south-eastern Asia, to sea squirts hopping on military craft returning after the Korean War, invasive species have enveloped the globe.

These species outcompete native ones for resources and also cause immense environmental damage, for example by eating native species and their young, or by introducing parasites and diseases.

Their largest impact, however, is economic. The estimated annual cost of invasive species to the UK and Ireland is £2 billion. This includes the cost of damage from all invasive animal and plant species to sectors such as tourism, business, human health and agriculture.

The cost of controlling invasive species is also huge. Eradication, if possible, may cost millions of pounds. This cost increases as the population becomes bigger, and a late-caught invasion can cost thousands of times more to control than one caught early on. Aside from some small technical differences, this pattern of escalating costs applies across all invasive species.

Environmentally, they pose a significant threat to global biodiversity by competing with other species and altering the environment, for example by blocking waterways or accelerating erosion.

Conventional monitoring techniques, such as checking and photographing the bottoms of recreational boats for potential invaders, are not robust enough to handle this threat. Many invasive species look similar to natives, which can confuse detection. Fortunately, a potent tool is available that lends itself well to the management of invasive species: analysis of their DNA.

Spot the difference? Anders Sandberg/flickr, CC BY-NC

Decoding genetics

In the 1970s Frederick Sanger devised a method of reading and sequencing DNA. Sanger sequencing allowed biologists to study the genes (the stretches of DNA coding for specific traits that are passed down through generations) of invasive species and piece together much information about their genomes (the complete sets of all their genes and DNA) and evolution.

Genetic techniques also enhance management of invasive species. They are essential for informed sustainable policy decisions.

For example, comparing genetic variation within and between populations allows biologists to understand how invading species spread, mix and compete with native species. This has given researchers a better understanding of the routes that invasive animals such as sea squirts, ladybirds and invertebrate pests took when colonising new areas.

Biologists can also use genetic techniques to catch invasions earlier by detecting animals' DNA in the environment from shed material such as skin or urine. This was demonstrated by detecting the DNA of an invasive bullfrog in French ponds much earlier than the invasion would otherwise have been noticed.

Foreign invaders John Flesher/AP/Press Association Images

Genomic leap forward

Over the past decade, genetics has gradually been overtaken by the study of genomics, which offers a much more comprehensive view of DNA because it involves sequencing the entire genome rather than just a small number of genes. Studying the genome enables us to analyse variability between invasive populations in greater detail and with greater sensitivity.

Since next-generation sequencing technology drastically reduced the price of DNA sequencing in the mid-2000s, scientists have been able to explore hundreds of thousands of regions of DNA, rather than the tens used in genetic studies. Genomics has also helped to create new tools to assist our study of invasive species.

Studying organisms' full genomes also introduces an extremely powerful method to study the evolutionary history of invasive species. It allows biologists to distinguish the neutral DNA changes that all organisms undergo but that don’t spread through a species, from positive changes that improve an organism’s chances of survival and reproduction and so do spread.

This positively-selected evolution drives the fast adaptation of invasive species. So by understanding the effects of positive evolution, we can predict how species might adapt in the future.

Genomics has yet to realise its potential in invasion biology, but invasion genetics is slowly progressing into invasion genomics. Both disciplines offer a cost-effective solution to the monitoring and management of invasive species. For example, a programme exists in the US for early detection of the invasive Asian Carp in the Great Lakes. This early detection, which involves sampling water to check for shed DNA material, will save significant sums in managing the invasive fish.

If more widely and effectively employed, genetic and genomic techniques have the potential to save both natural environments and the public purse from the environmental side-effects of globalisation.

Rare glimpse: satellites catch the birth of two volcanic islands

An island is being created during a volcanic eruption in 2011. Jamal Sholan/youtube

The birth of a volcanic island is a potent and beautiful reminder of our dynamic planet’s ability to make new land. Given the destruction we’ve seen following natural events like earthquakes and tsunamis in the past few years, stunning images of two islands forming in the southern Red Sea are most welcome.

Birth of an island captured by satellites. Jónsson et al., Nature Communications

The images have been published as part of a study in Nature Communications. It describes how the two new islands, formed during volcanic eruptions in 2011 and 2013 respectively, are now being steadily eroded back into the depths. And they erode quickly: one of the islands has lost 30% of its area in just two years. Superb images document the birth and growth of these new islands, as well as their changing shape as the Red Sea washes over them.

Ridges and rifts

Magma from an undersea eruption has a difficult journey from the sea floor to the surface to form a new volcanic island, as it is continually quenched by an endless supply of water. But that’s what happened when the two volcanic islands, dubbed Sholan and Jadid, formed in the remote Zubair archipelago, part of Yemen.

Video of the 2011 eruption in the Zubair islands, that formed Sholan island. Credit: Jamal Sholan

The southern Red Sea is not a part of the world that many people would recognise as being volcanically active, but it is part of an immense African rift system – a chain of cracks in the Earth’s crust more than 3,000km long. The southern Red Sea is a place where a new ocean is forming as the tectonic plates spread apart at about 6mm per year. Underneath the Red Sea is an embryonic mid-ocean ridge, an undersea range of mountains created by volcanic eruptions.

Mid-ocean ridge spreading is mimicked in the system that feeds the eruptions – long and linear magma-filled cracks called dykes. The researchers used satellite images and knowledge of ground deformation to understand the eruptions and their feeder systems. They discovered that the dykes were at least 10km in length whereas the islands are both less than 1km in diameter.

This is similar to what happens in other volcanic areas where spreading takes place, such as Iceland, where a long fissure may be active at the very start of an eruption, but as the eruption progresses the activity becomes focused around just a few vents. These features support the researchers’ claim that active spreading is taking place.

The birth of Sholan island. First image in 2010, last image in 2012. Credit: Jónsson et al., Nature Communications

Growing archipelago?

Another key finding of the research is that seismic swarms like those that accompanied the formation of these volcanic islands have been observed in the past, but without eruptions being witnessed (this is a remote area). The authors argue that these older seismic swarms were caused by dyke intrusions or submarine eruptions – either of which would suggest that this area is more volcanically active than previously thought.

This is corroborated by observations that the islands in the Zubair archipelago are all constructed of a type of fragmental volcanic rock that characterises the magma-water interactions which occur when volcanic islands are formed.

Hubris? The Zubair archipelago could grow considerably. Credit: Jónsson et al., Nature Communications

The value of this research is that by combining high-resolution optical imagery, satellite (InSAR) observations, and seismicity, the researchers have characterised the birth and development of two volcanic islands along a mid-ocean ridge system with unprecedented detail.

Perhaps the most exciting finding of the new research is that the birth of these islands suggests that the Zubair archipelago is undergoing active spreading and that further submarine and island-building eruptions are to be expected.

Brexit prospects for the UK digital market are none too rosy

Plugged into Europe, or UK unplugged? digital europe by silver tiger/shutterstock.com

There is a real prospect of Britain leaving the European Union following the proposed in-or-out referendum to be held by the end of 2017. This would have various repercussions, one of which might be that the UK would be shut out of the European Single Digital Market, a European Commission priority.

Our recent digital marketing research found major differences in the attitudes of business and students in different European countries toward the use of digital and social media marketing. If the UK leaves the EU, these differences are likely to widen.

Cutting cost and complexity

The EU Digital Agenda for Europe is a strategic initiative for the long-term prosperity of European member states, which attempts to reduce the challenges faced by the digital economy. For example, companies face a VAT compliance cost of €5,000 for each country in which they trade their digital products. Costs associated with legal compliance across various member states can reach €9,000, according to some estimates.

These are burdens for small companies, so the ambition of the Digital Agenda is to harmonise these differences and simplify cross-border trading. Even if the UK were outside this harmonisation process it would still benefit from a simpler European market with which to trade.

Staying outside would come with drawbacks, however: organisations trading within the market would be more likely to trust each other, thanks to a common understanding of tax and legal requirements, for example. Tax is also a major challenge – as can be seen in governments’ efforts to try (and generally fail) to collect tax from global giants such as Amazon, Google and Facebook (to be fair, Amazon is now at least trying to regularise its tax payments).

In any case, the UK might find companies are more interested in access to the bigger, European-wide markets and so set up shop in European capitals rather than London.

UK compared to its peers

The performance of EU member states is tracked as part of the Digital Agenda, using five indicators: connectivity, human capital, use of internet, integration of digital technology and digital public services.

UK progress is not bad, but it is still a long way from the levels found in Denmark and Sweden, for example. Human capital, or skilled labour, is one of the indicators where the UK is performing well – yet leaving the EU would reduce the movement of skilled labour. It’s worth noting that Norway – which isn’t part of the EU – maintains close links with the EU as a member of the European Free Trade Association. For example, citizens of Norway can work in the EU without needing a work permit – but this sort of arrangement would undermine one of the main reasons the UK wants to leave the EU.

In addition to the movement of skilled labour, the increasing reach of the internet means that organisations no longer need to be physically located in one country. New business models and ways of working mean that a flexible workforce can be found at the click of a button through crowdsourcing sites such as Fiverr or Amazon’s Mechanical Turk.

Digital economy and society index. European Commission

A single digital market would bring benefits: better access to products and services at reduced costs, common data protection laws making cross-border communications easier, and a digital-by-default public sector that could make the use of public funds more efficient and transparent.

This would bring increased acceptance and adoption of digital services and bring European countries closer together. The example of Norway, operating outside the EU but in association with it through trade agreements, is often used by those who want the UK to exit the union. However, a major difference between the UK and Norway is that the population of Norway is just over 5m; the UK’s is nearly 13 times that. There are far more businesses in the UK that trade and engage with Europe and would benefit from staying in the union.

Would London lose its status?

When it comes to innovation, the right environment combining academic research from universities, commercial interests and favourable innovation policies – known as the triple helix of innovation – is of fundamental importance.

The latest EU innovation scoreboard placed the UK among the top performers, but not as a leader. Another report, the Atlas of ICT activities in Europe, suggests that – based on the volume and value of research and development, innovation, and the number of businesses – Munich is the place to be, followed by London and then Paris.

The Digital Agenda for Europe is a well-financed priority area – some €2.8 billion for research and development – allowing organisations to build knowledge collaboratively by working on joint research and development programmes. As funding is a key element, EU members will be at an advantage compared to the UK in the case of a Brexit. Organisations with access to tech hubs and funding are more likely to grow; if Britain exits the EU it will undoubtedly be a step away from the benefits this digital agenda offers.
