Threatening parents isn't the way to protect children from videogame violence

Trigger happy? Magnus Fröderberg/norden.org, CC BY

Headteachers from 16 schools in Cheshire have warned parents by letter that they would be reported to the authorities if they allowed their children to play videogames marked as suitable for adults with an 18 age rating.


The letter argued that not only did videogames subject children to violent scenes, but that they also increased sexualised behaviour and led children to be more vulnerable to sexual grooming. The letter stated that in the case of children allowed to play such games “we are advised to contact the police and children’s social care” as parents' actions would be “deemed neglectful”.


The letter’s author, Mary Hennessy Jones of the Nantwich Education Partnership, told The Sunday Times:



We are trying to help parents to keep their children as safe as possible in this digital era. It is so easy for children to end up in the wrong place and parents find it helpful to have some very clear guidelines.



The innocent days of Frogger and Pac-Man are largely gone, and popular videogames today often boast photo-realistic graphics depicting violence and other adult themes, which is why games such as Grand Theft Auto and Call of Duty are age-rated in the same way as films.


I’m sure the letter was written with the best of intentions but as a parent of three “screenagers” and someone who has spent almost 30 years researching the effects of videogames on human behaviour, this is a heavy-handed way to deal with the issue.


Although it is illegal for any retailer to sell 18-rated games to minors, it’s not illegal for children to play them, nor is it illegal for parents to allow their children to do so. It’s true that many parents may benefit from an education in the positives and negatives of videogames, but threatening them with the “authorities” is not helpful.


Don’t blame the game


I’ve been researching the effects of videogames on children since the early 1990s and played a role in the introduction of age ratings to videogames, writing educational leaflets for parents that outlined the effects of excessive gaming. There’s no doubt that there are many positive benefits to videogaming too.


Children often play age-inappropriate videogames. My 13-year-old son moans that he is the only boy in his class who doesn’t own or play Call of Duty. This anecdotal evidence is borne out by research: in one study we found that almost two-thirds (63%) of children aged 11-13 had played an 18+ videogame. Of those who had played them, 8% reported playing them “all the time”, 22% “most of the time”, 50% “sometimes” and 18% “hardly ever”. Unsurprisingly, boys were more likely than girls (76% vs 49%) to have played an 18+ videogame – and more likely to play them more frequently.


How had they got access to the games in the first place? The majority had the games bought for them by family or friends (58%), played them at a friend’s house (35%), swapped them with friends (27%), or bought games themselves (5%). So this certainly suggests that parents and siblings are complicit in allowing children access to them.


Does it do what it says on the tin?


With the development of age ratings and content descriptors carried on game packaging, there is a growing body of research studying the content of games aimed at adults. For instance, one study led by Kimberley Thompson examined whether the descriptors on the box – covering violence, blood, sexual themes, profanity, drugs and gambling – matched what 18+ videogames actually contained. The study found that although warnings for violence and gore were relatively well handled, 81% of the games studied lacked descriptions of other adult themes present in the game content. The same researchers have found adult content in many games aimed at young children and teenagers.


Another study led by David Walsh tested the validity and accuracy of media age ratings, including those for videogames. The findings suggested that when the entertainment industry rated a product as inappropriate for children, parents agreed. But parents disagreed with many industry ratings that designated material suitable for children, with those rated as appropriate for adolescents by the industry of greatest concern to parents.


In truth, there’s no difference between the issue of children and adolescents playing 18-rated games and that of children and teenagers watching 18-rated films. It does seem, however, that parents are more likely to act on ratings for films than for videogames. So while parents could be better informed and more responsible in how they monitor their children’s activities, threatening letters from schools are unlikely to have the intended effect on parents' attitudes.



The delivery drones are coming: so rules and safety standards will be needed – fast

DHL: drones have landed? Frankhöffner, CC BY-SA

Imagine a scenario where tens of thousands of drones are routinely flown across UK airspace. Some of these are very large, more than 100kg – and some are equipped with jet engines that can reach speeds beyond 100mph. If you think this seems unlikely then you’re quite wrong: there are already more than 36,000 remote control model aircraft hobbyists in the UK flying small aircraft at more than 800 sites.


But there are remarkably few accidents, despite their numbers. To start with there is a strong sense of self-regulation among hobbyists. More important is that the community of enthusiasts requires that its members have insurance for their flying activities. Insurance premiums are low because this group has collective buying power and there is an incentive to keep claims low by ensuring good practice among those flying these aircraft.


We are now moving into an era in which drones are increasingly used commercially – for law enforcement, surveying, or film and photographic duties. Amazon is in discussions with regulators in the UK and Canada for its plans for drone deliveries. Facebook intends to bring internet access to rural areas via solar-powered drones, currently being trialled in the UK.


There is a potentially huge market associated with drones – and unnecessarily strict government regulation could stifle its growth. On the other hand, weak regulation might lead to accidents and a counterproductive public backlash. There are lessons that regulators could learn from the many years’ experience with remote control aircraft hobbyists.


Financial persuasion


The key is insurance, which needs to be a requirement for commercial activities with a mandated level of cover. In order to satisfy insurers, the drone operator will have to adhere to certain standards and codes of practice. This is where self-regulation and collective bargaining come in: the British Model Flying Association (BMFA) currently organises insurance for all its members and has a vested interest in making sure that accident levels, and therefore claims, are low.


The analogous commercial organisation to the BMFA is the Association of Remotely Piloted Aircraft Systems, ARPAS-UK. This too is a non-profit organisation run by its members, which include several hundred UK commercial drone operators. To be truly successful it needs to offer insurance: that would make membership attractive to commercial operators, who would then have a strong incentive to join and to comply with its standards in order to obtain the best insurance rates.


This will be more effective than control through the Civil Aviation Authority (CAA) or some other arm of government as it uses a financial incentive to minimise risks and accidents. As a small, focused organisation it will also offer a degree of agility and responsiveness that government bodies typically lack.


The CAA has been remarkably proactive in trying to regulate commercial operations with a “light touch”. A cynic might suggest this is a result of government pressure to cut red tape and cumbersome bureaucracy.


Safety first


There is no doubt that there should be a compulsory register of all commercial operators and a requirement for would-be operators to demonstrate sufficient competence to be licensed to fly. What’s less clear is whether there will be an enforced incident reporting requirement and (where necessary) incident investigation process. As is the case across the public sector, the CAA faces reduced budgets and it’s not clear how it will respond to the workload when commercial drones really take off in the future.


But, again, an organisation such as ARPAS-UK has a vested interest in responding swiftly to members and recording safety issues. If it coordinated an insurance offering it would be able to record all claims, and hence presumably all but a very few minor incidents that warranted no claim. For larger and more serious incidents the expertise of the CAA will be required.


Generic guidelines not helpful


The recent House of Lords report into the drone industry makes some well-intentioned but somewhat vague recommendations. For example it suggests there should be an “online database of drone operations” – would this be mandatory? Who would administer and pay for this, and to what end?


Another recommendation is a “shared manufacturing standard” for drones, such as the CE Mark, which demonstrates that a product conforms to relevant standards and laws. But anyone with any aviation experience will know that this is absurdly simplistic and probably unworkable. Certain aspects of drone engineering, for example electronics, must already comply with electrical safety, noise and Ofcom frequency interference regulations. It’s not possible to write a generic “airworthiness standard” for the enormous range of drone types, sizes and configurations available now or in the future – and attempting to do so would inevitably constrict innovation.


Ultimately, while it is in everyone’s interest that risks are minimised, they can never be completely eliminated. There will be accidents – and it’s important to bear that in mind. As we’ve seen just recently, even a highly-regulated industry like aviation experiences fatal accidents.


It is interesting that most of these stem from human error, such as the Germanwings flight 4U9525 crash in France, or human error on top of technical faults, such as the TransAsia flight GE235 crash in Taipei where the pilot shut down the remaining functional engine, rather than from technical failure alone. By taking humans out of the loop, commercial drone flights will increasingly deliver higher levels of safety for everyone.



Genome editing poses ethical problems that we cannot ignore

In the future, our DNA could be different by design. DNA by Seamartini Graphics/www.shutterstock.com

The ability to precisely and accurately change almost any part of any genome, even in complex species such as humans, may soon become a reality through genome editing. But with great power comes great responsibility – and few subjects elicit such heated debates about moral rights and wrongs.


Although genetic engineering techniques have been around for some time, genome editing can achieve the same ends with lower error rates, and more simply and cheaply than ever – though the technology is certainly not yet perfect.


Genome editing offers a greater degree of control and precision in how specific DNA sequences are changed. It could be used in basic science, for human health, or for improvements to crops. There are a variety of techniques but clustered regularly interspaced short palindromic repeats, or CRISPR, is perhaps the foremost.


CRISPR has prompted recent calls for a genome editing moratorium from a group of concerned US academics. Because it is the easiest technique to set up and so could be quickly and widely adopted, the fear is that it may be put into use far too soon – outstripping our understanding of its safety implications and preventing any opportunity to think about how such powerful tools should be controlled.


The ethics of genetics, revisited


Ethical concerns over genetic modification are not new, particularly when it comes to humans. While we don’t think genome editing gives rise to any completely new ethical concerns, there is more to gene editing than just genetic modification.


First, there is no clear consensus as to whether genome editing is just an incremental step forward, or whether it represents a disruptive technology capable of overthrowing the current orthodoxy. If this is the case – and it’s a very real prospect – then we will need to carefully consider genome editing’s ethical implications, including whether current regulation is adequate.


Second, there are significant ethical concerns over the potential scope and scale of genome editing modifications. As more researchers use CRISPR to achieve more genome changes, the implications shift. Our consideration of a technology that is rarely used and then only in specific cases will differ from one that is widely used and put to all sorts of uses.


Should we reach this tipping point, we will have to revisit the conclusions of the first few decades of the genetic modification debate. Currently modifying plants, some animals, and non-inheritable cells in humans is allowed under strict controls. But modifications that alter the human germ-line are not allowed, with the exception of the recent decision in the UK to allow mitochondrial replacement.


While this means weighing up potential benefits, risks and harms, the potential applications of genome editing are so broad that even this sort of assessment isn’t straightforward.


What patterns can genetic surgeons weave? too human by lonely/www.shutterstock.com


Use for good and for ill


Genome editing techniques have so far been used to change genomes in individual cells and in entire (non-human) organisms. Benefits have included better targeted gene therapy in animal models of some diseases, such as Duchenne Muscular Dystrophy. It’s also hoped that it will lead to a better understanding of the structure, function and regulation of genes. Genetic modification through genome editing of plants has already created herbicide- and infection-resistant crops.


But more contentious is how genome editing might be used to change traits in humans. While this has been the basis for many works of fiction, in real life our capacity to provide the sort of genetic engineering seen in films and books such as Gattaca and Brave New World has been substantially limited.


Genome editing potentially changes this, presenting us with the very real possibility that any aspect of the human genome could be manipulated as we desire. This could mean eliminating harmful genetic conditions, or enhancing traits deemed advantageous, such as resistance to diseases. But this ability may also open the door to eugenics, where those with access to the technology could select for future generations based on traits considered merely desirable: eye, skin or hair colour, or height.


Permanent edits


The concern prompting the US academics' call for a moratorium is the potential for altering the human germ-line, making gene alterations inheritable by our children. Gene therapies that produce non-inheritable changes in a person’s genome are ethically accepted, in part because there is no risk for the next generation if things go wrong. However to date only one disease – severe combined immunodeficiency – has been cured by this therapy.


Germ-line alterations pose much greater ethical concerns. A mistake could harm future individuals by placing that mistake in every cell. Of course the flip-side is that, if carried out safely and as intended, germ-line alterations could also provide potentially permanent solutions to genetic diseases. No research is yet considering this in humans, however.


Nevertheless, even if changes to the germ-line turn out to be safe, the underlying ethical concerns of scope and scale that genome editing brings will remain. If a technique can be used widely and efficiently, without careful oversight governing its use, it can readily become a new norm or an expectation. Those unable to access the desired genetic alterations, be they humans with diseases, humans without enhanced genetic characteristics, or farmers without genetically modified animals or crops, may all find themselves gravely and unfairly disadvantaged.



Banks undermine chip and PIN security because they see profits rise faster than fraud

Who pays, and who gains? Card by LDprod/www.shutterstock.com

The Chip and PIN card payment system has been mandatory in the UK since 2006, but only now is it being slowly introduced in the US. In western Europe more than 96% of card transactions in the last quarter of 2014 used chipped credit or debit cards, compared to just 0.03% in the US.


Yet at the same time, in the UK and elsewhere a new generation of Chip and PIN cards have arrived that allow contactless payments – transactions that don’t require a PIN code. Why would card issuers offer a means to circumvent the security Chip and PIN offers?


Chip and Problems


Chip and PIN is supposed to reduce two main types of fraud. Counterfeit fraud, where a fake card is manufactured based on stolen card data, cost the UK £47.8m in 2014 according to figures just released by Financial Fraud Action. The cryptographic key embedded in chip cards tackles counterfeit fraud by allowing the card to prove its identity. Extracting this key should be very difficult, while copying the details embedded in a card’s magnetic stripe from one card to another is simple.
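To see why an embedded secret defeats magnetic-stripe cloning, consider a challenge-response exchange in miniature. The sketch below is a deliberately simplified illustration, not the actual EMV protocol: the key length, nonce size and HMAC construction are all assumptions made for clarity.

```python
# Simplified, hypothetical challenge-response sketch: the card proves it holds
# a secret key without ever revealing it. Not the real EMV protocol.
import hmac, hashlib, os

card_secret = os.urandom(16)          # provisioned into the chip when the card is issued

def card_respond(challenge: bytes) -> bytes:
    """Runs on the card: MAC the terminal's fresh challenge with the embedded key."""
    return hmac.new(card_secret, challenge, hashlib.sha256).digest()

def issuer_verify(challenge: bytes, response: bytes, issuer_copy: bytes) -> bool:
    """Runs on the issuer's side, which holds its own copy of the key."""
    expected = hmac.new(issuer_copy, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(8)             # a fresh nonce for every transaction
assert issuer_verify(challenge, card_respond(challenge), card_secret)
```

Because the challenge is fresh each time, simply replaying an old response – the equivalent of copying a magnetic stripe – fails verification.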


The second type of fraud is where a genuine card is used, but by the wrong person. Chip and PIN makes this more difficult by requiring users to enter a PIN code, one (hopefully) not known to the criminal who took the card. Financial Fraud Action separates this into those cards stolen before reaching their owner (at a cost of £10.1m in 2014) and after (£59.7m).


Unfortunately Chip and PIN doesn’t work as well as was hoped. My research has shown how it’s possible to trick cards into accepting the wrong PIN and produce cloned cards that terminals won’t detect as being fake. Nevertheless, the widespread introduction of Chip and PIN has succeeded in forcing criminals to change tactics – £331.5m of UK card fraud (69% of the total) in 2014 is now through telephone, internet and mail order purchases (known as “cardholder not present” fraud) that don’t involve the chip at all. That’s why there’s some surprise over the introduction of less secure contactless cards.


Not only do contactless cards allow some transactions without a PIN, but data can be stolen from the card – and, by extension, potentially money from any account linked to it – just by brushing past someone closely enough to trigger the contactless chip into transmitting.


Figures for UK card fraud reveal the effect Chip and PIN has had of forcing criminals to change tactics. Financial Fraud Action UK


Fear of fraud versus potential for profit


So why are some banks issuing chip cards which don’t support PIN verification at all, leaving customers to sign for transactions instead? Why has the US been so slow to roll out Chip and PIN and why have UK banks actually decreased security for contactless cards? All three decisions are driven by, perhaps unsurprisingly, profit.


The share of transactions that card issuers take (the interchange fee) depends on the country and type of transaction. In the US, a lower fee is charged for PIN transactions than for those verified by signature. Since the fee is paid by merchants to the card companies and banks, that explains why merchants upgraded their terminals to support Chip and PIN long before the US banks started issuing chip cards. Encouraging banks to start issuing cards is being handled the same way: as of October 2015 if the merchant’s terminal which accepts a fraudulent payment supports Chip and PIN but the card doesn’t, the card issuer pays for the cost of the fraud. If the merchant’s terminal doesn’t support Chip and PIN but the card does, the merchant pays.
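The October 2015 liability shift described above is, in effect, a simple decision rule. Here is a minimal sketch of that rule as stated in the text; the function name and the fall-through case are illustrative assumptions rather than the card networks’ actual terms.

```python
def fraud_liability(terminal_supports_chip: bool, card_has_chip: bool) -> str:
    """Who bears the cost of a fraudulent in-store payment after October 2015,
    per the rule described above (simplified, hypothetical helper)."""
    if terminal_supports_chip and not card_has_chip:
        return "card issuer"   # the issuer didn't upgrade the card, so the issuer pays
    if card_has_chip and not terminal_supports_chip:
        return "merchant"      # the merchant didn't upgrade the terminal, so the merchant pays
    return "existing network rules apply"  # cases not covered by the article's description
```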


Contactless cards are being promoted because it appears they cause customers to spend more. Some of this could be accounted for by a shift from cash to contactless, but some could also stem from a greater temptation to spend more due to the absence of tangible cash in a wallet as a means of budgeting.


Greater convenience leads to increased spending, which means more fees for the card issuers and more profit for the merchant – this is the real reason why the PIN check was dropped from contactless cards. The risk of fraud is mitigated to some degree by limiting transactions in the UK to £20 (rising to £30 in September), but it’s been demonstrated that even these limits can be bypassed.


Doing the maths


Card fraud involves a very large amount of money – £479m in 2014 in the UK – and affects many millions of people. In an EU-wide survey, 17% of UK internet users said they had been the victim of credit card or online banking fraud – the worst in the EU. Some of the costs of fraud are borne by the merchants. Others are passed to the victim, because the Payment Services Directive allows banks to refuse to refund customers if they can’t identify a more likely cause for the fraud than customer negligence.


However, even if all the costs of fraud were paid by the card companies, the cost they would bear would amount to only 0.075% of the value of card transactions. They could comfortably cover this from the interchange fees they charge on these transactions, currently set at 0.7% of the transaction value – nearly ten times the cost of fraud.
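The comparison can be checked with the figures quoted in this article – £479m of UK fraud in 2014, fraud at 0.075% of transaction value and interchange fees at 0.7%. A quick back-of-the-envelope calculation, with rounded values taken from the text:

```python
fraud_gbp = 479e6            # UK card fraud in 2014 (from the text)
fraud_share = 0.00075        # fraud as a share of transaction value (0.075%)
interchange_share = 0.007    # interchange fee as a share of value (0.7%)

transaction_value = fraud_gbp / fraud_share                # ≈ £640bn of card spending
interchange_income = transaction_value * interchange_share
print(f"card spending  ≈ £{transaction_value / 1e9:.0f}bn")
print(f"interchange    ≈ £{interchange_income / 1e9:.1f}bn")
print(f"ratio to fraud ≈ {interchange_share / fraud_share:.1f}x")  # ≈ 9.3, 'nearly ten times'
```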


Earlier this month the European Parliament voted to cap interchange fees to 0.2% of transaction value for debit cards and 0.3% for credit cards, but even so there is a healthy profit margin between card fraud losses and interchange fee income. As for contactless, no-PIN transactions, they are a gamble that has paid off: fraud rates for contactless cards are even lower, at a mere 0.007% of total transaction value.


While fraud statistics in the US are not as systematically collected as in the UK and Europe, fraud there is estimated at around US$10 billion a year (about half the worldwide total). As a proportion of transaction volume, fraud rose from 0.05% in 2007 to 0.1% in 2014. Still, Chip and PIN in the UK only temporarily disrupted the rising trend of card fraud until criminals focused on softer targets, such as using UK cards in the US. Once this option disappears with the introduction of Chip and PIN to the US, the long-term effects are hard to predict.



Germanwings crash: the ins and outs of the two-person rule

Two hands on the wheel is twice as safe. keys by dextroza/www.shutterstock.com

As evidence mounts that Germanwings flight 4U9525 was crashed deliberately by its co-pilot who locked the flight’s captain out of the cockpit, there have been renewed calls to enforce a “two-person rule”, where two members of the flight crew are on the flight deck at all times.


Within hours of the crash, steps were taken to enact this: the Canadian government made it a mandatory requirement, and the UK Civil Aviation Authority urged UK airlines to review their rules, although some airlines including budget airlines Ryanair and Flybe already enforced the rule.


The idea dates back to the days of the Cold War, when two operators were required, typically with two separate keys, for drastic action such as launching nuclear weapons. The procedure is still in force today, to offer protection against the actions of rogue individuals. And the concept of the “buddy system”, which tells us not to be alone during critical or risky moments, is widely applied – from divers heading underwater, firefighters entering burning buildings and bankers making large withdrawals, to school-aged children wandering out of sight of adults.


When it comes to nuclear war, it makes sense not to leave it within the ability of a single person. Scott Wagers/US DoD


In essence, the safety and integrity of actions and environments is improved by requiring the co-operation of two at a time. This way no single individual will be caught without help should they need it, and no one will be in a situation where the actions of a single person in a key role go unmonitored.


Flight rules


While co-pilot Andreas Lubitz may have contemplated his actions in advance, it’s likely that he acted on the spur of the moment – on quite a short flight there was no way of knowing whether the captain, Patrick Sonderheimer, would need to visit the toilet and leave him alone in the cockpit.


But it’s clear that being alone in the cockpit was all that was required for Lubitz to take himself and 149 others to their deaths on the slopes of the French Alps. Had the captain or any other member of the crew been there, they could have reversed any efforts to override the autopilot, or summoned other crew or passengers to help subdue Lubitz if necessary.


As it’s impractical to require that neither pilot ever leaves the cockpit during a flight, adopting the two-person rule seems a good way to protect flights from actions such as this, or from the event that the remaining pilot is incapacitated, perhaps by a heart attack. Current measures protect against actions from outside the cockpit, but provide little defence against those coming from within.


A human deterrent


Had Sonderheimer been replaced by a member of the cabin crew that morning, would Lubitz have believed he had an opportunity to do what he did? The two-person rule does more than enforce: it also dissuades and serves as a deterrent. Undeterred, a pilot set on crashing their aircraft could still override the autopilot, but – except where they were able to overpower their colleague in the cockpit quickly – it would only be a matter of time before it was detected and reversed, or the absent pilot was able to return to the flight deck.


So it’s perhaps surprising that the two-person rule is not mandatory in the aviation industry worldwide but is left up to individual authorities. In the US, the Federal Aviation Administration made two in the cockpit a requirement a year after the 9/11 attacks in 2001 – along with the flight deck door reinforcement that contributed to the crash of flight 4U9525.


The experience of those airlines that have adopted the rule is that it requires minimal effort or organisational change. All that is required is that a cabin crew member takes the absent pilot’s place in the cockpit, and leaves only after the pilot returns. It is an easy fix with the potential to prevent cases such as this – a similar EgyptAir incident in 1999 left 217 dead, and there have been at least eight other “pilot suicides” in the last 40 years – as well as other situations that could arise.


It’s clear that this has been rapidly taken up by airlines outside the US in the last 24 hours: EasyJet, Virgin, Air Transat, Emirates, Norwegian Air Shuttle, Air Canada, Air New Zealand and Lufthansa, the parent of Germanwings, have all announced they would implement the two-person rule. But ideally this would be adopted as a mandatory procedure worldwide – and sooner rather than later – as it can be done almost without cost, and with the potential to prevent the repeat of such tragedies.



Personality may be down to where you come from – but this doesn't mean we can't change

Is there hope for change? from www.shutterstock.com

How exactly do we become the people we are? A study published earlier this week found that there are vast regional differences in personality within the UK.


Londoners are more extroverted, for example, while people in the north of England and areas of east Scotland are quieter and more introverted. Scots have highly agreeable personalities, while those in London and areas of east England have low levels of agreeableness, described in the study as “uncooperative, quarrelsome, and irritable.”


What’s more, the study found that these differences relate meaningfully to regional socio-economic outcomes, including voting tendencies, health behaviours, education level, and social tolerance.


The research was reported by the BBC with the suggestion that people with specific personality traits might be better suited to certain locations around the country. But implicit within this suggestion is the notion that personality is fixed – and this is by no means the case.


The science of personality


After decades of research psychologists have a fairly good understanding of the psychological component that remains with us from one situation to another – otherwise known as our personality.


At the very broadest level who we are is made up of five personality dimensions – our tendency to be agreeable, conscientious, extroverted, emotionally stable, and open to experience. But how personality develops through our lives is still poorly understood – and we are only just beginning to discover that there could be big socio-economic implications.


Heat maps of personalities in the UK. Blue is comparatively low, red is comparatively high. Rentfrow et al/PLOS ONE


The idea that personality is more or less fixed over the course of our lives has been a popular belief in personality psychology for years. Changes due to biological factors were thought to occur until about the age of 30, at which point it was assumed that personality becomes set in stone. This is a belief that has permeated throughout society, which may have understandably led many to believe (rather dishearteningly) that it is impossible to change.


Change is possible


However, while personality is on the whole stable from one situation to the next, this does not mean it cannot and does not change over time. Personality researchers have traditionally paid little attention to aspects of long-term change, but there is now substantial evidence which shows that our personalities continue to change throughout our lives.


Given the recent work linking regional personality with regional socio-economic outcomes it is dangerous to imply personality is fixed. It may lead to the erroneous claim that people somehow deserve the situations in which they find themselves.


Recently we published a study showing that unemployment, a major life event that could happen to any of us, can reprogramme us fundamentally by changing our core personality – we become less agreeable, less conscientious and less open to the world around us. Thus the regional personality differences found in the research reported by the BBC may in fact be the result of the socio-economic conditions themselves.


A key to happiness


In recent years there has been a movement for promoting well-being in our societies. It has long been known that once basic material needs are met, income growth is no longer the key to greater well-being. More important factors are social relationships, mental health, and of course our personality.


Personality is one of the strongest and most consistent predictors of higher well-being. Being agreeable, conscientious, extroverted, emotionally stable and open to experience are all linked to higher well-being. Our personalities can also help us deal with difficult and traumatic life circumstances, something almost everybody will have to deal with at some point in their life.


How we deal with challenging events such as becoming disabled or unemployed depends heavily on our personality. For example, being agreeable, perhaps itself due to better quality relationships, appears to help individuals when they become disabled.


Personality development


Our research has also shown that personalities are just as likely to change as many socio-economic factors, including how much we earn. In fact, personality may contribute substantially more to changes in many indicators of well-being. For example, it has been found that counselling, and even powerful entheogens such as psychedelic mushrooms, may promote large positive personality change. Perhaps we would find ourselves on a faster route to greater well-being if we placed more attention on who we are, rather than on what we have.


By better understanding personality change, we may be led to a society where personality development is recognised as a valuable endeavour. We could try to limit the deep psychological damage from difficult life events such as unemployment and support psychological growth more generally.


It seems that we can change, and under the right circumstances such change may be positive and meaningful. By understanding and considering our personality more widely, we may uncover significant benefits to the quality of our lives.



What if our children are the screen-obsessed couch potatoes of the future?

I can't take my eyes off you. Screen obsessed by lassedesignen/www.shutterstock.com

The idea of “digital addiction” has returned to the fore with UCL researchers suggesting physical activity should displace the compulsive watching of television, internet surfing and video gaming. Often it’s suggested that at least gaming is more active and engaged than merely passively watching television, but the UCL study’s authors regard gaming as “just a different way of sitting down and relaxing”.


The problem with the topic of digital addiction is that there are no definitive scientific studies that have established it as a genuine condition. As far back as 2006 the American Journal of Psychiatry recommended digital addiction be more formally recognised, but studies are still largely piecemeal and no authoritative view exists.


A rising new addiction


Yet each year more studies are published that support the journal’s view that “internet addiction is resistant to treatment, entails significant risks, and has high relapse rates.”


Recently a few more accounts from around the world have emerged supporting the view that digital addiction is growing, and may be storing up problems for the future. A survey in New Zealand highlights the withdrawal symptoms people feel when not connected. The “fear of missing out” is another phenomenon that forms part of the dimension of digital addiction, as recently described in a survey in Japan. Here addiction is linked to the need to use specific apps, rather than a more general need to “connect” online.


The Net Children Go Mobile report in Ireland based on surveys conducted by researchers at the Dublin Institute of Technology highlights how many children are online a lot after 9pm. It shares concerns around the potential toxic combination of being “always-on” and exposed to potentially distressing content. As with drug use, addiction itself is one problem, while the “substance” or content of that addiction causes different kinds of harm.


How to measure the problem


Research that attempts to physically measure the impact of digital addiction is also expanding. A study from the University of Missouri reports that measurable increases in stress can be recorded when people have their smart phones taken away.


There has even been a rise in clinics serving digital addicts, an increasing amount of personal testimony from self-described addicts, as well as more firmly established evidence for repetitive strain injuries arising from overuse of technology.


It all points to an urgent need for far more comprehensive research – research that can really inform how the government approaches the problem with policy, as well as something to guide parents and managers in the workplace.


A slave to our screens, we’re locking ourselves to them. enslaved by Marcos Mesa Sam Wordley/www.shutterstock.com


A lack of digital denial


Interestingly, research attempting to deny digital addiction is almost impossible to find. Even as researchers claim that cell phone addiction can harm the parent-child bond, or that phone addicts may be more prone to mood swings, dissenting academic voices are few and far between. Why is this?


It could be because the current research hasn’t made any significant impact on existing corporations and commercial models. When studies were published claiming definitive proof that mobile phone radiation damages the brain, particularly in children, those speaking out against the claims were vocal and many in number. That battle is raging to this day, with the potential harm of wearables now coming under scrutiny.


I believe this battle is so energetic because the consequences, as with the tobacco industry, are directly and profoundly commercial. At present, digital addiction is an opportunity for innovation, both social and technological.


Putting digital in its place


We might better manage our digital devices by learning to place them more mindfully and skilfully – to learn to “handle our digital drink”. There are already apps to help us manage our addictions. Digital addiction has spurred innovations in online child safety, in the ways parents ration access to devices, in better education – and even in apps that prevent users from constantly connecting. It might be that the impact of digital addiction, and the ways it manifests itself, will have to become ever clearer before serious research begins.


Added to this is the fact that many argue that profound freedom is a core principle in the digital space – any regulation or top-down governance, such as in the realms of alcohol, smoking or gambling, will be strongly resisted. If the internet is in any way equivalent to a drug then, according to the web’s founding father Tim Berners-Lee, that drug is a “human right”. So the corporations working with digital devices and content wait and watch. While the evidence for digital addiction grows, it isn’t harming iPhone sales, and is unlikely to dent the success of smartwatches.


From India to America, from China to England, concern that our children are turning into couch potatoes grows. But until a tipping point is reached, parents and teachers, managers and gamers will carry on checking into clinics and reading the top ten tips, possibly to become a generation with prematurely arthritic fingers, backache, and a whole host of yet-to-be-named psychological disorders.



Germanwings flight 4U9525: a victim of the deadlock between safety and security demands

Two up front for safety? Jason Calston/Airbus

It seems incredible that a pilot of a passenger airliner could be locked out of the cockpit. But analysis of the cockpit voice recorder recovered from Germanwings flight 4U9525 after it ploughed into the French Alps has revealed that this is what happened, and that one of the two pilots had been trying to get into the cockpit before the crash.


An initial explanation that the pilot at the controls was incapacitated, perhaps from a heart attack or stroke, has since given way to an alternative given by French investigators: that the pilot in the cockpit – named in reports as Andreas Lubitz – deliberately prevented his co-pilot from entering in order to destroy the aircraft.


Following the September 11 attacks in New York in 2001, passenger aircraft cockpit doors have been reinforced to make them secure, and even bulletproof.


Access to the cockpit must be locked during flight, preventing passengers from forcing entry onto the flight deck so that pilots can safely fly the aircraft and manage any situation without worrying about potential hijackers. For the safety of the pilots the cockpit door must open at the pilot’s command from the flight deck, for example when there is no apparent risk of malicious attack. The outside of the cockpit door is secured by a keypad, to which the crew have the codes. But the request from the keypad to open the door must be confirmed by the co-pilot who remains inside.


It has become apparent that these two aspects – safety and security – are not always achievable at the same time. In the event of an incident like this, they even work against each other.


A trade-off between safety and security


People often confuse “security” and “safety”. In Chinese, the same word covers both. Conceptually, however, they are different.


Security offers protection from intentional attacks, while safety is about preventing accidents. While some security incidents can be accidental, or made to look accidental, some element of (usually malicious) intent is involved.


Trading off security and safety risks in this context is hard because the probability of accidents can be modelled while human intention cannot. One could try to estimate the probability of someone – especially a pilot – having bad intentions, but in the end it’s not possible to square one with the other: it is comparing apples with oranges.


With the ultimate goal of protecting the lives of those on board, the process by which the cockpit door is opened and closed is crucial. Keeping the door closed is not always right, even when the flight may be threatened by potential hijackers. Requiring a pilot on the flight deck to open the door to a fellow officer outside is of no benefit if the crew remaining inside are incapacitated or unwilling to do so.


Timing and context is key


Feature interaction manifests itself in the way hardware and software interact, such as in the design of lifts, vehicles or even smart homes. In order to avoid problematic interactions, priority needs to be assigned to the features that are paramount – on aircraft, this is protecting the lives of passengers. The key to this is context and timing.


How can the electronic, robotic controller of the cockpit door collaborate with a human crew member desperately looking for a way to gain entry to the flight deck? Knocking, or even trying to smash down the door, is not enough – potential hijackers might do the same, so these eventualities will have been catered for in the original design.


In this case an adaptive user interface mechanism, of the kind used to simplify complicated software systems, could enhance the usability of an otherwise complex security system. Mobile payment systems such as Apple Pay have demonstrated that it’s possible to simplify the interface to a complex security system: users no longer need to carry credit cards yet can still properly authorise their transactions. In a contingency such as this, time-saving ways of verifying security could be a life-saving feature.


Control of the cockpit door must be adaptive to context of the situation, providing a means to bypass the risk of a situation where flight crew are locked out of the cockpit. Had the robotic door controller understood there was a reason the pilot at the controls could not confirm the entrance of the pilot outside – by registering a malfunctioning ejection seat, for example, or reading dying vital signs from a heart monitor – it could override the security requirements and allow the pilot to reenter the cockpit.
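To make the idea concrete, here is a hypothetical sketch of such context-adaptive door logic. Every input name is an illustrative assumption rather than a real avionics interface, and the sketch deliberately ignores the hard question of how reliable the health-monitoring signals would need to be.

```python
# Hypothetical sketch of context-adaptive cockpit-door logic, as discussed above.
# All signal names are illustrative assumptions, not a real avionics interface.
def door_should_unlock(keypad_code_valid: bool,
                       crew_inside_responds: bool,
                       crew_inside_approves: bool,
                       crew_inside_vital_signs_ok: bool) -> bool:
    if not keypad_code_valid:
        return False                          # no valid request from outside the door
    if crew_inside_responds and crew_inside_approves:
        return True                           # normal case: the colleague is let back in
    # Today's systems defer entirely to whoever remains inside. An adaptive
    # controller could also weigh health-monitoring data: if the pilot inside
    # appears incapacitated, the security lock-out is overridden.
    return not crew_inside_vital_signs_ok
```

The point is only that the decision need not rest solely with whoever is inside the cockpit; independent context could be allowed to break the deadlock.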


We need to reassess the risks and arguments around safety and security in the context of aviation, and find ways of bringing together hardware, software, and the flight crew themselves – perhaps through health monitoring devices – in order to ensure that both these demands work together, and do not become a threat in themselves.



Governments want to regulate bitcoin – is that even possible?

All that glitters is not gold. Antana, CC BY-SA

The UK government has shown its intention to regulate bitcoin and other digital currencies, drawing them into the realms of financial regulation applied to banks and other financial services. But bitcoin is not a bank or a financial company based in the City. How would regulation apply to something that exists in the cloud?


George Osborne’s announcement in his pre-election budget contained three measures. First, to apply anti-money laundering regulation to digital currency exchanges, for which formal consultation will begin soon after the election. Second, for the British Standards Institution and the digital currency industry to work together to develop voluntary standards for consumer protection. And third, £10m funding for the Research Councils, Alan Turing Institute and Digital Catapult to partner with industry to research the opportunities and challenges posed by digital currencies.


Balancing innovation and regulation


The government faces the familiar problem of needing to provide a suitable environment for innovation to flourish, while also ensuring that firms working in the same industry performing similar functions are regulated in the same way. All of this needs to be done in such a way as to protect the consumer and, in this case, perhaps the wider financial system itself. Heavy-handed regulation risks stifling innovation and driving away potential digital currency-based businesses. After all, as a truly global currency that exists in the cloud, the physical location of a digital currency-based business is irrelevant.


Too little regulation may leave digital currencies vulnerable to criminality – and leave consumers and the economy exposed to its effects. The digital currency industry already faces problems that include theft from digital currency exchanges, malware and attacks on third-party websites, as well as the potential to aid money laundering. For example, within a week of Osborne’s announcement, another bitcoin exchange, Paybase, ceased allowing withdrawals and its administrator disappeared.


The realms of the possible


The regulation of digital currency is important in order to mitigate these sorts of risks and prevent abuse that destroys trust in the system. It is essential if digital currencies are to develop a major role in the UK economy. However their nature presents serious regulatory challenges: there is no central issuer, no control over supply and demand and no central organisation to impose regulatory requirements upon.


This might suggest the very idea of bringing them within the regulator’s embrace is futile. However, the point at which digital currencies are accepted as payment for goods and services seems a natural place to apply anti-financial-crime measures – for example, customer due diligence when high-value goods are purchased using digital currency. In this sense they come under the same regulatory umbrella as cash, as defined in the Money Laundering Regulations 2007.


Striking where the virtual becomes real


A further approach favoured by the government is to focus on the digital exchange services – the sites where digital currencies are exchanged for real-world dollars, pounds or euros. Two key anti-money laundering initiatives are customer due diligence and suspicious activity reporting.


Customer due diligence – where banks or financial services must require proof of their customer’s identity – is one of the most significant aspects of anti-money laundering regulation. Without this there is no paper trail leading back to the criminal, but this cannot be applied in all instances as it would be too much of a burden. So it will have to be applied where there is most risk, an approach that reflects the different aspects that warrant regulation, but which treats all companies within the sector equally by creating a level playing field.


Suspicious activity reports would be more difficult to implement, not least because at present there is limited legitimate use of digital currencies. The main use for digital cryptocurrencies has been for purchasing illegal goods and services from markets in the dark net, such as Silk Road. This makes it difficult for an exchange to identify a “suspicious transaction”.


Consumer protection measures could be brought in by the introduction of a US-style licensing system for digital currency exchanges. A side effect of this approach is that it may simply drive businesses overseas to evade regulation. Ultimately, digital currencies are not restricted by national borders and, in that sense, it is not important from where they operate.


Another challenge is how to apply sanctions in the event regulations are breached. Sanctions are important to deter crime, but without the information gained from applying measures such as customer due diligence there may not be sufficient information to trace someone to punish. In fact, the entire point of virtual currencies like bitcoin is that they’re anonymous.


Given the difficulties of effectively regulating digital currencies, any research into the field is to be welcomed, as it’s clear there are considerable challenges to overcome before digital currencies can become an integral part of the mainstream economy.



John Nash, Louis Nirenberg share math’s Abel Prize

Pair to split ‘Nobel of mathematics’ for work on partial differential equations



John Nash (left) and Louis Nirenberg (right) will receive the 2015 Abel Prize for their work on partial differential equations.


Nash: Courtesy of Princeton; Nirenberg: ©NYU Photo Bureau: Hollenshead


The 2015 Abel Prize, sometimes called the Nobel Prize of mathematics, will go to John F. Nash Jr. and Louis Nirenberg for work on partial differential equations, which are important in both pure math and describing natural phenomena.


Nash, of Princeton University and well-known as the subject of the book and movie A Beautiful Mind, shared the 1994 Nobel Prize in economics for work on game theory.


Nash and Nirenberg, of New York University, will split the approximately $760,000 prize for “striking and seminal contributions to the theory of nonlinear partial differential equations and its applications to geometric analysis,” the Norwegian Academy of Science and Letters announced.


Newly discovered layer in Earth's mantle can affect surface dwellers too

No Earths were harmed in the making of this image Johan Swanepoel/Shutterstock

Sinking tectonic plates get jammed in a newly discovered layer of the Earth’s mantle – and could be causing earthquakes on the surface.


It was previously thought that Earth’s lower mantle, which begins at a depth of around 700 km and forms the major part of the mantle, is fairly uniform and varies only gradually as it goes deeper.


However, our new study points towards a layer in the mantle characterised by a strong increase in viscosity – a finding which has strong implications for our understanding of what’s going on deep down below our feet.


The deep unknown


The Earth’s mantle is the largest shell inside our planet. Ranging from about 50 km to 3,000 km depth, it links the hot liquid outer core – with temperatures higher than 5,000 K – to the Earth’s surface.


The movement of materials within the Earth’s mantle is thought to drive plate tectonic movements on the surface, ultimately leading to earthquakes and volcanoes. The mantle is also the Earth’s largest reservoir for many elements stored in mantle minerals. Throughout Earth’s history, substantial amounts of material have been exchanged between the deep mantle and the surface and atmosphere, affecting both the life and climate above ground.


Because mankind is incapable of directly probing the lower mantle – the deepest man-made hole is only around 12 km deep – many details of the global material recycling process are poorly understood.


We do know, however, that the main way materials are transferred from the Earth’s surface and atmosphere back into the deep mantle is when one tectonic plate slides under another and is pushed down into the mantle.


A strong increase in the viscosity leads to a stiff layer which catches sinking slabs Hauke Marquardt


A trap for sinking plates


So far most researchers assumed that these sinking plates either stall at the boundary between the upper and lower mantle at a depth of around 700 km or sink all the way through the lower mantle to the core-mantle boundary 3,000 km down.


But our new research, published in the latest online issue of Nature Geoscience, shows that many of these sinking slabs may in fact be trapped above a previously undiscovered impermeable layer of rock within the lower mantle.


We found that enormous pressures in the lower mantle, which range from 25 GPa (gigapascal) to 135 GPa, can lead to surprising behaviour of matter. To picture just how high this pressure is, balancing the Eiffel Tower in your hand would create pressures on the order of 10 GPa. These pressures lead to the formation of a stiff layer in the Earth’s mantle. Sinking plates may become trapped on top of this layer, which reaches its maximum stiffness at a depth below 1,500km.
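The Eiffel Tower analogy can be sanity-checked with rough round numbers – a tower mass of about 10,000 tonnes and a palm area of about 100 cm², both assumed figures used purely for illustration:

```python
g = 9.81                    # gravitational acceleration, m/s^2
mass = 1.0e7                # kg, roughly the Eiffel Tower (~10,000 tonnes, assumed)
palm_area = 100e-4          # m^2, roughly 100 cm^2 of palm (assumed)

pressure = mass * g / palm_area
print(f"{pressure / 1e9:.0f} GPa")   # ~10 GPa, matching the analogy in the text
```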


Under pressure


We formed this conclusion after performing laboratory experiments on ferropericlase, a magnesium/iron oxide that is thought to be one of the main constituents of the Earth’s lower mantle. We compressed the ferropericlase to pressures of almost 100 GPa in a diamond-anvil cell, a high-pressure device which compresses a tiny sample the size of a human hair between the tips of two minuscule brilliant-cut diamonds.


A diamond-anvil cell compresses a tiny sample under high pressure between two minuscule diamonds. Image via Hauke Marquardt, Author provided


While under compression, the ferropericlase was probed with high-energy X-rays to investigate how it deforms under these high pressures. We found that the ability of the material to resist irreversible deformation increased more than threefold under high pressures.


These results were used to model the change of viscosity with depth in Earth’s lower mantle. While previous estimates have indicated only gradual variations of viscosity with depth, we found a dramatic increase of viscosity throughout the upper 900 km of the lower mantle.


Such a strong increase in viscosity can stop the descent of slabs and, in doing so, strongly affect the deep Earth material cycle. These new findings are supported by 3D imaging observations based on the analysis of seismic waves travelling through the Earth, which also indicate that the slabs stop sinking before they reach a depth of 1,500 km.


Surface effects


If true, the existence of this stiff layer in the Earth’s mantle has wide-ranging implications for our understanding of the deep Earth material cycle. It could limit material mixing between the upper and lower parts of the lower mantle, meaning mantle regions with previously different geochemical signatures stay isolated in separate patches instead of mixing over geologic time.


What’s more, a stiff mid-mantle layer could also put stress on slabs much closer to the Earth’s surface, potentially acting as a trigger of deep earthquakes.


We are really just at the beginning of a deeper understanding of the inner workings of our planet, many of which ultimately affect our life on its surface.



One photon wrangles 3,000 atoms into quantum entanglement

A particle of light is all it takes to establish a quantum connection between nearly 3,000 atoms, scientists report in the March 26 Nature. The finding brings physicists a step closer to studying the macroscopic effects of quantum entanglement, which links the properties of microscopic particles.


MIT quantum physicist Vladan Vuletić and colleagues bounced photons between two mirrors in a space that contained about 3,100 rubidium atoms cooled to nearly absolute zero. Occasionally the polarization of a photon changed slightly, indicating that the photon had interacted with the atoms. Measurements revealed that each brief interaction coaxed at least 2,700 of the atoms to become entangled.


The researchers hope to use clusters of entangled atoms to build extraordinarily precise atomic clocks.


A healthy public domain generates millions in economic value -- not bad for 'free'

Usefulness and value extend far beyond the century in which works were created. British Library

It’s frequently claimed that copyright law should be made more restrictive and copyright terms extended in order to provide an incentive for content creators.


But with growing use of works put into the public domain or released under free and permissive licenses such as Creative Commons or the GPL and its derivatives, it’s possible to argue the opposite – that freely available works also generate value.


Public domain works – those that exist without restriction on use either because their copyright term has expired or because they fall outside of the scope of copyright protection – create significant economic benefits, according to research my colleagues and I have conducted, now published in a report for the UK government’s Intellectual Property Office.


We found a surprising amount of transformative reuse of public domain materials by commercial users – economic value that wouldn’t have been possible without access to a thriving public domain. We tried to identify precisely how and where economic value is generated from public domain works in order to establish where there’s scope for improvement.


Setting the copyright term


Literary and artistic works in the UK are protected under copyright for 70 years following the death of the author. At that point, copyright expires and anybody may copy the work and make it available to others. Consumers can then enjoy the benefit of accessing the work for a lower price, and in some cases for free. For example, Project Gutenberg releases digital versions of classic literary texts that are in the public domain. The British Library’s Mechanical Curator project digitises illustrations from printed books and makes them available on Flickr.


The public domain marque. anarres


Conversely, this means rights holders will no longer be able to restrict copying of their work and will potentially lose revenue. It’s for this reason that some rights holders have lobbied governments to extend the scope of copyright so that they can continue to extract revenue from a small number of old, popular works. The Disney Corporation is one example: some works featuring Mickey Mouse would have fallen out of copyright in 2003 had the US Congress not passed the Copyright Term Extension Act in 1998 (derided by some as the Mickey Mouse Protection Act), which extended US copyright terms from the life of the author plus 50 years to life plus 70 years (and from 75 to 95 years for corporate works).


Protection or obstruction?


Some economic theorists argue that long or indefinitely renewable copyright protection is an optimal solution because it creates an incentive for rights holders to keep works available. However, even in-copyright works can disappear from the market because rights holders decide that it’s not worth the effort to print or publish the work.


Another, perhaps more important, problem is that it’s difficult to build upon works protected by copyright to create new products. It’s costly and time-consuming to seek permission to use a work, and sometimes the original creator (or those to whom the rights have passed) cannot be located or does not wish to allow a derivative use.


For example, David and Stephen Dewaele, the Belgian brothers behind 2ManyDJs, had to seek clearance for 187 samples in order to release their 2002 album As Heard on Radio Soulwax Pt. 2. Rights owners rejected 62, 11 were untraceable and 114 were cleared – a process that took the best part of three years.


Creative Commons licenses allow greater flexibility. Creative Commons


Creative Commons licenses were developed to help solve this problem. By stating the terms for attribution and use of a work up front, a Creative Commons licence reduces costs for those wishing to use it and allows them to do so within the bounds of the licence.


Use and re-use


We interviewed UK media firms and found that those that had worked with public domain materials were not put off by the fact their source material could also be used by others. Many firms reported that they saw their contributions as part of an ecosystem in which the joint efforts of creators, fans and audiences enriched a narrative product not owned by a single contributor.


Using data from crowdfunding platform Kickstarter, we examined how products based on public domain works performed compared with entirely original products or those under copyright. We found that public domain-inspired works were more likely to succeed and raised more funding (56%) compared with untested, entirely original projects. We also found that a third of all crowdfunding pitches incorporated various sources of intellectual property and derived works into the final product.


Public domain and other works on Kickstarter Kristofer Erickson, Author provided


Finally, we looked at how the availability of public domain materials could add value to non-commercial products or services, which may in turn create a commercial benefit. For example Wikipedia relies on public domain and Creative Commons licensed images to illustrate its pages. By extrapolating from a sample of 1,700 biographical pages for notable authors, musical composers and lyricists, we arrived at an estimated value for public domain images across English language Wikipedia.


Based on the costs of providing replacement images from commercial sources, we estimate that public domain material contributes £138m per year across the 1,983,609 English-language Wikipedia pages. Having controlled for the prominence of certain people or subjects on Wikipedia, it’s also apparent that pages with public domain images (rather than none) attract between 17% and 19% more visitors. Were Wikipedia a commercial website with advertising, the increased traffic would generate an additional £22.6m a year.
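

As a back-of-envelope rearrangement of those headline figures – using only the numbers already quoted above, averaged crudely across every page – the implied per-page contribution looks like this:

    # Crude per-page arithmetic from the report's headline estimates
    total_value_gbp = 138e6          # estimated annual contribution of public domain images
    pages = 1_983_609                # English-language Wikipedia pages in the estimate
    extra_ad_revenue_gbp = 22.6e6    # extra ad revenue if Wikipedia carried advertising

    # Averaged over every page, including those with no public domain images
    print(f"Implied value per page: about £{total_value_gbp / pages:.0f} a year")
    print(f"Implied ad uplift per page: about £{extra_ad_revenue_gbp / pages:.2f} a year")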


Digital creativity and innovation are vital components of today’s economy. Any policies that encourage growth in the creative industries should not only consider the value represented in the trade of copyrighted works, but also the range of public domain material that inspires or forms the basis of new products – and the importance of protecting and nurturing a thriving public domain.


The Conversation

How particle accelerator maths helped me fix my Wi-Fi

"The things I do for my housemates' downloading habit..." Maths by Sergey Nivens/www.shutterstock.com

Electromagnetic radiation – it might sound like something that you’d be better off avoiding, but electromagnetic waves of various kinds underpin our senses and how we interact with the world – from the light emissions through which your eyes perceive these words, to the microwaves that carry the Wi-Fi signal to your laptop or phone on which you’re reading it.


More or less every form of modern communication is carried by electromagnetic waves. They whisk through the antenna on your car, travel through walls whenever you need to make a phone call inside, yet also inexplicably reflect from seemingly nothing in the Earth’s upper atmosphere.


This happens because the atmosphere becomes a plasma at high altitudes – a state of matter where atoms split apart and electrons are no longer bound to their parent nuclei. Plasmas have interesting properties, as they react very strongly to electromagnetic fields. In this case usefully: at low enough frequencies it becomes possible to bounce radio signals around the world, extending their range.
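

As a rough, back-of-envelope illustration of why this happens (the electron density here is an assumed textbook value, not a figure from this article): a plasma reflects radio waves below its plasma frequency, which for an electron density n_e in electrons per cubic metre is roughly 8.98 x sqrt(n_e) hertz.

    import math

    def plasma_frequency_hz(electron_density_per_m3):
        # Approximate plasma (critical) frequency: f_p ~ 8.98 * sqrt(n_e) Hz
        return 8.98 * math.sqrt(electron_density_per_m3)

    # An assumed daytime ionospheric density of about 1e12 electrons per cubic
    # metre gives a critical frequency of roughly 9MHz.
    print(plasma_frequency_hz(1e12) / 1e6, "MHz")

    # Shortwave radio below this frequency is reflected back towards the ground,
    # which is what lets it hop around the curve of the Earth; 2.4GHz Wi-Fi is
    # far above it and passes straight through the ionosphere.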


It’s the interesting interactions between high-powered electromagnetic waves and plasmas that my research group and I study. The most intense electromagnetic waves in the world come in the form of high-power laser pulses, and the UK hosts some of the world’s most powerful laser systems in rural Oxfordshire. One application of these laser-plasma interactions is accelerating particles – the same idea of using electromagnetic waves to accelerate particles that underpins the Large Hadron Collider at CERN.


It’s all in the maths


We can accurately predict the interactions of intense electromagnetic waves and plasmas, as the underlying physical processes are governed by Maxwell’s equations – one of the triumphs of 19th century physics that united electric and magnetic fields and demonstrated that light is a form of electromagnetic wave.


Solving Maxwell’s equations by hand can be tortuous, but it transpires that a clever algorithm invented in the 1960s and rediscovered since makes the exercise relatively simple given a sufficiently powerful computer.


Armed with the knowledge of Maxwell’s equations and how to solve them, I recently turned my attention to a much simpler but more widespread problem, that of how to simulate and therefore improve the Wi-Fi reception in my flat. While “sufficiently powerful” in an academic sense often means supercomputers with tens of thousands of processors running in parallel, in this case, the sufficiently powerful computer required to run the program turned out to be a smartphone.


The black circle represents the router, and the ‘hotter’ the colour the stronger the signal strength.


For this trick you will need one Maxwell


The electromagnetic radiation emanating from the antenna in your wireless router is caused by a small current oscillating at 2.4GHz (2.4 billion times per second). In my model I introduced a current like this and allowed it to oscillate, and Maxwell’s equations dictated how the resulting electromagnetic waves flow. By mapping in the actual locations of the walls in my flat, I was able to produce a map of the Wi-Fi signal strength which varied as I moved the virtual router.
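

The “clever algorithm” referred to above is presumably the finite-difference time-domain (FDTD) scheme first described by Kane Yee in 1966. As an illustrative sketch only – this is not the author’s program, and the flat size, wall thickness and material values below are invented assumptions – a bare-bones two-dimensional version in Python, with a 2.4GHz current source standing in for the router and a single crude wall, might look something like this:

    import numpy as np

    c0 = 3e8                             # speed of light (m/s)
    f = 2.4e9                            # Wi-Fi frequency (Hz)
    wavelength = c0 / f                  # about 12.5cm
    dx = wavelength / 20                 # grid spacing: 20 cells per wavelength
    dt = 0.99 * dx / (c0 * np.sqrt(2))   # 2D stability (Courant) condition

    nx = ny = 400                        # a 2.5m x 2.5m patch of flat (assumed)
    Ez = np.zeros((nx, ny))              # out-of-plane electric field
    Hx = np.zeros((nx, ny))              # in-plane magnetic field components
    Hy = np.zeros((nx, ny))

    # Relative permittivity map: air everywhere, plus one ~10cm "wall" with an
    # assumed permittivity of 4 (a crude, lossless stand-in for brick).
    eps_r = np.ones((nx, ny))
    eps_r[:, ny // 2: ny // 2 + 16] = 4.0

    eps0, mu0 = 8.854e-12, 4e-7 * np.pi
    src = (nx // 4, ny // 4)             # router position (the black circle)

    for n in range(2000):
        # Update H from the spatial derivatives (curl) of Ez
        Hx[:, :-1] -= dt / (mu0 * dx) * (Ez[:, 1:] - Ez[:, :-1])
        Hy[:-1, :] += dt / (mu0 * dx) * (Ez[1:, :] - Ez[:-1, :])
        # Update Ez from the curl of H, scaled by the local permittivity
        Ez[1:, 1:] += dt / (eps0 * eps_r[1:, 1:] * dx) * (
            (Hy[1:, 1:] - Hy[:-1, 1:]) - (Hx[1:, 1:] - Hx[1:, :-1]))
        # Drive the small oscillating current that represents the router
        Ez[src] += np.sin(2 * np.pi * f * n * dt)

    # No absorbing boundaries here, so the grid edges behave like mirrors.
    # A time-average of |Ez| over the last few hundred steps would give the
    # signal-strength map; this just reports the instantaneous peak field.
    print("Peak field on grid:", np.abs(Ez).max())

Moving the virtual router is then just a matter of changing src and re-running the loop – exactly the kind of experiment the maps here illustrate.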


The first lesson is clear, if obvious: Wi-Fi signals travel much more easily through free space than through walls, so the ideal router position has line-of-sight to where you’ll be using it.


The waves spread and fill the flat, then settle into a ‘standing wave’.


Sometimes it appears that the waves have stopped changing, and instead flicker in the same places. This is the phenomenon of a standing wave, where Wi-Fi reflections overlap and cancel each other out. These dark spots on the map (or “not spots”) indicate a low Wi-Fi signal, and are separated by several centimetres. Recently, a fellow enthusiast managed to map this phenomenon in three dimensions, as explained in this video.
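

The “several centimetres” follows directly from the Wi-Fi wavelength: neighbouring nulls of a standing wave sit half a wavelength apart, which at 2.4GHz works out at a little over 6cm.

    # Spacing between the dark spots ("not spots") of a 2.4GHz standing wave
    c = 3e8                      # speed of light, m/s
    f = 2.4e9                    # Wi-Fi frequency, Hz
    wavelength = c / f           # about 0.125m, i.e. 12.5cm
    print(f"Null-to-null spacing: {wavelength / 2 * 100:.1f} cm")   # about 6.2cm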


So the second lesson is less obvious and more interesting: if reception is poor in a particular position, even a slight change of the router’s position may produce significant improvement in signal strength, as any signal dark spots will also move.


101 uses for electromagnetic waves


After publishing my findings I was struck by the number of people keen to perform simulations of their own. Ever eager to spread the gospel of electromagnetism, I bundled the simulation into an Android app to give others a simulated electromagnetic-wave-based answer to a common modern problem: where’s the best place for my Wi-Fi router?


I had assumed few would be interested, so I was surprised when news spread via social media and several thousand copies of the app sold over the course of a few hours.


Sales have gradually dwindled but the message remains clear: not only are electromagnetic waves fascinating, mathematically elegant and supremely useful, they can make your life easier, your internet connection stronger, and even make you a bit of money too.


The Conversation

A crash with no obvious cause: we must wait for answers from Germanwings black box

Recovering the lost aircraft will be hampered by the terrain, snow and weather. EPA/Sebastien Nogier

An investigation has begun into the unexplained crash of Flight 4U9525, operated by budget airline Germanwings, which came down in the Alps in southeastern France en route from Barcelona to Dusseldorf with the loss of all 150 passengers and crew.


The aircraft descended from its cruising height of 38,000ft to around 6,000ft in eight minutes before air traffic control lost contact just before 11am. According to witnesses who saw the aircraft descend, there was no sign of smoke or in-flight explosion, and the weather at the time was good. The black box flight recorder has been found, and will reveal more in time.


Such incidents are actually quite rare in statistical terms. Flight 4U9525 appears to have suffered a major malfunction of some kind while cruising, whereas the majority of accidents occur during take-off or landing. In fact, most air accidents that involve fatalities also see a large proportion of passengers survive, because they happen nearer the ground – a fact that is not generally appreciated, but sadly also not the case here.


The abrupt end of the aircraft’s flight path over the Alps. EPA/ZIPI


The aircraft: Airbus A320


The aircraft, an Airbus A320, is a model in great demand from all parts of the world, with an excellent reputation for safety and reliability. It is part of a single-aisle family comprising the A318, A319, A320 and A321 that has been in production since the late 1980s, and sales of the updated models show little sign of decline.


The A320 family has an accident rate of 0.14 fatal crashes per million departures, which is considered excellent. The total number of accident fatalities is below 1,500, which is good considering the type’s nearly three decades in service and the more than 6,000 aircraft in daily use.


There have been some memorable A320 accidents: in June 1988 an Air France airliner crashed into trees while performing a low-speed fly-past at an air display near Mulhouse in France. Three of the 136 passengers on board died, and airliners are no longer permitted to perform at airshows with passengers on board.


In January 2009, in a remarkable piece of airmanship, a US Airways A320 taking off from LaGuardia in New York suffered a double engine failure from bird strikes and subsequently glided to a perfect ditching in the Hudson River. Of the 155 people on board there was only a single serious injury.


In this case it’s been reported that the particular aircraft involved was 24 years old, with the aircraft having previously been in service with German national airline Lufthansa before being transferred to Germanwings, a Lufthansa subsidiary. While this may surprise some, there’s little doubt that its full service records will show it was airworthy before its final departure, and that all necessary servicing had been completed in the years since manufacture. European airspace and flights are heavily audited by the European Aviation Safety Agency and are considered very safe. Lufthansa operates 100 A320s, Germanwings 60.


The A320 family were among the first so-called “fly-by-wire” airliners, a great innovation when they first flew. In simple terms, the cables and pulleys connecting the moveable flight control surfaces (elevators, rudder and ailerons) to the pilots’ controls are replaced by electronic connections. These permit lighter control pressures, swifter response and better handling than previous manual systems, and do away with the image of “wrestling with the stick”. It’s now accepted that fly-by-wire technology, once the preserve of military aircraft, is perfectly safe for commercial use.



In-flight emergency


With regard to airborne emergencies, it goes without saying that there are procedures for all eventualities, and that these are practised by aircrews on a very regular basis. In all cases, human-factors training dictates that one pilot physically flies the aircraft while the other attempts to isolate or solve the problem using checklist procedures and advises the cabin crew and the air traffic authorities that an emergency exists.


So it’s puzzling to investigators that Flight 4U9525 issued no “mayday” distress call, as confirmed by France’s aviation authority despite earlier contradictory reports. This is unusual: if the situation was so catastrophic that it led to an immediate and rapid descent, for whatever reason, then possibly the aircraft or its communications systems had become disabled in some way. If it was cabin depressurisation that caused such a descent, each pilot has about 15 minutes of independent oxygen supply (the passengers have no more than 12 minutes' worth).


It’s tragic that even at the low altitude of around 6,000ft the aircraft was unable to avoid colliding with the lower slopes of the Alps, and that all on board perished. What remains certain is that the air accident investigators will piece together Flight 4U9525’s final moments to assemble a true picture of what happened in the run-up to the crash, in an effort to prevent a recurrence. Sad though these events are, commercial air travel remains the safest form of travel in the 21st century, and is likely to remain so.


The Conversation

Breaking up is never easy -- and splitting BT won’t give us better broadband

BT is big... but a smaller BT won't necessarily be more beautiful. Nick Ansell/PA

There have frequently been calls from one side or another for BT, the former telecoms monopoly, to be broken up. Now, with BT having emerged as the buyer of mobile phone network EE, complaints about BT’s power – which have never gone away – have grown louder.


But other than competitors’ chagrin, is there any evidence that breaking up BT would deliver a better phone and internet service for customers?


By virtue of its history, BT owns and manages almost all the telephone and broadband cables and exchanges in the country (which other service providers must pay to access) while also offering its own competing home and business packages to customers. An advantageous position to operate from, competitors such as Sky and TalkTalk might say.


These concerns were previously tackled by telecoms regulator Ofcom in 2005, when it required BT to separate its broadband and phone network access services by creating Openreach, an arms-length division of BT that handles the national broadband network. Openreach is required by Ofcom to offer the same terms to competing firms as it does to BT in order to provide a level playing field – a process generally known as Local Loop Unbundling.


So although Sky and TalkTalk hold 20% and 15% of the UK broadband market respectively, this is only possible because they can use the Openreach network to connect their customers: unless anyone else embarks on a hugely ambitious (and unnecessary) cable-laying project, it’s essentially the only telephone network there is. BT itself has a 31% market share.


Mobile phone operator Vodafone is soon to enter the market as it gears up to offer domestic broadband services using Openreach’s network, and also its own national infrastructure acquired through the purchase of Cable & Wireless Worldwide in 2012.


The concern expressed by Sky and TalkTalk is that, by being part of the BT Group, Openreach is too heavily influenced by the strategic decisions of its parent. This in turn, they argue, can result in under-investment in the UK’s broadband infrastructure – to the detriment of their own businesses, as they are totally reliant on Openreach to deliver services. Such under-investment could worsen, they claim, as BT has to find £12.5 billion to pay for its acquisition of EE. But these arguments ignore the strict regulatory framework imposed by Ofcom, under which Openreach must operate.


Towering over the competition? BT Tower by Sue Robinson/www.shutterstock.com


More than one set of wires to consider


So, should the UK’s broadband network be managed by a totally separate and independent company, along the lines of Network Rail (railways) or National Grid (electricity and gas), by taking Openreach off BT?


Before we consider that question we must recognise another important player in the mix: Virgin Media has a 20% share of the UK broadband market but delivers services over its own cable TV network built during the 1990s – a market to which BT was denied access in order to stimulate competition. Virgin Media’s new owner, US firm Liberty Global, has recently sanctioned a £3 billion investment to expand its network reach by a third. But there is no obligation from Ofcom for Virgin Media to offer access to its network for other providers: if Openreach becomes independent, what should happen to the 20% marketshare based on infrastructure owned by Virgin Media?


Equally, what do you do about the growth in mobile broadband? With 4G connectivity speeds now rivalling those available on some domestic broadband connections, it’s clear there’s going to be significant growth in this area and new competition. So should mobile broadband access also be brought under the wing of a National Grid-style company?


Keeping the consumer’s interest at heart


Sky and TalkTalk clearly have a vested interest, hoping that an independent Openreach will mean lower prices for them – but not necessarily for us. On the other hand, BT is unlikely to want to divest itself of Openreach, which currently generates almost 30% of its revenue. Complicating any potential break-up would be the question of how a much smaller BT would then be able to support its pension scheme – already a cool £7 billion in the red.


In the ten years since Ofcom’s first strategic review of digital communications, the telecoms landscape and our internet use have changed enormously. While the UK may well have lagged behind Europe in broadband access speed for much of that time, there is clear evidence that things are changing for the better. Competition, regulation, investment and government initiatives to tackle difficult areas such as rural connectivity are helping to improve performance in a way that benefits us all.


So extricating Openreach from BT won’t necessarily change anything. It’s already heavily regulated by Ofcom, and that would continue. If service providers believe an independent Openreach would drive down prices, where would the investment required to further expand the high-speed fibre and future G.fast networks come from? And how should we compare domestic broadband services delivered over the Openreach network with those of cable and mobile operators?


Limiting our consideration to BT is to see only part of the overall picture. Hopefully Ofcom’s new strategic review will take a much wider view.


The Conversation

A promised 'right' to fast internet rings hollow for millions stuck with 20th-century speeds

Superfast? I'll be the judge of that. BT van by urbanbuzz/www.shutterstock.com

In response to the government’s recent declarations that internet speeds of 100Mb/s should be available to “nearly all homes” in the UK, a great many might suggest that this is easier said than done. It would not be the first such bold claim, yet internet connections in many rural areas still languish at 20th-century speeds.


The government’s digital communications infrastructure strategy contains the intention of giving customers the “right” to a broadband connection of at least 5Mb/s in their homes.


There’s no clear indication of any timeline for introduction, nor what is meant by “nearly all homes” and “affordable prices”. But in any case, bumping the minimum speed to 5Mb/s is hardly adequate to keep up with today’s online society. It’s less than the maximum possible ADSL1 speed of 8Mb/s that was common in the mid-2000s, far less than the 24Mb/s maximum speed of ADSL2+ that followed, and far, far less than the 30-60Mb/s speeds typical of fibre optic or cable broadband connections available today.
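

To make those speeds concrete, here is roughly how long a single 1GB download (an arbitrary example – say a modest software update) would take at each of the rates mentioned, ignoring protocol overheads and assuming the line actually delivers its headline figure:

    # Time to download a 1GB file at the headline speeds discussed (in Mb/s)
    file_size_megabits = 1 * 1000 * 8      # 1GB (decimal) expressed in megabits

    speeds_mbps = {
        "proposed 5Mb/s minimum": 5,
        "ADSL1 maximum": 8,
        "ADSL2+ maximum": 24,
        "typical fibre/cable (midpoint of 30-60Mb/s)": 45,
    }

    for name, speed in speeds_mbps.items():
        minutes = file_size_megabits / speed / 60
        print(f"{name}: about {minutes:.0f} minutes")

At the proposed minimum that single file ties up the connection for nearly half an hour; on a typical fibre or cable line it takes about three minutes.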


In fact, a large number of rural homes still cannot access even the 2Mb/s minimum promised in the 2009 Digital Britain report.


Serious implications


As part of our study of rural broadband access we interviewed 27 people from rural areas in England and Wales about the quality of their internet connection and their daily experiences with slow and unreliable internet. Only three had download speeds of up to 6Mb/s, while most had connections that barely reached 1Mb/s. Even those who reported the faster speeds were still unable to carry out basic online tasks in a reasonable amount of time. For example using Google Maps, watching online videos, or opening several pages at once would require several minutes of buffering and waiting. Having several devices share the connection at a time wasn’t even an option.


So the pledge of a “right” to 5Mb/s made by the chancellor of the exchequer, George Osborne, is as meaningless as the previous promises of 2Mb/s – nor does it go nearly far enough. The advertised figure refers to download speed, of which the upload speed is typically only a fraction. This means uploads far slower even than these slow download speeds, rendering the connection all but unusable for those who need to send large files, such as businesses.


With constantly slipping timescales for completion, the government doesn’t seem to regard adequate rural broadband connections as a matter of urgency, even though the consequences for those affected are often serious and pressing. In Snowdonia, for example, a fast and, more importantly, reliable broadband connection can be a matter of life and death.


The Llanberis Mountain Rescue team at the foot of Mount Snowdon receives around 200 call-outs a year to rescue mountaineers from danger. Their systems are connected to police and emergency services, all of which run online to provide a quick and precise method of locating lost or injured mountaineers. But their internet connection is below 1Mb/s and cuts out regularly, especially in bad weather, which interferes with dispatching the rescue teams quickly. With low signal or no reception at all in the mountains, neither mobile phone networks nor satellite internet connections are alternatives.


All geared up but no internet connection. Anne-Marie Oostveen, Author provided


Connection interrupted


Even aside from life-and-death situations, slow and unreliable internet can seriously affect people – their social lives, their family connections, their health and even their finances. Some of those we interviewed had to drive an hour and a half to the nearest city in order to find internet connections fast enough to download large files for their businesses. Others reported losing clients because they weren’t able to maintain a consistent online presence or conduct Skype meetings. Families were unable to look up information on their children’s serious health conditions, while others, unable to work from home, were forced to commute long distances to an office.


Rural areas: high on appeal, low on internet connectivity. Bianca Reisdorf, Author provided


Especially in poorer rural areas such as North Wales, fast and reliable internet could boost the economy by enabling small businesses to emerge and thrive. It’s not a lack of imagination and ability holding people in the region back, it’s the lack of 21st-century communications infrastructure that most of us take for granted.


The government’s strategy document explains that it “wants to support the development of the UK’s digital communications infrastructure”, yet in doing so wishes “to maintain the principle that intervention should be limited to that which is required for the market to function effectively.”


It is exactly this vagueness that is currently preventing communities from taking matters into their own hands. Many of our interviewees said they still hoped BT would deploy fast internet to their village or premises, but they had been given no sense of when that might happen, if at all – and any timescales they had been given kept slipping. “Soon” seems to be the word that keeps those in the countryside in check, causing them to hold off on looking for alternatives – such as community efforts like the B4RN initiative in Lancashire.


If the government is serious about the country’s role as a digital nation, it needs to provide feasible solutions for all populated areas of the country – solutions that are affordable and future-proof, which means fibre to the premises (FTTP) – and sooner rather than later.


The Conversation