
Monday, September 14, 2015

Diplomacy, not sanctions, is needed to tackle state cyberespionage

More jaw jaw, less war war. Ad Meskens, CC BY-SA

The war of words between China, Russia and the US has escalated recently with the White House declaring its intention to apply sanctions in response to what the US sees as state-sponsored cyberattacks from the east.

So far in 2015, Russia has been implicated in hacks of the IRS, the White House, the Joint Chiefs, and the State Department.

China has been named the culprit for the hack of the Office of Personnel Management, which stole the personal records of nearly 21m US citizens. The two countries are now reportedly working together in the difficult work of deciphering the raw data from these hacks.

The proposed sanctions may target individuals and corporations – some of them likely close to the governments of both countries – for their role. The problem with confronting Russian moves in cyberspace is that economic sanctions are a virtually toothless tool, given what we know about Russia and sanctions. China may be an entirely different story, given its current economic problems and its high interconnectedness with the global economy, but Russia is better placed to defy Western sanctions: the declining price of oil has hurt it more than sanctions ever could, and it is not central to the global economy.

Russia is already in decline and confronting Russia’s leadership now will demand a response. With the continuing stalemate in the civil war in Ukraine, Russia has backed itself into a corner and has no easy way out. It has already been sanctioned for its actions in Ukraine, which include arming the separatists and sending in Russian regulars to fight alongside the rebels in Donbas. Yet these sanctions have only emboldened the Kremlin to see the conflict through to the very end and made Vladimir Putin more popular at home for his tough stance against the West.

So with this in mind, what good would sanctioning Moscow again for hacking computer networks do? Sanctions are an ineffective tool for cyberspace disputes. They do not go to the root of the problem, which lies in the nature of espionage and in the oversights and weaknesses in securing our own networks. The fact that many government networks are still running 14-year-old Windows XP suggests that much of the blame lies with our own governments’ ineptitude. Huge vulnerabilities such as these are invitations to hackers of any sort. We should shore up our defences before finding a way to respond; to do otherwise is premature.

Espionage vs cyberespionage - tactics are different, but the game is the same. NSA

Bring everyone into the tent

Why do sanctions often fail, especially when targeted at individuals and companies? To have any effect, sanctions must be comprehensive, leaving those sanctioned no other avenues to the resources they are denied. When sanctions are targeted and unilateral, this is hard to achieve. It has been over a year since the Department of Justice indicted five People’s Liberation Army officers for cyber-espionage, yet China continues its campaigns against US networks.

Without the support of the entire international community and a willingness to target the entire country, sanctioning Russian individuals or companies would not stop Moscow from continuing to exploit the vulnerabilities in US networks. Of course, reaching international agreements is complicated by the fact that the US is also a major player in the game of international cyber-espionage, and Russia and China feel that if their cyberspace is violated by the US then they are justified in responding.

The cyberspace domain has existed for more than 25 years: these are not new threats or methods of attack – and confronting these problems with traditional sanctions fails to recognise their limitations when applied to this domain. Two steps are needed to confront Russia: achieve a workable framework for stability in Ukraine and develop rules and norms in cyberspace to regulate the constant violations that are considered part of spycraft.

There is evidence that we have done much to develop a system that might work for China. Just recently, senior Chinese and US officials have held talks to discuss cybersecurity issues ahead of Chinese president Xi Jinping’s official visit to Washington. But Russia is often left out of the picture. Russia must be brought into the international community and participate in developing a system of regulation for cyberspace. Russia should be included in the process of considering what cyber-laws might be, but currently this is impossible as this effort is centred in Tallinn, Estonia, which left post-Cold War Russia for NATO.

This is not a call for greater respect of Russia, but a call to respect every stakeholder in the international system as we try to figure out what is allowed in a world of constant cyber-threats. Excluding a major state actor only ensures that it will do what it can to undermine any new framework.

Escalation is not the answer. Sanctions are weak and ineffective. They make us feel like something is being done even though the moves are generally regressive and target innocent civilians. “Smart sanctions” are just a buzzword. Those who feel the need to apply sanctions must face up to the inefficiency of their own defences, the inadequacy of their offence, and the weakness of any sanctions regime in achieving its aims.

There are no quick fixes; only concerted action by the entire international community will establish the rules for the cyberspace world.

The Conversation

Monday, July 20, 2015

In first case of its kind, UK court rules surveillance law unconstitutional

European legislation has emerged triumphant once more. court by Peter Fuchs/shutterstock.com

Controversial surveillance legislation hustled through parliament last summer has been ruled unlawful by a UK court, which argued that the vague terms and descriptions of powers in the Data Retention and Investigatory Powers Act 2014 (DRIPA) renders the act incompatible with human rights under European law.

In a 44-page ruling, the divisional court criticised the lack of clarity and detail in spelling out the terms and conditions under which communications data can be intercepted by police and intelligence agencies, declaring the act “incompatible with the British public’s right to respect for private life and communications and to protection of personal data under Articles 7 and 8 of the EU Charter of Fundamental Rights”.

It is a decision that must have caused howling with both joy and indignation from both sides of the House of Commons, as the legal action that led to this ruling was a cross-party effort by Conservative MP David Davis and Labour MP Tom Watson. The judgment makes for interesting and instructive reading, providing as it does an overview of the fissures emerging between how national legislatures, led by the UK, regard their relationship to the European Union’s institutions.

Precedence of international law

DRIPA, one in a series of laws supporting controversial surveillance powers passed by successive UK governments, establishes the principle by which anti-terrorism measures and national security priorities take precedence over human rights considerations. However, the judgment rules that the EU Charter of Fundamental Rights must take precedence, and in doing so requires the UK government to undo its own act of parliament – a significant precedent by a British court.

David Anderson QC, the Independent Reviewer of Terrorism Legislation, comments that this ruling confirms the already well-established supremacy of EU over national law. But it also underscores the UK’s truculence in complying with this principle compared to other European nations.

Human rights online

The judgment also adds to international recognition, such as from the UN, that the way people use the internet is a human rights issue.

It does not refer to the wider geopolitical context of issues around the internet’s design, governance and use – from Wikileaks to the Snowden revelations, to the recent appointment of Joe Cannataci as the first UN Special Rapporteur on Privacy. But reading between the lines, it reflects the quiet sea-change underway in national and international courts as they start to comprehend the legal and political challenges of a world increasingly dependent upon computer networks and communication.

This judgment vindicates the efforts and the slow drip, drip effect of long-term lobbying from across the political spectrum for formal recognition that human rights matter online.

What this ruling makes more apparent is the lack of appropriate and affordable legal means for people to rectify violations of those rights. A recent report from the Council of Europe’s Commissioner for Human Rights on the rule of law on the internet, and a move to incorporate human rights into the heart of the internet’s governing bodies such as ICANN demonstrate that the debate is moving in that direction. The Right to Be Forgotten rulings are another example of courts deciding that human rights trump the technocratic approach.

Access regime poorly governed

It’s good news that those with power to enforce these principles are doing so, with courts correcting the government’s misuse of terrorist threats and abuse of the spirit of the law that governs the democratic process.

However, while the ruling is a positive step towards more robust checks and balances to abuses of executive power, it draws a distinction between its opinion on controversial EU laws governing blanket data retention, and its judgment that DRIPA lacks adequate standards governing access to that data.

Data retention and access to it may be legally distinct, but through mandatory data retention regulations EU member state governments have access to considerable details of our private lives online. With retention periods varying from six months to two years across the EU, this scale of data retention has been a source of friction between EU nations, and a bone of contention for civil liberties groups.

In 2014 the Court of Justice of the European Union (CJEU) ruled that the 2006 EU Data Retention Directive violated the same elements (Articles 7 and 8) of the EU Charter of Fundamental Rights as DRIPA. This ruling may have made DRIPA legally redundant at the time it was hurried into law, but that did not diminish its political significance in the Conservative government’s use of cybersecurity rhetoric.

Why is the fine line between retention and access important? As privacy expert and online human rights advocate, the late Caspar Bowden noted in one of his interventions, this is much more than an academic distinction:

Ubiquitous personal communication technologies are here to stay. Because of exponentially falling data storage costs, two contrasting states of society can be envisaged … either that individuals determine whether and when their history is recorded, subject to exceptions, or that data will exist about everyone all the time. This is the policy choice between data retention and preservation, and it is a sharp dichotomy.

Bowden puts his finger on the political dimensions to legalities about the rights implications of the intimate entanglement of internet media and communications with everyday life, politics, and business. So the question remains: why do we permit governments and companies to retain so much data, about so many people, for so much of the time? I hope this judgment on access will be a first step along the path of the “broader-reaching change to data retention” David Anderson suggests may be in the air.

The Conversation

Thursday, July 9, 2015

Trusting hackers with your security? You'd better be able to sort the whitehats from the blackhats

Twitter

To think that men are so foolish that they take care to avoid what mischiefs may be done them by polecats or foxes, but are content, nay, think it safety, to be devoured by lions.

English philosopher John Locke’s words from 1689 describe the way in which fear for their own security may irrationally drive citizens to accept the absolute authority of the state. His words may bring to mind the NSA surveillance scandal, or more recently the devastating hack of cybersecurity firm Hacking Team.

The controversial “security” firm – parlance for hackers for hire – had its servers compromised, company files stolen and social media and email accounts hijacked. Some attribute the attack to activists aiming to expose the firm’s dealings with authoritarian regimes – the 400Gb file the attackers posted online contains details that apparently support the concerns of Reporters Without Borders and the University of Toronto’s CitizenLab.

Others believe the attack originated from a competing firm. In any case, what it demonstrates is that hackers today – as much as well-funded government intelligence agencies – can affect national and international politics, foster or disregard human rights, and ultimately shape the development of democracy.

Striking the balance

Communication technology has become both a valuable asset needing protection and the means of attack. A balance must be struck between the rights of citizens – privacy, freedom of speech and information – and the requirements of the state to keep them safe and to secure itself against outside and inside threats.

The debate over the use of encryption is a case in point: on one hand encryption shields its users from intrusive surveillance, protecting their privacy. On the other, by thwarting the surveillance of law enforcement, encryption limits the state’s ability to protect its citizens. Striking a balance between individual rights and security is not a simple matter – Hobbes and Locke debated the problem centuries ago and it has been debated ever since. But the attack on Hacking Team reveals something more.

The new lords of the internet wild west

The term “hacker”, aside from suggesting a high level of technical expertise, fails to capture the wide range of aims and motivations driving these experts – hacktivism, crime or terrorism, for example. Hackers are not just tech-savvy experts – they are the new makers, capable of shaping debate and consequently the path societies take. Look at their role in the events of the Arab Spring, in the fight against regulation of intellectual property and copyright, and in groups such as the Syrian Electronic Army (or Hacking Team) that support governments’ intelligence activities.

The old rules and old sources of power don’t necessarily apply on the internet. security by Kirill__M/shutterstock.com

More worryingly this hack, like the many others before it, reveals the unregulated grey area in which hackers operate. Hacking Team, based in Italy, has always denied accusations that it works with authoritarian governments, including those, such as Sudan, against which the European Union has arms embargoes. But will the details now revealed lead to any action against the firm? Were its actions illegal under national or international law? It’s just not clear.

What is clear is the regulatory vacuum and lack of any effective restraints on the activities of hackers and cybersecurity firms, and the inability to distinguish legitimate from illegitimate uses of hacking expertise. Indeed, many working in the field cross from being “blackhat” (illegal) to “whitehat” (legal) operators. The distance between the two is often paper thin.

Bringing light to the shadows

Hackers prefer acting in the shadows, affording them anonymity and room for manoeuvre. Governments and intelligence services may favour a similar approach, allowing them to operate outside various constraints. But in the long run, information societies – especially democratic ones – cannot afford the risks of allowing this activity to remain in the shadows. As Locke pointed out, left to their own devices, the apparent saviour has the potential to become the next lion.

There have been attempts to regulate cybersecurity firms: the Wassenaar Arrangement, for example, defines rules controlling the export of surveillance software to specific countries. Very recently, the US Bureau of Industry and Security defined new rules based on the Wassenaar Arrangement. But this effort has significant drawbacks: the new rules are so broad that, while forbidding collaboration with blacklisted countries, they could also restrict the legitimate use of tools for improving computer security.

Such shortcomings are common when lawmakers attempt to regulate areas with which they are unfamiliar, overlooking their novelties and peculiarities. This has exacerbated the policy vacuum. The same can be seen in the application of the right to be forgotten in Europe, and the regulation of cyber-warfare.

The internet is the new realm with vital importance for all of us – and hackers are the masters of it, with the potential to bring about radical change, reshape the political status quo and redefine our understanding of political power. Until this is understood in the context of the current information revolution, any attempt to legislate and regulate the role of the hacker is doomed to failure.

The Conversation

Monday, June 29, 2015

Government must invest in skills and police resources to tackle cybercrime

There aren't enough skilled investigators to tackle the cybersecurity problem. polygraphus/shutterstock.com

It is estimated that the cost of cybercrime to the UK economy is around £27 billion per year, around 2% of national GDP. Some experts suggest this is an underestimate, excluding as it does important vectors of cybercrime such as malware.

Computer security firm Norton estimates that more than 12.5m people in the UK fall victim to cybercriminals every year – 34,246 cases each day – with an average loss of £144 each. Again, this is probably an underestimation when one considers that many people will be victims of hacks or malware without ever knowing, and so they go unreported.

A global study conducted by the UN Office of Drugs and Crime reported rates of cybercrime including hacking leading to theft and fraud at rates of up to 17%, significantly higher than rates of their conventional equivalents at less than 5%.

Fighting cybercrime is by no means easy. The wide range of technologies and vectors of attack available to cyber-criminals, and the cross-border nature of these crimes, make investigating them difficult. The fragile nature of digital evidence complicates matters further: skilled cybercriminals can erase the tracks and traces they leave behind. And the intrusive nature of investigating cybercrimes – which typically requires removing computer equipment for analysis – raises privacy issues that make digital forensics an even more complicated task.

Policing cybercrime in the UK

In the context of UK policing, the National Association of Chief Police Officers (formerly ACPO) Core Investigative Doctrine provides a strategic framework and good practice guidelines for forensic investigation of e-crimes. Since 2011, the UK government has adopted a centralised approach as part of its National Cyber Security Program, with the National Cyber Crime Unit (NCCU), part of the UK National Crime Agency, the central focus for tackling cybercrime in partnership with government agencies such as GCHQ and the Home Office.

The government has committed £650m to the cybersecurity programme to improve the nation’s cyber-defences and resilience. But considering that around 60% of this is to go to GCHQ for intelligence activities, this leaves only £260m for investigation and law enforcement – a figure that does not compare favourably to the estimated cost (£27 billion) of the crimes the NCCU is to investigate.

According to the commissioner of City of London Police, Adrian Leppard, there are 800 specialist internet crime officers, yet a quarter of them are expected to lose their jobs to budget cuts in the next two years. Again, considering Norton’s estimate of 34,246 individuals falling victim to cybercrime every day in Britain, the remaining 600 investigators would need to address 57 cases each, every day of the year – a mission impossible.
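The back-of-envelope arithmetic behind that claim is easy to reproduce; all figures are the ones quoted in the text (Norton’s annual victim estimate, 800 officers, a quarter of posts cut).

```python
# Caseload arithmetic using the figures quoted in the text.
officers_now = 800
officers_after_cuts = officers_now - officers_now // 4   # 600 officers remain
victims_per_year = 12_500_000                            # Norton's estimate

victims_per_day = victims_per_year // 365                # daily victim count
cases_per_officer = victims_per_day / officers_after_cuts

print(victims_per_day)            # 34246
print(round(cases_per_officer))   # 57 cases per officer, every day
```

The numbers confirm both figures in the article: roughly 34,246 victims per day, and about 57 daily cases per remaining investigator.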

Skills needed

So the imbalance between the capabilities of organised e-crime groups and the limited capacities of law enforcement agencies is not something that the UK can resolve in the near future. However, some solutions may narrow the gap and confine criminals’ opportunities.

Most obvious is how few university courses there are at undergraduate and especially at postgraduate level in cybersecurity and e-crime forensics that could train the skilled investigators required. Tackling the threat of organised criminals working in cybercrime over the long term requires knowledgeable experts to profile, track, detect, and ultimately provide the information that can lead to their arrest.

At a recent TechUK event, attendees argued that the lack of prosecutions under the Computer Misuse Act in the 25 years since it was introduced suggests the law is not fit for purpose – and that the skills required to bring a prosecution under it are currently in short supply.

While the lion’s share of resources goes to GCHQ, the targets of its intelligence are not necessarily the criminal gangs of interest to the police. More resources for police agencies are necessary to bring investigative capacities up to the same level of the gangs they’re investigating.

GCHQ has reported that 80% of cyber-attacks can be prevented through better education and awareness among users. Developing regional hubs to promote cybersecurity training and education among general users would be key.

The fact that the Anonymous self-styled “hacktivists” whose attacks on Paypal cost the firm £3.5m were sentenced only to seven and 18 months might suggest that cybercrimes are sentenced lightly. A better understanding among judges and juries of the serious implications of cybercrimes and greater punishments and fines for financial crimes could help make cybercrime less rewarding to criminals.

The Conversation

Sunday, June 7, 2015

US hack shows data is the new frontier in cyber security conflict

Data mining Shutterstock

More than four million personal records of US government workers are thought to have been hacked and stolen, it has been revealed. With US investigators blaming the Chinese government (although the Chinese deny involvement), this incident shows how data could be the new frontier for those in cyberspace with a political agenda.

In April 2015, the US Office of Personnel Management (OPM) – the body that provides the human resources function for the federal government and is responsible for background checks for security clearances – realised its records had been hacked.

Along with the direct personnel details, the OPM records contain a whole range of references and contacts. The sensitive data could be used to identify people with security clearances, and could be used for the impersonation or blackmail of federal employees. Someone with security clearance could be exposed to identity fraud, where an intruder gains access to sensitive information using the stolen identities.

The data could also be used to hack into other government sites. For example, intruders recently attempted to breach the Internal Revenue Service’s systems (this time blamed on Russia) using personal information taken from tax returns stolen during other commercial breaches.

Such attacks create a certain amount of national humiliation. The hacking of confidential data from Sony highlighted how embarrassing it can be for information to leak. The contents of its sensitive emails are now searchable on Wikileaks, and we have probably only seen the tip of the iceberg in terms of the data that was taken.

How did the hackers beat the system?

Aware of the threat of attack, the OPM said it has “undertaken an aggressive effort” to improve its cybersecurity over the last year. So why, many might ask, did it take the government so long to detect the security breach?

Many large companies now use advanced intrusion detection systems (IDS) that raise alerts of possible security breaches, which are then collected, logged and analysed. At the OPM, the system that detected the breach was called EINSTEIN. It was developed by a division of the Department of Homeland Security to monitor the exit points of US government networks, examining the packets carried around a network for possible signs of intrusion.

The growing threat of attacks has led to the use of tools that gather all the event logs from IDS agents on a network. Human analysts then have to make sense of the events coming in, in order to spot possible signs of an intrusion. To do this advanced computer systems filter down the event logs and present only the most important ones to the analysts.
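A minimal sketch of that filtering step might look like the following. The event fields, severity scale and thresholds are invented for illustration; they are not drawn from EINSTEIN or any real SIEM product.

```python
# Toy IDS alert triage: reduce a stream of event logs to the few
# highest-priority alerts a human analyst should look at first.
def triage(events, min_severity=7, max_shown=10):
    """Keep only high-severity events, most severe first."""
    critical = [e for e in events if e["severity"] >= min_severity]
    critical.sort(key=lambda e: e["severity"], reverse=True)
    return critical[:max_shown]

events = [
    {"src": "10.0.0.5", "signature": "port scan", "severity": 3},
    {"src": "10.0.0.9", "signature": "known C2 beacon", "severity": 9},
    {"src": "10.0.0.7", "signature": "failed login", "severity": 2},
    {"src": "10.0.0.9", "signature": "large outbound transfer", "severity": 8},
]

for e in triage(events):
    print(e["src"], e["signature"])
```

Here only two of the four events survive the filter, which is exactly the trade-off described above: analysts see less noise, but anything scored below the threshold never reaches them.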

Security Operations Centres (SOC) and SIEM (Security Information and Event Management)

Unfortunately some of the tell-tale signs of an intrusion can be lost. In the case of EINSTEIN, the system has to monitor the gateway devices from each of the partner government agencies, where it might be difficult to detect an intruder who has remote access to the inside of one of the networks.

It is common for an IDS to detect high rates of data loss, where large amounts of data are siphoned off the network. So if the data loss is fairly slow, the IDS will often not detect it. The system must be tuned to spot standard signs of intrusion without triggering so many alerts that it swamps its human administrators. Cyber attackers, however, often understand these standard detection methods and will slow down the intrusion to avoid being noticed.
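The evasion described above can be shown with a toy rate-threshold detector. The threshold and transfer volumes below are illustrative only; real detectors are more sophisticated, but the blind spot is the same.

```python
# Why "low and slow" exfiltration evades rate thresholds: both attackers
# move the same 500 MB, but only the fast one crosses the per-hour limit.
THRESHOLD_MB_PER_HOUR = 100

def alerts(transfer_log):
    """Return the hours in which outbound volume crossed the threshold."""
    return [hour for hour, mb in transfer_log if mb > THRESHOLD_MB_PER_HOUR]

fast_attacker = [(0, 500)]                    # 500 MB in a single hour
slow_attacker = [(h, 5) for h in range(100)]  # the same 500 MB over 100 hours

print(alerts(fast_attacker))   # [0]  -> detected
print(alerts(slow_attacker))   # []   -> invisible to the threshold
```

Lowering the threshold catches the slow attacker but also floods analysts with alerts from legitimate traffic, which is the tuning dilemma the paragraph describes.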

Many networks use a firewall to separate servers that can be accessed from untrusted networks from the main network infrastructure, which is protected on a separate network. In many large networks, IDS agents exist across the whole network and listen for possible intrusions. The problem is that an intruder can often get past the firewall and then remotely access the protected systems. Many organisations also allow employees to access their computers remotely through a secure network connection. With stolen access details, an intruder can use this remote access path in the same way.

The other major weakness of many IDSs is that they cannot examine the contents of encrypted data packets, such as when users visit secured websites starting with “https://”. To overcome this, many systems ban direct secure connections and route the data via a proxy, where the packets between the user’s computer and the secure connection to the internet can be examined. Unfortunately, intruders can set up what is known as an end-to-end encryption tunnel, which bypasses this provision and in which data loss cannot be detected by the proxy or IDS.
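That limitation is easy to demonstrate with a toy signature matcher. The “signature” string is hypothetical, and a single-byte XOR stands in for real encryption purely to keep the example self-contained; the point is only that the inspected bytes no longer contain the pattern.

```python
# Signature-based inspection fails on encrypted traffic: the same payload,
# once encrypted, no longer contains the byte pattern the IDS looks for.
SIGNATURE = b"SELECT * FROM personnel"   # hypothetical exfiltration signature

def ids_match(packet: bytes) -> bool:
    """Flag any packet containing the known-bad byte pattern."""
    return SIGNATURE in packet

def toy_encrypt(data: bytes, key: int = 0x5A) -> bytes:
    """Stand-in for real encryption: XOR every byte with a fixed key."""
    return bytes(b ^ key for b in data)

payload = b"... SELECT * FROM personnel WHERE id=42 ..."
print(ids_match(payload))               # True: plaintext is caught
print(ids_match(toy_encrypt(payload)))  # False: ciphertext reveals nothing
```

This is why inspecting proxies must terminate the encrypted session themselves; an end-to-end tunnel that the proxy cannot decrypt leaves the IDS matching against opaque bytes.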

Secure tunnels with proxy and end-to-end

While it has not been proven that the most recent attack was driven by a political agenda, the information once leaked from a site can then be sold on for the purposes of compromising nation states. Governments still need to understand the risks around their documents and make sure there are effective safeguards in place to restrict access to sensitive information. They often have a lot to learn from high-risk companies, such as in the finance sector, where there is often large-scale detection of intrusions and monitoring for data loss.

The US agencies say that all those affected by the hack of the OPM will be insured against any loss they might experience as a result. But data is the lifeblood of most organisations and probably one of their most important assets, so the need for improved security increases by the day.

The Conversation

Tuesday, June 2, 2015

To avoid militarising the internet, cyberspace needs written rules agreed by all

There needs to be rules that govern what takes place in the cloud as there are for what occurs on the ground. David James Paquin

In the world of foreign affairs, there are written or unwritten rules – behavioural norms – under which states operate. But there is little, if any, comparable set of structures governing actions taken in cyberspace. As cyberspace becomes a larger and more important part of life, with security implications to match, this poses a problem.

The US government recently released its strategy for cyberspace, the fourth update since 2010. Britain did the same in 2011 and again in 2013. The aim of these documents is to outline the consequences of foreign actions taken in cyberspace in order to deter their use. The problem is that, in order to promote an international norm that could be agreed upon, any global strategy should really be drawn up by a state that hasn’t already launched cyber-attacks.

For example, the US strategy document lists China, Russia, Iran, and North Korea as its prime digital enemies. The research that Ryan C Maness at Northeastern University and I undertook for our book on cyberwarfare found 20 attacks by China on the US from 2001-2011, three by Russia, one by Iran, and three by North Korea. After 2011, there have been Russian intrusions into the White House and Department of State, Iran’s attack on Saudi Arabia in 2012, and North Korea’s attack on Sony in 2014.

What is needed is a set of understood norms that specify the consequences of offensive actions taken in cyberspace. According to the US strategy, around 2% of the cyber attacks listed would invite a military response since they are of a significantly offensive nature, rather than merely inconveniences. Unfortunately, these statements alone would not be a deterrent. A military response may be an option for the US, but ultimately such threats are deemed empty without a demonstrated commitment to carry them out.

This is the classic nuclear dilemma covered so well in Dr Strangelove. How could any nation be sure another would commit to retribution in a given situation? Consequences are not a sure thing – as Syria discovered with the US’s moving “red line” on chemical weapons. Suggesting that the evidence was not definitive, the US declined to launch an attack because it could not be sure that the Syrian president, Bashar al-Assad, had condoned the attacks.

Cyber attacks prompt even deeper questions, as attribution is very difficult – and, even then, knowing who is responsible is of limited value when launching a conventional strike. Just because certain actors within a country might be responsible for an attack does not mean that the nation should be held accountable and punished.

Everybody needs rules of engagement. digitalgamemuseum, CC BY

A proper global cybersecurity strategy would need to move beyond consequences and threats towards a greater consideration of norms that could provide a basis for a collective response to the violation of agreed rules. These could include the limitation of physical damage, an agreement that civilians and civilian infrastructure are off-limits and to keep critical infrastructure such as power or water supply out of bounds in order to avoid the potential for humanitarian disasters.

But it’s tough for the US to call for military responses to cyber attacks when it is itself linked to nine such attacks between 2001-2011, including deploying the Stuxnet malware on Iran, the most advanced attack to date – not to mention all the revelations of the Snowden files. Other European nations can play a role here: with little connection to cyber-attacks, they could take a hand in outlining the future rules of the game without being hamstrung by obvious claims of hypocrisy and hidden agendas.

Cyberspace is the natural domain of research, education, social interaction and commerce. As far as possible it needs to avoid militarisation. A just and proper strategy for cyberspace cannot be left to the aggressors or the victims to define – it is in the interest of all that every nation state contributes its voice.

The Conversation

Tuesday, May 19, 2015

How a hacker could hijack a plane from their seat

In-flight hacking Shutterstock

Reports that a cybersecurity expert successfully hacked into an aeroplane’s control system from a passenger seat raise many worrying questions for the airline industry.

It was once believed that the cockpit network that allows the pilot to control the plane was fully insulated and separate from the passenger network running the in-flight entertainment system. This should make it impossible for a hacker in a passenger seat to interfere with the course of the flight.

But the unfolding story of this hacker’s achievement, which has prompted further investigation by authorities and rebuttals from plane manufacturers, means that this assumption needs to be revisited.

In a similar way, it was once believed that PIN protection was sufficient for ATMs. Then it was discovered that the sounds made when pressing an ATM’s numeric keypad could be analysed to recover the PIN, greatly reducing the time hackers need to guess it. That raised the risk of a breach well beyond the previously held assumption that the system was secure so long as nobody could see the keypad.
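
The power of this kind of side channel comes down to simple arithmetic. As an illustration (the numbers here are hypothetical, not from any published attack): if acoustic analysis narrows each key of a four-digit PIN from ten candidates down to two, the brute-force search space collapses.

```python
# Illustrative arithmetic only: assumes a side channel that narrows
# each PIN digit to a small set of candidate keys.
def search_space(digits: int, candidates_per_digit: int) -> int:
    """Worst-case number of PIN combinations an attacker must try."""
    return candidates_per_digit ** digits

blind_guessing = search_space(4, 10)   # no side channel: 10^4
with_acoustics = search_space(4, 2)    # two candidates per key: 2^4

print(blind_guessing)   # 10000
print(with_acoustics)   # 16
```

A guess that once took thousands of attempts fits comfortably inside even a generous retry limit – which is why assumptions about "sufficient" protection have to be revisited whenever a new channel of information appears.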

When it comes to technology, as one person is making sure that a system is secure, another is already working to bypass the established security. That is a worrying prospect when you’re at 30,000 feet and travelling at over 500 miles an hour.

Direct connections

The hacker claims to have been able to access the cockpit network through communication with the in-flight network. Many in-flight entertainment systems now have USB ports and some airlines run Wi-Fi. Both are potential entry points for the determined hacker to access all the plane’s computer systems.

It is highly unlikely, however, that someone hacking the passenger network could take direct control of the pilot’s network because the two systems are designed to be insulated from each other. Network engineers have long been able to control what data passes between different network segments, and aircraft systems are no exception.

The FBI and other authorities may reveal that there is no evidence that the two networks are connected. But another explanation may be that the hacker was equipped with a device (or a software probe) that can gather information from both networks. Is that likely? It is certainly possible.

Cockpit control. Shutterstock

Although insulated, the two networks in a plane are connected in the sense that they share common information about velocity, direction and weather. By monitoring just one network and comparing its traffic to real-world events, it would be very difficult to work out which signals corresponded to which pieces of information. But by looking for signals that appear in both networks at the same time, a hacker is far more likely to infer how the data relate to physical changes.
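
The correlation idea described above can be sketched in a few lines. This is a minimal illustration with invented event logs and message names – real avionics traffic looks nothing like this – but it shows the principle: messages that repeatedly appear on both networks at nearly the same moment are candidates for describing the same physical event.

```python
# Sketch of cross-network timing correlation. All log data and
# message names below are hypothetical.
from itertools import product

def correlate(log_a, log_b, tolerance=0.5):
    """Given two lists of (timestamp, message) events, return pairs
    whose timestamps fall within `tolerance` seconds of each other."""
    return [(ma, mb)
            for (ta, ma), (tb, mb) in product(log_a, log_b)
            if abs(ta - tb) <= tolerance]

cabin_log = [(10.0, "IFE-MAP-UPDATE"), (42.3, "IFE-SPEED-TICK")]
cockpit_log = [(10.2, "ADC-HEADING"), (42.1, "ADC-AIRSPEED")]

print(correlate(cabin_log, cockpit_log))
# [('IFE-MAP-UPDATE', 'ADC-HEADING'), ('IFE-SPEED-TICK', 'ADC-AIRSPEED')]
```

Each matched pair is a hypothesis about how the two networks relate; an observer who collects enough of them starts to build a map of the traffic without ever decoding a single message.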

They could then attempt to copy this traffic and send the same instructions, potentially taking control of the aircraft. Even if the messages were encrypted, it should theoretically still be possible to work out which parts of the network are talking to each other. That would let a hacker identify the systems sending instructions and launch an internal denial-of-service (DoS) attack, flooding the system with useless information and preventing the pilots from sending control data to the engines.

Monitoring the network

It is becoming imperative that airlines re-evaluate their internal aircraft security, particularly with the introduction of in-flight passenger Wi-Fi. They should also monitor any unusual network traffic that passes between the passenger cabin and the cockpit in order to watch out for any attempts at hacking.

The same principles that enable the hacking could also be used to defend against it, by allowing two independent monitors to observe the causes and effects of unfolding events on the network via satellite. When both agree that there is an issue, it could be reported back to the pilot as a noted risk.

Network engineers already do this by looking at network traffic behaviour and inferring possible issues without seeing the physical problem first hand. Given the time-critical nature of airline safety, having more than one independent check for alerts increases the assurance given to the pilot.

Any traffic not expected or requested should be treated as suspect and the prelude to a more detailed investigation. The aircraft could then automatically call on the services of remotely working security experts. This would allow them to warn the pilot of any attempted security breach and provide advice on how to deal with it.
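
One simple way to treat unexpected traffic as suspect is an allowlist of permitted flows between the two network segments. The sketch below is purely illustrative – the segment and flow names are invented, not taken from any aircraft system – but it captures the policy: anything not explicitly expected is flagged for investigation.

```python
# Allowlist-based flow auditing. Segment and flow names are
# hypothetical examples, not real avionics identifiers.
EXPECTED_FLOWS = {
    ("cockpit", "cabin", "flight-info"),  # position/ETA for the moving map
    ("cockpit", "cabin", "pa-audio"),     # announcements to the cabin
}

def audit(observed_flows):
    """Return the flows that were not expected or requested."""
    return [flow for flow in observed_flows if flow not in EXPECTED_FLOWS]

seen = [
    ("cockpit", "cabin", "flight-info"),
    ("cabin", "cockpit", "engine-cmd"),   # should never originate in the cabin
]
print(audit(seen))   # [('cabin', 'cockpit', 'engine-cmd')]
```

Anything returned by such an audit would be the trigger for the detailed investigation described above, with the suspect flow logged and reported onward.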

The Conversation

Apple and Starbucks could have avoided being hacked if they'd taken this simple step

Hack attack Shutterstock

Apple and Starbucks are two of the world’s most trusted companies, but their reputations were recently tarnished by some novice cybersecurity mistakes. Both set up systems that could have allowed hackers to break into customers' accounts by repeatedly trying different passwords, a procedure commonly known as a “brute-force” attack. The mistake both firms made was in not employing the simple tactic of automatically locking accounts after several failed attempts to enter a password.

Last week it was revealed that such tactics allowed thieves to steal money from users of Starbucks' mobile app. And in 2014, an investigation into the publication of nude photos of celebrities taken from their iCloud storage accounts found that intruders could access Apple’s Find My iPhone app by continually trying different login details.

In order to protect against this type of attack, many sites block login after a given number of incorrect attempts. The system can then go into a permanent lock-out mode (where the user must perform a recovery procedure, such as calling the hosting company to verify their account), or lock out for a given time (known as the hold-down time).
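
The hold-down mechanism is straightforward to implement. The following is a minimal sketch – the three-attempt threshold and five-minute hold-down are illustrative choices, not any vendor's defaults – showing the core bookkeeping: count failures per user, refuse logins once the threshold is hit, and allow retries only after the hold-down time has elapsed.

```python
# Minimal account lock-out sketch with a hold-down time.
# Thresholds are illustrative; a real system would persist this
# state and combine it with other defences (rate limiting, captcha).
import time

class LoginGuard:
    def __init__(self, max_attempts=3, hold_down=300):
        self.max_attempts = max_attempts
        self.hold_down = hold_down      # seconds before a retry is allowed
        self.failures = {}              # user -> (failure count, last failure time)

    def allowed(self, user, now=None):
        now = time.time() if now is None else now
        count, last = self.failures.get(user, (0, 0.0))
        if count < self.max_attempts:
            return True
        # Locked: permit a retry only once the hold-down time has passed.
        return now - last >= self.hold_down

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(user, (0, 0.0))
        self.failures[user] = (count + 1, now)

    def record_success(self, user):
        self.failures.pop(user, None)   # reset the counter on success

guard = LoginGuard()
for _ in range(3):
    guard.record_failure("alice", now=100.0)
print(guard.allowed("alice", now=101.0))   # False: still inside hold-down
print(guard.allowed("alice", now=500.0))   # True: 400s have elapsed
```

A permanent lock-out is the same logic with the time check removed: once locked, only the out-of-band recovery procedure clears the failure record.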

Brute force from a stolen account Author Provided

The size of mobile keyboards can make it tricky for users to enter their password correctly at the first try, especially as companies increasingly require passwords containing non-alphabetic characters. To counter this, developers now often allow many more incorrect logins than was previously normal. But many simply allow an unlimited number of incorrect attempts with no chance of a lock-out.

In the Starbucks case, and in many others, the hackers obtained stolen IDs and passwords and then used them to brute-force accounts on the Starbucks mobile app, attempting hundreds of logins per second.

One tactic intruders use is to try many accounts rather than concentrating on a single one with lots of passwords, which is more likely to trigger security measures. With a large set of stolen credentials, there is a high likelihood that some will match active user accounts.
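
This "many accounts, few attempts each" pattern has a tell of its own: a single source failing logins against an unusually large number of distinct accounts. A defender can look for exactly that. The sketch below is illustrative – the threshold and log entries are hypothetical, and real detection would also consider timing and geography – but it shows the basic aggregation.

```python
# Sketch of credential-stuffing detection: flag sources that fail
# logins against many distinct accounts. Threshold and log data
# are hypothetical examples.
from collections import defaultdict

def flag_sources(failed_logins, account_threshold=3):
    """failed_logins: iterable of (source_ip, account) tuples.
    Return source IPs that failed against more distinct accounts
    than the threshold allows."""
    accounts_per_ip = defaultdict(set)
    for ip, account in failed_logins:
        accounts_per_ip[ip].add(account)
    return sorted(ip for ip, accounts in accounts_per_ip.items()
                  if len(accounts) > account_threshold)

log = [("10.0.0.5", "alice"), ("10.0.0.5", "bob"),
       ("10.0.0.5", "carol"), ("10.0.0.5", "dave"),
       ("192.168.1.9", "erin")]   # one forgetful legitimate user
print(flag_sources(log))   # ['10.0.0.5']
```

Note that per-account lock-outs alone never catch this pattern, since each account sees only one or two failures; the aggregation has to happen across accounts, at the network monitoring level.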

Intruder trying lots of accounts Author Provided

Users also typically reuse the same password for multiple accounts, so if an intruder obtains the password for one compromised account, they will try it against other login systems. Often the same email address is used as the login for different systems, making it fairly easy for an intruder to try an ID and password harvested from one system against a new target.

In the case of both Starbucks and Apple, the companies' authentication systems failed to provide a lock mechanism for repeated attempts to enter usernames and passwords. This should have included:

  • A lock-out on a certain number of tries
  • A network detection system setup to detect multiple logins
  • A task or question that can’t be completed by automated bots (for example: Captcha)

Stopping attacks at source

The problem in cybersecurity is often as simple as a developer’s desire to get a solution online quickly without thinking through the steps an adversary might take. In this case, it was a novice mistake. Most system administrators would advise that a three-try limit works best and will quickly knock out an automated agent. The resulting lock can then be noticed by the user and reported to the host company.

However, companies must also do their own penetration testing and not wait for the general public to find the weaknesses. For big firms such as Apple and Starbucks, there is no excuse for this.

Starbucks has made massive advances in getting users to trust mobile payments – and this kind of sloppiness is unlikely to stop that trend. But the lack of due process at such large firms is what is most worrying.

These businesses perhaps have a great deal to learn from the finance sector, where companies often employ many network monitors to detect brute-force logins and stop attacks at their source.

We would never trust a bank that didn’t implement an auto-lock-out on incorrect passwords. A simple email reset after three bad attempts seems a balanced approach. Obviously, someone who compromises your main email account can do the reset for you, but it is another hurdle in their path. An intruder could also trip lock-outs across a whole range of accounts on a network.

Increasingly, multi-factor authentication is used, often involving location-tracking via a phone’s GPS, to prove a user is who they claim to be. This means the best piece of security you have could actually be the mobile phone that goes everywhere with you (but please make sure to refresh your passwords on a regular basis).

For companies, however, there’s nothing else for it but to employ managed security services with highly trained staff who can pick off threats as they occur.

The Conversation

Saturday, April 25, 2015

TV5 Monde take-down reveals key weakness of broadcasters in digital age

Attack on TV5 Monde is seen in France as an attack on media freedom. Yoan Valat/EPA

In what was one of the most severe outages of its kind, French national television broadcaster TV5 Monde was recently the target of a well-planned and staged cyberattack that took down its 11 television channels, website, and social media streams.

The hacker group responsible claimed to support Islamic State, and proceeded to broadcast pro-IS material on the hijacked channels while also exposing sensitive internal company information and the details of serving military personnel.

It took TV5 three hours to regain control of its channels. The scale and completeness of the attack, and the fact that it involved hijacking live television broadcast channels, have shocked the industry and prompted heated discussion of what steps might prevent, or at least limit, the likelihood of it recurring.

The shift from analogue

The fact that a major European public service broadcaster could be taken down so efficiently flags up an underlying weakness in modern broadcasting.

For years the industry has been moving away from traditional, analogue audio-visual broadcasting technology towards digital-only, network-based infrastructures. This is a logical and necessary process for broadcast companies to keep pace with technological development, and to benefit from the efficiencies of digital media network distribution. But any system based on delivering digital media over the internet is potentially vulnerable to cyberattack from outside.

These sorts of events often prompt moves that seem a case of bolting the stable door after the horse has bolted. For example, when planning a new building or station installation, it’s common for there to be an argument over the value of a robust uninterruptible power supply system, or UPS. They are expensive and often seen as unnecessary – until the power fails, at which point a UPS redundant battery backup is worth, quite literally, its weight in gold (and batteries are heavy).

Similarly the reaction to the assault on TV5 has been a call for immediate and widespread cybersecurity improvements, including new collaborations between European security and law enforcement agencies in order to react faster and more effectively when such attacks occur.

The question remains why the many, almost daily, examples of hacking and cybercriminal attacks on firms hadn’t prompted broadcasters to take the threat seriously before now.

Old idea, new tech

There have been television broadcast signal hijacks before these modern, internet-enabled times. In 1977, the evening programming of the UK broadcaster Southern Television was interrupted by a hoax signal that overrode the programme’s audio, claiming to be from an alien civilisation and demanding world disarmament. In 1986, HBO’s east coast satellite feed was interrupted by a hacker calling himself Captain Midnight – actually satellite engineer John R. MacDougall – protesting at cable television fees.

The Max Headroom hijacker – still on the loose. Youtube

In 1987, a Chicago television broadcast was interrupted by a man wearing a Max Headroom mask. He has never been identified. In each of these instances, hijacking the signal involved physical access to, or tampering with, transmitter uplink sites or broadcast feeds. MacDougall, for example, worked at a firm that uplinked programmes to satellite feeds and so had access to all the equipment needed.

There are other means of interrupting broadcasts, such as the intentional jamming of signals, using one transmission of higher power to block out another. During the Cold War it was common for the Soviet Union and Eastern European governments to use high-powered transmitters to drown out Western media such as Radio Free Europe east of the Iron Curtain.

More recently, the BBC World Service coverage of the contested Iranian election of 2009 was quashed by stronger signals causing interference throughout Iran and surrounding countries.

There are relatively few examples of incidents like these because it’s difficult to interrupt a television or radio broadcast chain – not so in our new, all-digital, internet-connected media infrastructure. The scale of this intrusion into a major European public service television station is unprecedented, and a worrying escalation of the scope and capability for politically-motivated attacks on the media and freedom of speech.

The Conversation

Thursday, February 26, 2015

UK has little to be proud of as survey reveals sorry state of European cybersecurity

That sinking feeling of inaction ... geralt

The European Commission’s annual Eurobarometer Cyber Security Survey, the third edition of which was recently released, is a substantial survey of more than 27,000 respondents from 28 countries. It contains interesting and, more often than not, disappointing revelations about the state of Europe’s security.


As specialists in the field, we look forward to the report’s release. But as we wrote a year ago, the complete lack of media and expert interest in the study is amazing. Heaven help the survey authors if they have to justify its impact based on media coverage.


Falling on deaf ears


The UK government has adopted a bizarrely triumphalist discourse around cybersecurity, one that is clearly at odds with the experience of the 1,329 survey participants from the UK. In fact, year on year the survey results show that the UK is not in a good position, particularly in comparison to some of our more advanced neighbours. This is probably not what Downing Street wants to hear or publicise – particularly in an election year – as it seems that providing some sort of external or independent accountability for the impact of the hundreds of millions of pounds of public money spent is not a top priority.


The UK is not alone in its disdain for the survey’s results, which were similarly disregarded by most other Europeans. It’s a sad outcome for the only large, non-commercial, unbiased, and independent survey on this important topic.


Eurobarometer survey results


There are lots of facts in the report, including some that are very apparent to most people: internet use is up, mobile internet use is leading the way, and Europe shows a marked digital divide between nations like Sweden and the Netherlands and others like Bulgaria, Romania and Greece. Other findings include how more than half (57%) of Europeans shop online, 23% sell online, and 54% use online banking. That last figure is relatively large, in our view, taking into account the associated risks.


The UK is among the worst EU countries for identity theft. Eurobarometer 2015


The two most common concerns of European citizens are the misuse of personal data and the security of online payments – respondents were significantly more worried about both than they were last year. At least good practices such as installing antivirus software (61%), not opening suspicious-looking emails (49%) and being careful not to give away personal information (38%) seem to be increasingly popular.


UK almost tops the charts for fraud from goods bought online. Eurobarometer 2015


Not only are people more concerned with the risks of cybercrime, but 47% believed they were well informed, up from 44% last year. They claimed to avoid disclosing personal information online (89%), believed the risk of cybercrime is increasing (85%), and were concerned their personal information is not kept sufficiently secure by websites (73%) or public authorities (67%). This last point is worth emphasising: two-thirds of citizens don’t trust the government or other public authorities to keep their personal data safe – there is a large margin for improvement here.


Citizens are worried about identity theft (68%), malware infection (66%), online banking or bank card fraud (63%), having email or social media accounts hacked (60%), receiving scam phonecalls or emails (57%), or coming across racial or religious hate material (46%) or child pornography (52%) online. Interestingly, 47% are concerned with cyber-extortion and ransomware – a relatively new method that’s been very profitable for cybercriminals of late. In all cases, concern is up on last year.


UK number one in Europe for bank card and online banking fraud Eurobarometer 2015


Quite shocking is the finding that, despite being apparently aware of the many risks they face online, an incredible 74% of respondents thought they were able to protect themselves sufficiently from cybercriminals. We simply haven’t the words to express what overconfidence this demonstrates, and how unrealistic and dangerous it is. Computers and network security are complex matters – most people’s understanding of them, including ours, is at best incomplete and at worst practically absent. How people can believe they can protect themselves after, for example, having already discovered malware on their devices (as reported by 47% of respondents) is beyond us.


What needs to be done


Denmark, the Netherlands and Sweden are the three leading European countries for internet use. That might naturally imply correspondingly higher levels of cybercrime – but the survey findings suggest not. Whatever these nations are doing in terms of education, investment and technology development, we can do much worse than learning from them – or at the very least imitating their good practices.


As ever, the UK results are discouraging. Britain misses the leading group by a large margin and, despite well-publicised government campaigns and huge investment in cybersecurity, shows very little overall improvement. Britain leads the way in misplaced confidence: 89% feel we can protect ourselves against cybercrime, which is a bad omen. It experienced the largest yearly increase in respondents accidentally finding materials promoting racial hatred or religious extremism. And the UK also tops the European tables for bank card and online banking fraud, with 17% of citizens affected; the average is 8%, and in Germany, for example, the rate is 2%. The UK performs poorly in other areas too, casting a cloud not only over the UK but over crime rates for the whole of Europe.


More positively, the UK seems to be good at changing passwords and feeling well-informed about cybercrime, is among the leading countries where citizens are concerned over the use of their personal data, and also enjoyed the largest fall in scam emails and phone calls. Despite the large increase from last year, it’s also still extremely rare for UK users to encounter child pornography or racial or religious extremism materials online.


One problem is that the government’s information campaigns are focused largely on companies rather than individuals – some may argue that in this respect it’s no exception to Tory policy in other areas. Thus the Eurobarometer survey is probably not doing justice to the current UK government’s considerable, but possibly misguided, efforts.


People, not companies, should be prioritised; legislation and incentives should be aimed at protecting citizens and helping them to protect themselves. The main response to citizens' mistrust of government use of their data, in particular, should be to give them back more control. There have been some positive moves from Labour and the Liberal Democrats in that direction – but for now they are merely pre-election promises.


At the very least, could future governments please copy whatever it is they’re doing right in Sweden, Denmark, the Netherlands and some of our other more competent neighbours?


The Conversation

Wednesday, February 18, 2015

Beyond Silicon Roundabout, the UK is a high-tech start-up nation

There's more to the UK than just this roundabout. Stephen McKay, CC BY-SA

Whether as “Tech City” or “Silicon Roundabout”, the cluster of digital start-ups centred around Old Street in East London is well known. The extensive network of similar start-up clusters in cities outside the capital, however, has now been revealed by a thorough study of the UK’s start-up scene.


Since the economist Alfred Marshall developed the idea of “spillovers” back in 1890, there has been debate over how best to encourage the transfer of knowledge between organisations. The received wisdom is that, by co-locating similar organisations in clusters, knowledge will circulate between them and drive further innovation.


Keen to promote economic growth, governments have striven to develop clusters artificially, with planners especially keen to replicate the success of California’s Silicon Valley with the UK’s high-tech industries – hence Silicon Fen (Cambridge), Silicon Glen (Scotland), Silicon Gorge (Bristol), and so on. In this sense, while the findings of last week’s TechNation report by Tech City UK are interesting, it isn’t a surprise to see so many clusters emerging elsewhere.


Strength in numbers


Media reporting on tech clusters is often London-centric, but the TechNation report shows that this isn’t the full picture: 74% of digital companies are based outside London. Inner London is the third-fastest growing cluster in the UK, but it’s Brighton & Hove that has the highest concentration of digital businesses (3.3 times the national average). South Wales is fast developing as a centre of health tech and data analytics firms; Bournemouth, a centre of digital advertising and publishing, has seen its number of digital businesses quadruple since 2010, while Liverpool, a centre for UK games development, has seen its more than double.


Naturally many of the growing centres for digital business are Britain’s other major cities: Manchester’s well-established media and publishing industries have gone digital, boasting the country’s highest per-company turnover. Bristol and Bath are globally significant areas of high-tech engineering with Hewlett Packard, Bristol Robotics Lab and the Bristol and Bath Science Park. Leeds and Edinburgh are strong in financial tech businesses, Belfast and Dundee are strong in games development. But there are others that might seem unusual, such as the centre for cybersecurity expertise developing in Great Malvern, alongside GCHQ in Cheltenham.


It’s not just about London. Tech City UK


Building success


Being in a cluster can enable access to infrastructure, knowledge and skills for dynamic but usually resource-constrained small and medium enterprises.


The high costs of research and development have led to renewed interest in what Henry Chesbrough has called open innovation, which builds knowledge sharing into the business model of start-up companies. This has allowed tech start-ups with few resources to develop rapidly by drawing upon the expertise of universities and other firms.


Interestingly, as the tools to work remotely have improved, it’s no longer necessary for firms to be permanently located in clusters to benefit from collaborations and hack events. For example, the Open Data Institute maintains several “nodes” in more remote parts of the country, such as Devon, which allows distant companies to connect to knowledge-sharing facilities in London and elsewhere. Clusters themselves are interlinked, and this provides provincial tech clusters with an advantage: being able to draw upon others' knowledge to create solutions and products without the high cost of being based in the capital.


With over 20 clusters profiled, there’s plenty of activity outside London Tech City UK


Our own research suggests that even mere affiliation to a cluster or participation in professional communities can be enough to help start-ups by raising their profile and providing legitimacy and reputation in order to help advance firms' chances in a global market. A strong cluster identity also attracts highly skilled employees, which develops a labour pool with more diverse skills, in turn driving better and faster innovation.


In this sense, geographical location is important. Cosmopolitan urban areas with access to bars, restaurants and education attract talented young workers, drawing new generations of talent into the cluster. Consequently, tech clusters don’t just appear in London but are supported by the strong cultural draw and educational centres of Sheffield, Greater Manchester, Bristol & Bath and Brighton & Hove.


What doesn’t work?


Problems faced by tech clusters are often features of their own success. Competition for office space and employees drives up costs for everyone – and their appeal may attract large, incumbent firms that can out-gun smaller firms in acquiring resources. In this way the young, dynamic start-ups that the clusters were created to assist can find themselves squeezed out – something already occurring in Silicon Roundabout and throughout London.


The rising costs associated with tech clusters in London may see a further increase in the appeal of clusters outside of the capital for Britain’s tech community. What is needed is venture capital, business advice, additional training and support for national and international networks to ensure these clusters can overcome the financial and skill gaps in order to grow. As the TechNation report rightly points out, each cluster has its own unique configuration and this needs to be taken into account; any one-size-fits-all approach is doomed to fail.


The Conversation
