
Monday, September 28, 2015

In the future, your internet connection could come through your lightbulb

mightyohm, CC BY-SA

The tungsten lightbulb has served well over the century or so since it was introduced, but its days are numbered now with the arrival of LED lighting, which consumes a tenth of the power of an incandescent bulb and lasts 30 times longer. Potential uses of LEDs are not limited to illumination: smart lighting products are emerging that can offer various additional features, including linking your laptop or smartphone to the internet. Move over Wi-Fi, Li-Fi is here.

Wireless communication with visible light is, in fact, not a new idea. Everyone knows about using smoke signals on a desert island to try to capture attention. Perhaps less well known is that in the time of Napoleon much of Europe was covered with optical telegraphs, otherwise known as semaphore.

The photophone, with speech carried over reflected light. Amédée Guillemin

Alexander Graham Bell, inventor of the telephone, actually regarded the photophone as his most important invention, a device that used a mirror to relay the vibrations caused by speech over a beam of light.

In the same way that interrupting (modulating) a plume of smoke can break it into parts that form an SOS message in Morse code, so visible light communications – Li-Fi – rapidly modulates the intensity of a light to encode data as binary zeros and ones. But this doesn’t mean that Li-Fi transceivers will flicker; the modulation will be too fast for the eye to see.
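The principle is simple enough to sketch in code. Below is a toy on-off keying encoder and decoder in Python, purely illustrative: real Li-Fi systems use far faster and more sophisticated modulation schemes (such as OFDM), but the idea of mapping bits to light intensity levels is the same.

```python
# Toy on-off keying (OOK) for Li-Fi-style visible light communication:
# each bit becomes an intensity level, 1 = LED on, 0 = LED off.

def encode_ook(message: str) -> list[int]:
    """Turn each character into 8 intensity levels, most significant bit first."""
    bits = []
    for char in message:
        for i in range(7, -1, -1):
            bits.append((ord(char) >> i) & 1)
    return bits

def decode_ook(levels: list[int]) -> str:
    """Reassemble groups of 8 intensity samples back into characters."""
    chars = []
    for i in range(0, len(levels), 8):
        byte = 0
        for bit in levels[i:i + 8]:
            byte = (byte << 1) | bit
        chars.append(chr(byte))
    return "".join(chars)

signal = encode_ook("SOS")
print(decode_ook(signal))  # round-trips back to "SOS"
```

At the symbol rates real LEDs achieve, these intensity changes are far too fast for the eye to register as flicker.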

Wi-Fi vs Li-Fi

The enormous and growing user demand for wireless data is placing huge pressure on existing Wi-Fi technology, which uses the radio and microwave frequency spectrum. With exponential growth of mobile devices, by 2019 more than ten billion devices are expected to exchange around 35 quintillion (10^18) bytes of information each month. This won’t be possible using existing wireless technology due to frequency congestion and electromagnetic interference. The problem is most acutely felt in public spaces in urban areas, where many users try to share the limited capacity available from Wi-Fi transmitters or mobile phone network cell towers.

A fundamental communications principle is that the maximum data transfer possible scales with the electromagnetic frequency bandwidth available. The radio frequency spectrum is heavily used and regulated, and there just isn’t enough additional space to satisfy the growth in demand. So Li-Fi has the potential to replace radio and microwave frequency Wi-Fi.
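That scaling principle is the Shannon-Hartley theorem, C = B log2(1 + SNR): the maximum capacity of a channel grows in direct proportion to its bandwidth. A quick sketch, using illustrative rather than measured figures:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: maximum error-free bit rate for a channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative figures only: a 20 MHz radio channel versus a hypothetical
# 200 MHz slice of optical bandwidth, both at a signal-to-noise ratio of
# 100 (20 dB). Ten times the bandwidth gives ten times the capacity.
wifi = shannon_capacity(20e6, 100)
lifi = shannon_capacity(200e6, 100)
print(f"20 MHz channel limit:  {wifi / 1e6:.0f} Mb/s")
print(f"200 MHz channel limit: {lifi / 1e6:.0f} Mb/s")
```

Since the visible light band is thousands of times wider than the entire usable radio spectrum, the headroom for Li-Fi is correspondingly vast.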

Light frequencies on the electromagnetic spectrum are underused, while the spectrum to either side of them is congested. Philip Ronan, CC BY-SA

The visible light spectrum has huge, unused and unregulated capacity for communications. The light from LEDs can be modulated very quickly: data rates as high as 3.5Gb/s using a single blue LED, or 1.7Gb/s with white light, have been demonstrated by researchers in our EPSRC-funded Ultra-Parallel Visible Light Communications programme.

Unlike Wi-Fi signals, optical communications are well confined within the walls of a room. This confinement might seem a limitation for Li-Fi, but it offers a key advantage: it is very secure, since if the curtains are drawn nobody outside the room can eavesdrop. An array of light sources in the ceiling could send different signals to different users. The transmitter power can be localised and used more efficiently, and won’t interfere with adjacent Li-Fi sources. Indeed, the lack of radio frequency interference is another advantage over Wi-Fi. Visible light communication is intrinsically safe, and could end the need for travellers to switch devices to flight mode.

A further advantage of Li-Fi is that it can piggyback on the power lines that already feed LED lighting, so no new infrastructure is needed.

How a Li-Fi network would work. Boston University

Lightening the burden of the internet of things

The internet of things is an ambitious vision of a hyper-connected world of objects autonomously communicating with each other. For example, your fridge might inform your smartphone that you have run out of milk, and even order it for you. Sensors in your car will directly alert you through your smartphone that your tyres are too worn or have low pressure.

Given the number of “things” that can be fitted with sensors and controllers then network-enabled and connected, the bandwidth needed for all these devices to communicate is vast. Industry monitor Gartner predicts that 25 billion such devices will be connected by 2020, but given that most of this information needs only to be transferred a short distance, Li-Fi is an attractive – and perhaps the only – solution to making this a reality.

Several companies are already offering products for visible light communications. The Li-1st from PureLiFi, based in Edinburgh, offers a simple plug-and-play solution for secure wireless point-to-point internet access with a capacity of 11.5 Mbps – comparable to first generation Wi-Fi. Another is Oledcomm from France, which exploits the safe, non-radio frequency nature of Li-Fi with installations in hospitals.

There are still many technological challenges to tackle but already the first steps have been taken to make Li-Fi a reality. In the future your light switch will turn on much more than just illumination.

The Conversation

Thursday, September 24, 2015

Hackers have finally breached Apple's security but your iPhone's probably safe (for now)

Shutterstock

Cyber security experts recently discovered that the almost impenetrable Apple App Store had been hacked. While cyber break-ins have become routine news for many companies, Apple has long prided itself on providing technology for its phones and tablets that was incredibly secure.

Apple did this by controlling how developers – the people who create the apps on your device – not only write their code but also upload it to the App Store. Steve Jobs ensured that Apple would vet each app before it entered the marketplace, as well as the developers themselves, and the firm has enforced tight controls on what the devices can access.

This meant that Apple mobile products arguably were (and probably still are) the most secure you can buy. However, a new attack dubbed XcodeGhost has done a great job of undermining Apple’s otherwise strong security.

The attack method used was cunning and, in a technical sense, impressive. Rather than attack the devices or the App Store, the hackers compromised Xcode, the development environment used by programmers to create the apps. This is akin to poisoning a city’s water supply at its source rather than attacking the settlement’s buildings or army directly.

App developers use a suite of software known as Xcode to create programs for Apple devices. Within it is a large library of functions that enable each app to talk to the underlying phone or tablet. Each library function has a different role, from allowing you to share your location to making your phone sound like a light sabre when you wave it around.

The hackers created a malicious program (malware) that used the internet to seek out Mac computers with Xcode installed, gambling on the possibility that some of these machines were used to create apps for the Apple App Store. It then dropped contaminated code libraries into the Xcode system. These appear to do what the app developers programmed them to do, but also capture personal data from your device and send it back to the hackers.
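The pattern described, a library function that still does its advertised job while quietly recording what passes through it, can be illustrated harmlessly. The Python sketch below is purely conceptual, with invented names, and bears no relation to Xcode’s actual internals; it simply shows why such a compromise is invisible to the app developer.

```python
# Conceptual illustration of a trojanised library function: the wrapped
# function behaves exactly as documented, so developers notice nothing,
# while a side channel records everything that passed through it.

captured = []  # stands in for data siphoned back to an attacker

def trojanise(func):
    """Wrap a legitimate function so it also records its inputs."""
    def wrapper(*args, **kwargs):
        captured.append(args)          # the hidden, malicious side effect
        return func(*args, **kwargs)   # the advertised behaviour, unchanged
    return wrapper

@trojanise
def share_location(latitude: float, longitude: float) -> str:
    """A stand-in for an innocent SDK call an app developer might use."""
    return f"Shared location ({latitude}, {longitude})"

print(share_location(55.95, -3.19))  # works exactly as expected...
print(captured)                      # ...but the data was recorded too
```

Because the compromised library behaves identically from the developer’s point of view, the malicious payload travels into the App Store inside otherwise legitimate apps.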

Malicious intent Shutterstock

Security experts are concerned that this innovative attack leaves Apple open to future attacks. It attacks anyone who has this coding environment installed on their computer system and compromises the code before it enters the secured systems offered by Apple.

This is embarrassing not only for Apple, whose checks clearly missed the compromise, but also for the many developers affected, whose own internal security and anti-malware processes failed to catch it.

What does this mean for you?

If you are the owner of an iPhone or iPad, there is nothing you can do. Apple has never offered Apple device owners the opportunity to protect their own technology. Apple has owned this, controlled this and until recently has been very successful in protecting its products.

Android-powered devices have historically been relatively vulnerable, with more than 40,000 types of malware targeting them. The equivalent number for Apple devices remains very low. However, this new and interesting attack means that attackers have established an alternative route into your device, through the tools used by app developers. They need only one compromised app from one compromised developer machine to be successful.

Security researchers have already found multiple infected apps, such as Angry Birds 2. Many of these apps are being urgently updated by their creators to close the security breach, and new versions are automatically being installed on your iPhone or iPad. If you are especially concerned, you can delete an affected app and re-install it in a few days’ time, once you know it has been secured.

In order to prevent further breaches, Apple must review its security policies and how it checks all code before it enters their App Store. It also means that the onus is on all developers to improve the way they scan their own systems. Otherwise, Apple will refuse to allow them to participate in this otherwise very successful and secure system.

The Conversation

Wednesday, September 23, 2015

Ten years on, invisibility cloaks are close to becoming a manufacturable reality

A new invisibility cloak can hide objects using an ultrathin layer of nanoantennas that reflect light. Are humans next? Courtesy of Xiang Zhang group, Berkeley Lab/UC Berkeley, CC BY

Invisibility has long been one of the marvels in science fiction and fantasy – and more recently in physics. But while physicists have figured out the concept for how to make invisibility cloaks, they are yet to build a practical device that can hide human-sized objects in the way that Harry Potter’s cloak can.

Objects are visible to the human eye because they distort light waves according to their shape. We see objects by registering these distortions when the light from them hits our eyes. In a similar way, an object can also be visible to radar, which transmits radio waves or microwaves that bounce off objects in their path.

So far, most invisibility cloaks are made from engineered materials that can bend light in a way that manipulates the eye – or another device such as a radar. However, these typically only work for tiny objects. But that may be about to change. A new experiment has created a cloak that, for the first time, can hide small objects of any shape completely from visible light. The cloak, which is thinner and more flexible than any of its predecessors, can also be scaled up to hide bigger objects – potentially transforming the science into something that can be manufactured and sold.

Messy metamaterials

The first invisibility cloak was demonstrated in 2006, building on the theoretical work of British physicist John Pendry. It consisted of a material that could bend microwaves, but not visible light, around a tiny two-dimensional object, making it look as if the waves had travelled straight on and never touched it. Since then, better versions that work for other wavelengths in both two and three dimensions have been created.

We may still be far away from making humans invisible, but at least we’re now one step closer. Charles D P Miller/Flickr, CC BY

Nearly all of these cloaks rely on metamaterials, a class of materials engineered to produce properties that don’t occur naturally. They typically have small internal structures built out of glass, metal, plastics or dielectrics (electrical insulators), sometimes loaded with nanoparticles. In this way they can be made to interact with light in unusual ways. However, they are generally bulky and can be hard to scale up.

Another problem is that it is difficult to make invisibility cloaks conceal light completely. If even a little light leaks out, the hidden object can’t be completely invisible.

Promising technique

The new cloak is more sophisticated than past devices. It is ultra thin and able to conceal a small three-dimensional object measuring 36 by 36 micrometers by completely reflecting a wavelength of visible light, which has not been done before. And perhaps the most important feature is that the technology could be scaled up to hide bigger objects.

The downside? It only works for light at a 730-nanometre wavelength, visible light near the infrared part of the spectrum. While this could be useful for hiding things from devices that operate near that wavelength, it would have to be improved to work across all wavelengths of the visible spectrum before it could hide objects from the human eye. While we are still some way from doing this, we are getting closer.

The cloak hides objects by wrapping them in a layer of gold nanoantennas just 80 nanometres thick. The antennas manipulate light as it hits the object so that it appears to bounce off a flat surface instead, making it impossible to see the geometry of the object.

The technology of invisibility cloaking has many potential uses, ranging from military applications to biomedicine, computing and even energy harvesting.

For example, it could be used to render an aircraft invisible to radar. Stealth aircraft, built to avoid detection by radar, are thought to have first been produced in Germany during World War II and use a number of technologies that reduce the reflection and emission of electromagnetic waves. The cloak could also be used to isolate closely spaced antennas, reducing the footprint of antenna arrays and making future communication systems more compact.

Meanwhile, the UK QUEST project, led by Queen Mary University of London to come up with new ways to manipulate electromagnetic fields, has challenged the fundamental physics of thin absorbers, which can dissipate unwanted incoming waves. By combining graphene with metamaterials, it aims to develop “stealthy” wallpapers that create wireless-secure environments, reduce interference between handheld devices and reuse radio spectrum to increase mobile communication capacity.

With so many important applications, it is surely just a matter of time before the cloaks get better and more practical. With the help of ever-emerging advanced manufacturing tools, ten years on, the future of invisibility is coming into view.

The Conversation

Tuesday, September 22, 2015

New iPad? Tech firms have abandoned radical innovation for mediocrity

Shutterstock

The dust has now settled on the latest product launch from Apple, which for many trumped headlines about refugees, poverty and the battles for the Republican nomination and leadership of the UK Labour Party. We have new iPads, iPhones and more. But how new are they really?

Innovation is often characterised as being either “radical” or “incremental”. When it is radical, it sets new precedents and fundamentally changes the way we do things. From self-administered insulin to solar powered houses to driverless cars, radical innovation releases potential. Incremental innovation on the other hand builds upon what is already there in small steps.

In the world of mobile phones and tablets, incremental has become the new radical, and true radical innovation has been relegated to the sidelines. Incremental innovation has become the norm because of a belief that “slow and steady wins the race”, that people don’t like the risks that come with big dramatic changes. That seems to be Apple’s long-term strategy and, as a dominant player, it is setting the culture for other players in the market.

Using staged marketing in the form of annual or biannual high-profile media launches, tech firms have groomed us as consumers to accept small change as normal. More radical innovation, such as a modular phone that can be continually upgraded, is seen as crazy, quirky or even science fiction.

No radical innovation

The new iPad Pro that is a few inches bigger than the last one is being hailed as a “big leap” when it’s really just tinkering with the old design. Despite the new features, it in no way represents a radical innovation worthy of ecstatic celebration. The whoops of delight at its launch were followed by voices of disappointment online.

It is primarily for commercial reasons that Apple has institutionalised incremental innovation and tried to convince us all it is radical. iPhones and iPads are brilliantly designed things. Incremental innovation requires expertise and excellence in design and improvement. Phones and tablets play a major part in millions of people's lives. But continued innovation happens at a slow pace designed to suit the supplier, not the user, who is nonetheless pushed to pay significant amounts of money each year for minor changes.

When they said the new iPad was bigger they weren’t kidding. Beck Diefenbach/Reuters

Fear of failure may also have contributed to the disappearance of radical innovation. The struggles of more unusual designs, such as that of the Amazon Fire phone, may have made innovators more cautious, delaying and lengthening product development and rollout to compensate. Perhaps it isn’t surprising that virtually none of the radical (labelled “crazy” at the time) concept phones of 2010 have ever appeared on the market.

We may have also reached a point where phone design is so good that truly impressive change has become much harder to achieve. So we continue to buy similar looking products, putting them to our ears (just as we did with landlines), snapping cameras with slightly better picture clarity, and getting slightly more intelligent answers from Siri. Same game, tiny changes, price hike.

A smartphone revolution

At the same time, major new challenges are emerging for smartphone makers, from evidence that current phone designs may be fuelling unhappiness and reducing productivity to the worrying environmental impact of manufacturing them. Radical innovation is needed so that phones fully serve customer interests in a sustainable way.

But for the time being, more radical products, such as the YotaPhone 2 (which offers a dual screen) or the Runcible (round, beautiful and rather different), will at best be seen as quirky and niche. The existing market leaders will only change their tortoise-speed approach to radical innovation if a major new player genuinely disrupts the market with fast, penetrative changes.

For example, Chinese company Xiaomi is creating a range of products for the home (from TVs to air purifiers) that automatically link with their smartphones in a single, integrated system. This is the kind of radical idea that could shock Apple into becoming more radical and adventurous.

We could eventually see mobile computing move away from hand-held, screen-based devices towards seamless interaction across different devices and platforms such as wearable technology and projected holograms.

For the foreseeable future, however, innovation in the mobile and wearable space is going to be dominated by incremental and fairly mediocre approaches to innovation. Radical thinking will be consigned to concepts for the future and the iPhone 7 will probably look a lot like the iPhone 3. But the launch will be offered as another revolution.

The Conversation

Tuesday, August 11, 2015

Google becomes Alphabet in effort to keep the innovative spark alive

Google: no longer just a search engine. mwichary/flickr, CC BY

In the corporate world you learn quickly that if small companies want to collaborate, it tends to happen, while efforts to collaborate with large companies may involve many meetings and many people with no guarantee that anything will come of it. Small companies innovate as they need to; big companies are often risk averse.

Google’s announcement that it is to reorganise under a new parent company, Alphabet, is a step towards overcoming this sort of bureaucracy and maintaining the fiercely innovative and daring streak that has until now been its trademark.

Large companies have more freedom to ignore their end users, preferring secrecy for fear of having their ideas stolen, and instead focus on large stakeholders. This means that they often create products that are too wide in scope and fail to address specific needs.

For smaller businesses, innovations are part of the way they engage with customers. Rapid prototypes are released, and assessed to see what works and what doesn’t. These prototypes are then scaled up and made relevant to a wider range of potential customers. Despite its enormous size and wealth, this is also the approach that Google favours.

Too often, large companies don’t trust their engineers to make sensible judgements on business decisions. This probably shouldn’t be the case: often the most successful technology companies are run by people who worked their way up through technical roles. Companies such as Hewlett-Packard, Apple and Google made their names by being technically excellent, rather than through a narrow focus on business objectives.

Google’s move effectively splits one monolithic company into several smaller companies wholly owned by Alphabet, of which Google is the largest. In this way, Google (or should we say, Alphabet) hopes to keep each of its areas of focus small, fast, and innovative.

G is for Google. Let’s hope M isn’t for mistake. Alphabet

Risk averse

After all, Google is not just a search engine any more. It has expanded in many directions, from mobile phone design and operating systems to smart home control kits, autonomous cars, geomapping and off-the-wall projects. It is comfortable trying things out and dedicating resources to ideas with potential.

This risk-taking is a key part of Google’s innovation infrastructure, giving independence of thought to staff and technical leaders without over-burdening them with business issues. In fact, it’s similar to a traditional academic research model, where academics with good ideas get the resources that allow them to drive them forward. Done well, the university becomes a leader in the field, just as Google has become a technology giant.

Small works in software

Google wants to attract the best staff into research labs, and achieves this by creating a small-company infrastructure where engineers are not burdened by bureaucracy. However, unlike smaller businesses, Google has the deep pockets to support its staff. A rising star can be given responsibilities without the need to progress through a formal hierarchy.

After all, the structure of large companies may limit their ability to produce useful software – take for example the many major government IT contract disasters, such as the £10 billion spent on an NHS IT system that ultimately never worked.

What would a small company have done differently? It would have invested time in searching for the best solution, created and tested prototypes, and used those as a basis for the final product. The large companies involved in the NHS contract had off-the-shelf solutions, which they pushed without questioning their suitability. Too much money was spent on design and requirements analysis, and it was years before the product reached the clinical staff, by which point it was a computer programmer’s dream but a nightmare for the intended user.

Reputations built on people

Leading universities generally have individuals to thank for their success – for example, cryptography at Royal Holloway, led by Professor Fred Piper, and the University of Edinburgh’s Informatics Group, which thrived under the guidance of Professor Sidney Michaelson.

So big companies need to act like small ones and provide opportunities for innovation and risk-taking to thrive, where individuals who do not want to conform to strict rules and procedures can take on their vision of the future. After all, Apple was a garage company once, and Microsoft had to borrow someone else’s operating system (known as 86-DOS and purchased from Tim Paterson of Seattle Computer Products) to get a foot on the ladder.

Google’s enormous impact is mostly down to the creativity of individuals, its image still one of a bunch of software developers who just love to write code – not easy for a company whose products increasingly find places in almost every web user’s life. Let’s hope that the creation of Alphabet protects the small-company ethos that has made Google great.

The Conversation

Monday, August 10, 2015

Graphene is the missing ingredient to help supercharge batteries for life on the move

Graphene could have a radical influence on the future of energy storage. graphene by nobeastsofierce/shutterstock.com

While our gadgets these days are constantly getting smaller and more powerful, the development of commercial batteries both small enough and with sufficient capacity to feed their power-hungry demands has not quite kept pace.

Most people will have heard of lithium-ion (Li-ion) batteries. They’re in almost all mobile electronic devices – from your mobile phone and laptop through to back-up power supplies on jets and even spacecraft. Surprisingly, despite this huge demand, the fundamental design of Li-ion batteries has remained broadly similar in recent years.

Battery life is frequently the constraining factor in many existing and experimental applications. It’s key for the future of technologies such as electric cars, and for high-capacity energy storage for renewables such as wind and solar power. In fact the comparatively slow progress with developing new batteries has resulted in many electronics manufacturers turning to trying to reduce or maintain their products’ power requirements to find a balance.

Which is not to say that there’s no research into new energy storage techniques. Far from it, in fact. The past few decades have seen an explosion of research in this area. Unsurprisingly, a good deal of it revolves around improving Li-ion batteries. The new “wonder material” graphene has also been suggested as a possible key to the solution. Graphene has a number of interesting properties that have led researchers to propose two promising approaches: modifying components of Li-ion batteries with graphene, or using graphene itself as the energy-storage medium.

Just add graphene

Graphene has also been used to develop electronic devices with extremely low power requirements. This is possible (in part) because pure graphene has the lowest resistivity of any known material at room temperature, so devices made from it can conduct electricity more efficiently than those made from any other material, and very little energy is wasted.

Devices built with graphene would not experience the same problems of heating faced by current electronics – they could run indefinitely with very little increase in temperature. Heat is bad for electronics; it means energy is being wasted and it often serves to reduce the efficiency of the device further as it heats up. Pure graphene virtually eliminates energy losses of this kind, which makes devices produced from it extremely energy-efficient. For consumer electronics, this could mean significantly more powerful devices with massively improved battery life – a win-win scenario if ever there was one.
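The heat argument here is just Joule’s law: the power a conductor dissipates as heat is P = I²R, so lower resistance means proportionally less waste heat at the same current. A toy comparison, with made-up resistance values rather than measured graphene figures:

```python
# Joule heating: power wasted as heat in a conductor is P = I^2 * R.
# At a fixed current, cutting resistance cuts the wasted power in the
# same proportion, which is why ultra-low-resistivity materials run cooler.

def joule_heating_watts(current_amps: float, resistance_ohms: float) -> float:
    """Power dissipated as heat in a resistive conductor."""
    return current_amps ** 2 * resistance_ohms

# Hypothetical interconnects carrying 0.5 A: a conventional trace versus
# one with a tenth of the resistance.
conventional = joule_heating_watts(0.5, 2.0)    # 0.5 W wasted as heat
low_resistance = joule_heating_watts(0.5, 0.2)  # 0.05 W wasted as heat
print(conventional, low_resistance)
```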

What’s more, studies indicate that using graphene to replace or enhance components of Li-ion batteries can significantly improve the energy density and longevity of the battery. One popular technique has been to make the anodes or cathodes in Li-ion batteries out of graphene.

Supercapacitors of various sizes – but none of them small enough, yet. Maxwell, CC BY-SA

Your next battery may be a supercapacitor

Another technique is to use graphene as the energy-storage medium itself. This has been used to construct supercapacitors – perhaps the strongest future competitor to Li-ion batteries in uses that require very rapid charge times, such as in the case of electric cars.

This is arguably their critical feature. A supercapacitor can go from fully discharged to fully charged many orders of magnitude faster than comparable Li-ion batteries. In this context, it is the large surface area of graphene that is important, because the amount of charge that can be stored is related to the surface area of the materials from which it’s made. So again, graphene is ideal.
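A back-of-envelope comparison makes the charge-time gap concrete. The sketch below assumes simple RC charging for the supercapacitor and constant-current charging for the battery; all figures are illustrative, not measurements of any real product.

```python
# Rough charge-time comparison: supercapacitor (RC charging) versus a
# lithium-ion cell (constant-current charging). Illustrative numbers only.

def rc_charge_time(resistance_ohms: float, capacitance_farads: float) -> float:
    """Time to roughly 99% charge for an RC circuit: about 5 time constants."""
    return 5 * resistance_ohms * capacitance_farads

def battery_charge_time(capacity_amp_hours: float, current_amps: float) -> float:
    """Constant-current charge time in seconds for a battery."""
    return capacity_amp_hours / current_amps * 3600

supercap = rc_charge_time(0.01, 3000)      # a 3000 F cell behind a 10 mOhm source
battery = battery_charge_time(3.0, 1.5)    # a 3 Ah phone cell charged at 1.5 A
print(f"Supercapacitor: ~{supercap:.0f} s")   # minutes
print(f"Li-ion battery: ~{battery:.0f} s")    # hours
```

The gap comes from physics: a capacitor stores charge on surfaces, so charging is limited only by circuit resistance, while a battery must wait for chemical reactions and ion diffusion.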

Despite supercapacitors’ potential to challenge the ubiquitous Li-ion battery, current supercapacitors are invariably too large and too expensive to replace them in the same roles. However, prototypes indicate that supercapacitors may meet the requirements necessary to replace conventional batteries in the not too distant future.

Ultimately, the challenge with any of these prototypes is the ability to scale production to meet the demands of the consumer electronics industry. Graphene-based solutions have so far been notoriously difficult to manufacture on a large scale, thanks in part to the difficulty of isolating high-quality graphene. Nevertheless, the future for energy storage and energy-efficient technology looks bright. Whether graphene ultimately plays a part in the revolution or not, it’s clear that research into these technologies will eventually lead to cheaper, more durable products with higher capacity.

It’s no exaggeration to say that an energy revolution awaits as a result of next-generation energy-storage devices, which could help usher in the age of fully electric vehicles, large-scale renewable energy generation and the end of our reliance on fossil fuels.

The Conversation

Wednesday, August 5, 2015

Researchers are looking to a surprisingly old idea for the next generation of ships: wind power

University of Tokyo

In many ways, it’s an obvious solution. For many centuries, world trade over the oceans was propelled by wind power alone. Now that we’re seeking an alternative to the fossil fuel-burning vehicles that enable our modern standard of living, some people are turning again to renewable solutions such as wind to power our tankers, bulk carriers and container ships. Globalisation and economic growth might mean a direct reversion to the wooden sailing boats of yore makes no sense, but there are several 21st-century ideas that could make wind-powered shipping commonplace again.

Ship design certainly has a way to go to return to its heritage and take advantage of the wind’s free, renewable resource in the same way we have reinvented the windmill to produce electricity. However, it’s worth remembering wind turbines took a long time to evolve into the structures optimised and deployed at scale we have today. In fact, they’re still developing. Scientists and engineers have debated for years about the relative merits of two, three or more blades, of horizontal versus vertical configurations, and of onshore versus offshore generation.

For ships, the design process for wind technologies is potentially even more complicated and multi-dimensional. There are soft sails, rigid “wing” sails, Flettner rotors (spinning vertical cylinders that create lift using the Magnus effect, originally conceived by Anton Flettner in the 1920s) and kites, all vying for a share of this market. Soft sails are fabric sails, most reminiscent of existing sailing ship designs; examples include the Dynarig and Fastrig. Rigid wing sails replace the fabric with a rigid lifting surface like a vertically mounted aircraft wing, for example the Oceanfoil design.

A Flettner rotor is a vertical cylinder rotated by a motor. The rotation modifies the air flowing around the cylinder to generate lift, much like the lift generated by an aircraft wing. While there are examples of all four technologies, so far it is kites and Flettner rotors that have seen the most significant implementation on large merchant ship designs.
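For a feel for the forces involved, ideal-flow theory gives the lift on a spinning cylinder through the Kutta-Joukowski theorem: lift per unit length is ρVΓ, with circulation Γ = 2πr²ω. The sketch below uses illustrative rotor dimensions, and real rotors achieve somewhat less than this inviscid estimate, but it shows the order of magnitude.

```python
import math

# Kutta-Joukowski estimate of Magnus lift on a Flettner rotor.
# Circulation of a cylinder of radius r spinning at angular speed omega:
#   Gamma = 2 * pi * r^2 * omega
# Lift per unit length of rotor: L' = rho * V * Gamma.

def magnus_lift_per_metre(air_density: float, wind_speed: float,
                          radius: float, rpm: float) -> float:
    """Ideal-flow lift per metre of rotor height, in newtons per metre."""
    omega = rpm * 2 * math.pi / 60                 # convert rpm to rad/s
    circulation = 2 * math.pi * radius ** 2 * omega
    return air_density * wind_speed * circulation

# Roughly rotor-ship scale: a 1.5 m radius rotor at 180 rpm in a 10 m/s wind.
lift = magnus_lift_per_metre(1.225, 10.0, 1.5, 180)
print(f"Ideal lift per metre of rotor: {lift / 1000:.1f} kN/m")
```

A rotor tens of metres tall therefore generates a useful fraction of the thrust a large merchant ship needs, which is why the technology has attracted serious trials.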

Notable examples include the work that Cargill and Wessels have done trialling kite systems, and the experience of two separate operators, Enercon and Norsepower, with installations of different Flettner designs on different ships. These trials have produced important full-scale experience, lessons about costs, performance data and evidence for investment cases – all of which are taking us closer to the tipping point when wind once again becomes a “no brainer”.

Enercon’s E-Ship 1 with Flettner rotors. Carschen/Wikimedia Commons, CC BY-SA

Trials of these new technologies, in combination with the history of wind turbines, can help us understand why any transition to modern wind-powered ships won’t happen overnight. For one thing, no one yet knows which of the many candidate designs will be the most successful.

Modern wind-powered shipping technology also carries a significant engineering challenge that wind turbines don’t: it needs to be mobile. It’s not as simple as bolting a rig to the deck. The highest safety standards have to be maintained, and the rig must place no constraints on loading and unloading cargo across a wide and uncertain range of ports (many of which might be obstructed by bridges).

Resolving these issues will take time, money and investors with the appetite for risk and the stamina to see an emerging technology through from prototype to fully developed product. But I believe the change will happen, because of the price of fossil fuels and environmental regulation. Wind power is free, so the technology will become a worthwhile investment once it can be clearly shown that the savings from moving away from fossil fuels outweigh the costs of installing and operating a wind-powered ship.

Many think that oil-price threshold has already been reached and exceeded, as evidenced by the large and growing number of projects proposing wind propulsion solutions, even allowing for the recent fall in oil prices.

While there is currently only weak regulation of shipping’s greenhouse gas emissions, the sector – like all those producing carbon dioxide – is likely to face more stringent controls as its emissions continue to grow. Exactly what form such controls will take remains the subject of ongoing work. But any meaningful regulation would reinforce the case for wind-powered shipping as a favourable investment.

Shipping is a vital, if somewhat hidden, part of modern economies. Decarbonising those economies is the only way to avoid destroying them (and the environment). Wind power presents an astoundingly obvious and elegant solution to these combined challenges. But it will languish on the sidelines until we see rapid change from investors, politicians, or ideally both.

The Conversation

Thursday, July 23, 2015

Online carjacking: do auto manufacturers realise dangers of networked motors?

When your car becomes a computer, your problems just got much bigger. car by Denys Prykhodov/shutterstock.com

While computers bring great benefits, they come with drawbacks too – not least, as news stories reveal every day, the insecurity of often very private data connected to the public internet. Now that computers are appearing in practically everything, the same insecurity applies there too – as demonstrated by the drive-by hack of a Jeep SUV, hijacked and shut down by security researchers as it sped along at 70mph.

Vehicles are growing ever more sophisticated, with technological additions to newer models designed to increase safety, comfort and convenience while providing entertainment features and improving the car’s environmental impact. These innovations are more than just marketing ploys for manufacturers to sell their vehicles as cutting edge: they also help save money on materials and comply with increasingly stringent safety and environmental laws.

Consider the benefits of a fully-connected vehicle: computers are never distracted, never get tired. They may be able to learn from driver behaviour and, using technologies such as active lane assist, can even correct human errors of judgement to a certain degree. Human productivity can be boosted, allowing for example a hands-free phone call while behind the wheel. Concepts such as platooning – where cars follow each other closely in a train – could help reduce congestion while allowing speedier commutes and greater fuel economy.

However, this drive-by vehicle hack (to be presented at the Black Hat conference later this year) and others, such as the method of compromising brake systems using DAB radio signals, demonstrate the dangers of highly networked, computerised vehicles designed without adequate protections.

More software, more problems

Precise details about how the Jeep was hacked, other than that the public IP address must be known, and that the attack relies on the uConnect mobile phone network, are yet to be revealed. While this gives the manufacturer time to provide a patch to fix the problem in this case, the vulnerabilities of mobile phone and internet network connections have been researched for years and are well-known and well-understood. If anything, this vehicle hack shouldn’t come as any great surprise; more surprising is the lack of care paid to securing these well-known angles of attack in the first place.

Exploiting software flaws remotely through an internet connection – the most likely culprit – is made possible because we prize internet and phone connectivity sufficiently that manufacturers will fit it to our vehicles. This allows access to any piece of exposed hardware that is not “air-gapped”, in other words physically separate and unconnected from the rest of the system. An attacker can pivot through the system, using one compromised component in order to compromise another, until the keys to the kingdom are acquired – in this case the critical control units capable of shutting down the engine.

Keys no longer required.

Introducing these wireless network interfaces to vehicles presents the greatest danger: the ability to control cars, or even many cars en masse, from any distance. This possibility has caused such alarm there are plans in the US (where this attack was demonstrated) to introduce new legislation to tackle the issue.

Complexity creates vulnerability

That’s not to say that network connectivity is the only issue. The sheer volume of software in modern cars is itself a significant contributing factor to security problems: the software engineering industry average has been estimated at 15-50 errors per 1,000 lines of code. Integrating so many different systems, features and technologies compounds the problem – added complexity makes security testing much more difficult. As vehicles migrate from being connected to being fully autonomous, these challenges could have even broader security ramifications.
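A back-of-the-envelope calculation shows why that defect-density range matters. The 100 million lines of code assumed here is a widely cited ballpark for a high-end modern car, not a figure from any particular manufacturer:

```python
# Rough illustration of the 15-50 errors per 1,000 lines industry average
# quoted above, applied to an assumed 100m-line modern vehicle codebase.
LINES_OF_CODE = 100_000_000
ERRORS_PER_KLOC = (15, 50)

low = LINES_OF_CODE // 1000 * ERRORS_PER_KLOC[0]
high = LINES_OF_CODE // 1000 * ERRORS_PER_KLOC[1]
print(f"Expected latent defects: {low:,} to {high:,}")
```

Even if only a tiny fraction of those defects are security-relevant, the absolute numbers explain why complexity itself is a vulnerability.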

With any feature that makes something more safe, convenient or entertaining, there is potentially an equal amount of convenience for an attacker if sufficient defences haven’t been put in place. The documented incidents of vehicles stolen by hacking keyless entry systems were down to technology designed to make unlocking a car more convenient for customers. Alas, the convenience works both ways.

Achieving safety and security has always been – and will continue to be – a balancing act. The National Highway Traffic Safety Administration (NHTSA) in the US states that in 94% of cases the last failure leading to a crash can be attributed to the driver. In the face of such evidence, despite the security vulnerabilities that may emerge as they are deployed and used, it would be counter-intuitive to ignore technology that could potentially save lives.

What is required to prevent these emerging problems from becoming overwhelming is an engineering process that embeds security in automotive design from the outset, implemented using the secure coding practices found in other safety-critical areas such as nuclear reactor management or air traffic control, and reinforced with robust security testing procedures.

Only then will we see the world’s car manufacturers move from the back foot to the front foot in the face of an internet-full of would-be cyber-carjackers.


Friday, July 17, 2015

When Chrome, YouTube and Firefox drop it like it's hot, Flash is a dead plugin walking

Despite its longevity, now there's more than just aesthetic reasons to drop Flash. logo by 360b/Shutterstock.com

After more than 20 years making the web a slightly more interesting and interactive place, albeit one that pandered to designers’ worst excesses and (in pre-broadband days) led to interminable download waiting times, the word on the net is that Adobe Flash Must Die.

The ironic hack of Hacking Team, the controversial security and surveillance software firm, exposed yet another brace of security flaws and vulnerabilities in Flash, the hugely popular multimedia animation plugin for web browsers. This may be the final straw: Mozilla has disabled Flash by default in its Firefox browser, and Facebook’s chief of security has called for Adobe to set a date when the program will be taken behind the shed and shot.

Why hate Flash?

The software and services that Hacking Team sells provide the means for its government and law enforcement clients to break into and even control computers remotely through the internet. The huge leak of the firm’s company data also revealed details of previously unknown vulnerabilities in software that could be exploited to provide ways of hacking computers – known as zero-day vulnerabilities because the software’s manufacturer has no time to fix the problem.

Zero-day vulnerabilities are great news for criminals. Three of these vulnerabilities were in Flash, and some of those revealed in the leaked documents appeared in attack kits available online within hours – faster than the developers of the affected programs could fix the holes, let alone distribute the updates to millions of users worldwide.

The Flash plugin is notorious for being riddled with security flaws and other shortcomings. Yet it’s also one of the most popular pieces of software on the planet. So what will it take to kill it?

It seemed like a good idea at the time

Back in the web’s dim and distant past (the 1990s), web pages were static, unyielding things with just text and images and occasionally a dumb animated GIF that everyone but the designer hated.

But we wanted more: interactivity, responsiveness, perhaps even a little bit of bling. Flash made this happen, and animators and designers could create all the interactivity they wanted and wrap it up in a file that was inserted into the web page and downloaded on request.

The web is a hostile place for browsers, however, and the more functionality exposed to the web, the larger the surface exposed to attack. Flash offers a large attack surface, and because animation is often computationally demanding, Flash needed deep access to many aspects of the computer to work well, making any flaw potentially serious.

Security isn’t the only problem with Flash. It wasn’t security, for example, but Flash’s heavy processor and battery consumption that caused Steve Jobs to banish it from the iPhone and iPad. On a device with resources as limited as a smartphone’s or tablet’s, Flash just doesn’t fit.

While these drawbacks could be tackled, Flash’s proprietor Adobe seems uninterested in doing so, having not released an update to Flash Player on mobile since 2012.

Flash forward to the future

Yet Flash endures, mainly on account of the last 20 years in which websites have been created using it and the plugin has been installed in billions of browsers. There have been attempts at alternatives: Microsoft’s Silverlight was Windows-specific and never caught on, and even the company itself urges people not to use it; Java applets have even worse problems than Flash, and have already been deprecated or removed from modern browsers.

The best hope for the elimination of Flash is HTML 5. The latest version of HTML, the markup language in which web pages are written, finally includes support for directly embedding video and audio in a web page. In combination with JavaScript, web pages can now offer all the interactivity and animated bling that anyone could want. Having previously been without a doubt the largest user of Flash, YouTube now uses an HTML 5-based player by default for its video content. Google’s Chrome browser dropped support for the standard Adobe Flash plugin some time ago, using only its own bundled version.

Inside, HTML 5 supports a lot of technologies such as audio/video now, with more to come. Sergey Mavrody, CC BY-SA

HTML 5 has two major advantages over Flash. As a much more modern technology (2014 versus 1995) it delivers better results with fewer resources, making it better suited to mobile devices. But more importantly it requires no plugin, which means the surface open to attack by hackers doesn’t expand just because you want to watch a video, or because some site wants to display an animated advert.

Of course there are still sites that use Flash extensively, and these will have to be redesigned in HTML 5. While these sites still exist and people wish to use them, the Flash problem will not go away.

It’s more than just Flash

Flash’s problems make it an easy target, but it’s just one place where security failures occur. Of the zero-day exploits discovered so far in the Hacking Team leak, three relate to Flash, one to Java, one to a font processor for Windows (also made by Adobe), and one to Microsoft’s Internet Explorer 11 browser. But security is hard, no software is invulnerable, and breaches like this will continue to happen. Even if Flash is somehow secured – or disappears entirely – security flaws will still be found and exploited in other software. Security is an ongoing journey, not a destination.

The bigger problem is how the exploits originate. Hacking Team didn’t discover most of these exploits – they bought them from hackers who found them, keeping them secret for use in their products. Perhaps this is why a security firm such as Hacking Team becomes a tempting target for criminals, as a concentrated source of zero-day exploits.

As governments and intelligence agencies collect more information, they will also become more valuable targets. If Britain’s GCHQ is able to bypass all encryption, as prime minister David Cameron has suggested, then all our data could be vulnerable to anyone who can find the slightest crack in GCHQ’s armour.


Friday, July 10, 2015

Obituary: Caspar Bowden, a fearless privacy pioneer

Caspar Bowden, privacy advocate and campaigner. Rama, CC BY-SA

The world’s privacy advocates are reeling over the loss of one of their most influential and feared campaigners, Caspar Bowden, who has died of cancer. His fierce and combative evangelism for online privacy over two decades and surgical analysis of complex surveillance legislation raised the standard of commentary that influenced advocacy groups at home and abroad.

I had the honour and the pleasure of becoming a close friend and co-conspirator of Caspar. It wasn’t always easy – he held high expectations of his colleagues, who could often experience his wrath whenever they dared to negotiate with “the bastards” (whoever they happened to be at the time). The archaic American expression “ornery” could well have been invented for Caspar Bowden, as his opponents well knew.

In conferences and meetings where officials and ministers appeared there was frequently what became known as the “popcorn moment”, when Caspar would stand up and, from the back of the hall, clear his throat and launch into a devastating critique that would utterly destroy the credibility of his opponents. Within two years, ministerial staffers were routinely calling me to find out whether Caspar would be in the audience. No better tribute could ever be awarded to any campaigner.

Caspar Bowden, mid ‘popcorn moment’. Rama, CC BY-SA

Caspar joined the mainstream privacy world in 1997 during the Scrambling for Safety encryption event that I organised at the London School of Economics, and soon after he co-founded the Foundation for Information Policy Research (FIPR), which became the most astute think-tank in Britain in the field of surveillance.

At the time Caspar chaired Scientists for Labour, an organisation which believed that the Labour Party (which had been elected to government only 18 days earlier) would actually respect scientific advice. The reams of dangerous and intrusive legislation the Labour government subsequently passed caused him to ditch this fantasy. In the years since, Caspar appeared to abandon all faith in parties, taking pride in comparisons with Mr Mackay in the TV comedy series Porridge, who famously said: “I have a job to do and, whatever else I am, I’m firm but fair. I want you to know that I treat you all with equal contempt”.

In 2002 Caspar joined Microsoft’s operation in Europe as chief privacy strategist, but the arrangement was a bad fit. Caspar continued to be outspoken, eventually parting company with Microsoft after he criticised the lack of privacy measures in its software and the firm’s cosiness with US government spooks. Years before Snowden’s revelations about US and UK mass surveillance in 2013, Bowden had already become deeply worried about the relationship between companies and security agencies – with his arguments about the safety of cloud data proven true by the subsequent leaks.

Gus Hosein, executive director of Privacy International and an old friend and colleague, said:

I’m not new to this issue, but whenever I struggle to get my head around the implications of a new policy or technology, I always looked to Caspar. I sought his guidance to navigate it, but I feared what he would say if I came out with something stupid. The future is uncertain enough, but without him it is even more daunting.

Caspar was very accurately described by another close friend and colleague Ian Brown, professor of Information Security and Privacy at Oxford University:

Caspar was a truly unique individual, one of the most passionate, methodical, relentless advocates of any cause I have met. I learnt so much from him as we worked together on and off for nearly 20 years on privacy issues. His forensic analysis of UK surveillance laws, and later European and US legislation, was essential reading for anyone who wanted to understand the implications of some extremely obscure language – including legislators themselves.

Brown believes UK internet users are still benefiting from Caspar’s successful campaign to remove “Big Browser” surveillance powers from the Regulation of Investigatory Powers Act 2000, and to ensure the burden of proof was not put onto individuals who might have actually forgotten passwords later demanded by police. His important reports for the European Parliament will also be key in the long-term decisions made by the EU to protect the privacy of its 500m citizens.

Anyone who knew Caspar understood that he was dogged in his later years by a deep cynicism about progress in privacy. Deeply mistrustful of governments, corporations and even the law, he eschewed mobile phones and came to place his faith almost solely on mathematical solutions, for example by heavily promoting the concept of differential privacy, which attempts to prevent a loss of privacy in situations where details can be inferred from other data.

Perhaps Caspar’s greatest legacy is that, in an age of increasing compromise, he showed us the importance of dogged, non-negotiable persistence. As George Bernard Shaw observed, all progress depends on the unreasonable man. In that respect, Caspar was a beacon of progress.


How 3D printing helped robots tackle their greatest obstacle: stairs

Is there a lift? I'm trying to conquer the universe and I need to reach the first floor. Les Chatfield/Flickr, CC BY-SA

We’ve long attempted to recreate living creatures in robot form. From the very early days of robotics, there have been attempts to reproduce systems similar to human arms and hands. This has extended to flexible and mobile platforms reproducing different animals, from dogs and snakes to climbing spiders and octopods, and even entire humanoids.

One of the key actions performed by animals from mantises to kangaroos is jumping. But incorporating a jumping mechanism into autonomous robots requires much more effort from designers. One of the main challenges for robots is still travelling efficiently over rugged surfaces and obstacles. Even the simple task of going up or down a staircase has proven to be rather difficult for robot engineers.

A jumping robot could provide access to areas that are inaccessible to traditional mobile wheeled or legged robots. In the case of some search-and-rescue or exploration missions, in collapsed buildings for example, such a robot might even be preferable to unmanned aerial vehicles (UAVs) or quadcopter “drones”.

There has been increasing research in the robotics field to take on the challenges of designing a mobile platform capable of jumping. Different techniques have been implemented for jumping robots such as using double jointed hydraulic legs or a carbon dioxide-powered piston to push the robot off the ground. Other methods include using “shape memory alloy” – metal that alters its shape when heated with electrical current to create a jumping force – and even controlled explosions. But currently there is no universally accepted standard solution to this complex task.

A new approach explored by researchers at the University of California San Diego and Harvard University uses a robot with a partially soft body. Most robots have largely rigid frames incorporating sensors, actuators and controllers, but a specific branch of robotic design aims to make robots that are soft, flexible and compliant with their environment – just like biological organisms. Soft frames and structures help to produce complex movements that could not be achieved by rigid frames.

Soft landing. Jacobs School of Engineering/UC San Diego/Harvard University

The new robot was created using 3D printing technology to produce a design that seamlessly integrates rigid and soft parts. The main segment comprises two hemispheres nestled one inside the other to create a flexible compartment. Oxygen and butane are injected into the compartment and ignited, causing it to expand and launch the robot into the air. Pneumatic legs tilt the robot’s body in the intended jump direction.
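Once the combustion has set the launch speed and the legs have set the tilt, simple ballistics determines where the robot lands. The figures below are assumptions for illustration, not measurements from the UCSD/Harvard robot, and drag is ignored:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def jump(launch_speed, tilt_deg):
    """Peak height and flat-ground range of a ballistic jump (drag ignored)."""
    theta = math.radians(tilt_deg)
    vx = launch_speed * math.cos(theta)
    vy = launch_speed * math.sin(theta)
    height = vy**2 / (2 * G)      # energy balance at the apex
    distance = 2 * vx * vy / G    # time of flight (2*vy/G) times vx
    return height, distance

# An assumed 5 m/s launch at a 60-degree tilt:
h, d = jump(launch_speed=5.0, tilt_deg=60)
print(f"peak height {h:.2f} m, range {d:.2f} m")
```

The tilt angle trades height for distance, which is why steerable pneumatic legs matter as much as the combustion charge itself.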

Unlike many other mechanisms, this allows the robot to jump continuously without a pause between each movement as it recharges. For example, a spring-and-clutch mechanism would require the robot to wait for the spring to recompress and then release. The downside is that this mechanism would be difficult to mass-manufacture because of its reliance on 3D printing.

The use of a 3D printer to combine the robot’s soft and hard elements in a single structure is a big part of what makes it possible. There are now masses of different materials for different purposes in the world of 3D printing, from flexible NinjaFlex to high-strength Nylon and even traditional materials such as wood and copper.

The creation of “multi-extrusion” printers with multiple print heads means that two or more materials can be used to create one object using whatever complex design the engineer can come up with, including animal-like structures. For example, NinjaFlex, with its high flexibility, could be used to create a skin or muscle-like outer material, combined with Nylon near the core to protect vital inner components, just like a rib cage.

In the new robot, the top hemisphere is printed as a single component but with nine different layers of stiffness, from rubber-like flexibility on the outside to full rigidity on the inside. This gives it the necessary strength and resilience to survive the impact when it lands. By 3D printing and trialling multiple versions of the robot with different material combinations, the engineers realised a fully rigid model would jump higher but would be more likely to break and so went with the more flexible outer shell.

Once robots are capable of performing more tasks with the skill of humans or animals, such as climbing stairs, navigating on their own and manipulating objects, they will start to become more integrated into our daily lives. This latest project highlights how 3D printing can help engineers design and test different ideas along the road to that goal.


Wednesday, July 8, 2015

BBC micro:bit aims to turn children from digital consumers into digital creators

Good things come in small packages, but are all small packages a good thing? BBC

The way computing is taught in schools is going through its greatest upheaval since the subject was first introduced at the turn of the century. After considerable lobbying by the industry, professional societies, universities and schools, the national curriculum has been re-oriented towards establishing computing as the “fourth science” for schools.

Out go interminable lessons on how to use specific word processor or spreadsheet applications. In comes more rigorous teaching about the scientific principles of technology and how to put it to use creatively – to be taught, importantly, by example rather than by rote.

Plugged in to this change of tack is the recently announced BBC micro:bit, a tiny, inexpensive pocket-sized computing device. The BBC plans to give away a million of these devices this autumn – one free to every Year 7 child (11 to 12-year-olds) in the country – to encourage children to become a generation of digital creators.

Conceived by the BBC, the micro:bit has been developed by organisations including ARM, Microsoft, Freescale, Nordic Semiconductor, Element 14, Samsung and Lancaster University.

One of the goals of this initiative is to create a greater number of students who go into computer science and related fields with a better understanding of technology, transforming the UK from a nation of digital consumers into a creative powerhouse. To place this in context, a recent House of Lords select committee report suggested some 35% of current jobs in the UK could be lost to automation over the next 20 years. In my personal opinion, that’s an understatement. The nation’s strategy here is clear: to create new jobs through digital innovation. This will only be possible if today’s children are adequately skilled and motivated to rise to the challenge.

This is an ambitious strategy, and one that requires an equally ambitious approach to delivering it. The response to this call to arms – not from the government, but from enthusiasts, volunteers and evangelists – has been quite simply staggering. To name but a few of the organisations picking up the challenge: Computing At School supports almost 20,000 teachers through over 600 regional UK hubs, Code Club organises after-school programming clubs throughout the UK, and TeenTech exposes teenagers to the wide range of career possibilities in science, engineering and technology.

The BBC’s first foray into computers was more than 30 years ago. Stuart Brady

Not the BBC’s first computer

But we’ve been here before. It’s hard not to draw a parallel to the Model B, a computer released by the BBC in the 1980s for the same reason – to forge a new technologically-savvy generation of future students, innovators and entrepreneurs. The BBC Model B typified the computer of its day, a modestly-specified desktop machine with integrated keyboard that could connect to a home television. At just 4cm by 5cm, the micro:bit is a very 21st-century computer: compact, with built-in sensors and wireless communication, and an ARM Cortex M0 processor around 18 times more powerful than its forerunner.

Packed full: what the Micro:bit comes with. BBC

The micro:bit supports Bluetooth Low Energy, which means it can interact wirelessly with other nearby devices such as mobile phones and tablets. This pitches it more as part of the emerging internet of things, made up of small low-powered devices that can provide services or data to other, more powerful devices such as smartphones.

It has a simple display of 25 LEDs arranged in a 5x5 matrix, just enough for simple graphics and text, and it is also equipped with an electronic compass and three-axis accelerometer so it can detect its orientation, and standard connectors that provide an easy way for children to integrate the micro:bit into their own creative electronics projects. It’s not a motivational toy – it’s a computing case study, a simple demonstrator for how complex computers can be used.
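To see how little it takes to drive such a display, here is a desktop sketch of the 5x5 matrix idea. It is illustrative only, not micro:bit code: the 0-9 brightness-digit convention mirrors the one used by the device’s MicroPython image format, but the rendering function and the terminal “shades” are my own invention.

```python
# A 5x5 "LED image" as rows of brightness digits, 0 (off) to 9 (full).
HEART = [
    "09090",
    "99999",
    "99999",
    "09990",
    "00900",
]

def render(image):
    """Map each brightness digit to a character so the image shows in a terminal."""
    shades = " .:-=+*#%@"  # 10 levels; index = brightness digit
    return "\n".join("".join(shades[int(px)] for px in row) for row in image)

print(render(HEART))
```

With only 25 pixels to reason about, a child can hand-draw an image as text and see it appear immediately, which is exactly the low barrier to entry the device is aiming for.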

Building a community, not just a device

Running the code simulator on a tablet. BBC

The device is only part of the package. The micro:bit can be programmed through the web in a variety of programming languages tailored for different levels of ability: highly visual drag-and-drop languages ideal for beginners; Microsoft’s Touch Develop, Python and JavaScript for intermediate users; and C++ for older, experienced programmers. The accompanying website also gives teachers access to pre-written learning resources, along with a platform to create and share their own teaching materials with other teachers, or to publish their pupils’ work, where appropriate, to inspire others.

After the initial 1m units are delivered, the devices will be available for commercial purchase with proceeds directed to a not-for-profit foundation. All the micro:bit hardware and software will be open-sourced, allowing others to build on the foundations laid.

The micro:bit is aimed at fostering an ecosystem to support digital creativity, balancing motivation and education while reducing the barrier to entry for both children and teachers. Will it be successful, inspiring fond memories in a generation like the Model B? We’ll see.


Friday, July 3, 2015

Why it makes sense for BT to shut down its telephone network

The telephone network is dead, long live telephone calls! guy_hatton, CC BY-NC

As the telecoms regulator Ofcom embarks on its next strategic review of the UK’s telecommunications services, BT has called for it to be allowed to close down its telephone network. Perhaps somewhat counter-intuitively, this actually makes sense.

As Ofcom’s studies confirm that landline and mobile telephone call use continues to fall, telecoms companies have faced mounting pressures to find other ways of making money.

BT, like many others, has sought to diversify by offering so-called “quad play” packages, common in the US, which bundle together telephone, broadband, mobile and television services. So it could be said that BT is as much a television company as it is telephone company these days. The world has moved on, and BT with it.

However, BT and KCOM Group (formerly Kingston Communications, serving Hull) are alone within the industry as they are designated by the Communications Act 2003 as “universal service providers”. This means that both companies must provide basic telephone services on request and at the same price to all customers throughout their areas of influence.

In the 12 years since, the world has changed dramatically: in 2003 people were starting to switch their internet access from dial-up modems to the new, speedier connection called “broadband”. The UK’s first, relatively primitive 3G mobile phone network opened, the Freeview digital television service turned one year old, and BBC iPlayer was still many years in the future.

Today in 2015 the internet is everything; the average UK adult now spends more time per day interacting with connected digital technology than sleeping, and the average household has at least three internet-connected devices.

BT’s argument is that the future is not telephones but digital services over the internet. BT has an entire network built to handle the telephone calls of the last century (and the century before that) – the so-called Plain Old Telephone System (POTS). The rest of its network is a modern telecommunications network that’s digital right up to the cable that connects the exchange to the home, which carries both voice and data signals. Today telephony is just another service that can be delivered over the internet – why do we need a large and expensive network dedicated to offering telephone services?

POTS and kettles

The answer is we don’t. Telephone services can be provided in what’s called an “over-the-top” service running on a data network. The mobile industry has already recognised this and re-designed its networks accordingly. While first-generation (1G), second-generation (2G) and third-generation (3G) networks provided both voice and data, today’s fast 4G networks are data only. Voice calls are just another form of network traffic like web browsing, social media or streaming video.
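To see why voice fits so comfortably on a data network, consider the bandwidth one call actually needs. A minimal sketch, assuming the widely used G.711 codec at 64 kbit/s, 20ms packetisation, and standard RTP/UDP/IPv4 header sizes (these figures are general telephony facts, not from the article):

```python
# Estimate the bandwidth of one voice call carried as ordinary data packets.
# Assumptions: G.711 codec (64 kbit/s of audio), one packet every 20 ms,
# RTP (12) + UDP (8) + IPv4 (20) headers = 40 bytes of overhead per packet.

CODEC_RATE_BPS = 64_000        # G.711 audio bitrate
PACKET_INTERVAL_S = 0.020      # 20 ms of audio per packet
HEADER_BYTES = 12 + 8 + 20     # RTP + UDP + IPv4 headers

payload_bytes = CODEC_RATE_BPS / 8 * PACKET_INTERVAL_S  # 160 bytes of audio
packets_per_second = 1 / PACKET_INTERVAL_S              # 50 packets each second
total_bps = (payload_bytes + HEADER_BYTES) * 8 * packets_per_second

print(f"{total_bps / 1000:.0f} kbit/s per call")  # 80 kbit/s
```

At roughly 80 kbit/s per call, even a modest broadband line can carry a phone conversation as a rounding error alongside video streaming – which is precisely BT’s point.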

BT (or more accurately its arm’s-length Openreach division, which provides the infrastructure) argues that it wants to do the same, but its hands are tied by the legally binding requirements of the “universal service” clause of the Communications Act.

However, we need to be very careful here with our terminology. According to Ofcom, 16% of UK adults now live in a mobile-only household. This means those people rely solely on their mobile phones for making telephone calls and don’t have a landline telephone. Does this mean Openreach has removed their telephone line? No, because that same line is what provides broadband services to our homes – you can have a telephone line without connecting a telephone to it.

BT’s argument is that today the concept of a universal service has become a millstone around its neck that forces it to maintain a large and outdated telephone network at great cost to support telephone exchanges that fewer and fewer people use.

Despite the growing trend that sees people using apps such as Skype, Viber or WhatsApp to communicate through the internet, the companies responsible have no requirement upon them to build or maintain their own networks – they rely on those provided by telecoms firms. Why should BT be lumbered with offering a basic telephone service to all, when the same thing can be achieved via broadband? Scrapping the network that exists solely to provide a telephone service would allow BT to re-invest in the further development of its broadband and internet provision and so compete more freely with other “over-the-top” providers.

There is of course an obvious caveat to all of this: the concept of universal service is to ensure that every household has access to basic telephone services. This remains an important and worthy obligation, it’s just that there are now other ways of achieving it. That’s the point that Ofcom really needs to grasp: scrapping the telephone network does not mean scrapping the telephone.

The Conversation

Thursday, July 2, 2015

Virtual reality tech may make 'going shopping' in real life a thing of the past

'Too much Call of Duty, not enough shopping'. pestoverde, CC BY-SA

High street shops are well-established online these days and provide new opportunities for interaction between shop and shopper. Consumers have become accustomed to shopping using a range of devices, and the immense popularity of smartphones and mobile devices has led to the rise of mobile or m-retailing, with new communication and distribution channels created with these in mind. Perhaps this mix of the real and online worlds is a helpful precursor for what may be the “next big thing”: virtual reality shopping.

Virtual reality (VR) experiences are typically provided through wearable headgear or goggles that block out the real world and immerse the user in a virtual one. This is distinguished from augmented reality (AR), where layers of digital content can be overlaid on the real world, providing access to both – for example, the digital information displayed on the visor of Google Glass.

Apps can provide ‘live’ augmented reality to try on superimposed accessories and clothes. Eawentling, CC BY-NC-SA

While AR can work with mobile devices and is already included in some apps, for VR to succeed the headgear needs to be comfortable, stylish and powered by sufficiently capable software so that the immersive visual effects are credible – and useful. It’s possible to add deeper engagement with the virtual world by incorporating other senses, for example tactile hand controls for handling and manipulating objects.

In-store tech

Magic mirrors, where how you’d like to look is projected onto your actual appearance. Intel, CC BY-SA

However, the use of technology by retailers in-store has been patchy. The availability of in-store Wi-Fi has increased, and some stores offer touchscreens and tablets for customers to browse and search for items and look up information. More common are video screens displaying fashion collections, often connected to apps offering inspirational looks. However, more cutting-edge tech, such as magic mirrors that overlay the image of the shopper with the clothes they’ve selected, allowing them to switch style and colour options, is less widespread. Sometimes it’s also less than reliable.

In any case, shoppers tend to appreciate functionality over more playful or whimsical means of interacting with the retailer. New additions are welcome when they are informative and save the shopper time, helping them locate products in the store or at another branch. Not surprisingly, consumers would rather not pay for these services, and prefer to be engaged rather than marketed to. Young fashion shoppers simply use their phones to share photos of potential purchases through Snapchat and Instagram. Image is everything, with the retailer providing the backdrop.

Present trends point to the expansion of interactive shop window displays and in-store communication that uses a combination of GPS, transmitters such as Apple’s iBeacon and other devices using Bluetooth transmissions to interact with shoppers’ smartphones. These will take personalisation and micro-marketing to a new level, with real-time offers and information dispatched to their phone as they pass near product displays.
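The “as they pass near” part typically rests on estimating distance from the strength of the beacon’s Bluetooth signal. A toy sketch using the standard log-distance path-loss model – the calibration values here are illustrative assumptions, not any vendor’s actual SDK:

```python
# Rough distance estimate from a Bluetooth beacon's received signal strength,
# using the log-distance path-loss model. All numbers are illustrative.

def estimate_distance(rssi_dbm, measured_power_dbm=-59, path_loss_exponent=2.0):
    """Return an approximate beacon distance in metres.

    measured_power_dbm: calibrated RSSI at 1 m (beacons broadcast this value).
    path_loss_exponent: ~2 in free space, higher indoors with obstructions.
    """
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# At the calibrated power the beacon appears ~1 m away;
# a signal 20 dB weaker puts it roughly ten times further.
print(round(estimate_distance(-59), 1))   # 1.0
print(round(estimate_distance(-79), 1))   # 10.0
```

In practice indoor signals fluctuate heavily, so real deployments smooth readings over time and bucket the result into coarse zones (“immediate”, “near”, “far”) rather than trusting a metre-level figure – which is enough to trigger an offer as a shopper passes a display.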

To support their brand, retailers will increasingly look at their customer relationships, so stories, images, videos and news – fashion and cosmetic blogs have been particularly successful – are where many new opportunities will arise. However, while creative and technologically novel, these are all at best examples of augmented rather than virtual reality.

Making a (virtual) impression

Where does this leave the use of virtual reality? We can expect to see trials as retailers become more comfortable offering content through them. New VR headsets such as the Oculus Rift and Sony’s offering will provide ever more realistic immersive environments. Sony, drawing on its PlayStation expertise, aims to add movement to the user experience. Some brands have already piloted virtual stores, where VR-equipped shoppers could one day have the same experience of browsing through racks and shelves waiting for something to catch their eye – without needing to leave their home.

VR will provide an opportunity to re-visit and experience retailers’ and designers’ fashion shows, events and exhibitions of the past. For example, Top Shop recently transmitted London Fashion Week as it happened through Oculus Rift headsets to customers in its Oxford Street store. It may also provide a means for retailers to extend the lifespan of certain promotions to individual customers.

Immersion is particularly promising in the creation or re-creation of 3D environments, which could be especially helpful for those buying furniture, furnishings, paint and decoration for their homes to envisage how it would look. The recently developed Virtuix virtual reality platform provides a motion controller that translates the user’s physical movements into equivalents in the virtual environment – a means to, literally, walk around a virtual world.

However, any major step forward will need to make the retailer’s investment worthwhile, and as neither the technology nor shoppers' complete acceptance of VR is where it needs to be today, there’s some way to go before VR becomes the next big thing in shopping.

The Conversation

Wednesday, July 1, 2015

Robot law: what happens if intelligent machines commit crimes?

I'd buy that for a dollar. Or, just steal it from you. elbragon, CC BY

The fear of powerful artificial intelligence and technology is a popular theme, as seen in films such as Ex Machina, Chappie, and the Terminator series.

And we may soon find ourselves addressing fully autonomous technology with the capacity to cause damage. While this may be some form of military wardroid or law enforcement robot, it could equally be something not created to cause harm, but which could nevertheless do so by accident or error. What then? Who is culpable and liable when a robot or artificial intelligence goes haywire? Clearly, our way of approaching this doesn’t neatly fit into society’s view of guilt and justice.

While some may choose to dismiss this as too far into the future to concern us, remember that a robot has already been arrested for buying drugs. This also ignores how quickly technology can evolve. Look at the lessons from the past – many of us still remember the world before the internet, social media, mobile technology, GPS – even phones or widely available computers. These once-dramatic innovations developed into everyday technologies which have created difficult legal challenges.

A guilty robot mind?

How quickly we take technology for granted. But we should give some thought to the legal implications. One of the functions of our legal system is to regulate the behaviour of legal persons and to punish and deter offenders. It also provides remedies for those who have suffered, or are at risk of suffering harm.

Legal persons – humans, but also companies and other organisations for the purposes of the law – are subject to rights and responsibilities. Those who design, operate, build or sell intelligent machines have legal duties – what about the machines themselves? Our mobile phone, even with Cortana or Siri attached, does not fit the conventions for a legal person. But what if the autonomous decisions of their more advanced descendants in the future cause harm or damage?

Criminal law has two important concepts. First, that liability arises when harm has been or is likely to be caused by any act or omission. Physical devices such as Google’s driverless car clearly have the potential to harm, kill or damage property. Software also has the potential to cause physical harm, but the risks may extend to less immediate forms of damage such as financial loss.

Second, criminal law often requires culpability in the offender, what is known as the “guilty mind” or mens rea – the principle being that the offence, and subsequent punishment, reflects the offender’s state of mind and role in proceedings. This generally means that deliberate actions are punished more severely than careless ones. This poses a problem, in terms of treating autonomous intelligent machines under the law: how do we demonstrate the intentions of a non-human, and how can we do this within existing criminal law principles?

Robocrime?

This isn’t a new problem – similar considerations arise in trials of corporate criminality. Some thought needs to go into when, and in what circumstances, we make the designer or manufacturer liable rather than the user. Much of our current law assumes that human operators are involved.

For example, in the context of highways, the regulatory framework assumes that there is a human driver to at least some degree. Once fully autonomous vehicles arrive, that framework will require substantial changes to address the new interactions between human and machine on the road.

As intelligent technology that by-passes direct human control becomes more advanced and more widespread, these questions of risk, fault and punishment will become more pertinent. Film and television may dwell on the most extreme examples, but the legal realities are best not left to fiction.

The Conversation

Friday, June 26, 2015

Miniaturisation will lead to 'smart spaces' and blur the line between on and offline

A computer-on-a-stick is the start, but they'll get smaller and smarter yet. Lenovo

Lenovo, the Chinese firm that bought up IBM’s cast-off PC business, has announced a miniaturised computer not much larger than a smartphone, which can be connected to any screen via an HDMI connection.

Advances in electronic components manufacturing processes and integration have resulted in large-scale miniaturisation of computer systems. This has enabled the latest system-in-package and system-on-a-chip approaches, where the processor and other necessary functionality usually provided by many microchips can be incorporated into a single silicon chip package.

Lenovo’s Ideacentre Stick 300 runs Windows 8 or Linux, is powered by a micro-USB connector and comes fitted with a new Intel Bay Trail CPU, 2GB RAM, 32GB flash storage, an SD card reader, Wi-Fi – even speakers.

Lenovo isn’t the first to shrink the PC down to pocket size. Intel’s Compute Stick is another dongle-sized computer with similar specs released this year.

Intel’s Compute Stick is another effort to shrink the PC to pocket size. Intel

The Raspberry Pi, now upgraded to its second major release, was probably the first to provide the functionality of a desktop or laptop computer in a credit card sized electronic board. Over five million Raspberry Pi computers have been sold since launch in 2012.

Google has used its stripped-down Chrome OS based on its Chrome browser to reduce a Chromebook (Chrome OS-powered laptop) down to the Chromebit. While the Chromebit is no larger than a USB memory stick, it’s markedly less powerful than Intel’s offering, as it is powered by the Rockchip RK3288, an ARM processor, which makes it comparable in power to a smartphone.

Google’s Chromebit, in more colours than black. Katie Roberts-Hoffman/Google

There are other stick-sized computers running low-power ARM processors capable of running Android, such as the Cotton Candy or Google’s Chromecast. These plug into a digital television to play video straight to the TV, including from internet streaming services such as Netflix – but not much else.

The appeal of small

Computers this small are attractive for many organisations, such as schools and universities that need to equip functional computer laboratories at minimum cost while taking up as little space as possible. Low-power devices also consume less electricity, which keeps running costs down.

A typical desktop computer uses about 65-250 watts (plus 20-40 watts for an LCD monitor) – considerably higher than a typical PC-on-a-stick at about 10 watts. There are obvious business uses, such as digital signage and advertising when connected to screens or projectors.
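Those wattage figures translate into a substantial difference on the electricity bill. A quick sketch of the annual cost gap – the daily usage hours and the unit price are illustrative assumptions, not figures from the article:

```python
# Compare annual electricity cost of a desktop PC vs a PC-on-a-stick.
# Assumptions: 8 hours' use per day, 15p per kWh (illustrative figures).

HOURS_PER_YEAR = 8 * 365   # assumed usage: 8 hours a day, every day
PRICE_PER_KWH = 0.15       # assumed tariff in GBP per kilowatt-hour

def annual_cost(watts):
    """Annual running cost in GBP for a device drawing `watts` continuously while on."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

desktop = annual_cost(200 + 30)  # mid-range desktop plus an LCD monitor
stick = annual_cost(10)          # typical PC-on-a-stick (display excluded)

print(f"desktop: £{desktop:.2f}/yr, stick: £{stick:.2f}/yr")
```

Even under these modest assumptions the desktop costs roughly twenty times more to run, which multiplied across a lab of thirty machines is a meaningful line in a school’s budget.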

This new round of computer miniaturisation marks a third wave of computerisation. First there were room-sized computers, shared between many users – the mainframe era. These time-sharing systems gradually disappeared as computers were miniaturised, replaced by the one computer per user of the personal computer or PC era. Today one person could have many computers, whether recognisable as desktop and laptop PCs or as smartphones or compute sticks, but which are accessible everywhere and anywhere. Known as ubiquitous or pervasive computing, this is the third wave in computing.

A smart, mobile future

As all computing devices grow smaller, the aim is that they become more connected and more integrated into our environment. The computing technology fades into our surroundings until only the user interface remains perceptible to users. This is an emerging discipline that brings computing into our living environments, makes those environments sensitive to us, and has them adapt to our needs. By enriching an environment with appropriate interconnected computing devices, the environment would be able to sense changes and support decisions that benefit its users.

There is a growing interest in these smart spaces using miniaturised computing technologies to support our daily lives more effectively. For example, smart offices, classrooms, and homes that allow computers to monitor and control what is happening in the environment.

Apple’s HomeKit and Google’s Nest are a start in this direction, providing the hardware and software to allow home automation. A smart home that monitors temperature and movement could allow the elderly to remain self-sufficient and independent in their own home, for example, and voice-activated devices could help with everyday tasks such as ordering the shopping. A smart office could remind staff of upcoming meetings, turn the lights on and off, or control heating and cooling efficiently. A smart hospital ward could monitor patients and warn doctors and nurses of any potential problems or human error.
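At its simplest, the logic behind such a smart space is a set of rules evaluated over sensor readings. A toy sketch of that idea – the sensor names, thresholds and actions are hypothetical illustrations, not any real HomeKit or Nest API:

```python
# Toy rule engine for a smart room: map sensor readings to actions.
# Sensor names, thresholds and actions are all hypothetical.

def decide_actions(readings):
    """Return the list of actions a smart room controller would take."""
    actions = []
    if readings["temperature_c"] < 18:
        actions.append("heating_on")
    if readings["occupied"] and readings["light_lux"] < 100:
        actions.append("lights_on")
    if not readings["occupied"]:
        actions.append("lights_off")
    # An elderly-care variant might add: alert a carer if no movement
    # has been sensed for several hours during the day.
    return actions

# A cold, dim, occupied room: turn on the heating and the lights.
print(decide_actions({"temperature_c": 16, "occupied": True, "light_lux": 40}))
# ['heating_on', 'lights_on']
```

Real systems layer scheduling, learning and remote control on top, but the core sense-decide-act loop is exactly this shape.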

The Smart Anything Everywhere initiative of the European Commission drives research and development in this area. With evolution and disruptive innovation across the whole field of computing – from the Internet of Things, smart cities and smart spaces down to nano-electronics – the applications and benefits of greater miniaturisation of computers are endless.

The Conversation
