
Tuesday, September 29, 2015

Ad industry may gripe about adblockers, but they broke the contract – not us

madpixblue/shutterstock.com

The latest version of Apple’s operating system for phones and tablets, iOS 9, allows the installation of adblocking software that removes advertising, analytics and tracking within Apple’s Safari browser. While Apple’s smartphone market share is only around 14% worldwide, this has prompted another outpouring from the mobile and web advertising industry on the effects of adblockers, and discussion as to whether a “free” web can exist without adverts.

It’s not a straightforward question: advertising executives and publishers complain that ads fund “free” content and adblockers break this contract. Defenders of adblocking point out that the techniques used to serve ads are underhand and that the ads themselves are intrusive. Who is right?

Why we use adblockers

There are good reasons for using adblockers. People are usually prompted to do so by online advertising techniques that they find intrusive. These include pop-ups, pop-unders, blinking ads, being forced to watch videos before getting to the content, and ads that contravene the Acceptable Ads Manifesto.

Adverts and trackers can be loaded from multiple third-party websites, inserted into the web page by advertising networks rather than by the site’s publishers. While this saves publishers the hassle of finding advertisers and negotiating rates, it means they often have little say over what ads appear, which can lead to ads that are irrelevant, dubious, even offensive. The additional load on the browser from connecting to multiple sites at once also drains battery and bandwidth and slows down the page load – all for something we don’t want and which scours our devices to collect information about us for further use.
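
At its core, the blocking decision is simple. The sketch below, in Python, shows the kind of check an adblocker performs on every outgoing request – comparing the request’s host against a filter list and refusing to fetch anything that matches. The domains here are invented for illustration; real blockers use far larger, pattern-based filter lists such as EasyList.

    from urllib.parse import urlparse

    # Hypothetical third-party ad and tracking domains - illustrative only.
    BLOCKLIST = {"ads.example.com", "tracker.example.net", "analytics.example.org"}

    def should_block(request_url: str) -> bool:
        """Return True if the request's host (or a parent domain) is on the blocklist."""
        host = urlparse(request_url).hostname or ""
        parts = host.split(".")
        # Check the host itself and every parent domain, e.g. a.b.c -> a.b.c, b.c, c
        return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

    # A page that loads content from its publisher but ads from third parties:
    for url in ["https://news.example.co.uk/story.html",
                "https://ads.example.com/banner.js",
                "https://tracker.example.net/pixel.gif"]:
        print(url, "-> blocked" if should_block(url) else "-> allowed")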

The UK’s Internet Advertising Bureau (IAB) believes that 15% of British adults use adblockers. The IAB study found that people blocked adverts because they were intrusive (73%), ugly or annoying (55%), slowed down web browsers (54%), were irrelevant (46%), or over privacy concerns (31%). What this suggests is that users don’t reject advertising per se, but intrusive advertising specifically.

Advertising, ethics and the web

The advertising industry argues that adblockers undermine the revenue model for publishers that relies upon behaviourally targeted advertising. They claim adblockers stifle start-ups that are dependent on advertising as a means of generating revenue. The theory goes that without advertising revenue all that’s left is subscription services, something which generally only large corporations are good at building.

While there is some truth to this, the argument assumes that digital start-ups (whether this be an app, a new social media service, or a news website) have access to a large user base from which to generate ad revenue. But of course this isn’t the case when firms are only just getting going. Start-ups rely on investment to grow and be self-sustaining: only then can advertising assist.

It is reasonable to argue that content has to be paid for. We might try to ignore the adverts that subsidise printed newspapers and magazines, but we cannot remove them. However, in respect of mobile devices – which have now become the primary means through which the world gets online – we must also consider the data plan that we pay for as part of our mobile phone contract. The firm behind one mobile adblocker, Shine, estimates that depending on where we live, ads can use up 10-50% of a user’s data allowance.

Annoying mobile ads make for unhappy phones and users. ronbennetts, CC BY-ND

Browsers as consent mechanisms

So the case for mobile is different, in that ads represent a cost to the user. Europeans living in EU member states have the right to refuse to be tracked by third parties. This comes under Article 5(3) of the EU ePrivacy Directive, which was altered in 2012 so that people have to be asked upfront whether they consent to cookies.

The aim of this was to shift third-party cookies from being opt-out to being opt-in. The ad industry argued that people’s web browser settings were sufficient to indicate consent to interest-based advertising and tracking – but of course, many people do not know how to alter browser settings. Seen in this way, adblockers are a means of expressing (or rather, denying) consent – something made clear by the need to find and install an adblocking program or browser extension.

The problem with the implied contract of advertising-for-content is that it is opaque and built upon questionable terms. It’s disingenuous to blame people for using adblockers: we accept adverts in magazines, newspapers and cinemas and on radio, billboards and television. The good ones make us smile. The best we fondly remember. We mostly stick to the deal that we get content free or at reduced cost in exchange for being exposed to ads.

But the growth of adblocking demonstrates that parts of the advertising industry have overstepped the mark with their creepy tracking mechanisms and deliberately confusing or irritating formats. The ad industry broke the contract, not us. How does anyone think that irritating people is the way forward? Which brand, large or small, would want to be associated with annoying their customers?

The growing number of people using desktop and mobile adblockers leaves the online advertising industry two options: fight web users and ad-blocking firms by lobbying for legal change or protection, or the more interesting route of trying to create a model that works for everyone. Rather than fighting the tide, advertising and publishing need to find a way to swim with it.

The Conversation

Thursday, September 24, 2015

Hackers have finally breached Apple's security but your iPhone's probably safe (for now)

Shutterstock

Cyber security experts recently discovered that the almost impenetrable Apple App Store had been hacked. While cyber break-ins have become routine news for many companies, Apple has long prided itself on providing technology for its phones and tablets that was incredibly secure.

This was done by controlling how developers – the people who create the apps on your device – not only write their code but also upload it to the App Store. Steve Jobs ensured that Apple would check each app before it entered the marketplace, as well as vetting the developers themselves, and the firm has enforced tight controls on what the devices can access.

This meant that Apple mobile products arguably were (and probably still are) the most secure you could buy. However, a new attack dubbed XcodeGhost has done a great job of undermining Apple’s otherwise strong security.

The attack method used was cunning and, in a technical sense, impressive. Rather than attack the devices or the App Store, the hackers compromised the Xcode framework, the underlying programming system used by developers to create the apps. This is akin to poisoning a city’s water supply at its source rather than attacking the settlement’s buildings or army directly.

App developers use a suite of software known as Xcode to create programs for Apple devices. Within this is a large library of functions that enable each created app to talk to the underlying phone or tablet. Each library function has a different role, from allowing you to share your location to making your phone sound like a light sabre when you wave it around.

The hackers created a malicious program (malware) that used the internet to seek out Mac computers with Xcode installed, gambling on the possibility that some of these devices were used to create apps for the Apple App Store. It then dropped contaminated code library features into the Xcode system. These appear to do what the app developers programmed them to do, but they also capture and send personal data from your device back to the hackers.

Malicious intent Shutterstock

Security experts are concerned that this innovative attack leaves Apple open to future attacks. It attacks anyone who has this coding environment installed on their computer system and compromises the code before it enters the secured systems offered by Apple.

This is embarrassing not only for Apple, whose checks clearly missed the compromise, but also for the many developers affected, whose own internal security and anti-malware processes were bypassed.

What does this mean for you?

If you are the owner of an iPhone or iPad, there is nothing you can do. Apple has never offered device owners the chance to protect their own technology: it has owned and controlled security itself and, until recently, has been very successful in protecting its products.

Android-powered devices have historically been relatively vulnerable, with more than 40,000 known types of malware targeting them. The equivalent number for Apple devices remains very low. However, this new and interesting attack means that attackers have established an alternative route into your device, through the framework used by app developers. They need only one compromised app from one compromised developer machine to be successful.

Security researchers have already found multiple infected apps, such as Angry Birds 2. Many of these apps are being urgently updated by their creators to remove the malicious code, and new versions are automatically being installed on your iPhone or iPad. If you are especially concerned, you can delete an affected app and reinstall it in a few days’ time once you know it has been secured.

In order to prevent further breaches, Apple must review its security policies and how it checks all code before it enters its App Store. The onus is also on all developers to improve the way they scan their own systems. Otherwise, Apple will refuse to allow them to participate in this otherwise very successful and secure system.

The Conversation

Tuesday, September 22, 2015

New iPad? Tech firms have abandoned radical innovation for mediocrity

Shutterstock

The dust has now settled on the latest product launch from Apple, which for many trumped headlines about refugees, poverty and the battles for the Republican nomination and leadership of the UK Labour Party. We have new iPads, iPhones and more. But how new are they really?

Innovation is often characterised as being either “radical” or “incremental”. When it is radical, it sets new precedents and fundamentally changes the way we do things. From self-administered insulin to solar powered houses to driverless cars, radical innovation releases potential. Incremental innovation on the other hand builds upon what is already there in small steps.

In the world of mobile phones and tablets, incremental has become the new radical, and true radical innovation has been relegated to the sidelines. Incremental innovation has become the norm because of a belief that “slow and steady wins the race”, that people don’t like the risks that come with big dramatic changes. That seems to be Apple’s long-term strategy and, as a dominant player, it is setting the culture for other players in the market.

Using staged marketing in the form of annual or biannual high-profile media launches, tech firms have groomed us as consumers to accept small change as normal. More radical innovation, such as a modular phone that can be continually upgraded, is seen as crazy, quirky or even science fiction.

No radical innovation

The new iPad Pro, a few inches bigger than the last one, is being hailed as a “big leap” when it’s really just tinkering with the old design. Despite the new features, it in no way represents a radical innovation worthy of ecstatic celebration. The whoops of delight at its launch were followed by voices of disappointment online.

It is primarily for commercial reasons that Apple has institutionalised incremental innovation and tried to convince us all it is radical. iPhones and iPads are brilliantly designed things. Incremental innovation requires expertise and excellence in design and improvement. Phones and tablets play a major part in millions of people’s lives. But continued innovation happens at a slow pace designed to suit the supplier, not the user, who is nonetheless pushed to pay significant amounts of money each year for minor changes.

When they said the new iPad was bigger they weren’t kidding. Beck Diefenbach/Reuters

Fear of failure may have also contributed to the disappearance of radical innovation. The struggles of more unusual designs, such as that of the Amazon Fire phone, may have made innovators more cautious, delaying and lengthening product development and rollout to compensate. Perhaps it isn’t surprising that virtually none of the radical concept phones of 2010 (labelled “crazy” at the time) has ever appeared on the market.

We may have also reached a point where phone design is so good that truly impressive change has become much harder to achieve. So we continue to buy similar looking products, putting them to our ears (just as we did with landlines), snapping cameras with slightly better picture clarity, and getting slightly more intelligent answers from Siri. Same game, tiny changes, price hike.

A smartphone revolution

At the same time, major new challenges are emerging for smartphone makers, from evidence that current phone designs may be fuelling unhappiness and reducing productivity to the worrying environmental impact of manufacturing them. Radical innovation is needed so that phones fully serve customer interests in a sustainable way.

But for the time being, more radical products such as the YotaPhone 2 (which offers a dual screen) or the Runcible (round, beautiful and rather different) will at best be seen as quirky and niche. The existing market leaders will only change their tortoise-speed approach to radical innovation if a major new player genuinely disrupts the market with fast, penetrative changes.

For example, Chinese company Xiaomi is creating a range of products for the home (from TVs to air purifiers) that automatically link with its smartphones in a single, integrated system. This is the kind of idea that could shock Apple into becoming more radical and adventurous.

We could eventually see mobile computing move away from hand-held, screen-based devices towards seamless interaction across different devices and platforms such as wearable technology and projected holograms.

For the foreseeable future, however, innovation in the mobile and wearable space is going to be dominated by incremental and fairly mediocre approaches to innovation. Radical thinking will be consigned to concepts for the future and the iPhone 7 will probably look a lot like the iPhone 3. But the launch will be offered as another revolution.

The Conversation

Friday, September 11, 2015

Apple's iPad Pro looks good, but who needs a phone with a 13" screen?

Monica Davey/EPA

Apple’s annual September keynote as usual brings hardware changes, software updates and the occasional surprise.

Rumours of a larger iPad Pro proved true: the significantly larger 12.9-inch iPad, with an upgraded ARM A9X processor and faster graphics and internal components, is being sold as a device on which desktop-class applications can run.

This is supported with a stylus and keyboard (sold separately, in typical Apple fashion) that essentially convert the iPad Pro into a laptop. The stylus, dubbed the Apple Pencil, has provoked comment, as Steve Jobs had expressed his distaste for styluses in the past. The Pencil supports handwriting recognition, and improvements to iOS finally allow multitasking by splitting the screen between two apps.

However, with prices starting at an eye-watering US$799, there will be many who think that this won’t light a fire under tablet sales, which have been flat. For example, Amazon has taken the opposite approach, aiming for the bottom end of the market with a US$50 tablet subsidised by purchases made through Amazon’s services.

There may be iPad sales in education, and in retail where they are often used as point-of-sale devices, but in business the iPad faces considerable competition. For example, the iPad Pro bears an uncanny similarity to Microsoft’s own convertible tablet/laptop device, the Surface Pro, in cost, size and style. But the big difference is that the Surface comes with a full operating system, Windows 10: few will take seriously Apple’s claims that the iPad Pro can run desktop-class applications for professional use while it’s running the stripped-down iOS operating system originally designed for phones, rather than the full OS X found on MacBooks and iMacs.

Microsoft’s Surface Pro tablet, keyboard and stylus combo. Microsoft

Apple’s iPad Pro - spot the similarity? Beck Diefenbach/Reuters

A surprise was the appearance of Microsoft staff on stage to demonstrate Microsoft Office apps running on the iPad – something greeted with a stunned silence in the auditorium. Microsoft Office has been updated to support the stylus, and the invitation to appear at such a high-profile Apple event shows the extent to which Microsoft has been pouring money and effort into ensuring its software suites are cross-platform, rather than tied to Microsoft Windows. Another visitor to the stage was Adobe, whose reps showed off new design tools with the stylus – which all suggests an outbreak of corporate peace between the firms.

Pushing Apple TV into the home

The Apple TV finally gets a long-awaited upgrade – a wait during which many competing devices have appeared, such as NOW TV, Roku and Google’s Chromecast. Originally classified as a “media extender”, the Apple TV was called a “hobby” by Steve Jobs when it was introduced in 2007, but with this update Apple has refreshed the device, reorienting it to support the app ecosystem that has thrived elsewhere.

The new Apple TV features a new operating system, tvOS, making use of the extensive iPhone/iPad developer tools and software already available. Boasting a much higher hardware specification, the Apple TV now runs apps and games, and provides a new interface and a touch-enabled remote that can also process voice commands through the Siri digital assistant. With this, a user can search by voice for content across multiple television networks.

It should be easy to port existing iPad/iPhone applications to the TV, bringing an unparalleled range of services compared to the competition. The surge in streaming services from Amazon and Netflix has sidelined Apple to some extent, so it will be interesting to see whether reorienting the device around apps will increase Apple’s footprint in this space. Sony and Microsoft should be worried that the massive back catalogue of iOS games can now be used in the living room through Apple TV. Prices start from US$149, available from October.

Phone and Watch

An update to the Watch’s software, dubbed watchOS 2, arrives later this month, alongside updated accessories, colours and straps. The update will give apps direct access to the hardware, allowing developers to write full native applications that are more independent of the iPhone, to which the Watch has so far played second fiddle.

The iPhone 6S and iPhone 6S Plus are unchanged externally, but Apple claims internal upgrades including a 12-megapixel camera, a faster A9 processor and a Force Touch-capable screen, which responds to varying degrees of pressure. This is still a new technology, for which software that takes advantage of it has yet to be written.

Finally, as signalled at the developer conference earlier in the year, owners of older devices will get access to new features when iOS 9 launches very soon. Though an incremental upgrade, it offers features many users have been calling for and should provide a significant increase in speed on older devices.

It’s unlikely these changes will lead to the extraordinary sales achieved with the larger iPhones last year, so there may be an opportunity for other manufacturers to play catch-up – improving their hardware and services, the things Apple has always claimed differentiate it from the competition in a crowded market.

The Conversation

Wednesday, September 9, 2015

The web has become a hall of mirrors, filled only with reflections of our data

The web should expand our horizons, but instead it's shrinking our view. uroburos

The “digital assistant” is proliferating, able to combine intelligent natural language processing, voice-operated control over a smartphone’s functions and access to web services. It can set calendar appointments, launch apps, and run requests. But if that sounds very clever – a computerised talking assistant, like HAL9000 from the film 2001: A Space Odyssey – it’s mostly just running search engine queries and processing the results.

Facebook has now joined Apple, Microsoft, Google and Amazon with the launch of its digital assistant M, part of its Messenger smartphone app. Its special sauce is that M is powered not just by algorithms but by data serfs: human Facebook employees who are there to ensure that every request it cannot parse is still fulfilled, training M by example in the process. That training works because every interaction with M is recorded – that’s the point, according to David Marcus, Facebook’s vice-president of messaging:

We start capturing all of your intent for the things you want to do. Intent often leads to buying something, or to a transaction, and that’s an opportunity for us to [make money] over time.

Facebook, through M, will capture and facilitate that “intent to buy” and take its cut directly from the subsequent purchase rather than as an ad middleman. It does this by leveraging messaging, which was turned into a separate app of its own so that Facebook could integrate PayPal-style peer-to-peer payments between users. This means Facebook has a log not only of your conversations but also of your financial dealings. In an interview with Fortune magazine at the time, Facebook product manager Steve Davies said:

People talk about money all the time in Messenger but end up going somewhere else to do the transaction. With this, people can finish the conversation the same place they started it.

In a somewhat creepy way, by reading your chats and knowing that you’re “talking about money all the time” – what you’re talking about buying – Facebook can build up a pretty compelling profile of interests and potential purchases. If M can capture our intent it will not be by tracking what sites we visit and targeting relevant ads, as per advert brokers such as Google and Doubleclick. Nor by targeting ads based on the links we share, as Twitter does. Instead it simply reads our messages.

‘Hello Dave. Would you like to go shopping?’ summer1978/MGM/SKP, CC BY-ND

Talking about money, money talks

M is built to carry out tasks such as booking flights or restaurants or making purchases from online stores, and rather than forcing the user to leave the app in order to visit a web store to complete a purchase, M will bring the store – more specifically, the transaction – to the app.

Suddenly, the 64% of smartphone purchases that happen on websites and in mobile transactions outside of Facebook are brought into Facebook. With the opportunity to make suggestions through eavesdropping on conversations, in the not too distant future our talking intelligent assistant might say:

I’m sorry Dave, I heard you talking about buying this camera. I wouldn’t do it if I were you, Dave: I found a much better deal elsewhere. And I know you’ve been talking about having that tattoo removed. I can recommend someone – she has an offer on right now, and three of your friends have recommended her service. Shall I book you in?

Buying a book from a known supplier may be a low risk purchase, but other services require more discernment. What kind of research about cosmetic surgery has M investigated? Did those three friends use that service, or were they paid to recommend it? Perhaps you’d rather know the follow-up statistics than have a friend’s recommendation.

Still, because of its current position as the dominant social network, Facebook knows more about us – by name, history, social circle, political interests – than any other single internet service. And it’s for this reason that Facebook wants to ensure M is more accurate and versatile than the competition, and why it’s using humans to help the AI interpret interactions and learn. The better digital assistants like M appear to us, the more trust we place in them. Simple tasks performed well build a willingness to use the service elsewhere – say, for recommending financial services, or that cosmetic treatment, which stand to offer Facebook a cut of a much more costly purchase.

No such thing as a free lunch

So for Facebook, that’s more users spending more of their time using its services and generating more cash. Where’s the benefit for us?

We’ve been trained to see such services as “free”, but as the saying goes, if you don’t pay for it, then it’s you that’s the product. We’ve seen repeatedly in our Meaningful Consent Project that it’s difficult to evaluate the cost to us when we don’t know what happens to our data.

People were once nervous about how much the state knew of them, with whom they associated and what they did, for fear that if their interests and actions were not aligned with those of the state they might find themselves detained, disappeared, or disenfranchised. Yet we give exactly this information to corporations without hesitation, because we find ourselves amplified in the exchange: for each book, film, record or hotel we like there are others who “like” it too.

The web holds a mirror up to us, reflecting back our precise interests and behaviour. Take search, for instance. In the physical world of libraries or bookshops we glance through materials from other topics and different ideas as we hunt down our own query. Indeed we are at our creative best when we absorb the rich variety in our peripheral vision. But online, a search engine shows us only things narrowly related to what we seek. Even the edges of a web page will be filled with targeted ads related to something known to interest us. This narrowing self-reflection has grown ubiquitous online: on social networks we see ourselves relative to our self-selected peers or idols. We create reflections.

The workings of Google, Doubleclick or Facebook reveal these to be two-way mirrors: we are observed through the mirror but see only our reflection, with no way to see the machines observing us. This “free” model is so seductive – it’s all about us – yet it leads us to become absorbed in our phones-as-mirrors rather than the harder challenge of engaging with the world and those around us.

It’s said that you shouldn’t look too closely at how a sausage is made for fear it may put you off. If we saw behind the mirror, would we be put off by the internet? At least most menus carry the choice of more than one dish; the rise of services like M suggests that, despite the apparent wonder of less effortful interactions, the internet menu we’re offered is shrinking.

The Conversation

Thursday, September 3, 2015

Facebook's digital assistant blends AI with customer service staff – but will it cope without human help?

M – no Bond jokes please. Facebook

With the arrival of its monosyllabic M, Facebook has introduced its own personal digital assistant, following those from Apple (Siri), Microsoft (Cortana), Google (Now) and Amazon (Echo). Technically, M operates partly on the user’s smartphone via the Facebook Messenger app, but it is mostly a cloud-based service. Unlike the others, however, this isn’t just an artificial intelligence but a mix of smart machine learning and human assistance.

What makes M different is that it takes recommendations and query-answering one step further: it can actually make purchases, arrange services for you and order deliveries. This is the logical conclusion of recommending something, allowing the system to spend your money for you as well. This approach might be risky, or might be brilliant. If it works, suppliers will be clamouring for Facebook’s M to spend users’ money with them, and Facebook will be able to take a percentage in return.

With Facebook’s enormous reach – the site recently claimed one billion users in a single day – even a small percentage of such a large number of users spending even relatively small sums of money would still add up to a great deal of cash for Facebook. Mind you, a few unfortunate misunderstandings of what a user wants to buy might lead to some negative publicity – and one can imagine some Facebook users attempting some very dubious transactions.

Technical and human intelligence

Under the hood, it appears Facebook is not using cutting-edge AI. While its digital assistant’s interface is stored and run from users’ phones, the processing occurs on Facebook’s servers in the cloud, where computing power and data can be distributed. It uses technology from wit.ai, which is understood to use conditional random fields, a popular statistical technique dating from the 2000s, and maximum entropy classifiers, based on information theory. These pick up on the structure of the data and use it to make predictions. They may not be cutting edge, but they are well established and understood. Not only that, but they can use prior knowledge, and one of M’s aims is to improve through training.
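
A maximum entropy classifier is essentially multinomial logistic regression over text features. The Python sketch below illustrates the technique on a handful of invented utterances – it is not wit.ai’s actual model or feature set, which are not public, just a flavour of how a message’s intent can be predicted statistically.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny, made-up set of utterances labelled with the intent they express.
    utterances = ["book me a table for two tonight",
                  "find a cheap flight to Paris",
                  "I want to buy this camera",
                  "reserve a restaurant near the office",
                  "order that book for me",
                  "get me flights to New York next week"]
    intents = ["restaurant", "flight", "purchase",
               "restaurant", "purchase", "flight"]

    # Bag-of-words features plus multinomial logistic regression
    # (a maximum entropy classifier, in other terminology).
    model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(utterances, intents)

    print(model.predict(["can you book a table for Friday?"]))  # likely ['restaurant']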

There’s a huge amount of contextual information about the user’s likes and preferences within Facebook’s enormous datasets, and this could help M’s algorithms provide answers. It could also be used to help constrain queries – things to exclude – particularly if both the purchaser and the recipient are Facebook users. But it will take leading edge AI techniques like sentic technologies, which attempt to extract mood, emotion, intention and meaning from text, in order to mine the full value of the text and image datasets generated by Facebook users.

M’s natural language processing picks out a message’s intent. But it has a lot to learn. Facebook

David Marcus, vice president of messaging products at Facebook and in charge of M, has said that without explicit consent M won’t embark on such data-mining. In fact there is a limited range of possible services and purchases that the software can perform automatically, while trickier tasks are carried out by the human element behind the scenes – customer service staff working for Facebook. Humans are needed to cover the gaps in the AI’s ability to understand natural language, to work out what users are after, and to sign off purchases to ensure they are reasonable and legal.

While the idea is that M learns the right behaviours by associating the user’s intent with the solutions provided by human staff, for this to scale to even a fraction of Facebook Messenger’s 700m users, the AI will have to be good enough to relieve the human staff of their role. And that may take a while. Of course, M is being rolled out area by area – currently only San Francisco – so perhaps the firm is just dipping a toe in the water to start with.

So while M may be the personal assistant of the future, at the moment it’s a curious mix of machine learning, automation, and human comprehension. But powered by the tutoring of actual humans and human-created data, in time it could still become more adept than the competition.

The Conversation

Friday, August 21, 2015

Windows 95 turns 20 – and new ways of interacting show up desktop's age

Windows 95 and DOS6: actual museum pieces. m01229, CC BY

The arrival of Microsoft Windows 95 on August 24 1995 brought about a desktop PC boom. With an easier and more intuitive graphical user interface than previous versions, it appealed to more than just business, and Bill Gates’ stated aim of a PC on every desk and in every home was set in motion. This was a time of 320MB hard drives, 8MB of RAM and 15-inch CRT monitors. For most home users, the internet had only just arrived.

Windows 95 introduced the start menu, powered by a button in the bottom-left corner of the desktop. This gives a central point of entry into menus from which to choose commands and applications. The simplicity of this menu enables users to easily find commonly used documents and applications. All subsequent versions of Windows have kept this menu, with the notable exception of Windows 8, a change which prompted an enormous backlash.

We take these intuitive graphical interfaces for granted today, but earlier operating systems such as DOS and CP/M allowed the user to interact using only typed text commands. This began to change in the 1960s and 1970s, with Ivan Sutherland’s work on Sketchpad and the use of lightpens to control CRT displays, Douglas Engelbart’s development of the computer mouse, and the Xerox PARC research team’s creation of the windows, icons, menus, pointer (WIMP) paradigm for graphical interfaces – the combination of mouse pointer, windows and icons that remains standard to this day. By the mid-1980s, Apple had developed graphical operating systems for its Lisa (released 1983) and Macintosh (1984) computers, and Microsoft had released Windows (1985).

DOS - these were not good old days. Krzysztof Burghardt

Imagining a desktop

All these interfaces rely on the central idea of the desktop, a comprehensible metaphor for a computer. We work with information in files and organise them in folders, remove unwanted information to the trash can, and note something of interest with a bookmark.

Metaphors are useful. They enable users to grasp concepts faster, but rely on the metaphor remaining comprehensible to the user and useful for the designer and programmer putting it into effect – without stretching it beyond belief. The advantage is that the pictures used to represent functions (icons) look similar to those in the workplace, and so the metaphor is readily understandable.

Breaking windows

But 20 years after Windows 95, the world has changed. We have smartphones and smart televisions, and we use the internet for practically everything. Touchscreens are now arguably more ubiquitous than the classic mouse-driven interface, and screen resolution is so high that individual pixels can be difficult to see. We still have Windows, but things are changing. Indeed, they need to change.

The desktop metaphor has been the metaphor of choice for so long, and this ubiquity has helped computers find a place within households as a common, familiar tool rather than as specialist, computerised equipment. But is it still appropriate? After all, few of us sit in an office today with paper-strewn desks; books are read on a tablet or phone rather than hard-copies; printing emails is discouraged; most type their own letters and write their own emails; files are electronic not physical; we search the internet for information rather than flick through reference books; and increasingly the categorisation and organisation of data has taken second place to granular search.

Mouse-driven interfaces rely on a single point of input, but we’re increasingly seeing touch-based interfaces that accept swipes, touches and shakes in various combinations. We are moving away from the dictatorship of the mouse pointer. Dual-finger scrolling and pinch-to-zoom are new emerging metaphors – natural user interfaces (NUI) rather than graphical user interfaces.

What does the next 20 years hold?

It’s hard to tell, but one thing is certain: interfaces will make use of more human senses to display information and to control the computer. Interfaces will become more transparent, more intuitive and less set around items such as boxes, arrows or icons. Human gestures will be more commonplace. And such interfaces will be incorporated into technology throughout the world, through virtual reality and augmented reality.

These interfaces will appear and feel more natural. Some suitable devices already exist, such as the ShiverPad, which applies shear forces to surfaces to give touch devices a frictional feel, or Geomagic’s Touch X (formerly the Sensable Phantom Desktop), which delivers three-dimensional forces to make virtual 3D objects feel solid.

Airborne haptics are another promising technology, creating tactile interfaces in mid-air. Through ultrasound, users can feel acoustic radiation fields that emanate from devices, without needing to touch any physical surface. Videogame manufacturers have led the way with these interfaces: the Microsoft Kinect and HoloLens allow users to control the interface with body gestures, or with their eyes through head-mounted displays.

Once a computer or device can be commanded using natural gestures, body movements or spoken commands, the Windows-style desktop metaphor of computer interaction begins to look as dated as it is old.

The Conversation

Tuesday, August 18, 2015

To service global trade, today's ships and cargo are smarter than ever

Federico Rostagno/Shutterstock

A glance around you at home or work will reveal objects brought from across the world, from the bagged salad in your fridge (Kenya), to the computer or smartphone on which you’re reading this article (Taiwan, China, the US), to the table upon which it rests (Sweden). The enormous volume of global trade that brings us products from all over the world has been made possible by a profound technological revolution occurring behind the scenes.

The world’s biggest container shipping firm, Maersk, estimates the cost of transporting an apple from a field in New Zealand to a cold store in Europe at eight US cents. Logistics experts talk of “landed costs”: the sum of all the various costs associated with freight. There is also an environmental cost, of course – in bringing vegetables from afar rather than from local farms, for example. But the landed costs of many products have fallen such that it’s usually cheaper to transport many items halfway around the world than to produce them locally.
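
As a back-of-the-envelope illustration, landed cost is just the accumulation of every charge incurred along the way. The breakdown below is entirely invented, with figures chosen only so that they sum to roughly Maersk’s eight-cent estimate.

    # Hypothetical per-apple cost components in US dollars - figures invented for illustration.
    costs = {
        "product_share":  0.020,
        "inland_haulage": 0.015,
        "ocean_freight":  0.025,
        "insurance":      0.005,
        "duty_and_port":  0.010,
        "cold_storage":   0.005,
    }

    landed_cost = sum(costs.values())
    print(f"Landed cost per apple: ${landed_cost:.3f}")  # -> $0.080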

Much of this fall in costs comes from the efficiencies ushered in by containerisation which, since it was introduced in 1956, has had a greater effect on globalisation than all the trade agreements signed in the past 50 years.

The largest container ships today, such as the CSCL Globe and MSC Oscar carry around 19,000 TEUs (20ft equivalent unit – a standard 40ft container is two TEUs). But ships are not likely to greatly grow beyond 20,000 TEUs for the foreseeable future – much as the Airbus A380, currently the world’s largest passenger aircraft carrying up to 853 passengers, is probably as big an aircraft as we will see. If a ship or aircraft is so big and expensive that nobody can operate it (in terms of costs and system constraints) then it’s of little use. So while there have been remarkable achievements in speed and scale, today it is the communication technology that links systems together that has the greatest logistical impact.

The network is the unseen hero

The internet of things (IoT) refers to small, internet-connected sensors that can detect and transmit information. Network technology company Cisco extended this idea to the “internet of everything” (IoE), in which the sensors talk not to a central hub but to each other, exchanging data and making decisions autonomously based on that data. The four components are data (how it is gathered and used), people (how they are connected), things (network-connected devices providing data for intelligent decision making) and process (delivering the right information to the right person or machine at the right time).

Increasingly it is not just control processes that are growing in intelligence but the ships and cargoes themselves. For example, by law, ships above a certain size are obliged to transmit updates on their position via Automatic Identification Systems (AIS). It is possible to see this information online, a real-time snapshot of global shipping.
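
An AIS position report is a small, structured message. The sketch below models a few of the fields such reports genuinely carry – the ship’s identifier (MMSI), position, speed and course – though this Python representation is purely illustrative and is not the wire format used at sea.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class AISPositionReport:
        mmsi: int                  # Maritime Mobile Service Identity (ship identifier)
        latitude: float            # degrees, positive = north
        longitude: float           # degrees, positive = east
        speed_over_ground: float   # knots
        course_over_ground: float  # degrees from true north
        timestamp: datetime

    # An invented report for a hypothetical container ship off Southampton.
    report = AISPositionReport(mmsi=235012345, latitude=50.89, longitude=-1.40,
                               speed_over_ground=14.2, course_over_ground=187.0,
                               timestamp=datetime.now(timezone.utc))
    print(f"Ship {report.mmsi} at {report.latitude}, {report.longitude}, "
          f"making {report.speed_over_ground} knots")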

Connected ships perform better: they can take smart routes around bad weather, for example, and be monitored remotely for safety. Smart container technology can monitor the temperature and humidity inside containers for changes that could damage the contents, or adjust the environment to prevent, say, fruit spoiling in transit because of a delay.
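
In spirit, smart-container monitoring is a simple control loop: compare each sensor reading against the tolerance band the cargo can stand and raise an alert (or adjust the refrigeration) when it is breached. A minimal sketch, with made-up thresholds for chilled fruit:

    # Made-up tolerance bands for a hypothetical chilled-fruit container.
    TEMP_RANGE_C = (0.5, 4.0)      # degrees Celsius
    HUMIDITY_RANGE = (85.0, 95.0)  # percent relative humidity

    def check_container(reading: dict) -> list:
        """Return alerts for any reading outside its tolerance band."""
        alerts = []
        if not TEMP_RANGE_C[0] <= reading["temp_c"] <= TEMP_RANGE_C[1]:
            alerts.append(f"temperature out of range: {reading['temp_c']} C")
        if not HUMIDITY_RANGE[0] <= reading["humidity"] <= HUMIDITY_RANGE[1]:
            alerts.append(f"humidity out of range: {reading['humidity']} %")
        return alerts

    print(check_container({"temp_c": 6.3, "humidity": 90.0}))
    # -> ['temperature out of range: 6.3 C']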

Details concerning individual shipments are transmitted before the ship arrives at port, allowing customs authorities time to profile and sometimes pre-approve incoming cargo. To the same end, eFreight initiatives cut down the paperwork associated with freight: the International Air Transport Association (IATA) once estimated that there can be up to 25 separate documents accompanying an air freight shipment.

Containerisation: a revolution in a box. Port by hxdyl/shutterstock.com

Better data brings more automation

A further improvement is the “single window”: an online portal to which all those involved in a shipment have access. This is regarded as the key to making freight movement easier, giving everyone across the supply chain the data they need, so that deliveries can be sped up or slowed down and cargoes arrive at the time and place they’re needed – the “gearbox approach”. Why have a warehouse at the arrival end? Just deliver the product exactly when required, straight into waiting trucks.

On arrival, container handling is done by automated cranes, which are cheaper and safer than human operators. Better cargo tracking means freight is traceable from departure through to final destination. Less product is lost and damaged thanks to more accurate handling, which means lower incidental costs. Many warehouses now use automated pick-and-pack systems based on barcodes and RFID radio-transmitting smart tags rather than human handlers. Once considered science fiction, robots in the warehouse are becoming affordable, while warehouse staff are equipped with augmented reality applications and smart glasses.

Technology improvements have also found uses in the trucking industry – and convoys of wireless-linked, semi-autonomous, driverless trucks (known as “platooning”) are a possibility in the near future.

So next time you receive the wrong delivery it’s quite likely to be your fault (for ordering the wrong thing), and the company who sold the item will have the data trail to prove it. On the other hand, whether we need and can afford the vast array of goods that global logistics systems deliver cheaply and efficiently to our door – fresh fruits and flowers all year round, for example – is another conversation worth having.

The Conversation

Tuesday, August 11, 2015

Google becomes Alphabet in effort to keep the innovative spark alive

Google: no longer just a search engine. mwichary/flickr, CC BY

In the corporate world you learn quickly that if small companies want to collaborate, it tends to happen, while efforts to collaborate with large companies may involve many meetings and many people, with no guarantee anything will come of it. Small companies innovate as they need to; big companies are often risk-averse.

Google’s announcement that it is to reorganise under a new parent company, Alphabet, is a step towards overcoming this sort of bureaucracy and maintaining the fiercely innovative and daring streak that has until now been its trademark.

Large companies have more freedom to ignore their end users, preferring secrecy for fear of having their ideas stolen, and instead focusing on large stakeholders. This means that they often create products that are too wide in scope and which fail to address specific needs.

For smaller businesses, innovations are part of the way they engage with customers. Rapid prototypes are released, and assessed to see what works and what doesn’t. These prototypes are then scaled up and made relevant to a wider range of potential customers. Despite its enormous size and wealth, this is also the approach that Google favours.

Too often large companies don’t trust their engineers to make sensible judgements on business decisions. This probably shouldn’t be the case, as the most successful technology companies are often run by people who worked their way up through technical roles. Companies such as Hewlett-Packard, Apple and Google made their names by being technically excellent, rather than through a narrow focus on business objectives.

Google’s move effectively splits one monolithic company into several smaller companies wholly owned by Alphabet, of which Google is the largest. In this way, Google (or should we say, Alphabet) hopes to keep each of its areas of focus small, fast, and innovative.

G is for Google. Let’s hope M isn’t for mistake. Alphabet

Risk averse

After all, Google is not just a search engine any more. It has expanded in many directions, from mobile phone design and operating systems, to smart home control kits, autonomous cars, geomapping, and off-the-wall projects. It is comfortable trying things out and dedicating the resources to ideas with potential.

This risk-taking is a key part of Google’s innovation infrastructure, giving independence of thought to staff and technical leaders without over-burdening them with business issues. In fact, it’s similar to a traditional academic research model, where academics with good ideas get the resources to drive those ideas forward. Done well, the university becomes a leader in the field, just as Google has become a technology giant.

Small works in software

Google wants to attract the best staff into research labs, and achieves this by creating a small-company infrastructure where engineers are not burdened by bureaucracy. However, unlike smaller businesses, Google has the deep pockets to support its staff. A rising star can be given responsibilities without the need to progress through a formal hierarchy.

After all, the structure of large companies may limit their ability to produce useful software – take for example the many major government IT contract disasters, such as the £10 billion spent on an NHS IT system that ultimately never worked.

What would a small company have done differently? It would have invested time in searching for the best solution, created and tested prototypes, and used those as a basis for the final product. The large companies involved in the NHS contract had off-the-shelf solutions, which they pushed without questioning their suitability. Too much money was spent on design and requirements analysis, and it was years before the product reached the clinical staff, by which point it was a computer programmer’s dream but a nightmare for the intended user.

Reputations built on people

Leading universities generally have individuals to thank for their success – for example, cryptography at Royal Holloway, led by Professor Fred Piper, and the University of Edinburgh’s informatics group, which thrived under the guidance of Professor Sidney Michaelson.

So big companies need to act like small ones and provide opportunities for innovation and risk-taking to thrive, where individuals who do not want to conform to strict rules and procedures can pursue their vision of the future. After all, Apple was a garage company once, and Microsoft had to buy in someone else’s operating system (known as 86-DOS, written by Tim Paterson and purchased from Seattle Computer Products) to get a foot on the ladder.

Google’s enormous impact is mostly down to the creativity of individuals, its image still one of a bunch of software developers who just love to write code – not easy for a company whose products increasingly find places in almost every web user’s life. Let’s hope that the creation of Alphabet protects the small-company ethos that has made Google great.

The Conversation

Tuesday, August 4, 2015

Forget the Silicon Valley revolution: the future of transport looks remarkably familiar

Lokan Sadari/flickr, CC BY-NC-SA

From autonomous vehicles and the rapid rise of Uber to the global diffusion of bike-sharing schemes, transport is changing. Developments in information technology, transport policy and behaviour by urban populations may well be causing a wholesale shift away from conventional cars to collective, automated and low-carbon transport.

Yet there are still many uncertainties in technology development, finance and trends in user practices, and expectations about the scale of these changes may well be inflated.

Perhaps the most significant development is “peak car” use – the stalled growth or modest decline in car ownership and use since around 1990 across the developed world. As well as economic reasons and the returning popularity of city living, this seems to be driven in part by a move away from pro-car planning. Metropolitan governments in particular are increasingly reallocating road space away from private cars and concentrating office and housing developments around public transport stations.

They are even supporting a wide range of innovations in local transport, including self-driving pod cars called up by a smartphone app. All these initiatives aim to reinvent “old” transport systems – metro, tram and cycling – as efficient, fashionable and healthy, enabling both economic growth and a better quality of life.

Public transport problems

However, public transport still faces significant challenges. Research consistently shows that satisfaction with tripmaking is lower on bus and rail than on other forms of transport. The industrial-era logic of only offering services at particular stops or stations at specific times sits uncomfortably with the changing rhythms of work, shopping, care-giving and leisure in post-industrial societies.

These service provision problems are particularly acute in suburbs (and of course rural areas) where the flexibility afforded by private cars continues to be the norm. Yet, even in the densest parts of cities, public transport only meets everyone’s needs when there are more flexible options as well. This is why greater public transport use is linked to, and to some extent triggers, increased use of cycling and, more recently, smartphone-enabled taxi services.

These forms of transport are available (almost) everywhere at all times – and are therefore more compatible with the individualised lifestyles of people accustomed to the convenience that private car use epitomises. But even bike and car-sharing schemes with fixed docking stations and parking bays suffer from some of the same limitations as public transport. The future may well be brighter for smartphone-dependent “free-floating” schemes, whereby cars can be picked up and left at any location within a designated zone that stretches across a city or parts of it.

Pod to the future. Department for Transport/flickr, CC BY-NC-ND

There are also big obstacles to a public transport revolution in the form of entrenched government patterns and vested interests. Past planning decisions, in particular, constrain current and future changes in transport systems. This is because the construction of road infrastructure, sprawling suburbs, car-dependent retail/leisure complexes and mono-functional business areas since the 1950s is largely irreversible, at least for the coming decades.

Industry fightback

The car industry remains powerful and does not sit still. In many countries, car manufacturing continues to be important to the national economy and can therefore count on considerable support from local, national and supranational (EU) governments. This is exemplified by the UK government’s Office for Low Emission Vehicles which was set up to stimulate the uptake of electric and other low-carbon vehicles.

Car manufacturers may now be experiencing competition from powerful technology companies such as Google and Apple, but this is catalysing their own development of innovations. The Google driverless car may be the most famous but the first such vehicles from conventional manufacturers are expected to hit the market by 2017-2018.

Many hurdles still need to be overcome. The technology needs substantial refining, major issues around insurance and liability need to be resolved, it is not clear how adaptations to road infrastructure will be financed, and public opinion is divided. Based on experiences with electric and fuel cell cars in recent decades, current expectations about commercialisation and consumer uptake are (vastly) over-optimistic.

Unexpected and unforeseeable events may radically reshape current development trajectories, but there are good reasons to expect that transport systems in 30 years will not be drastically different from today.

The Conversation

Monday, August 3, 2015

The autonomous killing systems of the future are already here, they're just not necessarily weapons – yet

(Potentially) killer AI tech is already here, built into many less ominous sounding everyday objects. zen_warden, CC BY-NC-ND

When the discussion of “autonomous weapons systems” inevitably prompts comparisons to Terminator-esque killer robots, it’s perhaps little surprise that a number of significant academics, technologists and entrepreneurs – including Stephen Hawking, Noam Chomsky, Elon Musk, Demis Hassabis of Google and Apple’s Steve Wozniak – signed a letter calling for a ban on such systems.

The signatories wrote of the dangers of autonomous weapons becoming a widespread tool in larger conflicts, or even in “assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group”. The letter concludes:

The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.

It’s hard to quibble with such concerns. But it’s important not to reduce this to science-fiction Terminator imagery, narcissistically assuming that AI is out there to get us. The debate has more important human, political aspects that should be subjected to criticism.

The problem is that this is not the endpoint, as they write; it is the starting point. The global artificial intelligence arms race has already begun, and its most worrying dimension is that it doesn’t always look like one. The difference between offensive and defensive systems is blurred just as it was during the Cold War, where the doctrine of the pre-emptive strike – the idea that attack is the best defence – essentially merged the two. Autonomous systems can be reprogrammed to be one or the other with relative ease.

Autonomous systems in the real world

The Planetary Skin Institute and Hewlett-Packard’s Central Nervous System for the Earth (CeNSE) project are two approaches to creating a network of intelligent remote sensing systems that would provide early warning for such events as earthquakes or tidal waves – and automatically act on that information.

Launched by NASA and Cisco Systems, the Planetary Skin Institute strives to build a platform for planetary eco-surveillance, capable of providing data for scientists but also for monitoring extreme weather, carbon stocks and actions that might break treaties, and for identifying all sorts of potential environmental risks. It’s a good idea – yet the hardware, software, design and principles behind these autonomous sensor systems and autonomous weapons are essentially the same. Technology is indifferent to how it is used: the internet, GPS satellites and many other systems in wide use today were military in origin.

As an independent non-profit, the Planetary Skin Institute’s goal is to improve lives through its technology, claiming to provide a “platform to serve as a global public good” and to work with others to develop other innovations that could help in the process. What it doesn’t mention is the potential for the information it gathers to be immediately monetised, with real-time information from sensors automatically updating worldwide financial markets and triggering automatic buying and selling of shares.

The Planetary Skin Institute offers remote, automated sensing systems providing real-time tracking data worldwide – its slogan is “sense, predict, act” – the same principle, in fact, on which an AI-based autonomous weapons system would work. The letter describes AI as a “third revolution in warfare, after gunpowder and nuclear arms”, but the capacity to build such AI weapons has been around since at least 2002, when drones transitioned from remote-controlled aircraft to smart weapons able to select and fire upon their own targets.
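Stripped to its bones – a hedged sketch in Python with invented thresholds, tied to no real platform – a “sense, predict, act” loop is the same whatever it is pointed at; only the sensors and the act step determine whether it issues a flood warning or fires a weapon:

    import random
    import time

    def sense():
        # Stand-in for any remote sensor feed (seismic, weather, radar...)
        return {"reading": random.gauss(0.0, 1.0)}

    def predict(observation):
        # Stand-in for a model: flag anything beyond an arbitrary threshold
        return observation["reading"] > 2.0

    def act(alert):
        # The only part that decides what the system is actually *for*
        if alert:
            print("Threshold crossed - triggering automated response")

    for _ in range(10):        # a real system would loop continuously
        act(predict(sense()))
        time.sleep(0.1)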

The future is now

Instead of speculating about the future, we should deal with the legacy of autonomous systems we have inherited from World War II and the Cold War-era complex of university, corporate and military research and development. DARPA, the US Defense Advanced Research Projects Agency, is a legacy of the Cold War: founded in 1958, it still pursues a very active high-risk, high-gain model for speculative research.

Research and development innovation spreads to the private sector through funding schemes and competitions – essentially the continuation of Cold War programmes through private-sector development. The “security industry” is already tightly tied, structurally, to government policies, military planning and economic development. To consider banning AI weaponry is to raise the wider questions around political and economic systems that focus on military technologies because they are economically lucrative.

Relating the nuclear bomb to its historical context, the author EL Doctorow said: “First, the bomb was our weapon. Then it became our foreign policy. Then it became our economy.” We must critically evaluate the same trio as they affect autonomous weapons development, so that we discuss this inevitability not by obsessing on the technology but on the politics that allows and encourages it.

The Conversation

Wednesday, July 29, 2015

Windows 10: Microsoft's universal system for an increasingly mobile world

Windows 10, a bit of the new, a bit of the old. Microsoft

With Windows 10, Microsoft is trying to turn the tide against the proliferation of operating systems across desktops, servers, tablets and smartphones by creating a single operating system that will run on them all.

Currently the world’s billions of Windows users are spread across its older versions, with Windows XP, released in 2001, still boasting the same installed base of users (around 12% market share) as the two-year-old Windows 8.1 (at 13%). The bulk of Windows users (61%), are still using Windows 7, released in 2009. And that’s not to mention the various incompatible Windows versions designed for tablets or smartphones.

Trying to consolidate different versions isn’t a new idea, although it’s much easier said than done. Recent versions of Apple’s OS X operating system for desktops and laptops have drawn inspiration from iOS, designed for the iPad and iPhone, while Canonical, the company behind the Ubuntu Linux distribution, has also produced a version for phones.

However, with Windows 10, Microsoft is taking the idea to its logical conclusion, producing not just a single OS for all devices, but a framework for apps that run on all of them, making the move between devices seamless.

One app to rule them all

If we believe the Microsoft marketing machine, this will be the start of the era of Windows universal apps. There are many clever things in Windows 10, such as the integration of the digital assistant Cortana, but universal apps are what really excites me. This will allow developers to write code once and deploy it to all the different devices Windows 10 supports. It’s not quite as easy as Microsoft would have us believe, though: some code would still need to be written specifically for each type of device; only some of it would be shared.
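In spirit – and this is only a hedged sketch of the pattern in Python, not Microsoft’s actual framework or APIs – a universal app amounts to a shared core plus thin, device-specific shells:

    class SharedCore:
        """Business logic written once and reused on every device."""
        def greeting(self, user):
            return f"Hello, {user}"

    class DesktopShell:
        def render(self, text):
            print(f"[desktop window] {text}")    # device-specific presentation

    class PhoneShell:
        def render(self, text):
            print(f"[phone screen] {text}")      # device-specific presentation

    core = SharedCore()
    for shell in (DesktopShell(), PhoneShell()):
        shell.render(core.greeting("Alice"))     # the shared part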

This is exciting because Microsoft is hoping to entice developers and bridge the “app gap” on Windows devices. As of May 2015, the Google Play Store has 1.5m apps and the Apple App Store 1.4m, while the Windows Phone Store has a mere 340,000. Applications, and therefore available developers to create them, are key. Getting developers on board is the best way for Microsoft to make headway in the race to get their devices into our pockets.

Mixing the new and the old

I’ve spent some time with the technical and insider previews of Windows 10 for the desktop. The latest builds are speedy and show a lot of promise, so much so that every one of my Windows tablets and desktops is now signed up and awaiting the free upgrade. As predicted, it blends the traditional desktop experience of Windows 7 with the apps-based approach of Windows 8. It feels like a new desktop experience but is also familiar, an evolution rather than a revolution.

We’ve come a long way. Microsoft

Some of the key improvements are less headline-grabbing than a talking digital assistant like Cortana or the return of the start menu. A key market as personal PC sales decline is the enterprise, and under-the-hood changes to security have been a heavy focus for Microsoft, to ensure businesses are open to upgrading from Windows 7. But beyond security and the front-end “bells and whistles” there aren’t too many obvious internal changes.

This familiarity should entice those Windows 7 users still holding out, those who found the new Metro UI interface of Windows 8.1 too much of a culture shock. Gone are the two interfaces, now merged into a single mix of traditional start menu with start screen stuck on the side. Gone too is the charms bar (popup menu) that was so heavily reliant on touch.

In another new move, Windows 10 is being given away as a free upgrade. With successive Android, iOS, Linux and OS X updates now offered free, I think it was inevitable that Microsoft would eventually go the same route.

Although Windows 10 for desktop is available now, we’ll have to wait until September for the mobile version and to experiment with universal apps. Of course it’ll be a bit longer still to see what impact a unified OS platform has, and whether Windows 10 is the fresh start Microsoft is banking on.

The Conversation

Thursday, July 2, 2015

Virtual reality tech may make 'going shopping' in real life a thing of the past

'Too much Call of Duty, not enough shopping'. pestoverde, CC BY-SA

High street shops are well-established online these days and provide new opportunities for interaction between shop and shopper. Consumers have become accustomed to shopping using a range of devices, and the immense popularity of smartphones and mobile devices has led to the rise of mobile or m-retailing, with new communication and distribution channels created with these in mind. Perhaps this mix of the real and online worlds is a helpful precursor for what may be the “next big thing”: virtual reality shopping.

Virtual reality (VR) experiences are typically provided through wearable headgear or goggles that block out the real world and immerse the user in a virtual one. This is distinguished from augmented reality (AR), where layers of digital content are overlaid on the real world, providing access to both – for example, the digital information displayed on the visor of Google Glass.

Apps can provide ‘live’ augmented reality to try on superimposed accessories and clothes. Eawentling, CC BY-NC-SA

While AR can work with mobile devices and is already included in some apps, for VR to succeed the headgear needs to be comfortable, stylish and powered by sufficiently capable software so that the immersive visual effects are credible – and useful. It’s possible to add deeper engagement with the virtual world by incorporating other senses, for example tactile hand controls for handling and manipulating objects.

In-store tech

Magic mirrors, where how you’d like to look is projected onto your actual appearance. Intel, CC BY-SA

However, retailers’ use of technology in-store has been patchy. The availability of in-store Wi-Fi has increased, and some stores offer touchscreens and tablets for customers to browse and search for items and look up information. More common are video screens displaying fashion collections, often connected to apps offering inspirational looks. Meanwhile, more cutting-edge tech – such as magic mirrors that overlay the image of the shopper with the clothes they’ve selected, allowing them to switch style and colour options – is less widespread, and sometimes less than reliable.

In any case, shoppers tend to appreciate functionality over more playful or whimsical ways of interacting with the retailer. New additions are welcome when they are informative and save the shopper time, helping them locate products in the store or at another branch. Not surprisingly, consumers would rather not pay for these services, and prefer to be engaged rather than marketed to. Young fashion shoppers simply use their phones to share photos of potential purchases through Snapchat and Instagram. Image is everything, with the retailer providing the backdrop.

Present trends point to the expansion of interactive shop window displays and in-store communication that uses a combination of GPS and Bluetooth transmitters, such as Apple’s iBeacon, to interact with shoppers’ smartphones. These will take personalisation and micro-marketing to a new level, with real-time offers and information dispatched to a shopper’s phone as they pass near product displays.
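A hedged sketch of the logic involved (the beacon IDs, offers and signal threshold are all invented for illustration): the phone reports which beacon it can hear and how strongly, and the retailer’s app looks up an offer for that spot:

    OFFERS = {
        "beacon-menswear-01": "20% off shirts today",
        "beacon-shoes-03": "Buy one pair, get a second half price",
    }

    def offer_for(beacon_id, rssi, threshold=-70):
        # Only trigger when the shopper is genuinely close; a stronger
        # Bluetooth signal means a less negative RSSI value.
        if rssi >= threshold:
            return OFFERS.get(beacon_id)
        return None

    print(offer_for("beacon-shoes-03", rssi=-55))   # nearby: offer shown
    print(offer_for("beacon-shoes-03", rssi=-90))   # too far away: None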

To support their brand, retailers will increasingly look at their customer relationships, so stories, images, videos and news – fashion and cosmetic blogs have been particularly successful – are where many new opportunities will arise. However, while creative and technologically novel, these are all at best examples of augmented rather than virtual reality.

Making a (virtual) impression

Where does this leave the use of virtual reality? We can expect to see trials as retailers become more comfortable offering content through it. New VR headsets such as the Oculus Rift and Sony’s offering will provide ever more realistic immersive environments; Sony, drawing on its PlayStation expertise, aims to add movement to the user experience. Some brands have already piloted virtual stores, where VR-equipped shoppers could one day have the same experience of browsing through racks and shelves waiting for something to catch their eye – without needing to leave their home.

VR will provide an opportunity to revisit and experience retailers’ and designers’ past fashion shows, events and exhibitions. For example, Topshop recently transmitted London Fashion Week as it happened through Oculus Rift headsets to customers in its Oxford Street store. It may also provide a means for retailers to extend the lifespan of certain promotions to individual customers.

Immersion is particularly promising in the creation or re-creation of 3D environments, which could be especially helpful for those buying furniture, furnishings, paint and decoration for their homes to envisage how it would all look. The recently developed Virtuix virtual reality platform provides a motion controller that translates the user’s physical movements into equivalents in the virtual environment – a means to, literally, walk around a virtual world.

However, any major step forward will need to make the retailer’s investment worthwhile, and as neither the technology nor shoppers' complete acceptance of VR is where it needs to be today, there’s some way to go before VR becomes the next big thing in shopping.

The Conversation

Friday, June 26, 2015

Miniaturisation will lead to 'smart spaces' and blur the line between on and offline

A computer-on-a-stick is the start, but they'll get smaller and smarter yet. Lenovo

Lenovo, the Chinese firm that bought up IBM’s cast-off PC business, has announced a miniaturised computer not much larger than a smartphone, which can be connected to any screen via an HDMI connection.

Advances in electronic components manufacturing processes and integration have resulted in large-scale miniaturisation of computer systems. This has enabled the latest system-in-package and system-on-a-chip approaches, where the processor and other necessary functionality usually provided by many microchips can be incorporated into a single silicon chip package.

Lenovo’s Ideacentre Stick 300 runs Windows 8 or Linux, is powered via a micro-USB connector and comes fitted with an Intel Bay Trail CPU, 2GB of RAM, 32GB of flash storage, an SD card reader, Wi-Fi – even speakers.

Lenovo isn’t the first to shrink the PC down to pocket size. Intel’s Compute Stick is another dongle-sized computer with similar specs released this year.

Intel’s Compute Stick is another effort to shrink the PC to pocket size. Intel

The Raspberry Pi, now on its second major release, was probably the first to provide the functionality of a desktop or laptop computer on a credit-card-sized electronic board. Over five million Raspberry Pi computers have been sold since its launch in 2012.

Google has used its stripped-down Chrome OS based on its Chrome browser to reduce a Chromebook (Chrome OS-powered laptop) down to the Chromebit. While the Chromebit is no larger than a USB memory stick, it’s markedly less powerful than Intel’s offering, as it is powered by the Rockchip RK3288, an ARM processor, which makes it comparable in power to a smartphone.

Google’s Chromebit, in more colours than black. Katie Roberts-Hoffman/Google

There are other stick-sized computers running low-power ARM processors capable of running Android, such as Cotton Candy or Google’s Chromecast. These plug into a digital television to play video from another device or from internet streaming services such as Netflix – but not much else.

The appeal of small

Computers this small are attractive for many organisations, such as schools and universities that need to equip functional computer laboratories at minimum cost while taking up as little space as possible. Such low-power devices also consume less electricity, which keeps running costs down.

A typical desktop computer uses about 65-250 watts (plus 20-40 watts for an LCD monitor) – considerably more than a typical PC-on-a-stick at about 10 watts. There are also obvious business uses, such as digital signage and advertising when connected to screens or projectors.
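To put those power figures in perspective, here is a rough back-of-the-envelope comparison. The eight hours of use a day and the illustrative tariff of 15p per kWh are my own assumptions, and the wattages are mid-range picks from the figures above:

    HOURS_PER_YEAR = 8 * 365        # assumed eight hours' use a day
    PRICE_PER_KWH = 0.15            # illustrative tariff, in pounds

    def annual_use(watts):
        kwh = watts * HOURS_PER_YEAR / 1000
        return kwh, kwh * PRICE_PER_KWH

    for name, watts in [("desktop + monitor", 150 + 30), ("PC-on-a-stick", 10)]:
        kwh, cost = annual_use(watts)
        print(f"{name}: {kwh:.0f} kWh a year, roughly GBP {cost:.0f}")
    # desktop + monitor: about 526 kWh a year; PC-on-a-stick: about 29 kWh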

This new round of computer miniaturisation marks a third wave of computerisation. First there were room-sized computers shared between many users – the mainframe era. These time-sharing systems gradually disappeared as computers were miniaturised, replaced by the one-computer-per-user model of the personal computer, or PC, era. Today one person may have many computers – whether recognisable as desktop and laptop PCs, smartphones or compute sticks – accessible anywhere and everywhere. Known as ubiquitous or pervasive computing, this is the third wave.

A smart, mobile future

As all computing devices grow smaller, the aim is for them to become more connected and more integrated into our environment, with the computing technology fading into our surroundings until only the user interface remains perceptible. This emerging discipline brings computing to our living environments, makes those environments sensitive to us and has them adapt to the user’s needs. By enriching an environment with appropriate interconnected computing devices, it becomes able to sense changes and support decisions that benefit its users.

There is a growing interest in these smart spaces using miniaturised computing technologies to support our daily lives more effectively. For example, smart offices, classrooms, and homes that allow computers to monitor and control what is happening in the environment.

Apple’s HomeKit and Google’s Nest are a start in this direction, providing the hardware and software to allow home automation. A smart home that monitors temperature and movement could allow the elderly to remain self-sufficient and independent in their own home, for example, and voice-activated devices could help with everyday tasks such as ordering the shopping. A smart office could prompt staff with meeting reminders, turn the lights on and off, or control heating and cooling efficiently. A smart hospital ward could monitor patients and warn doctors and nurses of potential problems or human error.
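Under the hood, much of this is rule-based monitoring. A minimal, hypothetical sketch – the sensors, thresholds and actions are invented, not taken from HomeKit or Nest – looks like this:

    RULES = [
        ("temperature", lambda v: v < 16, "turn the heating on"),
        ("movement",    lambda v: v == 0, "check on the occupant"),
    ]

    def evaluate(readings):
        # Compare the latest sensor readings against each rule and
        # collect the actions the smart space should take.
        return [action for sensor, condition, action in RULES
                if sensor in readings and condition(readings[sensor])]

    print(evaluate({"temperature": 14.5, "movement": 0}))
    # ['turn the heating on', 'check on the occupant']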

The European Commission’s Smart Anything Everywhere vision drives research and development in this area. From the Internet of Things, smart cities and smart spaces down to nano-electronics, the applications and benefits of ever greater miniaturisation of computers are endless.

The Conversation

Tiny cell superheroes are suiting up to give bone cancer the boot!

Imagine your body is a sprawling, high-tech kingdom, and usually, your immune system is the elite police force keeping everything...