
Thursday, April 16, 2015

Google and Android in the firing line as EU pulls trigger on competition inquiry

Offers of candy won't prevent the European Commission's scrutiny now. Google by Asif Islam/Shutterstock.com

There are some specific words that are not particularly popular with the European Commission: “hi-tech”, “anti-competitive” and “bundling”, to name a few. Throw “US firms” into the mix, and the result is as many expected: Google has been accused of anti-competitive practices in Europe.


In the culmination of a three-year investigation, the commission will now not only examine the prominence of Google’s own services in its search results, but also launch an inquiry into Android, Google’s mobile phone operating system.


The European Commission’s competition chiefs have sent a Statement of Objections to Google, requiring the search giant to respond to allegations of anti-competitive behaviour in online shopping, where “Shop with Google” links – paid for by advertisers – are promoted over other search results.


Concerns about anti-competitive behaviour will similarly form the heart of the commission’s investigation into Android, which is expected to focus on Google’s agreements with tablet and smartphone manufacturers which might fall under Article 101 of the Treaty on the Functioning of the European Union (TFEU).


These sorts of contractual arrangements include exclusivity agreements, such as where manufacturers are required to pre-install Google’s applications and services exclusively in their tablets and phones – for example, apps such as Google Maps, Gmail, Play, Music, Search and the other elements of the Google-branded ecosystem. They also include agreements whereby manufacturers are restricted from developing and marketing rival products to those Google offers.


The commission will also investigate Google’s practice of bundling its applications and services. Tying and bundling occurs when the supplier requires that two or more products are purchased together, even though they might not have been requested. This practice can be equated to abuse of dominance, especially when the supplier is a market giant the size of Google – and particularly in Europe where its dominance in search is greater than in the US and other markets.


This anti-competitive behaviour is likely to trigger Article 102 TFEU, which prohibits the abuse of a dominant position due to its likelihood to prevent or restrict competition. Similar issues have dogged Microsoft, which was dragged through the European courts for anti-competitive practices involving, among other things, software bundling and designing its products in such a way that it was difficult for third parties to create compatible products.


Bundling Google’s many products is one bone of contention. logos by Yeamake/Shutterstock.com


Google comes out fighting


In anticipation of the investigation, Google issued a memo presenting its basic argument against the commission’s allegations and aiming to reinforce its brand as a promoter of innovation and an investor in new ideas.


Google points to the open-source nature of the Android system, the pricing of its products, as well as the existence of a vibrant competing market for apps and services – worth US$7 billion in revenue for developers and content publishers last year alone. The point Google is trying to make is that in a market where innovation thrives and consumers have wide choice characterised by low prices, there cannot be a negative or anti-competitive effect on trade.


Practically speaking, this investigation is likely to lead to a highly protracted court case – the EU case against Microsoft took 16 years. If Google is ultimately found in breach of EU competition law, the firm could face fines of up to US$6 billion. But the bigger problem for a company the size of Google is the legal costs such a protracted case will incur. Distracted by arguing its case against the European Commission, Google risks falling behind in its highly competitive and fast-moving industry.


Proceed with caution


A lesson from the Microsoft saga is the importance of timing – Microsoft was ultimately forced to unbundle software such as its media player from Windows many, many years after the case was brought – at a time when it no longer mattered. The pace of technological progress far outstrips the European Commission’s ability to keep up, and the grounds for a lengthy investigation launched in 2015 may no longer be relevant a few years from now. Markets can change overnight, something of which the European Commission is well aware.


Ultimately, the technology industry and associated markets have unique characteristics in respect of competition law – the pace of innovation means no one can be sure today what tomorrow’s big products will be. Consequently a dominant firm today may be last in line tomorrow. Competition specialists have long identified this fact and called for caution when intervening, as competition in the field of innovation takes place not in today’s markets, but for the markets of tomorrow.


The Conversation

Thursday, April 30, 2015

After years of talk, a regulator is willing to take on Google

In Monopol-e-Commerce, who plays the hat, and who gets the boot? danielbroche, CC BY

The European Commission’s decision to charge Google with abuse of its dominant market position in the search business in order to favour its own services has been criticised as too narrow in focus, too superficial for not dealing with the bigger problem of digital competition, ill-conceived for messing with the market, or not focused on the real problem of who owns our personal data.

While these are valid criticisms in their own way, they miss the most important point – that legal action has been taken at all. Whatever the result, this is a seismic and seminal move.

The US Federal Trade Commission (FTC) flirted with legal action in 2012 but withdrew, despite the conclusions of a leaked internal investigation that found that Google had “unlawfully maintained its monopoly over general search and search advertising”.

The European Commission worked closely with the FTC on its investigation and, like the FTC, decided against launching action by 2013. Joaquin Almunia, head of the European Competition Commission between 2010 and 2014, tried and failed to reach acceptable negotiated settlements with Google on three occasions. But his successor, Margrethe Vestager, has chosen action over discussion.

When the US Department of Justice launched its antitrust case against Microsoft in 1998 it dragged on for years, cost huge amounts of money and effort on both sides, and arguably opened up the space for Google to expand and eat much of Microsoft’s lunch. As journalist Charles Arthur writes in his book Digital Wars, the action had a devastating impact on Microsoft’s self-esteem and “reached into the company’s soul”.

The case against Microsoft also shows why the FTC and the commission were reluctant to launch a case against Google. It was legally and technologically complex, with courts struggling to apply 19th century antitrust law to the digital 21st century. Many people ended up dissatisfied with the result.

Hurdles could trip up either side

The case against Google has the potential to be even more complex and legally challenging. To demonstrate Google has abused its dominance the commission may need to call upon economists, engineers, investigative journalists and perhaps even sociologists.

It will need to define the markets in which Google acts. General search may be a relatively established market, but what about vertical search, or social search? It will need to translate competition law to a digital environment, to understand how algorithms work, and the extent to which Google’s algorithms favour the company, and to show evidence of abuse. It will also need to establish whether Google’s actions have damaged “consumer welfare”.

The European Commission will need to do all this while being intensively lobbied by some of the world’s largest and most powerful corporations, for example through the Microsoft-sponsored Initiative for a Competitive Online Marketplace (ICOMP).

It’s not a great surprise, therefore, that the commission is charging Google on narrow grounds, in this case on favouring its own comparison shopping product. Shopping ought to be relatively low-hanging fruit: a reasonably well-defined market that Google has tried (unsuccessfully) to enter on more than one occasion with previous products Froogle, Google Product Search, and Google Shopping. There are a number of vocal, disgruntled competitors such as Yelp, Expedia and TripAdvisor. And there is evidence upon which to build a case, compiled by the commission and the FTC since 2010.

The commission hopes that by narrowly focusing its action in the first instance it can create a precedent from which to build. It has already signalled where it may go next, having announced a formal investigation into Android, Google’s mobile operating system, on the same day. Concerns over Google’s web content scraping and its exclusivity agreements with advertising partners have also been highlighted as potential areas of inquiry.

Legal ramifications

Whichever way the result falls, the repercussions will be pivotal. If the commission wins, it will create a precedent with which it may choose to take on the dominance of other digital giants such as Amazon and Facebook. It may also trigger action by other governments, as well as private lawsuits. For Google it could lead to a crisis of confidence and loss of market lead similar to that experienced by Microsoft.

The consequences could be even more significant if the commission loses. Some will see it as evidence of the unchallengeable power of the global tech titans. Some will see it as confirmation that the legal action was merely European anger at US tech success. Few other democratic governments will be likely to take up cudgels and follow the commission’s lead.

However, the most likely result is that Google will settle. Though, as has been pointed out in reference to previous attempts to negotiate with the firm, a settlement could create a precedent too, which could make it difficult in future to pursue Google for anti-competitive behaviour in one field having settled for the same in another.

In his landmark book The Master Switch, Tim Wu outlined the stages of each information cycle. First a period of openness characterised by innovation, entrepreneurship and relative confusion. Then consolidation, in which a small number of organisations grow dominant. And finally monopolisation of markets – and often subsequent government intervention. For the web, the commission’s antitrust action against Google may well signify the start of the final stage of the cycle.

The Conversation

Monday, August 17, 2015

Four problems the revamped Google should tackle now it's free to innovate

Reuters/Steve Marcus

Google is seen as a world leader in innovation, an important backer of tech start-ups and a pioneer in all our futures. The corporation, which is financially the size of a mid-range country, just reorganised its structure so that it can continue to invest in experimental technologies – such as drones, driverless cars and unusual medical devices – without worrying shareholders.

But many of Google’s current publicly reported innovations seem to be aimed at encouraging us to spend even more time connected to the internet. They are “technology-push” innovations, products that require the creation of a new market because there isn’t an obvious existing demand. Google Glass, the wearable optical computer that has now been discontinued, is a good example. It didn’t appear to be rooted enough in a genuinely understood need.

On the other side there are “need-pull” innovations that respond to existing needs and are the result of humble enquiry. Developments by Google in security devices and modular smartphones all appear, on the surface, to meet needs. But are they the genuine result of humble enquiry?

The problem with Google’s moonshots is that they are fired at the Moon. And there’s no one on the Moon (not yet anyway). Many real needs are social, cultural and environmental, not rooted only in a hunger for the next wearable gizmo. Here are some real-need challenges that Google could put its mighty innovation machine to work on, improving the world in the process.

Digital dealmaker Shutterstock

1. Making money more secure

In a world of identity theft and online fraud, there is a huge need for more secure ways to transfer money and carry out transactions. Various ways to simply move money around, for example between smartphones, are emerging but other innovations could vastly improve security. “Smart contract” programs could ensure both parties stick to their side of a deal. For example, if you buy something online then a smart contract could take the money from your bank account only when it receives notification from the delivery company that the product has arrived.
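To make the idea concrete, here is a minimal sketch in Python of the escrow logic such a contract could enforce. The classes are invented purely for illustration – a real smart contract would run on a blockchain platform, not in ordinary application code.

    # Hypothetical escrow logic: money leaves the buyer at purchase time
    # but reaches the seller only on delivery confirmation.
    class Account:
        def __init__(self, balance):
            self.balance = balance

    class EscrowContract:
        def __init__(self, buyer, seller, amount):
            self.buyer, self.seller, self.amount = buyer, seller, amount
            self.funded = False

        def fund(self):
            # The buyer locks the payment into the contract at purchase.
            assert self.buyer.balance >= self.amount, "insufficient funds"
            self.buyer.balance -= self.amount
            self.funded = True

        def confirm_delivery(self):
            # Triggered by notification from the delivery company: only
            # now is the money released to the seller.
            if self.funded:
                self.seller.balance += self.amount
                self.funded = False

    buyer, seller = Account(100), Account(0)
    deal = EscrowContract(buyer, seller, 60)
    deal.fund()               # the buyer's money is locked away...
    deal.confirm_delivery()   # ...and released only once goods arrive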

Virtual or cryptocurrencies such as Bitcoin are starting to incorporate such technology but these systems still carry suspicion due to their use by black markets. Google has so far just hovered around the edges of Bitcoin but it has the opportunity to lead development and help make the technology mainstream.

To do so, however, it may also have to fundamentally rethink its approach to privacy, which is an inherent part of Bitcoin but largely absent from the way Google currently operates thanks to its widespread data-gathering operation.

Online jungle. Shutterstock

2. Creating a safer online world

Google’s Project Vault will give us a digital safe in which to securely store our smartphone’s personal data and messages. Another useful gadget no doubt. But instead of developing security devices and making gadgets less stealable, I’d like to see Google support us in becoming more secure in ourselves.

Existing innovations came about as a reaction to the insecurities of a hacked world. But there are opportunities not only for creating new digital safes and padlocks, alarms and security guards but also to begin an exploration of how to create preventive and naturally safe virtual and physical environments. These environments would be less about protection and defence and more about assurance and trust.

The new windows Shutterstock

3. Making technology less intrusive

Smartphones are constantly diverting our attention from the real world. Integrating technology more seamlessly into our lives could free us from their grip. Wearable technology and smart clothing could be one way of doing this, but better would be technologies that rely on and develop our tactile relationships with the world and each other.

This may well involve finally dispensing with the “screen” and the gadget as the required focus of our attention. A big question is how Google can create technology that doesn’t require us to “look”, instead of having us squint at screens of different sizes, flashing us into trance states and harming our eyesight.

Some experiments in less noticeable technology may involve an initial intrusion, for example, digital implants for communication, enhancing our senses or even curing physical conditions. But it is not guaranteed people will want to become cyborgs. A big opportunity is to create technologies that arise and pass away as needed, that are temporary, emergent and that enter our lives when we truly need them and leave when we don’t.

Flying turbines Makani/Google

4. Changing the way we produce energy

Energy is one of the biggest challenges for the whole planet. What if Google turned its weighty innovation might towards generating truly clean energy? Others in Silicon Valley have already started making inroads into the energy sector – see this gadget that allows consumers to access solar energy through smart tech, without buying expensive panels. Electric vehicle and battery technology, such as Tesla is developing, also continues to grow and innovate.

But country-sized corporations such as Google could do even more (perhaps they are behind closed doors). There are some crazy-sounding, alternative forms of energy emerging that might just work. Solar roads, sewage waste and even high altitude wind energy might benefit from some Google kickstart resource (the latter just has). Ok, Google! While you are up high in the sky, installing wifi balloons, why not harness some free energy for us all?

The Conversation

Tuesday, August 11, 2015

Google becomes Alphabet in effort to keep the innovative spark alive

Google: no longer just a search engine. mwichary/flickr, CC BY

In the corporate world you learn quickly that if small companies want to collaborate, it tends to happen, while efforts to collaborate with large companies may involve many meetings and many people with no guarantee anything will come of it. Small companies innovate as they need to; big companies are often risk averse.

Google’s announcement that it is to reorganise under a new parent company, Alphabet, is a step towards overcoming this sort of bureaucracy and maintaining the fiercely innovative and daring streak that has until now been its trademark.

Large companies have more freedom to ignore their end users, preferring secrecy for fear of having their ideas stolen, and instead focus on large stakeholders. This means that they often create products that are too wide in scope and which fail to address specific needs.

For smaller businesses, innovations are part of the way they engage with customers. Rapid prototypes are released, and assessed to see what works and what doesn’t. These prototypes are then scaled up and made relevant to a wider range of potential customers. Despite its enormous size and wealth, this is also the approach that Google favours.

Too often large companies don’t trust their engineers to make sensible judgements on business decisions. This probably shouldn’t be the case, as often the most successful technology companies are run by those who worked up through a technical role. Companies such as Hewlett Packard, Apple and Google made their names through being technically excellent, rather than a narrow focus on business objectives.

Google’s move effectively splits one monolithic company into several smaller companies wholly owned by Alphabet, of which Google is the largest. In this way, Google (or should we say, Alphabet) hopes to keep each of its areas of focus small, fast, and innovative.

G is for Google. Let’s hope M isn’t for mistake. Alphabet

Risk averse

After all, Google is not just a search engine any more. It has expanded in many directions, from mobile phone design and operating systems, to smart home control kits, autonomous cars, geomapping, and off-the-wall projects. It is comfortable trying things out and dedicating the resources to ideas with potential.

This risk-taking is a key part of Google’s innovation infrastructure, giving independence of thought to staff and technical leaders without over-burdening them with business issues. In fact, it’s similar to a traditional academic research model, where academics with good ideas get the resources that allow them to drive them forward. Done well, the university becomes a leader in the field, just as Google has become a technology giant.

Small works in software

Google wants to attract the best staff into research labs, and achieves this by creating a small-company infrastructure where engineers are not burdened by bureaucracy. However, unlike smaller businesses, Google has the deep pockets to support its staff. A rising star can be given responsibilities without the need to progress through a formal hierarchy.

After all, the structure of large companies may limit their ability to produce useful software – take for example the many major government IT contract disasters, such as the £10 billion spent on an NHS IT system that ultimately never worked.

What would a small company have done differently? It would have invested time in searching for the best solution, created and tested prototypes, and used those as a basis for the final product. The large companies involved in the NHS contract had off-the-shelf solutions, which they pushed without questioning their suitability. Too much money was spent on design and requirements analysis, and it was years before the product reached the clinical staff, by which point it was a computer programmer’s dream but a nightmare for the intended user.

Reputations built on people

Leading universities generally have individuals to thank for their success – for example, cryptography at Royal Holloway, led by Professor Fred Piper, and the University of Edinburgh’s Informatics Group that thrived under the guidance of Professor Sidney Michaelson.

So big companies need to act like small ones and provide opportunities for innovation and risk-taking to thrive, where individuals who do not want to conform to strict rules and procedures can take on their vision of the future. After all, Apple was a garage company once, and Microsoft had to borrow someone else’s operating system (known as 86-DOS and purchased from Tim Paterson of Seattle Computer Products) to get a foot on the ladder.

Google’s enormous impact is mostly down to the creativity of individuals, its image still one of a bunch of software developers who just love to write code – not easy for a company whose products increasingly find places in almost every web user’s life. Let’s hope that the creation of Alphabet protects the small-company ethos that has made Google great.

The Conversation

Saturday, April 25, 2015

Is your website mobile-friendly? Google update will hit you badly if not

Just how common this sight is shows how big mobilegeddon's impact will be. hipsters by View Apart/shutterstock.com

There may be no four horsemen of the apocalypse this time, but mobilegeddon is here: Google is due to roll out its latest search ranking algorithm update. Following the way in which mobile phones are driving internet uptake and innovation, this update will favour mobile devices – and websites that aren’t mobile-friendly are likely to find their search results ranking gets clobbered.

The commercially sensitive secret formulae Google uses to calculate and rank search results are constantly refined. Minor refinements to this algorithm go unnoticed and only come to light if Google confirms them. On the other hand, major updates such as Google Panda and Google Penguin have had a big impact on the search engine optimisation (SEO) industry.

SEO no longer an optional extra

Search engines are the main means of finding our way through the vast amounts of information online. As illustrated by the Customer Journey to Online Purchase study, organic search (that is, googling for something) plays a major role in bringing a customer to an organisation.

The growing importance of search engines means that SEO is becoming a hygiene factor in digital marketing. In other words, it’s an essential, like washing or brushing your teeth, rather than a nice extra if you’ve the time and resources. Any change in the search ranking algorithm can have drastic effects on an organisation’s search page ranking – and consequently a major impact on its finances.

No company is immune from SEO problems. JC Penney, Interflora and the BBC are just a few examples of big sites that have found themselves penalised. Those that rely entirely on online customers essentially rely on search engines for their revenue. The auction and e-commerce site eBay was affected last year, losing ranking for an estimated 120,000 web pages – the reasons have not been confirmed by Google, but could be related to the search ranking updates at the time.

This suggests that size and market penetration are no defence against poor SEO. The search engine penalty was estimated to cost eBay US$200m – acknowledged in its financial reporting.

While the latest Google search algorithm change only affects search engine users on mobile devices such as phones and tablets, the repercussions can be devastating to some businesses. Imagine you’re a restaurant owner in Manchester, who might be interested in ranking well for search terms such as “restaurant in Manchester”. Using Google’s Display Planner tool we can see that for this keyword, mobile and tablet users combined make up more than 70% of the total. The message is clear: a restaurant without a mobile-friendly website is going to lose out, while those with one stand to benefit.

Searches for ‘restaurant in Manchester’ are overwhelmingly from mobile devices Google AdWords Display Planner

It’s all about first rank

While not being on the first page of Google’s search results might not be a problem for every business, the findings of repeated studies show just how skewed the distribution of click-throughs from search results is – not just towards the first page, but even to the first result. This makes a big difference to visitor numbers, which makes a big difference to sales.

Click-throughs fall rapidly away from the top-ranked search result. Google Analytics for Businessculture.org, Author provided

This image shows Google Analytics data for visitors arriving at the businessculture.org website. Although the keywords they searched for aren’t visible (hence “not provided”), the numbers demonstrate the rapid falling-off of click-throughs as the businessculture.org website appears at a progressively lower rank on the search results page. Where a link to businessculture.org appeared as the first search result, click-throughs were almost triple those of the second-placed link.

Another year of mobile

We predicted this shift after conducting industry surveys of digital marketing firms. Most of the agencies and web designers who took part in our research had already moved towards a mobile-first website design approach, where priority in design and usability is given to mobile users.

Google has not kept quiet about this change, having long offered the tools to help website owners get their website in order. Items to check for include having text large enough to be read on a small mobile screen, and links that are not so close to each other that they’re difficult to tap with a finger.

There have been frequent claims over the years that this was “the year of mobile”. But in 2014 this claim was backed by the fact that for the first time more visitors to Google came from phones and tablets than from desktops and laptops. With Google’s mobilegeddon update, in 2015 those repercussions are going to be felt in earnest.

The Conversation

Monday, August 24, 2015

Privacy watchdog takes first step against those undermining right to be forgotten

It's not erasing the past, just making memories fuzzier. chalkboard by sergign/shutterstock.com

The UK’s data privacy watchdog has waded into the debate over the enforcement of the right to be forgotten in Europe.

The Information Commissioner’s Office issued a notice to Google to remove from its search results newspaper articles that discussed details from older articles that had themselves been subject to a successful right to be forgotten request.

The new reports included, wholly unnecessarily, the name of the person who had requested that Google remove reports of a ten-year-old shoplifting conviction from search results. Google agreed with this right to be forgotten request and de-linked the contemporary reports of the conviction, but then refused to do the same to new articles that carried the same details. Essentially, Google had granted the subject’s request for privacy, and then allowed it to be reversed via the back door.

The ICO’s action highlights the attitude of the press, which tries to draw as much attention to stories related to the right to be forgotten and their subjects as possible, generating new coverage that throws up details of the very events those making right to be forgotten requests are seeking to have buried.

There is no expectation of anonymity for people convicted of even minor crimes in the UK, something the press takes advantage of: such as the regional newspaper which tweeted a picture of the woman convicted of shoplifting a sex toy. However, after a criminal conviction is spent, the facts of the crime are deemed “irrelevant information” in the technical sense of the UK Data Protection Act.

The arrival of the right to be forgotten, or more accurately the right to have online search results de-linked, as made explicit by the EU Court of Justice in 2014, does not entail retroactive censorship of newspaper reports from the time of the original event. But the limited cases published by Google so far suggest that such requests have normally been granted, except where there was a strong public interest.

Stirring up a censorship storm

It’s clear Google does not like the right to be forgotten, and it has from early on sent notifications to publishers of de-listed links in the hope they will cry “censorship”. Certainly BBC journalist Robert Peston felt “cast into oblivion” because his blog no longer appeared in search results for one particular commenter’s name.

It’s not clear that such notifications are required at all: the European Court of Justice judgment didn’t call for them, and the publishers are neither subject (as they’re not the person involved) nor controller (Google in this case) of the de-listed link. Experts and even the ICO have hinted that Google’s efforts to publicise the very details it is supposed to be minimising might be viewed as a privacy breach or unfair processing with regard to those making right to be forgotten requests.

The Barry Gibb effect

De-listing notifications achieve something similar to the Streisand effect, where publicity around a request for privacy leads to exactly the opposite result. I’ve previously called the attempt to stir up publisher unrest the Barry Gibb effect, because it goes so well with Streisand. So well, maybe it oughta be illegal.

Some publishers are happy to dance to Google’s tune, accumulating and publishing these notifications in their own lists of de-listed links. Presumably this is intended to be seen as a bold move against censorship – the more accurate “List of things we once published that are now considered to contain irrelevant information about somebody” doesn’t sound as appealing.

In June 2015, even the BBC joined in, and comments still show that readers find salacious value in such a list.

Upholding the spirit and letter of the law

While some reporters laugh at the idea of deleting links to articles about links, this misses the point. The ICO has not previously challenged the reporting of stories relating to the right to be forgotten, or lists of delisted links – even when these appear to subvert the spirit of data protection. But by naming the individual involved in these new reports, the de-listed story is brought straight back to the top of search results for the person in question. This is a much more direct subversion of the spirit of the law.

Google refused the subject’s request that it de-list nine search results repeating the old story, name and all, claiming they were relevant to journalistic reporting of the right to be forgotten. The ICO judgement weighed the arguments carefully over ten pages before finding for the complainant in its resulting enforcement notice.

The ICO dealt with 120 such complaints in the past year, but this appears to be the only one where a Google refusal led to an enforcement notice.

The decision against Google is a significant step. However, its scope is narrow as it concerns stories that unwisely repeat personally identifying information, and again it only leads to de-listing results from searches of a particular name. It remains to be seen whether other more subtle forms of subversion aimed at the right to be forgotten will continue to be tolerated.

The Conversation

Sunday, May 31, 2015

Oracle vs Google case threatens foundations of software design

Copyright keeps appearing where it's not wanted. Christopher Dombres, CC BY

The Java programming language, which has just turned 20 years old, provides developers with a means to write code that is independent of the hardware it runs on: “write once, run anywhere”.

But, ironically, while Java was intended to make programmers' lives easier, the court case between Oracle, Java’s owner, and Google over Google’s use of Java as the basis of its Android mobile operating system may make things considerably more difficult.

Google adopted Java for Android apps, using its own, rewritten version of the Java run-time environment (the Java virtual machine or VM) called Dalvik. The Oracle vs Google court case centres around the use of Java in Android, particularly in relation to Application Program Interface (API) calls.

An API is a standard set of interfaces that a developer can use to communicate with a useful piece of code – for example, to exchange input and output, access network connections, graphics hardware, hard disks, and so on. For developers, using an existing API means not having to reinvent the wheel by accessing ready-made code. For those creating APIs, making them publicly and freely accessible encourages developers to use them and create compatible software, which in turn makes it more attractive to end users.

For example, OpenGL and Microsoft’s DirectX are two APIs that provide a standardised interface for developers to access 3D graphics hardware, as used in videogames or modelling applications. Hardware manufacturers ensure their hardware is compatible with the API standard, the OpenGL Consortium and Microsoft update their APIs to ensure the latest hardware capabilities are addressed and games developers get a straightforward interface compatible with many different types of hardware, making it easier to create games.

Java RTE and Android ART Author provided

Fight for your right to API

Google designed Android so that Java developers could bring their code to Android by recreating (most of) the standard Java API calls used in the Java libraries and supported by the standard Java VM. The case revolves around whether doing this – by essentially re-creating the Java API rather than officially licensing it from Oracle – is a breach of copyright. If the court finds in favour of Oracle it will set a precedent that APIs are copyrightable, and so make developers’ lives a lot more legally complex.

To be clear, the case doesn’t revolve around any claim that Google reused actual code belonging to Oracle, but that the code it produced mimicked what Oracle’s Java run-time environment was capable of.
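A toy analogy, sketched in Python rather than Java, shows what is at stake: two libraries can expose an identical interface – the same names and parameters, so existing code keeps working – while the code behind that interface is written entirely independently. The names below are invented for illustration.

    # Two independently written "libraries" sharing one declared API.
    class OfficialMathLib:
        @staticmethod
        def max(a, b):
            # the original implementation
            return a if a > b else b

    class ReimplementedMathLib:
        @staticmethod
        def max(a, b):
            # different internals; same name, parameters and behaviour
            return sorted((a, b))[1]

    def caller(lib):
        # a developer's code works unchanged with either library
        return lib.max(3, 7)

    assert caller(OfficialMathLib) == caller(ReimplementedMathLib) == 7

The question before the courts is, in effect, whether that shared declaration – rather than the differing code behind it – can itself be owned.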

The initial finding came in May 2012, when a US court agreed with Google’s claim that its use of the APIs fell under fair use, and that Oracle’s copyright was not infringed. Then in May 2014, the US Federal Circuit reversed part of the ruling in favour of Oracle, especially related to the issue of copyright of an API. Now, at the US Supreme Court’s request, the White House has weighed in in Oracle’s favour.

Can you ‘own’ an API?

For most in the industry, a ruling that it’s possible to copyright an API would be a disaster. It would mean that many companies would have to pay extensive licence fees, and even face having to write their own APIs from scratch – even those needed to programmatically achieve only the simplest of things. If companies can prevent others from replicating their APIs through recourse to copyright law, then all third-party developers could be locked out. The actual API calls and their functionality could be copyrighted too, meaning any rival implementation would have to behave differently in order not to count as a copy.

In the initial trial, District Judge William Alsup taught himself Java to learn the foundation of the language. He decided that to allow the copyrighting of Java’s APIs would allow the copyrighting of an improbably broad range of generic (and therefore uncopyrightable) functions, such as interacting with window menus and interface controls. The Obama administration’s intervention emphasises its belief that the case should be decided on whether Google had a right under fair use to use Oracle’s APIs.

It’s like the PC all over again

Something like this has happened before. When IBM produced its original PC in 1981 (the IBM 5150), a key aspect was access to the system calls provided by the PC BIOS, which booted the computer and managed basic hardware such as keyboard, monitor, floppy disk drive and so on. Without access to the BIOS it wasn’t possible to create software for the computer.

One firm, Compaq, decided to reverse-engineer the BIOS calls to create its own, compatible version – hence the term “IBM PC compatible” became standard language to describe a program that would run on an IBM model or any of the third-party hardware from other manufacturers that subsequently blossomed. IBM’s monopoly on the PC market was opened up, and the PC market exploded into what we see today – would this have happened had IBM been able to copyright its system calls?

So 20 years after its birth, through the groundwork laid by its original creator, Sun Microsystems, Java has become one of the most popular programming languages in the world by being cross-platform and (mostly) open. But now it seems to have ended up in a trap. The wrong decision in this case could have a massive impact on the industry, where even using a button on a window could require some kind of licence – and licence fees. For software developers, it’s a horrible thought. Copyrighting APIs would lock many companies into complex agreements – and lock out many other developers from creating software for certain platforms.

For Google, there’s no way of extracting Java from Android now; its runaway success is bringing Google a whole lot of problems. But as we build a world that runs on software, be assured that one way or another this ruling will have a massive effect on us all.

The Conversation

Friday, September 18, 2015

Six easy ways to tell if that viral story is a hoax

Pull the other one. from www.shutterstock.com

“And so it begins … ISIS flag among refugees in Germany fighting the police,” blared the headline on the Conservative Post; “with this new leaked picture, everything seems confirmed”. The image in question purported to show a group of Syrian refugees holding ISIS flags and attacking German police officers.

For those resistant to accepting refugees into Europe, this story was a godsend. The photo quickly spread across social media, propelled by far-right groups such as the English Defence League and Pegida UK. At the time of writing, the page claims to have been shared over 300,000 times.

The problem is, the photo is three years old and has precious little to do with the refugee crisis. In fact, it seems to be from a confrontation between members of the far-right Pro NRW party and Muslim counter-protesters, which took place in Bonn back in 2012. A number of news outlets tried to highlight the hoax, including Vice, the Independent and the Mirror, as did numerous Twitter users.

But news in the digital age spreads faster than ever, and so do lies and hoaxes. Just like retractions and corrections in newspapers, online rebuttals often make rather less of a splash than the original misinformation. As I have argued elsewhere, digital verification skills are essential for today’s journalists, and academic institutions are starting to provide the necessary training.

But ordinary people are also starting to take a more sophisticated approach to the content they view online. It’s no longer enough to read the news – now, we want to understand the processes behind it. Fortunately, there are a few relatively effective verification techniques, which do not require specialist knowledge or costly software. Outlined below are six free, simple tools that any curious news reader can use to verify digital media.

Reverse image search

Not only is a reverse image search one of the simplest verification tools, it’s also the one that showed the “leaked” ISIS refugee photo was a fake. Both of the most popular services, Google Images and TinEye, found pages containing this image dating back to mid-2012. As the screenshot below shows, the “ISIS refugee” story could be debunked in less than a second.

When a link to the story was posted to Reddit, sceptical users swiftly took to Google to query it. Soon, one reported back: “Google Image Search says the photo is from 2012”.
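The core trick behind such services can even be sketched in a few lines of Python. The snippet below computes a “difference hash”, one simple kind of image fingerprint that survives resizing and recompression; it assumes the Pillow imaging library is installed, and real search engines use far more sophisticated indexes than this.

    from PIL import Image

    def dhash(path, size=8):
        # Shrink to a tiny greyscale grid, then record whether each
        # pixel is brighter than its right-hand neighbour.
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = ""
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits += "1" if left > right else "0"
        return int(bits, 2)

Copies of the same photo – even re-saved or rescaled – produce hashes differing in only a few bits, which is strong evidence that a supposedly new image has been seen before.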


YouTube DataViewer

When watching the latest viral video on YouTube, it’s important to be on the look-out for “scrapes”: a scrape is an old video, which has been downloaded from YouTube and re-uploaded by someone who fraudulently claims to be the original eyewitness, or asserts that the video depicts a new event.

Amnesty International has a simple but incredibly useful tool called YouTube DataViewer. Once you’ve entered the video’s URL, this tool will extract the clip’s upload time and all associated thumbnail images. This information – which isn’t readily accessible via YouTube itself – enables you to launch a two-pronged verification search.

If multiple versions of the same video are hosted on YouTube, the date enables you to identify the earliest upload. This is most likely to be the original. The thumbnails can also be used in a reverse image search to find web pages containing the video, offering a quick and powerful method for identifying older versions or uses of the same video.
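The same two facts the tool surfaces – upload time and thumbnails – can also be fetched programmatically from the YouTube Data API (v3), assuming you have registered for an API key. A rough Python sketch using the requests library:

    import requests

    def video_fingerprint(video_id, api_key):
        r = requests.get(
            "https://www.googleapis.com/youtube/v3/videos",
            params={"part": "snippet", "id": video_id, "key": api_key},
        )
        snippet = r.json()["items"][0]["snippet"]
        return {
            "uploaded": snippet["publishedAt"],  # ISO 8601 timestamp
            "thumbnails": [t["url"]
                           for t in snippet["thumbnails"].values()],
        }

The returned thumbnail URLs can then be dropped straight into a reverse image search to hunt for earlier copies of the same footage.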

Jeffrey’s Exif Viewer

Photos, videos and audio taken with digital cameras and smartphones contain Exchangeable Image File (EXIF) information: this is vital metadata about the make of the camera used, and the date, time and location the media was created. This information can be very useful if you’re suspicious of the creator’s account of the content’s origins. In such situations, EXIF readers such as Jeffrey’s Exif Viewer allow you to upload or enter the URL of an image and view its metadata.

Below is the EXIF data of a photograph I took of a bus crash in Poole in August 2014. It’s very comprehensive; had I claimed the photo was taken, say, last week in Swanage, it would be very simple to disprove. It is worth noting that while Facebook, Instagram and Twitter remove EXIF data when content is uploaded to their servers, media shared via platforms such as Flickr and WhatsApp still contain it.
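Reading EXIF data can also be done in a few lines of Python using the Pillow library (the filename here is just a placeholder):

    from PIL import Image, ExifTags

    def read_exif(path):
        exif = Image.open(path)._getexif() or {}
        # Map numeric EXIF tag IDs to readable names such as "Model"
        return {ExifTags.TAGS.get(tag, tag): value
                for tag, value in exif.items()}

    info = read_exif("photo.jpg")
    for key in ("Make", "Model", "DateTimeOriginal", "GPSInfo"):
        print(key, info.get(key))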

FotoForensics

FotoForensics is a tool that uses error level analysis (ELA) to identify parts of an image that may have been modified or “photoshopped”. The tool allows you to upload, or enter the URL of, a suspicious image, and will then highlight areas where disparities in quality suggest alterations may have been made. It also provides a number of sharing options, which are useful for challenging the recirculation of inaccurate information, because they allow you to provide a direct link to your FotoForensics analysis page.
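The idea behind error level analysis is simple enough to sketch: re-save the JPEG at a known quality and see how much each region changes, since edited areas tend to stand out from their surroundings. A rough Python approximation using Pillow follows – FotoForensics itself is considerably more careful:

    import io
    from PIL import Image, ImageChops

    def ela(path, quality=95):
        original = Image.open(path).convert("RGB")
        buf = io.BytesIO()
        original.save(buf, "JPEG", quality=quality)  # recompress once
        buf.seek(0)
        resaved = Image.open(buf)
        diff = ImageChops.difference(original, resaved)
        # Brighten the (usually faint) differences to make them visible.
        return diff.point(lambda p: min(255, p * 10))

    ela("suspicious.jpg").save("ela_map.png")  # bright patches merit a look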

WolframAlpha

WolframAlpha is a “computational knowledge engine” which, among much else, allows you to check weather conditions at a specific time and place. You can search it using criteria such as “weather in London at 2pm on 16 July, 2014”. So if, for example, a photo of a freak snowstorm has been shared to your timeline, and WolframAlpha reports that it was 27 degrees and clear when the photo was purportedly taken, then alarm bells ought to be ringing.
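Such a check can even be scripted using the wolframalpha Python package, assuming you have signed up for a WolframAlpha App ID:

    import wolframalpha

    client = wolframalpha.Client("YOUR_APP_ID")  # placeholder App ID
    res = client.query("weather in London at 2pm on 16 July, 2014")
    print(next(res.results).text)  # temperature, conditions and so on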

Online maps

Identifying the location of a suspicious photo or video is a crucial part of the verification process. Google Street View, Google Earth (a source of historical satellite images) and Wikimapia (a crowd-sourced version of Google Maps, featuring additional information) are all excellent tools for undertaking this kind of detective work.

You should identify whether there are any reference points to compare, check whether distinctive landmarks match up and see if the landscape is the same. These three criteria are frequently used to cross-reference videos or photos, in order to verify whether or not they were indeed shot in the location the uploader claims.

Google Earth, in particular, has been put to incredible use by Eliot Higgins, aka Brown Moses, of Bellingcat – a site for investigative citizen journalism.

The Conversation

Thursday, June 11, 2015

AI 'cheating' scandal makes machine learning sound like a sport – it isn't

Under an uncomfortable spotlight. Baidu image via Gil C/Shutterstock.com

News that Baidu, the Google of China, cheated to take the lead in an international competition for artificial intelligence technology has caused a storm among computer science researchers. It has been called machine learning’s “first cheating scandal” by MIT Technology Review and Baidu is now barred from the competition.

The Imagenet Challenge is a competition run by a group of American computer scientists which involves recognising and classifying a series of objects in digital images. The competition itself is no Turing test, but it is an important challenge, and one of commercial importance to many firms.

The cheating by Baidu was nothing sophisticated, more akin to an initial stolen glimpse at the answers, which was followed by more of the same when it went unnoticed. Even that makes it sound worse than it was. Part of the competition involved looking at the answers anyway: someone in the Baidu team simply did it more than they were officially allowed to.

In their paper about the submission, Baidu themselves weren’t claiming anything more than an engineering advance: they built a large supercomputer that could handle more data than previous implementations. A necessary advance, but very much a “scaling up” of existing solutions – one that would be financially outside the reach of a typical academic research group. They participated in the competition as an attempt to demonstrate that, after such significant investment in hardware, their new supercomputer was able to perform. They have since apologised for breaking the rules of the competition.

In any case, the significant breakthrough in the area had already been achieved by Geoff Hinton’s group at the University of Toronto. They produced the machine learning equivalent of the high jump’s “Fosbury Flop” to win the 2012 version of the competition with such a significant improvement that all leading entries are now derived from their model. That model itself also built on a two-decade-long program of research by Yann LeCun, then of New York University.

Blown out of proportion

The result of Baidu’s entry into the competition was posted as an “e-print” publication. E-prints are articles that are unreviewed – slightly more formal versions of a “technical blog post”. The problem was identified by the community quickly, within three weeks, and a corrected version was published. This is science in action.

The “cheating scandal” was labelled as such by the very same prestigious technical publication that broadcast the initial results to its readers within two days of the e-print’s publication: MIT Technology Review.

Singling out MIT Technology Review in this case may be a little unfair, because this is part of a wider phenomenon where technical results are trumpeted in the press before they are fully tasted (let alone digested) by the scientific community. E-print publication is a good thing: it allows ideas to be spread quickly. However, the implications of those ideas need to be understood before they are presented as scientific fact.

Ideally knowledge moves forward through academic consensus, but in practice that consensus itself is swayed by outside forces. This raises questions about who is the ultimate arbiter of academic quality. One answer is opinion: the opinion of those that matter, such as governments, businesses, other scientists or even the press. Success in machine learning has meant it is attracting such attention.

Getting on with it for decades

Ironically, the developments that enabled recent breakthroughs in AI all took place outside of such close scrutiny. In 2004 the Canadian Institute for Advanced Research (CIFAR) funded a far-sighted program of research. An international collaboration of researchers was given the time, intellectual space and money that they needed to make these significant breakthroughs. This collaboration was led by Geoff Hinton, the same researcher whose team achieved the 2012 breakthrough result.

This breakthrough led to all the major internet giants fighting for their pound of academic flesh. Of those researchers involved in CIFAR, Hinton has been hired by Google, Yann LeCun leads Facebook’s AI Research team, Andrew Ng heads up research at Baidu and Nando de Freitas was recently recruited to Google DeepMind, the London start-up that Google lavished £400m on acquiring.

The Baidu cheating case is symptomatic of a big change in the landscape for those who work in machine learning and who drove these advances in AI. Until 2012, ideas from researchers in machine learning were under the radar. They were widely adopted commercially by companies like Microsoft and Google, but they did not impinge much on public consciousness. But two breakthrough results brought these ideas to the fore in the public mind. The Imagenet result by Hinton’s team was one. The other was a program that could learn to play Atari video games. It was created by DeepMind, triggering their purchase by Google.

However, just as Deep Blue’s defeat of Kasparov didn’t herald the dawn of the age of the super-intelligent computer, neither will either of these significant accomplishments. They have not arrived through better understanding of the fundamentals of intelligence, but through more data, more computing power and more experience.

Who follows in whose wake?

These apparent breakthroughs have whetted the appetite. The technical press is becoming susceptible to tabloid sensationalism in this area, but who can blame them as companies and universities ramp up their claims of scientific advance? The advances are somewhat of an illusion: they are the march of technologists following in a scientific wake.

The wake-generators are much harder to identify or track, even for their fellow scientists. But the very real danger is that expectations of significant advance, or misunderstanding of the underlying phenomenon, will bring about an AI bubble of the type we saw 30 years ago. Such bubbles are very damaging. When high expectations aren’t immediately fulfilled, entire academic domains can be dismissed and far-seeing proposals like CIFAR’s go unfunded.

Academics make those first waves. Boat wake via Dennis Tokarzewski/www.shutterstock.com

Even if Baidu’s result were valid, it would have been just the type of workaday scientific development that most of us spend most of our time trying to cook up. It did not merit a pre-publication announcement in MIT Technology Review, and the pre-publication withdrawal should have been just a footnote to add to the diverse collection that keeps all astute academics scientifically wary. Rather boringly, the only true marker of scientific advance is repeatability, whether within the scientific community or by the transfer of ideas to the commercial world.

When reporting on the scandal, MIT Technology Review refers to participation in these competitions as a “sport”. I feel sporting analogies give a wrong idea of the spectacle of scientific progress. It is more like watching a painter at work. It is very rare that any single brushstroke reveals the entire picture. And even when the picture is complete, it may only tell us a limited amount about what the next creation will be.

The Conversation

Wednesday, July 29, 2015

Auto industry must tackle its software problems to stop hacks as cars go online

Not what anyone wants to see while driving. Bill Buchanan, Author provided

Many companies producing software employ people as penetration testers, whose job it is to find security holes before others with less pure motives get a chance. This is especially common in the finance sector, but following the recent demonstration of a drive-by hack on a Jeep, and parent company Fiat Chrysler’s huge recall of 1.4m vehicles for security testing, perhaps it’s time the auto industry followed suit.

The growing number of software vulnerabilities discovered in cars has led to calls for the US Federal Trade Commission and National Highway Traffic Safety Administration to impose security standards on manufacturers for software in their cars. Cars are likely to require a software security rating so consumers can judge how hack-proof they are.

In the past, cars have generally avoided any form of network connectivity, but now consumers want internet access to stream music or use apps such as maps. If a car has a public IP address then, just as with any computer or device attached to the internet, a malicious intruder can potentially connect to and hijack it – just as the Jeep hack demonstrated.

Andy Davis, a researcher from NCC Group, has shown that it may be possible to create a fake digital radio (DAB) station in order to download malicious data to a car when it tries to connect. While the Jeep hack was performed on a running car, the NCC Group researchers demonstrated that an off-road vehicle could be compromised, including taking control of steering and brakes. As the malicious data was distributed through a broadcast radio signal, it could even result in a nightmare situation where many cars could be compromised and controlled at the same time. More details on how the hack works will be revealed at the Black Hat conference this summer.

Tuning into the wrong station could give you more than you bargained for. Bill Buchanan, Author provided

More devices, more bugs, more problems

In the last few weeks Ford has recalled 433,000 of this year’s Focus, C-MAX and Escape models because of a software bug which leaves drivers unable to switch off their engine, even when the ignition key is removed. Recently, it was shown that BMW cars would respond to commands sent to open their doors and lower their windows – hardly the height of security. The firm had to issue a security patch for more than 2m BMW, Mini and Rolls-Royce vehicles.

As more and more software appears in cars, the problems of patching them will grow. Our desktop and laptop computers can be set to auto-update, but with embedded systems it’s not so easy. The next wave of the internet, the internet of things, where billions of devices will be network-connected, will inevitably bring a whole lot more security problems in terms of finding and fixing bugs – on many more devices than just cars.

Crowdsourcing debugging

Some companies take this seriously, while others try to distance themselves from flaws in their products. Google runs a Vulnerability Reward Program with rewards ranging from US$100 to US$20,000. For example, Google will pay a reward of US$20,000 for any exploit that allows the remote takeover of a Google account.

Google even has a Hall of Fame, which awards points for the number of bugs found, their severity, how recent they are, and whether the bounty recipient gives their reward to charity – Nils Juenemann is currently in top place. Google also awards grants of up to US$3,133.7 as part of its Vulnerability Research Grants scheme.

Microsoft and Facebook also operate bug bounty schemes to encourage digging out bugs in their own internet software, with a minimum bounty of US$5,000. But while these companies actively seek people to improve software by finding bugs, companies such as Starbucks and Fiat Chrysler take a negative approach to those who find bugs in their products, unhelpfully describing such efforts as criminal activity.

Change of approach needed

I don’t mean to alarm, but software is one of the most unreliable things we have. Imagine if you were in the fast lane of the motorway when a blue screen appeared on your dashboard saying:

Error 1805: This car has encountered a serious error and will now shut down and reboot

It would be back at the dealer in no time. We have put up with bugs for decades. We can’t trust these embedded software systems to be bug-free, yet they’re increasingly appearing in safety-critical systems such as speeding one-tonne vehicles. When was the last time your microprocessor suffered a hardware breakdown? Compare this to the last time Microsoft Word crashed and you can see it’s not the hardware’s fault. This is generally because software suffers from sloppy design, implementation and testing. So while a word processor crash is annoying, a car crash – potentially in both senses of the word – is clearly much worse.

Car owners of the future will need to be a lot more savvy about keeping their vehicles updated. Consider that you are on the motorway one evening and the car informs you:

You have a critical update for your braking system. Please select YES or NO to install the update. A reboot of the car is not required; the update will be installed automatically via your Wi-Fi enabled vehicle

Would you answer YES or NO? If you choose NO, you don’t trust the software; if you choose YES, you are entrusting it to execute without problems while you drive at speed along a motorway. Neither of these is a good place to be.

The auto industry has a long way to go to prove that it grasps the risks posed by network-enabled vehicles, and then to tackle them with our safety foremost in mind. An independent safety rating for connected cars would provide some incentive for manufacturers to get this right. As for penetration testing, the industry may find that bug bounty schemes can get this difficult work done for less money than the fines and recalls that follow when undiscovered bugs make it into products on the market.

The Conversation

Friday, June 26, 2015

Miniaturisation will lead to 'smart spaces' and blur the line between on and offline

A computer-on-a-stick is the start, but they'll get smaller and smarter yet. Lenovo

Lenovo, the Chinese firm that bought up IBM’s cast-off PC business, has announced a miniaturised computer not much larger than a smartphone, which can be connected to any screen via an HDMI connection.

Advances in electronic components manufacturing processes and integration have resulted in large-scale miniaturisation of computer systems. This has enabled the latest system-in-package and system-on-a-chip approaches, where the processor and other necessary functionality usually provided by many microchips can be incorporated into a single silicon chip package.

Lenovo’s Ideacentre Stick 300 runs Windows 8 or Linux, is powered by a micro-USB connector and comes fitted with a new Intel Bay Trail CPU, 2GB of RAM, 32GB of flash storage, an SD card reader, Wi-Fi – even speakers.

Lenovo isn’t the first to shrink the PC down to pocket size. Intel’s Compute Stick is another dongle-sized computer with similar specs released this year.

Intel’s Compute Stick is another effort to shrink the PC to pocket size. Intel

The Raspberry Pi, now on its second major release, was probably the first to provide the functionality of a desktop or laptop computer on a credit-card-sized electronic board. Over five million Raspberry Pi computers have been sold since its launch in 2012.

Google has used Chrome OS, its stripped-down operating system based on the Chrome browser, to shrink the Chromebook (a Chrome OS-powered laptop) down to the Chromebit. While the Chromebit is no larger than a USB memory stick, it’s markedly less powerful than Intel’s offering: it is powered by the Rockchip RK3288, an ARM processor, which makes it comparable in power to a smartphone.

Google’s Chromebit, in more colours than black. Katie Roberts-Hoffman/Google

There are other stick-sized computers running low-power ARM processors capable of running Android, such as Cotton Candy or Google Chromecast. These plug into a digital television to play video directly or to stream from internet services such as Netflix – but not much else.

The appeal of small

Computers this small are attractive to many organisations, such as schools and universities that need to equip functional computer laboratories at minimum cost while taking up as little space as possible. Low-power devices also keep electricity bills down.

A typical desktop computer uses about 65-250 watts (plus 20-40 watts for an LCD monitor) – considerably higher than a typical PC-on-a-stick at about 10 watts. There are obvious business uses, such as digital signage and advertising when connected to screens or projectors.
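
To put those figures in perspective, here is a rough back-of-the-envelope calculation in Python – a minimal sketch in which the usage hours and electricity price are assumptions chosen purely for illustration:

    # Rough annual running-cost comparison: desktop PC vs PC-on-a-stick.
    # All figures below are assumptions for illustration only.
    DESKTOP_WATTS = 150        # mid-range of the 65-250W figure above
    MONITOR_WATTS = 30         # mid-range of the 20-40W figure above
    STICK_WATTS = 10           # typical PC-on-a-stick draw
    HOURS_PER_YEAR = 8 * 250   # assumed usage: 8 hours a day, 250 days
    PRICE_PER_KWH = 0.15       # assumed electricity price, US$ per kWh

    def annual_cost(watts: float) -> float:
        """Convert a steady power draw into an annual electricity cost."""
        return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

    desktop = annual_cost(DESKTOP_WATTS + MONITOR_WATTS)
    stick = annual_cost(STICK_WATTS + MONITOR_WATTS)
    print(f"Desktop + monitor: ${desktop:.2f} per year")
    print(f"Stick + monitor:   ${stick:.2f} per year")
    print(f"Saving per seat:   ${desktop - stick:.2f} per year")

On those assumed numbers, each stick saves roughly US$40 per seat per year in electricity alone – across a 30-seat computer lab, that is over US$1,000 a year before counting hardware and space.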

This new round of computer miniaturisation marks a third wave of computerisation. First there were room-sized computers shared between many users – the mainframe era. These time-sharing systems gradually disappeared as computers were miniaturised, replaced by the one-computer-per-user model of the personal computer, or PC, era. Today one person may have many computers – desktop and laptop PCs, smartphones, compute sticks – accessible anywhere and everywhere. Known as ubiquitous or pervasive computing, this is the third wave.

A smart, mobile future

As all computing devices grow smaller, the aim is for them to become more connected and more integrated into our environment, the technology fading into our surroundings until only the user interface remains perceptible. This emerging discipline brings computing into our living environments, makes those environments sensitive to us, and has them adapt to their users’ needs. By enriching an environment with appropriate interconnected computing devices, it becomes able to sense changes and support decisions that benefit its users.

There is growing interest in such smart spaces, which use miniaturised computing technologies to support our daily lives more effectively – smart offices, classrooms and homes, for example, in which computers monitor and control what is happening in the environment.

Apple’s HomeKit and Google’s Nest are a start in this direction, providing the hardware and software for home automation. A smart home that monitors temperature and movement could allow the elderly to remain self-sufficient and independent in their own homes, for example, and voice-activated devices could help with everyday tasks such as ordering the shopping. A smart office could remind staff of upcoming meetings, turn the lights on and off, or control heating and cooling efficiently. A smart hospital ward could monitor patients and warn doctors and nurses of potential problems or human error.
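
The logic behind such rules can be very simple. The following minimal Python sketch is purely illustrative – the sensor readings and thresholds are invented, and a real system (HomeKit, Nest or otherwise) would use its own APIs:

    # A toy "smart home" rule: raise alerts if no movement has been sensed
    # for too long, or if the room is getting cold. Purely illustrative;
    # real home-automation platforms expose far richer APIs than this.
    from datetime import datetime, timedelta

    NO_MOVEMENT_LIMIT = timedelta(hours=6)   # assumed threshold
    MIN_TEMPERATURE_C = 16.0                 # assumed threshold

    def check_home(last_movement: datetime, temperature_c: float) -> list:
        alerts = []
        if datetime.now() - last_movement > NO_MOVEMENT_LIMIT:
            alerts.append("No movement detected for over 6 hours")
        if temperature_c < MIN_TEMPERATURE_C:
            alerts.append(f"Room temperature low: {temperature_c:.1f}C")
        return alerts

    # Simulated readings that would trigger both alerts:
    print(check_home(datetime.now() - timedelta(hours=7), 14.2))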

The European Commission’s Smart Anything Everywhere initiative drives research and development in this area. With innovation across the whole field of computing – from the internet of things, smart cities and smart spaces down to nano-electronics – the applications and benefits of greater miniaturisation of computers are endless.

The Conversation

Wednesday, June 17, 2015

Who really benefits from the 'internet space race'?

Solar-powered drones could fly for years at a time delivering internet access. Titan Aerospace

In the film Elysium, the ultra-rich have left an apocalyptic Earth ravaged by global warming and overpopulation. Their utopian colony orbits high above the Earth, which festers below. It is science fiction, but Silicon Valley techno-utopians also dream of rising above the planet’s problems.

The Seasteading Institute, for example, seeks to create floating cities far enough from land to be outside any regulatory jurisdiction. There, visionaries such as Google CEO Larry Page might be able to innovate, untethered by regulation. At Google’s annual developers’ conference in 2013, Page said: “I think as technologists we should have some safe places where we can try out some new things and figure out.”

The seas of Earth appeal to some while the dry seas of Mars attract others: Elon Musk, CEO of Tesla Motors, is at the forefront of commercial space travel for the ultra-rich. At a cost of US$36 billion he hopes his company SpaceX can start a Mars colony. Space tourist tickets come in at a mere US$500,000. He also plans to provide planet-wide internet access, beamed from 4,000 satellites.

Facebook and Google have shelved similar plans for satellite internet access for those the internet has yet to reach. Instead, Facebook has opted for a less lofty approach, targeting not space but the stratosphere: its Connectivity Lab is tasked with bringing about an internet-saturated planet, and to do this it has invested in solar-powered drones capable of providing internet to underserved and disconnected areas. Google, on the other hand, through its secretive X lab, devised Project Loon to provide internet via high-flying balloons.

Why are some of the world’s most powerful technologists so focused on providing internet access by hook, crook, drones, balloon or satellite?

Above the Facebook flag at Facebook HQ flies another, bearing the symbol of Facebook’s non-profit organisation, Internet.org. The internet-dispersing drones under development are designed to bring about the objective of Internet.org: connecting the next three billion people yet to join the internet. But it isn’t the “internet” as we know it today; instead, Internet.org allows users to access only Facebook and select other sites, not the entire internet. In an open letter to Facebook CEO Mark Zuckerberg, 65 organisations from 31 countries criticised the project, claiming it violated the principle of network neutrality – that no site should be favoured over others. Security, privacy, censorship and freedom of expression were among the other concerns voiced over Facebook’s growing control.

It may seem axiomatic to those in the West, but what if people don’t want access to the internet – of the type provided by Facebook, Google and SpaceX, or any other? There are well over a billion people living in states under governments that resist Western-style internet connectivity in order to preserve that country’s status quo.

Technical approaches to national internet sovereignty include IP address blocking, domain name and keyword filtering, and packet filtering. Non-technical forms of censorship include laws, regulations, threats, bribes, and the arrest of publishers, ISPs and authors. Reporters Without Borders identifies 19 countries – the US and the UK among them, alongside Cuba, China, Iran and North Korea – that use one or several of these tactics to create a distinct national internet.
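
Conceptually, at least, the technical measures are simple: a censoring gateway only has to match each request against a blocklist. The Python sketch below is a deliberately simplified illustration – the blocklist entries are invented, and real national firewalls operate across many layers, from DNS tampering to deep packet inspection:

    # Deliberately simplified sketch of domain and keyword filtering.
    # Real systems combine DNS tampering, IP blocking and packet
    # inspection; this shows only the basic blocklist idea.
    BLOCKED_DOMAINS = {"banned-example.com"}     # invented entries
    BLOCKED_KEYWORDS = {"forbidden-topic"}

    def allow_request(domain: str, path: str) -> bool:
        if domain in BLOCKED_DOMAINS:
            return False
        return not any(word in path for word in BLOCKED_KEYWORDS)

    print(allow_request("banned-example.com", "/news"))           # False
    print(allow_request("news.example.org", "/forbidden-topic"))  # False
    print(allow_request("news.example.org", "/weather"))          # True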

Certainly, what governments want for their people and what the people want for themselves frequently diverge. But while we may agree that internet censorship by authoritarian dictatorships is an affront to free communication, can we really put our faith in Facebook’s drones? It is possible to overthrow a government and depose a dictator but it is nearly impossible to revolt against corporate drones and extraterritorial CEOs.

With solar-powered drones and balloons raining internet down where it wasn’t before, from inaccessible places high in the atmosphere or beyond, is resistance to the internet even an option? As US president Ronald Reagan knew when he initiated his Star Wars defence programme in the 1980s, space is the ultimate high ground. In the stratosphere and in space, the techno-liberal social engineering ideal – that the internet is inherently good – meets the desire to be above the fray of terrestrial, democratic regulation.

In the dramatic conclusion of Elysium, Max Da Costa (played by Matt Damon) flies a pod of illegal immigrants from Earth and crash-lands it into the luxurious orbiting utopia, rebooting the computer that keeps the citizens of Earth and Elysium in inequality. Those who do not want the internet may need a similar radical approach, because when the ultra-rich take to the skies it becomes nearly impossible to protest their decisions.

The Conversation

Wednesday, May 6, 2015

'Windows 10 on everything' is Microsoft's gambit to profit from its competitors

Windows on anything means revenue from everything, at least that's the idea. gadgets by aslysun/shutterstock.com

Microsoft’s aim to make Windows 10 run on anything is key to its strategy of reasserting its dominance. Seemingly unassailable in the 1990s, Microsoft’s position has in many markets been eaten away by the explosive growth of phones and tablets – devices on which the firm has made little impact.

To run Windows 10 on everything, Microsoft is opening up.

Rather than requiring Office users to run Windows, Office365 is now available for Android and Apple iOS mobile devices. A version of Visual Studio, Microsoft’s key application for programmers writing Windows software, now runs on Mac OS or Linux operating systems.

Likewise, with tools released by Microsoft, developers can tweak their Android and iOS apps so that they run on Windows. The aim is to let developers create, with ease, the holy grail: a universal app that runs on anything. For a firm that – just like Apple and many other tech firms – has been unflinching in taking every opportunity to lock users into its platform, this is a major change of tack.

From direct to indirect revenue

So why is Microsoft trying to become a general-purpose, broadly compatible platform? Windows' share of the operating system market has fallen steadily from over 90% to somewhere between 70% and 40%, depending on which survey you believe. This reflects customers moving to mobile, where Windows Phone holds a mere 3% market share. In comparison, Microsoft’s cloud infrastructure platform Azure, Office365 and its Xbox games console have all enjoyed rising fortunes.

We’re way into the post-PC era. Blake Patterson, CC BY

Lumbered with a heritage of Windows PCs in a falling market, Microsoft’s strategy is to move its services – and so its users – inexorably toward the cloud. This divides into two necessary steps.

First, software developed for Microsoft products should run on all of them – write once, run on everything. As it stands there are several different Microsoft platforms (Win32, WinRT, WinCE, Windows Phone) with various incompatibilities. Unifying them makes sense, both for a uniform user experience and to maximise revenue potential by reaching as many devices as possible.

Second, to implement a universal approach so that code runs on operating systems other than Windows. This has historically been fraught, with differences in how software communicates with hardware, and in processor architectures, making it difficult. In recent years, however, improving virtualisation has made it much easier to run code across platforms.

It will be interesting to see whether competitors such as Google and Apple follow suit, or further entrench their products in tightly coupled, closed ecosystems. Platform exclusivity is no longer the way to attract and hold customers; the appeal now lies in the applications and services that run on the platform. For Microsoft, that means subscriptions to Office365 and Xbox Gold, in-app and in-game purchases, downloadable video, books and other revenue streams – so it makes sense for Microsoft to ensure these largely cloud-based services are accessible from operating systems other than just its own.

The Windows family tree … it’s complicated. Kristiyan Bogdanov, CC BY-SA

Platform vs services

Is there still any value in buying into a single service provider? Consider smartphones from Samsung, Google, Apple and Microsoft: prices may differ, but the functionality is much the same. What differentiates them is the value of wearables and internet-of-things devices (for example, the Apple Watch), the devices they connect with (for example, an iPhone), the size of their user communities, and the network effect.

From watches to fitness bands to internet fridges, the benefits lie in how devices are interconnected and work together. Digital technology is driving a new economic model, with value attached to “in-the-moment” services delivered while walking about, in the car or at work. It’s this direction that Microsoft is aiming for with Windows 10, focusing on the next big thing that will drive the digital economy.

The revolution will be multi-platform

I predict that we will see tech firms try to grow ecosystems of sensors and services running on mobile devices, either tied to a specific platform or by driving traffic directly to their cloud infrastructure.

Apple has already moved into the mobile health app and connected home markets; Google is moving in alongside manufacturers such as Intel, ARM and others. An interesting illustration of this effect is the growth of digital payments, with Apple, Facebook and others seeking ways to create revenue from the traffic passing through their ecosystems.

However, no single supplier – Google, Apple, Microsoft, or internet services such as Facebook and Amazon – can hope to cover all the requirements of the internet of things, which is predicted to scale to over 50 billion devices worth US$7 trillion within five years. As we become more enmeshed with our devices, wearables and sensors, demand will rise for services driven by the personal data they create. Through “Windows 10 on everything”, Microsoft hopes to leverage not just the users of its own ecosystem, but those of its competitors too.

The Conversation

Tuesday, April 7, 2026

Google is playing a busy game of bug Whack-A-Mole to keep Chrome safe!

Oops, They Did It Again: The Great Chrome Bug Squashing Extravaganza!

A friendly robot holding a giant wrench over a glowing computer screen

Welcome back to the wild, wacky, and sometimes slightly terrifying world of the World Wide Web! If you’ve been clicking around the internet lately, you might have noticed that your trusty sidekick, Google Chrome, has been acting a little bit like a housecat that accidentally swallowed a bumblebee. It turns out, our favorite shiny browser has been playing a high-stakes game of hide-and-seek with some digital gremlins. And not just once, not twice, but three times in a single month! It’s like a summer blockbuster movie where the monsters just keep coming back for the sequel before the first one is even out of theaters.

Now, don’t panic and throw your laptop into the nearest swimming pool just yet. In the tech world, we call these little surprises "zero-day vulnerabilities." It sounds like something out of a spy thriller, doesn't it? "Zero-Day: The Reckoning." But in reality, a zero-day just means that the clever folks who build the browser found a hole in the digital fence at the exact same time—or sometimes slightly after—the naughty hackers found it. It’s a race against the clock where the prize isn't a gold medal, but rather making sure your private data doesn't end up on a billboard in the middle of nowhere.

Imagine your browser is a giant, majestic castle. You’ve got high walls, a deep moat filled with digital alligators, and a shiny gate. Usually, this keeps all the internet ruffians out while you’re busy looking at pictures of capybaras or shopping for neon-colored socks. But every now and then, a sneaky little termite finds a tiny crack in the foundation. This month, it seems the termites have been particularly busy, finding three separate secret tunnels into the castle. It’s like a digital game of Whac-A-Mole, where Google’s engineers are the ones holding the big foam hammers.

So, what exactly is happening behind the scenes? Well, the digital wizards at Google HQ have been working overtime, fueled by gallons of coffee and probably some very high-quality snacks. When a third major bug popped up recently, they didn't just sit around and sigh. They leaped into action, coding at lightning speed to brew up a magical potion—otherwise known as a security patch. This patch is essentially a very high-tech band-aid that covers up the hole and tells the hackers, "Not today, friends! Move along!"

You might be wondering why this is happening so much lately. Is the internet getting scarier? Are the browsers getting tired? Not exactly. It’s more like a game of cat and mouse that has evolved into a game of cyborg-cat and laser-mouse. As our browsers become more powerful and capable of doing incredible things—like running 3D games or managing your entire life—they also become more complex. And in the world of code, complexity is like a big, beautiful mansion with a thousand windows; occasionally, someone is going to forget to lock one of them.

The good news is that you, the brave internet explorer, have a superpower. It’s a small, unassuming button that often pops up in the top right corner of your screen. It’s the "Update" button! Clicking that button is like giving your browser a suit of shiny new armor and a fresh sword. When you see that little green, orange, or red circle pleading for your attention, don't ignore it. It’s not just Chrome trying to be annoying; it’s Chrome asking for a quick nap and a makeover so it can keep protecting you from the spooky stuff lurking in the shadows of the web.

When you hit that update button, the browser does a quick "relaunch." It’s like a digital "Etch A Sketch"—it shakes everything up, clears out the cobwebs, and starts fresh with all the newest defenses. It only takes a few seconds, which is a small price to pay for the peace of mind that comes with knowing your digital castle is secure once again. Think of it as a spa day for your software. It comes back refreshed, rejuvenated, and ready to tackle another million tabs of research, shopping, and cat videos.

While the engineers are busy playing defense, it's a good reminder for all of us to stay sharp. The internet is a wonderful place, but it's always good to have your wits about you. Beyond just keeping your browser updated, remember to keep your passwords unique—no, "password123" is not a fortress—and maybe don't click on links that promise you’ve won a free private island from a long-lost cousin you’ve never heard of. A little bit of common sense goes a long way in keeping the digital gremlins at bay.

In the end, the fact that these bugs are being found and fixed so quickly is actually a good sign. It means the people who build our tools are watching over us like digital guardian angels. They are constantly scanning for trouble, even when we’re sound asleep. So, let's raise a metaphorical glass to the bug hunters, the code-smiths, and the security experts who keep the internet spinning. And remember, the next time you see that update notification, give it a click. Your browser will thank you, your data will thank you, and those sneaky digital termites will have to go find somewhere else to hang out!

Stay safe, stay curious, and keep those browsers shiny and chrome!

Thursday, February 19, 2015

Upgrade to core HTTP protocol promises speedier, easier web

Now with added "2". Download Now/Shutterstock

Hypertext Transfer Protocol, HTTP, is a key component of the world wide web. It is the communications layer through which web browsers request web pages from web servers, and through which web servers respond with the contents of those pages. Like much of the internet it has been around for decades, but a recent announcement reveals that HTTP/2, the first major update in 15 years, is about to arrive.


The original HTTP was the protocol first used by Sir Tim Berners-Lee at CERN, where the web was created in 1991. It was improved over many years and finalised as HTTP 1.1 in 1999, the standard still used worldwide. Over the years the web has changed dramatically, introducing images, complex style sheets, JavaScript code, Flash and other embedded elements, and more. The original HTTP was a simple protocol for a simple web; it was not designed to handle increasingly media-rich websites.


For example, Google handles some 40,000 web searches per second. To handle the pressure of serving billions of internet users, the company’s engineers launched a project in 2009 called SPDY (pronounced “speedy”) to improve on HTTP. Originally only for internal use, SPDY was also implemented by other sites fielding heavy traffic – Twitter, Facebook, WordPress and CloudFlare among them – once its performance improvements became clear.


This caught the attention of the Internet Engineering Task Force (IETF), which develops and promotes internet standards. In 2012 the IETF decided to use SPDY as the basis for HTTP/2, and the two protocols were developed in parallel. Even though Google spearheaded SPDY’s development, the work on HTTP/2 is carried on by the IETF’s open working groups, as it has been for other protocols for more than 30 years.


Google recently announced it was dropping SPDY in favour of the soon-to-arrive HTTP/2.


The drawbacks of HTTP 1.1


Web pages today can generate many requests – for images, CSS style sheets, video and other embedded objects, off-site adverts and so on – perhaps a hundred per page. This adds unnecessary strain to the web server and slows page loading because HTTP 1.1 allows only one request at a time on each connection.
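
You can see this one-at-a-time behaviour using Python’s standard library: on a single HTTP 1.1 connection, each response must be read in full before the next request can usefully be sent (the host and paths below are placeholders):

    # HTTP 1.1: one request-response pair at a time per connection.
    # Fetching several resources means waiting out each round trip in turn.
    import http.client

    HOST = "www.example.com"                   # placeholder host
    PATHS = ["/", "/style.css", "/logo.png"]   # placeholder resources

    conn = http.client.HTTPSConnection(HOST)
    for path in PATHS:
        conn.request("GET", path)        # send the next request...
        response = conn.getresponse()    # ...only after the previous
        body = response.read()           # response has been read in full
        print(path, response.status, len(body), "bytes")
    conn.close()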


HTTP 1.1 is also sensitive to high-latency connections – those with a slow response time. This can be a big problem when working on a mobile device using cellular networks, where even a high-speed connection can feel slow. HTTP pipelining allows the browser to send another request while waiting for the response to a previous one. While this would go some way to tackling high latency, it is susceptible to problems of its own: pipelined responses must be returned in the order requested, so a single slow response holds up everything queued behind it (so-called head-of-line blocking), and for this reason pipelining is disabled by default in most browsers.


The benefits of HTTP/2


Rather than using clear text, HTTP/2 is a binary protocol, which is quicker to parse and more compact in transmission. While HTTP 1.1 had four different ways to handle a message, HTTP/2 reduces this to one. To tackle the multiple-request issue, HTTP/2 allows only one connection per site but uses stream multiplexing to fit many requests into that single connection. These streams are also bi-directional, which allows both the web server and the browser to transmit within a single connection. Each stream can be prioritised, so browsers are able to determine which image is the most important, or to prioritise a new set of streams when you change between browser tabs.
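
As a rough sketch of what this looks like from a developer’s point of view, the third-party Python library httpx (installed with its optional http2 extra) can negotiate HTTP/2, after which concurrent requests to the same site travel as separate streams over one multiplexed connection; the URLs here are placeholders:

    # HTTP/2 multiplexing via the third-party httpx library
    # (pip install "httpx[http2]"). Concurrent requests to one origin
    # are carried as separate streams on a single connection.
    import asyncio
    import httpx

    async def fetch_all() -> None:
        urls = [f"https://www.example.com{p}"            # placeholders
                for p in ("/", "/style.css", "/logo.png")]
        async with httpx.AsyncClient(http2=True) as client:
            responses = await asyncio.gather(*(client.get(u) for u in urls))
            for r in responses:
                # http_version reports "HTTP/2" when negotiation succeeded
                print(r.request.url, r.status_code, r.http_version)

    asyncio.run(fetch_all())

If the server doesn’t support HTTP/2, the library simply falls back to HTTP 1.1, so the same code works either way.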


HTTP is a stateless protocol – every connection comprises a request-response pair unconnected to any connection before or after. This means every request must also include all the relevant data about the connection, which is sent in HTTP headers. As HTTP 1.1 evolved, these headers grew larger as they incorporated new features. HTTP/2 uses header compression to shrink this overhead and speed up the connection, while also improving security.
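
The compression scheme, HPACK, rests on a simple idea: both ends keep a table of headers they have already exchanged, so a repeated header can be sent as a small index rather than as full text. The Python sketch below illustrates only that indexing idea – the real wire format (RFC 7541) adds a static table and Huffman coding:

    # Much-simplified illustration of HPACK-style header compression:
    # headers seen before are replaced by an index into a shared table.
    class HeaderTable:
        def __init__(self):
            self.table = {}  # (name, value) -> index

        def encode(self, headers):
            encoded = []
            for entry in headers:
                if entry in self.table:
                    encoded.append(("index", self.table[entry]))  # tiny
                else:
                    self.table[entry] = len(self.table)
                    encoded.append(("literal", entry))            # full text
            return encoded

    table = HeaderTable()
    request = [("user-agent", "ExampleBrowser/1.0"), ("accept", "text/html")]
    print(table.encode(request))  # first request: all sent as literals
    print(table.encode(request))  # repeat request: all sent as indices

The saving on a real connection is significant, since headers such as cookies and user-agent strings are repeated on almost every request.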


A final addition is server push. When a web page is requested, the server sends back the page but must then wait for the web browser to parse the page’s HTML and issue further requests for the things it finds in the code, such as images. Server push allows the server to send all the resources associated with a page as soon as the page is requested, without waiting. This will cut a lot of the latency associated with web connections.


Web version 2?


Once web servers and web browsers start implementing HTTP/2 – which could be as soon as a few weeks from now – the web-browsing experience will feel quicker and more responsive. It will also make developers' lives easier, as they will no longer have to work around the limitations of HTTP 1.1.


In fact, some of the latest versions of popular browsers (Firefox v36, Chrome v40 and Internet Explorer v11) already support HTTP/2. Chrome and Firefox will use HTTP/2 only over encrypted (SSL/TLS) connections – which, along with the Let’s Encrypt initiative, will probably boost the adoption of encryption more widely.


The Conversation

Tiny cell superheroes are suiting up to give bone cancer the boot!

Imagine your body is a sprawling, high-tech kingdom, and usually, your immune system is the elite police force keeping everything...