
Tuesday, September 1, 2015

Shift from electronics to spintronics opens up possibilities of faster data

levoodoo, CC BY-NC

Electronics is based on measuring the tiny electrical charge of electrons passing through electronic circuits. An alternative approach under development is spintronics, which instead relies not on electrons’ charge, but on another of their fundamental quantum-mechanical properties: spin.

Spin can be visualised as the Earth turning on its own axis while rotating around the sun. In the same way, an electron spins on its own axis while rotating around an atom’s nucleus. Spin is either “up” or “down”. In the same way traditional electronics uses charge to represent information as zeros and ones, the two spin states can be used to represent the same binary data in spintronics.
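The up/down-as-binary idea above can be made concrete with a toy sketch (plain Python, purely illustrative; the mapping and function name are my own invention, not any spintronics API):

```python
# Treat electron spin states as binary digits, as described above:
# "up" = 1, "down" = 0. This is a conceptual model only.
SPIN_TO_BIT = {"up": 1, "down": 0}

def spins_to_byte(spins):
    """Pack a sequence of eight spin states into one integer byte."""
    assert len(spins) == 8
    value = 0
    for spin in spins:
        value = (value << 1) | SPIN_TO_BIT[spin]
    return value

# The letter 'A' (binary 01000001) encoded as eight spin states:
states = ["down", "up", "down", "down", "down", "down", "down", "up"]
print(chr(spins_to_byte(states)))  # prints "A"
```

In a real spintronic device the "read-out" would of course be a physical measurement of magnetic orientation, not a dictionary lookup.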

Spin can be measured because it generates tiny magnetic fields. Ferromagnetic metals such as iron become magnetic, for example, when enough of their electrons have spins aligned in the same direction, together generating a magnetic field with the same polarity as the spins.

Spintronics has several advantages over conventional electronics. Electronics requires specialised semiconductor materials to control the flow of charge through transistors, whereas spin can be measured very simply in common metals such as copper or aluminium. And because flipping an electron’s spin takes less energy than driving the current needed to maintain charge in a device, spintronic devices use less power.

Spin states can also be set quickly, which makes transferring data faster. And because maintaining a spin state requires no energy, spin is non-volatile – information stored using spin remains fixed even after loss of power.

Upgrading hard disks using spin

The first application of spintronics to computers saw Professors Albert Fert and Peter Grünberg awarded the 2007 Nobel Prize in Physics for their discovery of giant magnetoresistance (GMR). They realised it was possible to use electron spin to increase the rate at which information could be read from a hard disk drive and developed ground-breaking technology to harness this feature.

A hard drive, showing circular platters and read/write head mounted at the tip of the arm. drive by mike mols/shutterstock.com

A hard disk drive stores data as ones and zeros encoded magnetically on rotating disk platters within the drive. The magnetic field is generated when electrons flow through wire coils mounted in the drive write heads which move across the face of the platters, changing the alignment of the magneto-sensitive particles on the platter surface. Reversing the electron flow reverses the field; the two directions represent one and zero. To read from the disk the process works in reverse.

A hard disk drive read/write head. amagill, CC BY

A GMR drive head consists of two ferromagnetic layers, one with a fixed magnetic field direction and the other free to align with the magnetic field encoded on the disk, with a non-magnetic layer sandwiched in between.

When an electron passes through a magnetic layer it may be deflected, a process known as scattering. Where electrons have random spin states, this scattering creates greater resistance to electric current. By aligning the electrons’ spin states with the magnetic fields in the layers of the drive head, GMR technology dramatically reduces resistance, speeding up data transfer. First introduced by IBM in 1997, GMR technology has led to faster and higher-density drives than were previously possible.

Putting a fresh spin on memory

Spintronics researchers have since been working on introducing the same technology to computer memory, aiming to replace electric current-based dynamic random access memory (DRAM) with magnetic RAM (MRAM). The first commercial product by Everspin has been used in Airbus aircraft and BMW motorbikes due to its reliability under heat stress or cosmic-ray exposure – something that affects aircraft cruising at high altitudes.

MRAM exploits the same spin-based magnetic field approach, but uses a magnetoresistance cell to store data rather than a spinning disk platter as in a hard drive. While MRAM is not as fast as DRAM, its magnetic cells maintain their stored spin orientations – and so the data they represent – without power. MRAM is likely first to replace commonly used flash memory such as SD cards and compact flash, as it is faster and doesn’t suffer from flash memory’s limited lifespan.

Other manufacturers such as Intel, Qualcomm, Toshiba and Samsung are developing MRAM for use as processor cache memory. By virtue of their smaller cell size, MRAM chips of greater capacity can be incorporated into smaller packages that will be faster and use up to 80% less power than current cache memory.

As electronics approaches the limits of silicon, spintronic components will play an important role in ensuring we enjoy steady performance gains, and faster, higher-capacity storage at lower power and cost.

The Conversation

Wednesday, July 8, 2015

BBC micro:bit aims to turn children from digital consumers into digital creators

Good things come in small packages, but are all small packages a good thing? BBC

The way computing is taught in schools is going through its greatest upheaval since the subject was first introduced at the turn of the century. After considerable lobbying by the industry, professional societies, universities and schools, the national curriculum has been re-oriented towards establishing computing as the “fourth science” for schools.

Out go interminable lessons on how to use specific word processor or spreadsheet applications. In comes more rigorous teaching about the scientific principles of technology and how to put it to use creatively – to be taught, importantly, by example rather than by rote.

Plugged in to this change of tack is the recently announced BBC micro:bit, a tiny, inexpensive pocket-sized computing device. The BBC plans to give away a million of these devices this autumn, one free to every Year 7 child (aged 11 to 12) in the country, to encourage children to become a generation of digital creators.

Conceived by the BBC, the micro:bit has been developed by organisations including ARM, Microsoft, Freescale, Nordic Semiconductor, Element 14, Samsung and Lancaster University.

One of the goals of this initiative is to produce more students who go into computer science and related fields of study with a better understanding of technology, transforming the UK from a nation of digital consumers into a creative powerhouse. To place this in context, a recent House of Lords select committee report suggested some 35% of current jobs in the UK could be lost to automation over the next 20 years – in my view an understatement. The national strategy is clear: to create new jobs through digital innovation. This will only be possible if today’s children are adequately skilled and motivated to rise to the challenge.

This is an ambitious strategy, and one that requires an equally ambitious approach to delivering it. The response to this call to arms has been quite simply staggering – not from the government, but from enthusiasts, volunteers and evangelists. To name but a few of the organisations picking up the challenge: Computing At School supports almost 20,000 teachers through over 600 regional UK hubs, Code Club organises after-school programming clubs throughout the UK, and TeenTech exposes teenagers to the wide range of career possibilities in science, engineering and technology.

The BBC’s first foray into computers was more than 30 years ago. Stuart Brady

Not the BBC’s first computer

But we’ve been here before. It’s hard not to draw a parallel to the Model B, a computer released by the BBC in the 1980s for the same reason – to forge a new technologically-savvy generation of future students, innovators and entrepreneurs. The BBC Model B typified the computer of its day, a modestly-specified desktop machine with integrated keyboard that could connect to a home television. At just 4cm by 5cm, the micro:bit is a very 21st-century computer: compact, with built-in sensors and wireless communication, and an ARM Cortex M0 processor around 18 times more powerful than its forerunner.

Packed full: what the Micro:bit comes with. BBC

The micro:bit supports Bluetooth Low Energy, which means it can interact wirelessly with other nearby devices such as mobile phones and tablets. This pitches it more as part of the emerging internet of things, made up of small low-powered devices that can provide services or data to other, more powerful devices such as smartphones.

It has a simple display of 25 LEDs arranged in a 5x5 matrix, just enough for simple graphics and text, and it is also equipped with an electronic compass and three-axis accelerometer so it can detect its orientation, and standard connectors that provide an easy way for children to integrate the micro:bit into their own creative electronics projects. It’s not a motivational toy – it’s a computing case study, a simple demonstrator for how complex computers can be used.

Building a community, not just a device

Running the code simulator on a tablet. BBC

The device is only part of the package. The micro:bit can be programmed through the web in a variety of programming languages tailored to different levels of ability: highly visual drag-and-drop languages ideal for beginners, Microsoft’s Touch Develop, Python and JavaScript for intermediate users, and C++ for older, experienced programmers. The accompanying website also gives teachers access to pre-written learning resources, a platform to create and share their teaching materials with other teachers, and, where appropriate, a place to publish their pupils’ work to inspire others.
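To give a flavour of what a beginner might write, here is a short sketch in MicroPython, the Python dialect the micro:bit runs. The `display`, `accelerometer` and `sleep` names are part of the standard `microbit` module; the `tilt_to_column` helper is my own illustration. It lights a column of the 5x5 LED matrix according to how the board is tilted:

```python
def tilt_to_column(x, width=5):
    """Map an accelerometer x reading (roughly -1024..1024 milli-g)
    to an LED column index 0..width-1."""
    x = max(-1024, min(1024, x))          # clamp out-of-range readings
    return min(width - 1, int((x + 1024) * width / 2048))

try:
    # Only available when actually running on a micro:bit.
    from microbit import accelerometer, display, sleep

    while True:
        col = tilt_to_column(accelerometer.get_x())
        display.clear()
        for row in range(5):
            display.set_pixel(col, row, 9)  # full brightness
        sleep(100)                          # milliseconds
except ImportError:
    pass  # running off-device; only the pure helper function is usable
```

Tilt the board left and the lit column slides left; tilt right and it slides right – a two-minute demonstration of sensors driving output.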

After the initial 1m units are delivered, the devices will be available for commercial purchase with proceeds directed to a not-for-profit foundation. All the micro:bit hardware and software will be open-sourced, allowing others to build on the foundations laid.

The micro:bit is aimed at fostering an ecosystem to support digital creativity, balancing motivation and education while reducing the barrier to entry for both children and teachers. Will it be successful, inspiring fond memories in a generation like the Model B? We’ll see.

The Conversation

Tuesday, July 7, 2015

Do 3D films make you dizzy – or is it just your imagination?

3D films had a strange effect on Jason. Shutterstock

The realism of today’s 3D blockbusters can blow audiences away. By using 3D glasses to present different images to the two eyes, stereoscopic 3D technology fools the brain into believing it is viewing a real scene rather than a flat image on a screen. Now 3D televisions enable viewers to experience the effect at home as well.

Yet 3D has not become as popular as some might have hoped. Many people say watching 3D gives them unpleasant side-effects such as headache or nausea. Scientists don’t fully understand why this is. It’s true that badly made 3D effects can cause discomfort. However, makers of 3D content are well aware of the possible issues and work hard to avoid them.

A more fundamental problem may be conflict between different senses. When we watch a film such as Avatar, our visual system may tell us that we are wheeling high in the skies of a distant moon, but other senses tell us that we are sitting motionless in a chair. Of course, 2D films present this kind of conflict as well, but our brains may simply be more used to accepting that 2D content is not “real”.

Some people have suggested that 3D content may cause more serious side effects. For example, Samsung’s safety leaflet links its 3D TV set to a vast range of possible symptoms – not only headache, fatigue, motion sickness and eye strain, but also decreased postural stability, altered vision, dizziness, cramps, convulsions and even loss of awareness. Clearly if 3D TV has such effects, there are important safety implications. But to date, very little work has been done to assess this.

We recently invited 433 volunteers, aged from 4 to 82 years, into my lab to watch the film Toy Story on either a 2D or 3D TV. We used two common types of 3D TV, known as “active” and “passive”. Participants carried out a battery of tests designed to assess their balance and coordination, both before and after viewing. They wore two triaxial accelerometers – small devices to record their body movements – as they walked around a simple obstacle course. To assess eye-hand coordination, participants played a “buzz the wire” game, guiding a hoop along a convoluted wire track without allowing the two to come into contact.

We argued that, if viewing 3D made participants dizzy, they would take longer to complete the obstacle course, and/or the accelerometers would show that their body movements were less stable. If it affected their vision, they would take longer to complete the “buzz the wire” game, and/or make more mistakes.

Some people have suggested that adverse effects with 3D reflect underlying visual problems. So we also had our volunteers’ vision thoroughly assessed by eye care professionals before they visited the lab.

Of course, Holly’s nausea had nothing to do with the 1kg of popcorn she’d just eaten. Shutterstock

On our objective tests of balance and coordination, we couldn’t detect any effects of 3D at all. Not surprisingly, people tended to perform a little better the second time round. But it didn’t seem to matter whether they had watched the film in 2D or 3D, or whether the 3D was active or passive. We also couldn’t find any links between age or eyesight and whether people were affected by 3D.

We did find that people who had viewed the 3D movie reported that the depth was more realistic. They also reported more adverse effects, mainly headache and eye strain, but also including dizziness or nausea. However, it’s not clear that the dizziness was really due to 3D.

Craftily, we gave some of our volunteers 3D glasses, making them think they were viewing in 3D, but showed them the film in 2D. These people reported dizziness at about the same rate (3%) as those viewing real 3D. In contrast, people viewing real 3D were much more likely to report headache or eyestrain (around 10%) than people who just thought they were viewing 3D. This suggests that while 3D gives some people a headache, it doesn’t really make people dizzy – people just expect it to.

Of course, it’s possible that 3D caused an impairment that was so subtle or transient that our tests failed to detect it. On the other hand, that also implies less cause for concern in everyday life. We also tested only one 3D film, choosing Toy Story as something fun and engaging for all age groups. Even if computer-generated 3D from the experts at Pixar doesn’t cause dizziness, it remains possible that less carefully controlled 3D content – say, live-action football – could do so.

Nevertheless, given the lack of previous work in this area, our study provides welcome reassurance. Can 3D effects give you a headache? Yes, for some people. Can they make you dizzy? Probably not. Do they make Toy Story more exciting? That depends who’s watching.

The Conversation

Wednesday, May 6, 2015

'Windows 10 on everything' is Microsoft's gambit to profit from its competitors

Windows on anything means revenue from everything, at least that's the idea. gadgets by aslysun/shutterstock.com

Microsoft’s aim to make Windows 10 run on anything is key to its strategy of reasserting its dominance. Seemingly unassailable in the 1990s, Microsoft’s position has in many markets been eaten away by the explosive growth of phones and tablets, devices in which the firm has made little impact.

To run Windows 10 on everything, Microsoft is opening up.

Rather than requiring Office users to run Windows, now Office365 is available for Android and Apple iOS mobile devices. A version of Visual Studio, Microsoft’s key application for programmers writing Windows software, now runs on Mac OS or Linux operating systems.

Likewise, with tools released by Microsoft, developers can tweak their Android and iOS apps so that they run on Windows. The aim is to allow developers to create, with ease, the holy grail of a universal app that runs on anything. For a firm that, like Apple and many other tech firms, has been unflinching in taking every opportunity to lock users into its platform, this is a major change of tack.

From direct to indirect revenue

So why is Microsoft trying to become a general purpose, broadly compatible platform? Windows' share of the operating system market has fallen steadily from 90% to 70% to 40%, depending on which survey you believe. This reflects customers moving to mobile, where the Windows Phone holds a mere 3% market share. In comparison Microsoft’s cloud infrastructure platform Azure, Office 365 and its Xbox games console have all experienced rising fortunes.

We’re way into the post-PC era. Blake Patterson, CC BY

Lumbered with a heritage of Windows PCs in a falling market, Microsoft’s strategy is to move its services – and so its users – inexorably toward the cloud. This divides into two necessary steps.

First, software developed for Microsoft products should run on all of them – write once, run on everything. As it is, there are several different Microsoft platforms (Win32, WinRT, WinCE, Windows Phone) with various incompatibilities. Unifying them makes sense, both for a uniform user experience and to maximise revenue by reaching as many devices as possible.

Second, to implement a universal approach so that code runs on operating systems other than Windows. This has historically been fraught: differences in how software communicates with hardware, and differences in processor architecture, have made it difficult. In recent years, however, improving virtualisation has made it much easier to run code across platforms.

It will be interesting to see whether competitors such as Google and Apple will follow suit, or further enshrine their products into tightly coupled, closed ecosystems. Platform exclusivity is no longer the way to attract and hold customers; instead the appeal is the applications and services that run on them. For Microsoft, it lies in subscriptions to Office365 and Xbox Gold, in-app and in-game purchases, downloadable video, books and other revenue streams – so it makes sense for Microsoft to ensure these largely cloud-based services are accessible from operating systems other than just their own.

The Windows family tree … it’s complicated. Kristiyan Bogdanov, CC BY-SA

Platform vs services

Is there any longer any value in buying into a single service provider? Consider smartphones from Samsung, Google, Apple and Microsoft: prices may differ, but the functionality is much the same. What differentiates them is the wearables and internet of things devices they support (for example, the Apple Watch), the devices these connect with (for example, an iPhone), the size of their user communities, and the resulting network effect.

From watches to fitness bands to internet fridges, the benefits lie in how devices are interconnected and work together. This is a truly radical concept that demonstrates digital technology is driving a new economic model, with value associated with “in-the-moment” services when walking about, in the car, or at work. It’s this direction that Microsoft is aiming for with Windows 10, focusing on the next big thing that will drive the digital economy.

The revolution will be multi-platform

I predict that we will see tech firms try to grow ecosystems of sensors and services running on mobile devices, either tied to a specific platform or by driving traffic directly to their cloud infrastructure.

Apple has already moved into the mobile health app market and connected home market. Google is moving in alongside manufacturers such as Intel, ARM and others. An interesting illustration of this effect is the growth of digital payments – with Apple, Facebook and others seeking ways to create revenue from the traffic passing through their ecosystems.

However, the problem is that no single supplier – whether a platform firm such as Google, Apple or Microsoft, or an internet service such as Facebook or Amazon – can hope to cover all the requirements of the internet of things, which is predicted to scale to over 50 billion devices worth US$7 trillion in five years. As we become more enmeshed with our devices, wearables and sensors, demand will rise for services driven by the personal data they create. Through “Windows 10 on everything”, Microsoft hopes to leverage not just the users of its own ecosystem, but those of its competitors too.

The Conversation

Tuesday, April 7, 2015

Amazon Dash is a first step towards an internet of things that is actually useful

From 1-click to 1-push ordering with Amazon's Dash Button. Amazon

The internet of things has attracted a lot of attention and generated considerable column inches; and yet, despite all the attention, has remained pretty much absent – an internet of vaporware.


Samsung wants to internet-connect all the items in your home and major firms such as Cisco, IBM, and Apple are all keen to get involved in… in whatever it is.


Sometimes “smart” devices have really been about proof of possibility rather than producing any significant improvement in functionality or additional benefit to the consumer.


Finally Amazon, very much a real, non-vaporous company, has produced an internet of things device called the Dash Button. The marketing hype that Amazon is building around Dash goes some way to hiding its mundane nature. While the hand-held Dash device can automatically place orders for household goods by scanning barcodes or through speech recognition, the cut-down Dash Button is a small, push-button fob to keep next to, for example, the washing machine, in order to order a single product such as washing powder with a single press. Using the household Wi-Fi network to connect to Amazon’s website, the device places the order and delivery is arranged, with payment and address details already in place as part of the householder’s Amazon Prime account.
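The flow just described is, at heart, very simple. Here is a hypothetical sketch of it in Python: one press of a pre-configured fob produces an order against stored account details. The `Button` and `Order` types and the `press` function are invented for illustration – Amazon’s actual protocol is not public:

```python
from dataclasses import dataclass

@dataclass
class Button:
    product: str   # the single product this fob is bound to
    account: str   # the Prime account holding payment and address details

@dataclass
class Order:
    product: str
    account: str
    status: str

def press(button):
    """Simulate a button press: place an order for the bound product,
    using the payment and delivery details already on the account."""
    return Order(product=button.product, account=button.account, status="placed")

fob = Button(product="washing powder", account="householder@example.com")
print(press(fob).status)  # prints "placed"
```

The real device adds only plumbing on top of this: joining the household Wi-Fi and calling Amazon’s ordering service rather than constructing an object locally.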


Amazon Dash, a wand to order household goods. Amazon


To keep an edge, move quickly


Dash is not necessarily a surprising move for Amazon. This is after all the same e-commerce company that “created” the “1-Click” ordering system and even patented it in the US in 1997 (to great scorn), although the patent was rejected in Europe 14 years later.


As a further weapon in the Amazon armoury that includes competitive pricing, efficient delivery supply chains and huge choice of stock, shrinking the purchasing process to its most simple is an obvious element in Amazon’s competitive advantage. Maintaining that advantage requires the company to have sufficient vision, preparedness and ability to take risks in order to implement new technological developments in a retail context when opportunities arise.


This risk can sometimes bring significant rewards for the company. For example Amazon was quick to respond to the growth of the cloud for business, with its Amazon Web Service now one of the leading cloud computing and storage services – confirming Amazon as a leading technology company, not just a shopfront. On the other hand the Amazon recommendation system Grapevine was also proof that sometimes the company is slow to act, or can miss the market altogether.


A ‘future’ that’s older than you’d think


An internet of things device such as the Dash Button is relatively mundane. There have been more ambitious and visionary attempts to simplify grocery shopping in the past. For example, LG’s Internet Digital DIOS – the original internet fridge – arrived back in 2000, but the “smart fridge” failed to interest consumers. The technology has existed for some time, then; consumers’ willingness to accept this level of automation has taken much longer to evolve.


cheezburger.com


Nor does Dash really represent the full potential for the internet of things in that it still requires human interaction – pressing the button – to place the order. Ultimately, shouldn’t devices automate, not just simplify, such mundane necessities as restocking washing powder? Although technically possible, this degree of automation (a promise of internet fridges) remains a step too far for the majority of consumers and the Dash is the acceptable compromise.


Sometimes the first examples of products demonstrating a new and innovative technology prove to be beyond the wants, skills or preparedness of would-be consumers. More recent devices are certainly simpler and more accessible than an internet fridge, and perhaps more commercially viable too: first steps – small steps, but steps nonetheless – towards a more fully evolved potential. But if you want insight into what the future will look like, just scroll back to the past.


The Conversation

Monday, March 9, 2015

Apple may have arrived late to the party, but with Watch it's brought a gun to a swordfight

It's arrived: Tim Cook's watch of many colours. Kay Nietfeld/EPA

While all eyes and ears were trained on news of its smartwatch, Apple also used its spring Keynote to introduce changes to Apple TV, revisions to its laptop lineup, and a new service that builds on the health monitoring aspects of smartwatches to perform data collection for medical research.


As one digital TV service after another launches, many have been left wondering when HBO, whose television dramas are highly sought and widely watched properties, would play its hand. And here it is: a partnership with Apple that makes the entire HBO back catalogue available through the new HBO Now digital streaming service, available exclusively through Apple TV. So while the Apple TV hardware hasn’t been updated for years, the partnership with HBO (and a price drop to £59) is a nice reminder for those who may have overlooked it.


Apple has extended its reach into car dashboards with CarPlay, into home automation with HomeKit, and into health monitoring with HealthKit. Apple hopes that ResearchKit, a new open-source API and service, will form the foundation for apps that can collect health data from larger numbers of volunteers, increasing sample sizes and frequency of data collection, making the data more useful for researchers. Five apps have been developed so far, to investigate Parkinson’s Disease, asthma, diabetes and cardiovascular disease with research groups in leading hospitals. There is an emphasis on privacy, with the user controlling the degree of information that is being shared.


The new MacBook finally brings a retina display, a faster, more energy-efficient processor, and a trackpad that can supply tactile feedback. It is lighter and thinner, has a re-engineered keyboard and somewhat controversially rolls many ports into just one: the USB-C standard port, which will handle HDMI video, external hard drives and other USB peripherals. Inevitably this is going to mean buying another set of cables.


The new MacBook with its shiny retina screen. Kay Nietfeld/EPA


Watch my watch


In any other keynote this reveal would have been the main news item. But of course the main event was the watch. Seven months after Tim Cook first revealed the device, it has been a long wait for more technical details. Opinion is still split on whether it will be a hard sell. With fewer people wearing watches at all, the market is split between those who want a fitness tracker and those who want a beautiful luxury object. Is there a need for a device which essentially duplicates the functionality of a smartphone? Apple has to convince us that the watch offers more: clear cases where glancing at a wrist beats pulling out a phone.


Where Apple usually restricts its devices to only one or two colours, this time it offers 20 different combinations of size, colour, and case and strap material – probably a necessity in order to sell a device that, by nature of being frequently visible, is more fashion than function.


One watch to rule them (but in many colours). Martyn Landi/PA


The styling of the watch itself is reminiscent of the first iPhone, with three versions in two sizes, 38mm or 42mm high: the cheapest Apple Watch Sport at £299 with an aluminium body and plastic straps; the mid-tier Apple Watch from £479 in stainless steel with wrist bands in leather, steel or plastic; and the gold Apple Watch Edition, which starts at £8,000 – perhaps more expensive even than the Apple Lisa from 1983, which sold for US$9,995 at the time.

All the information, but smaller and nearer. Apple


Most of the functionality of the watch requires an iPhone within a few metres – maps, messages, Siri and other apps are relayed from the phone using WiFi or mobile data. Apple suggests that the battery will last 18 hours in a typical day.


Not first to market, but best?


Apple invests heavily in research and development to create new devices and interfaces that differentiate its products – at least until competitors release their responses. The watch uses an Ion-X glass or sapphire crystal screen that can sense varying degrees of pressure. The side-mounted dial, which Apple terms a digital crown, enables scrolling and clicking, and a button below it jumps to frequent contacts. A “Taptic” engine provides vibration feedback for certain apps, for example to suggest directions in Maps. Sensors on the watch’s underside detect the wearer’s heartbeat and combine with the accelerometer to measure physical activity, something Apple is pitching as a major selling point.


Developers are already creating software that will extend their iPhone apps to interact with and be accessible from the watch, as Apple has with its Apple Pay contactless payment system. Miniature messages appear on the device in what Apple calls Glances, giving the impression of dealing with such messages quickly without the hassle of pulling out a phone.


From watches to smartwatches, with only a little relief. XKCD, CC BY-NC


Will it sell? In the past 18 months customers have bought 5m smartwatches or fitness bands; Samsung has flooded the market with smartwatch models, but fitness bands account for the majority of sales. Current estimates suggest that Apple could sell more than 8m watches, eight times as many as its largest competitor.


While many of its features will appear in competitors’ smartwatches in the years to come, for the moment Apple’s watch is best in class. To sound a note of caution: like the first-generation iPhone, the second-generation device will probably be half as thick and run twice as long. You may be unfazed by the risks of being an early adopter, but if the idea of paying another few hundred pounds for the latest model next year isn’t appealing, it may be sensible to wait.


The Conversation
