Blue is the colour: why do we see #TheDress so differently?

Not since Liz Hurley's safety pin number has a dress caused such a stir. Tumblr

There has been passionate debate on the internet over a blue and black dress that to some people – perhaps even the majority – appears white and gold. But what is the reason behind a discrepancy that has caused such division within households and offices?


The story began when Caitlin McNeill posted a picture of a dress worn to a wedding; nobody could agree on what colour it was. The picture, and the debate, went viral.



The physical colour of light coming from an object is made up of two components: the colour of the object and the colour of the light that shines upon it. The combined colour can vary widely. For example, daylight is much redder in the evening than at midday. Most of the time, however, we need to know the colour of the object, not the colour of the illumination – we don’t want to think that fruit is ripe in the evening just because the sun is going down. Our visual system has therefore developed to discount the colour of the illumination.


Colour vision in particular works in three main stages. We have receptors in the eye for just three colours, roughly, blue, green and red. We then make comparisons between the colours in a further three ways: red versus green, blue versus yellow (the sum of red and green for lights) and bright versus dark.


These comparisons are made over different times and different points in space. So if we stare at a blue screen for a long time, when we then view a white object it may appear yellow. Similarly, an orange spot will appear red when surrounded by green, and green when surrounded by red.


Finally, the colour signals are passed to the visual cortex, where the brain tries to work out the colour of the illumination from all the colours available and then calibrates the colours of the objects according to this illumination colour. This is called colour constancy, and it’s what stops the apparent colours of objects changing during the day.


The two-tone dress that sparked global debate. Joe Giddens/PA Wire


With the blue dress, it’s possible that some people have more blue photo-receptors than other people, or that their colour contrast system is biased for or against blue. But the most likely explanation is that colour constancy is responsible for the differences of opinion. When the dress occupies most of the image, some people take the blue cloth as their reference for the illumination and so see the dress as white cloth in blue light and the black frills as gold or yellow.


Yellow things don’t reflect much blue and so ought to look dark in a blue light. People correcting for a blue light will therefore see the dark frills as yellow. But other people may use the background objects as their white reference and so see the dress as a blue garment in a yellowish-white light. These people will see the frills as dark because dark frills remain dark when compared with a white light.
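To make the mechanism concrete, here is a minimal sketch of a von Kries-style illuminant correction of the kind colour constancy performs. The pixel values and illuminant estimates are invented for illustration (they are not measurements from the actual photograph); the point is only that the very same pixel comes out near-white once a bluish light is discounted, and strongly blue once a yellowish light is discounted.

```python
# A toy von Kries-style colour-constancy correction. All RGB values and
# illuminant estimates below are illustrative assumptions, not measurements
# taken from the real photo of the dress.

def discount_illuminant(pixel, illuminant):
    """Divide each channel by the assumed illuminant colour (normalised to a peak of 1)."""
    peak = max(illuminant)
    return tuple(round(p / (i / peak)) for p, i in zip(pixel, illuminant))

body_pixel = (120, 130, 170)   # a bluish-grey pixel from the body of the dress (assumed)
lace_pixel = (110, 90, 60)     # a brownish pixel from the lace/frills (assumed)

bluish_light = (0.75, 0.85, 1.00)   # viewer assumes the dress sits in bluish shadow
warm_light   = (1.00, 0.95, 0.80)   # viewer assumes warm, yellowish illumination

# Discounting a bluish light pushes the body pixel towards neutral (white-ish)
# and the lace towards warm gold:
print(discount_illuminant(body_pixel, bluish_light))  # -> (160, 153, 170)
print(discount_illuminant(lace_pixel, bluish_light))  # -> (147, 106, 60)

# Discounting a warm light instead leaves the body pixel clearly blue; combined
# with the assumption of a bright light, the lace then reads as dark or black
# (perceived lightness is not modelled in this simple sketch):
print(discount_illuminant(body_pixel, warm_light))    # -> (120, 137, 212)
print(discount_illuminant(lace_pixel, warm_light))    # -> (110, 95, 75)
```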



How some frogs survive killer fungus may reveal new weapon in fight to save amphibians

A Madagascan bright-eyed frog (Boophis rappiodes), one of more than 400 species on the island. Axel Strauß, CC BY-SA

The loss of amphibian species across the world from chytridiomycosis, an infectious disease caused by the fungal pathogen Batrachochytrium dendrobatidis (Bd), has been described as “the most spectacular loss of vertebrate biodiversity due to disease in recorded history”. So it’s of grave concern that the pathogen has been discovered in Madagascar, an incredibly biodiverse region previously thought free of the fungus.


Madagascar has the 12th-highest amphibian species richness in the world, with more than 400 species, 99% of which are indigenous to the region. But this biodiversity hotspot is already under severe pressure – a quarter of its species are under threat, according to the latest Global Amphibian Assessment. It’s rightly feared that the arrival of Bd, as reported in the journal Scientific Reports, could bring about mass amphibian decline – and even extinctions – as has been seen elsewhere.


A scanning-electron micrograph of a chytrid fungus (Bd) spore. Alex Hyatt/CSIRO, CC BY


Testing of the samples of the Bd fungus found in Madagascar reveals the strain is closely related to BdGPL, the hyper-virulent lineage behind all the known outbreaks of the chytrid fungus pathogen that have decimated amphibian populations. What’s interesting, however, is that the rate of infection is extremely low and there are no clinical signs of chytridiomycosis: the frogs carry the fungus, but they haven’t developed the disease.


What could this mean?


This discovery presents us with a number of scenarios, which need further investigation.


Perhaps the comprehensive monitoring plan put in place by A Conservation Strategy for the Amphibians of Madagascar (ACSAM) has worked as planned, in that the presence of the Bd pathogen has been detected – for the first time in 2010 – before amphibian declines have occurred.


Perhaps the strain of Bd detected in Madagascar is not a virulent kind that poses a serious threat to amphibians. This was seen with the introduction of the BdCape fungus lineage into Mallorca, where it had little effect on the population of Alytes muletensis toads there.


It’s possible that the Bd detected in Madagascar has been present on the island for a long time, but undetected. It may be an endemic, non-virulent lineage as seen in Brazil and Asia, where certain lineages endemic to the regions appear to have evolved alongside the native amphibians.


Or perhaps there is an endemic, previously undetected chytrid fungus on the island, related or not to Bd, which could be acting as a buffer for local amphibians against the invasion of BdGPL – acting, in effect, as a natural vaccine.


Alternatively, Malagasy amphibians may have developed some intrinsic resistance to Bd, for example through protective bacteria in their skin. This could explain the low infection rates and the ambiguous test results reported in the paper showing that some Bd-positive samples did not conform to any known lineage of the fungus. Although rare, resistance to BdGPL is not unprecedented – this has been seen and documented in Brazil.


The last known surviving Rabb’s Fringe-limbed Treefrog, a species ravaged by the Bd fungus. briangratwicke, CC BY


A potential threat or a potential benefit


The first scenario would be a disaster – and should be treated as the priority. If this turns out to be the case, the survival of Malagasy amphibians could depend on the conservation and scientific groups involved in ACSAM managing to restrict the spread of the disease. That would mean tackling invasive species such as the Asian toad that might spread the disease and ensuring tourists and researchers stick to strict hygiene protocols. It might even require more drastic conservation measures, such as taking animals from particularly vulnerable species into captivity for breeding.


On the other hand, the fourth scenario presents an intriguing possibility: if it’s the case that Malagasy amphibians are resisting a fungal invasion, discovering how this works could provide crucial information to help save amphibians elsewhere from the disease.


The research on the amphibian skin microbiome, for example, and its role in the animal’s immune system is producing some exciting results. It’s also apparent that the diversity of chytrid fungus species as a whole, and of Bd in particular, has not been fully appreciated. It’s possible there are many types of chytrid fungus associated with amphibians that we’re not yet aware of, which provide some protection against BdGPL.


So without a doubt, this report will sound warning bells loud and clear for conservationists, and Bd’s appearance in Madagascar could still result in a huge loss of amphibians. However, the lack of chytridiomycosis symptoms also suggests there’s something special in Madagascar that could yield a breakthrough in understanding how the disease spreads – something that may benefit not only Malagasy amphibians, but those throughout the world.


Superfast evolution observed in soil bacteria

Pseudomonas fluorescens

Two mutations allowed soil bacteria (Pseudomonas fluorescens) to re-evolve flagella: one pumped up levels of a protein that controls nitrogen uptake, and the other switched that protein’s job to controlling flagella production.


Ninjatacoshell/Wikimedia Commons (CC BY-SA 3.0)


You can take the flagella out of the bacteria, but you can’t take the flagella out of the bacteria’s genetic arsenal.


By deleting a gene that controls flagella growth, Tiffany Taylor of the University of Reading in England and colleagues engineered the soil bacteria Pseudomonas fluorescens so they lacked their tiny tails. Bacteria that can move around and find food are more likely to survive, and after a mere 96 hours in a low-food environment, the bacteria were once again growing flagella.


How did the bacteria manage such a swift feat? Two mutations in regulatory genes jump-started flagella production, suggesting that natural selection can rewire genetic networks with a few key mutations, the researchers reported on February 26.


UK has little to be proud of as survey reveals sorry state of European cybersecurity

That sinking feeling of inaction ... geralt

The European Commission’s annual Eurobarometer Cyber Security Survey, the third edition of which was recently released, is a substantial survey of more than 27,000 respondents from 28 countries. It contains interesting and, more often than not, disappointing revelations about the state of Europe’s security.


As specialists in the field, we look forward to the report’s release. But as we wrote a year ago, the complete lack of media and expert interest in the study is amazing. Heaven help the survey authors if they have to justify its impact based on media coverage.


Falling on deaf ears


The UK government has adopted a bizarrely triumphalist discourse around cybersecurity, one that is clearly at odds with the experience of the 1,329 survey participants from the UK. In fact, year on year the survey results show that the UK is not in a good position, particularly in comparison with some of our more advanced neighbours. This is probably not what Downing Street wants to hear or publicise – particularly in an election year – as it seems that providing some sort of external or independent accountability for the impact of the hundreds of millions of pounds of public money spent is not a top priority.


The UK is not alone in its disdain for the survey’s results, which were similarly disregarded by most other Europeans. It’s a sad outcome for the only large, non-commercial, unbiased, and independent survey on this important topic.


Eurobarometer survey results


There are lots of facts in the report, including some that are very apparent to most people: internet use is up, mobile internet use is leading the way, Europe shows a marked digital divide between nations like Sweden and The Netherlands and others like Bulgaria, Romania and Greece. Other findings include how more than half (57%) of Europeans shop online, 23% sell online, and 54% use online banking. That last figure is relatively large, in our view, taking into account the associated risks.


The UK is among the worst EU countries for identity theft. Eurobarometer 2015


The two most common concerns of European citizens are the misuse of personal data and the security of online payments – respondents were significantly more worried about both than they were last year. At least good practices such as installing antivirus software (61%), not opening suspicious-looking emails (49%) and being careful not to give away personal information (38%) seem to be increasingly popular.


UK almost tops the charts for fraud from goods bought online. Eurobarometer 2015


Not only are people more concerned with the risks of cybercrime, but 47% believed they were well informed, up from 44% last year. They claimed to avoid disclosing personal information online (89%), believed the risk of cybercrime is increasing (85%), and were concerned their personal information is not kept sufficiently secure by websites (73%) or public authorities (67%). This last point is worth emphasising: two-thirds of citizens don’t trust the government or other public authorities to keep their personal data safe – there is a large margin for improvement here.


Citizens are worried about identity theft (68%), malware infection (66%), online banking or bank card fraud (63%), having email or social media accounts hacked (60%), receiving scam phonecalls or emails (57%), or coming across racial or religious hate material (46%) or child pornography (52%) online. Interestingly, 47% are concerned with cyber-extortion and ransomware – a relatively new method that’s been very profitable for cybercriminals of late. In all cases, concern is up on last year.


UK number one in Europe for bank card and online banking fraud Eurobarometer 2015


Quite shocking is the finding that, despite being apparently aware of the many risks they face online, an incredible 74% of respondents thought they were able to protect themselves sufficiently from cybercriminals. We simply haven’t the words to express what overconfidence this demonstrates, and how unrealistic and dangerous it is. Computer and network security are complex matters – most people’s understanding of them, including ours, is at best incomplete and at worst practically absent. How people can believe they can protect themselves after, for example, having already discovered malware on their devices (as reported by 47% of respondents) is beyond us.


What needs to be done


Denmark, the Netherlands and Sweden are the three leading European countries for internet use. That might naturally imply correspondingly higher levels of cybercrime – but the survey findings suggest not. Whatever these nations are doing in terms of education, investment and technology development, we could do much worse than learning from them – or at the very least imitating their good practices.


As ever the UK results are discouraging. Britain misses the leading group by a large margin, and despite well-publicised government campaigns and huge investment in cybersecurity, we show very little overall improvement. Britain leads the way in misplaced confidence: 89% feel we can protect ourselves against cybercrime, which is a bad omen. It experienced the largest yearly increase in respondents accidentally finding material promoting racial hatred or religious extremism. And the UK also tops the European tables for bank card and online banking fraud, with 17% of citizens affected; the average is 8%, and in Germany, for example, the rate is 2%. The UK performs poorly in other areas too, casting a cloud not only over the UK but over the figures for the whole of Europe.


More positively, the UK seems to be good at changing passwords and feeling well-informed about cybercrime, is among the leading countries where citizens are concerned over the use of their personal data, and also enjoyed the largest fall in scam emails and phone calls. Despite the large increase from last year, it’s also still extremely rare for UK users to encounter child pornography or racial or religious extremism materials online.


One problem is that the government’s information campaigns are focused largely on companies rather than individuals – some may argue that in this respect it’s no exception to Tory policy in other areas. Thus the Eurobarometer survey is probably not doing justice to the current UK government’s considerable, but possibly misguided, efforts.


People, not companies, should be prioritised; legislation and incentives should be aimed at protecting citizens and helping them to protect themselves. The main response to citizens’ mistrust of government use of their data, in particular, should be to give them back more control. There have been some positive moves from Labour and the Liberal Democrats in that direction – but for now they are merely pre-election promises.


At the very least, could future governments please copy whatever it is they’re doing right in Sweden, Denmark, the Netherlands and some of our other more competent neighbours?



Molecular 'GPS' helps stem cells navigate inside the body

Saving lives, one ear at a time. Mirko Sobotta/Shutterstock

Recent research has identified a novel molecule that could help localise stem cells within the body. Cell therapy holds significant promise for treating a wide range of diseases and tissue defects including arthritis, cardiovascular disease, multiple sclerosis and Crohn’s disease. But in current therapies, most cell types do not reach diseased or damaged tissues efficiently.


Controlling cells once they have been introduced into the body is a key challenge to overcome. There are all kinds of tools and techniques that can be used to manipulate cells outside of the body in a petri dish and get them to do almost anything we want. But once cells have been transplanted, it is difficult to control them. We have now been able to identify small molecules that can be used to treat cells before injection into the body, programming them to target blood vessels in diseased or damaged tissue once inside the body.


This molecular targeting is especially important in the case of adult mesenchymal stem cells (MSCs), which are known to secrete several therapeutic factors and are being explored in more than 450 clinical trials. A major challenge has been getting MSCs to target – and stay at – sites of damage within the body, where they can secrete high levels of therapeutic factors to suppress inflammation and promote recovery.


Our team of bio-engineers from Brigham and Women’s Hospital and imaging experts at the Massachusetts General Hospital (led by Charles Lin), with collaborators at the pharmaceutical company Sanofi, has identified small molecules that can be used to program stem cells to home in on sites of damage, disease and inflammation. We tested more than 9,000 compounds for their ability to send stem cells in the right direction. We used a multi-step approach – including a sophisticated micro-scale set up and a novel imaging technique – to select and test the most promising compounds.


8,888…8,999…9,000!! science photo/Shutterstock


A molecular navigation system


We had previously found that it is possible to use bioengineering techniques to chemically attach molecules to the surface of a cell, to act as a GPS, guiding the cell to the site of inflammation.


Screening thousands of compounds, looking for ones that activated key molecules on the surface of the MSCs, we found six promising molecules, including one known as Ro-31-8425, the most potent of the group. We treated cells with each of these promising molecules and then flowed the cells into microscale glass channels, to simulate the flow of cells in the bloodstream. The glass channels were coated with a protein which is also found on the surface of blood vessels at inflamed tissue within the body. Cells pre-treated with Ro-31-8425 stuck to the coated channels – a sign that they might be able to home in on sites of inflammation.


The next step was to test our cells in an animal. We injected cells that had been pre-treated with Ro-31-8425 into the blood stream of a mouse with one inflamed ear. We then examined both ears using unique real-time microscopy, a technique that allows researchers to capture images of tissue in live animals. We observed that the cells treated with the compound not only homed in on the inflamed ear, but also reduced inflammation.


These findings, along with the multi-step screening platform we developed, have the potential to improve delivery of injected stem cells to sites of disease, where they can release their therapeutic cargo at high levels. This will greatly boost the clinical impact of cell-based therapies in treating life-threatening diseases.



CDC panel gives thumbs up to vaccine against nine HPV types

Gardasil 9

Gardasil 9, a vaccine that offers protection against nine types of HPV, has been recommended for use in 11- and 12-year-old girls and boys, and in females ages 13 to 26 and males ages 13 to 21 who have not previously been vaccinated or have not completed the three-dose series.


Business Wire


A federal vaccine advisory committee voted February 26 to recommend use of an expanded version of the human papillomavirus shot marketed as Gardasil.


The move, by the Centers for Disease Control and Prevention’s Advisory Committee on Immunization Practices, clears the way for the broader-coverage vaccine, called Gardasil 9, to be used in the clinic. Current vaccines offer protection against four types of HPV, which causes cervical cancer and is linked to other cancers. The new shot expands protection to nine types of HPV.


While the Food and Drug Administration’s licensure of Gardasil 9 was granted in December, doctors need CDC guidance on any new vaccine’s dosage and scheduling of shots before putting it into use. The new recommendations add Gardasil 9 to the shots routinely scheduled for girls and boys ages 11 to 12, though it can be given as early as age 9. Gardasil 9 is also recommended for females ages 13 to 26 and for males ages 13 to 21 who have not completed a three-shot series of an HPV vaccine.


Do Russia's flying Bears really pose a risk to civilian air traffic?

Move along now. Ministry of Defence, CC BY

There’s an element of sabre-rattling to Russia’s Tu-95 “Bear” aircraft probing the boundary of European nations' airspace, something that went on throughout the Cold War and has restarted under Vladimir Putin. But while this probing of air defences and the subsequent response is well-rehearsed, aircraft flying stealthily around some of the world’s busiest airspace holds the potential for disaster.


The large, propeller-driven Tupolev Tu-95 aircraft were introduced in the 1950s as long-range strategic nuclear bombers, but most of the remaining Bears are reconfigured for either maritime reconnaissance or for gathering electronic intelligence (Elint) – almost certainly what the Bears intercepted by RAF Typhoons off Cornwall were doing.


The flights' aim is to inspect as much of the electromagnetic emission spectrum around UK airspace as possible. This includes emissions from air defence surveillance radars, fighter aircraft radars and command and control communications. The information gathered is used to update Russian electronic warfare systems. In times of war or conflict this information would be used to program electronic jamming systems on-board Russian aircraft in an attempt to interrupt UK air defence radar and communications. The same techniques are used in relation to warships and for land operations.


This electronic eavesdropping activity is not confined to the borders of UK airspace – within the last year, fighter aircraft have intercepted Tu-95s around Turkey, Portugal, Germany, Denmark, Finland and Sweden and, in late 2014, a major Russian reconnaissance exercise was conducted off the US west coast. The Russian Air Force reconnaissance programme is particularly active during NATO exercises as the electromagnetic spectrum is rich with military information.


Invisible to civilian aircraft


To ensure safety in designated air corridors, commercial and civilian aircraft employ what is called secondary surveillance radar (SSR) to identify themselves to air traffic control (ATC). This is a transponder that periodically transmits location, bearing, altitude and other information to ATC. Military aircraft employ similar but more secure systems known as Identification Friend or Foe (IFF). In peacetime or when military aircraft fly in designated air corridors, IFF is operated in a civilian-compatible mode for safety in order to remain “visible” to air traffic controllers.


Russia’s Bears, on the other hand, turn off their IFF transponders so as to maintain the element of surprise. This prompts British air defences, using active radar to sweep the skies, to detect and respond to them as an unknown potential threat. It also means they are invisible to civilian air traffic control and invisible to other aircraft in the sky – unless close enough to be seen by pilots and crew themselves.
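A deliberately simplified sketch – with invented callsigns and fields rather than real SSR or Mode S message handling – shows why switching the transponder off removes an aircraft from the civilian picture while leaving it visible to active primary radar:

```python
# A toy model of the difference between secondary surveillance radar (SSR),
# which relies on aircraft announcing themselves, and primary radar, which
# detects reflections regardless. Entirely illustrative - not real avionics.
from dataclasses import dataclass

@dataclass
class Aircraft:
    callsign: str
    position: tuple          # (latitude, longitude) in degrees
    altitude_ft: int
    transponder_on: bool     # SSR/IFF replying in a civilian-compatible mode?

def ssr_picture(aircraft):
    """Civilian ATC only 'sees' aircraft whose transponders are replying."""
    return [a.callsign for a in aircraft if a.transponder_on]

def primary_radar_picture(aircraft):
    """Military primary radar detects skin echoes from everything in range."""
    return [(a.callsign, a.position, a.altitude_ft) for a in aircraft]

traffic = [
    Aircraft("BAW123", (55.1, -2.3), 36000, transponder_on=True),
    Aircraft("BEAR-1", (58.4, -1.0), 30000, transponder_on=False),  # Tu-95 with IFF off
]

print(ssr_picture(traffic))            # ['BAW123'] - the Bear is invisible to civilian ATC
print(primary_radar_picture(traffic))  # both tracks - air defence radar still sees the intruder
```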


While the aircraft design may be more than 50 years old, the Bears are fitted with sophisticated reconnaissance and navigation systems that enable them to adhere to air safety standards by avoiding controlled airspace and busy air corridors. Nevertheless there have been reports of near misses and violations of sovereign airspace, but it’s difficult to separate fact from fiction.


New Cold War? Ministry of Defence, CC BY


Near misses


For example, in December 2014 there was a close encounter between a passenger flight taking off from Copenhagen carrying 132 passengers bound for Rome and a Russian reconnaissance aircraft, with transponders turned off, 50 miles south-east of Malmo. A collision was apparently avoided thanks only to good visibility and the alertness of the airliner’s pilots.


Another came in April 2014 when a Russian reconnaissance aircraft entered Dutch airspace before being intercepted by Dutch fighter aircraft. Again, the objective was Elint collection on NATO air defence systems.


Although there have been reported near misses, the actual risk to commercial air traffic is considered minimal – but vigilance is necessary. NATO air defence systems are well integrated into civilian air traffic control systems, so once an intruding Bear has been identified, controllers can be alerted to its presence and aircraft in the vicinity can be warned.


Not just the Russians


However this is not an activity reserved for the Russians. NATO, and particularly the UK and US, also undertake extensive reconnaissance Elint operations against Russia and other countries deemed hostile.


Throughout the Cold War, extensive operations were undertaken against the Soviet Union in areas stretching from the Scandinavian Kola Peninsula, through the Baltic and Germany to the Black Sea, with similar operations in the Far East. This continues today, with incidents in which NATO surveillance aircraft have been tracked by Russian defences. The Israeli Air Force proved that Elint and electronic warfare were vital during the Lebanon War in 1982 against Russian-built air defence systems supplied to Syria.


But it has not been without casualties – early in the Cold War several NATO reconnaissance Elint aircraft were shot down by the Soviets around the Baltic and Black seas, so these missions were sometimes considered dangerous. Today the Royal Air Force and the USAF employ Boeing RC-135 Rivet Joint reconnaissance aircraft to fulfil the same role for NATO as the Tu-95 Bear, and operate in a similar manner. Perhaps such surveillance, although concerning to some, actually benefits both sides: by keeping the military on their toes and discovering more about each other’s military capabilities, it makes the prospect of war less likely.



Can a zen-like state of mind power super cyclist to one of sport's great world records?

Sarah Storey in the zone David Davies/PA

A golden age of British cycling appears to be coming to an end. In the recent World Championships in Paris, the country’s cyclists performed below expectations, recording their poorest showing at that level since 2001.


Yet the World Championships don’t include the event that many purists regard to be the most demanding challenge in the sport: the one-hour time trial. The current women’s world record stands at 46.065km, set by Dutch rider Leontien Zijlaard-Van Moorsel in Mexico City in 2003. The men’s record under the current rules was set several weeks ago in Granges, Switzerland, by Australia’s Rohan Dennis, who achieved 52.491km.


On February 28 at the Lee Valley Velodrome in London, Dame Sarah Storey will aim to break the women’s mark. Only a handful of British riders – and none from the current generation – have held these records.


It is worth noting – but not central to the narrative – that Sarah Storey is a Paralympic champion. Born without a functioning left hand, she is a 20-times world champion in swimming and cycling. She has competed at six Paralympic Games and has raced against both disabled and able-bodied athletes at the highest level. But more than any other challenge perhaps, the one-hour time trial is her opportunity to establish a position in the history of the sport.


In sport, preparation is everything. The venue will be warmed to around 25 degrees to ensure the minimum of air resistance. The sport-science team behind Storey will have data to ensure the optimal gearing and weight of the bike. In the only concession to Storey’s paralympian status, one side of her handlebars will be slightly shortened to accommodate her left limb.


The bike and her riding position will have been tested in a wind tunnel to identify the optimal position for reducing drag, thereby translating muscle power into velocity. And over the past months, data will have been collected on physiological parameters, such as blood lactate accumulation, to establish the wattage required to go one metre beyond the record.


Mind v matter


Storey and her team will know that every 250m lap must be covered in an average time of around 19.3 seconds to give her a chance. But no rider, Storey included, is a pedal-pounding android. The factor which will ultimately determine whether she achieves her goal will be her 60-minute battle with fatigue, a battle in her mind.
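A rough check of that arithmetic, using only the figures quoted in this article: matching the 46.065km record on a 250m track requires an average lap of about 19.5 seconds, so a 19.3-second schedule corresponds to roughly 46.6km – a target with a margin over the record.

```python
# Rough lap-pace arithmetic for the hour record, using the figures quoted above.
record_km = 46.065          # Leontien Zijlaard-Van Moorsel's 2003 mark
lap_m = 250                 # length of a standard velodrome lap
hour_s = 3600

laps_to_match = record_km * 1000 / lap_m       # ~184.3 laps
pace_to_match = hour_s / laps_to_match         # ~19.54 s per lap just to equal the record

schedule_s = 19.3                              # the lap schedule quoted for Storey's attempt
distance_at_schedule = hour_s / schedule_s * lap_m / 1000   # ~46.6 km if held for the hour

print(f"{pace_to_match:.2f} s/lap to match the record")
print(f"{distance_at_schedule:.2f} km if 19.3 s laps are held for the full hour")
```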


In sports science, there is currently a debate about the extent to which fatigue is about mind over matter. Traditional theories argue that fatigue is a physiological state, independent of the central nervous system, and cannot be consciously overridden.


Fatigue: all in the mind? Lukiyanova Natalia / frenta


This is being challenged by what is called the “central governor hypothesis”. It argues that the brain reaches an emotional decision that it is fatigued based on information from the body.


One of the main consequences of this debate is that the role of psychology in discussions about fatigue has been rehabilitated. Certainly the newer theory sounds intuitively right: anyone who has played sport will appreciate that fatigue is qualitatively different when you are winning as opposed to when you are losing.


What psychology can do


From a psychological perspective, Storey will need to effectively balance the paradox of keeping the brain comfortable while the body suffers increasing discomfort. As the hour progresses, the brain’s signals of fatigue will become more overt and require conscious action to manipulate the central nervous system to keep the record attempt on track.


The margins in this event are so fine that psychologically the rider must be engaged from the first second to the last. Storey has acknowledged that there will be “grippy” points – meaning that the little voice in her head will be intruding and presenting her with things that don’t help.


Primarily these will be about how comfortable she feels. As the hour clicks by, the answer will progressively become “not very”. For a physically well prepared performer, it is likely that they will feel “comfortable” for maybe the first five minutes of the event at most.


One of the most effective strategies for managing a large and difficult task is to chunk it down into manageable sections, with goals for each. The team will be giving feedback on where she is in relation to these.


This feedback is both vital in itself and will also serve to fuel her internal dialogue and the “what ifs” involved – “I’m ahead of schedule, what if I’m going too fast and will exhaust myself before the hour?”; “I’m behind schedule, what if I can’t recover the required tempo?”; or even “I’m bang-on but I’m going at my limit and I’ve still got 40 minutes to go – what if I can’t sustain this?”


She may dissociate from the discomfort through carefully monitoring her thinking, filtering out the unhelpful and reconnecting with positive cues. She may engage in an internal conversation with her legs, as Jens Voigt (a previous holder of the men’s hour record) famously did. He coined the phrase “shut up legs”, which is known to all top riders.


The pain barrier


Ultimately, Storey is searching for a method of coping with profound and potentially overwhelming pain. We know that the body produces its own natural painkillers – endorphins. Recent studies have shown that painkilling drugs do allow riders to ride harder, reporting improved lactate tolerance and maximum safe heart rate.


Yet the solution ought to be natural. The ideal for Storey would be reaching the “zone”, a “zen-like” state in which performers report altered perceptions of effort and pain tolerance.


The search for this state has its roots in some of the earliest research in modern sport psychology. Work as far back as 1977 linked peak performance to “loss of fear” and “ability to execute basic skills”, together with “no thought about the activity being undertaken” and “total immersion in the activity”.


These findings and later research on the concept of “flow” offer some ideas about how riders like Storey should approach the mental side. Flow is about total absorption and engagement in an activity, to the point where normal perceptions of effort and time become distorted. This is often linked to the production of endorphins.


The problem for time-triallists such as Storey is that this so-called “runner’s high” appears spontaneous. Yet some recent work using advanced brain-scanning techniques gives some clues to what a rider can do to capture and harness it.


Storey in Mexico in April 2014 Christian Palma/PA


It suggests for example that Storey should avoid higher-level cognitive thoughts such as calculating lap times or pacing. Better to mindfully focus on attention and awareness cues, such as engaging with the sprinters’ line on the velodrome track, the feeling of her feet on the pedals and every face in the crowd. And she should relax, as much as anyone can when their heart is going at around 190bpm and they are moving at just over 28 miles per hour.


There will be a moment, at around three-quarters distance, where Storey’s emotion will change. This could be in a positive way, where she will know that the record is hers, which will re-engage her and lift her emotionally. Or it could be negative, knowing that it is highly unlikely. At this point she may just get off her bike. The interesting thing to the sport scientist is that physiologically, these two completely different outcomes are identical. What better way to illustrate the vital part that the mind plays in this kind of challenge.


GM regulation 'not fit for purpose', says Commons committee – and it's right

Vitamin A-enhanced GM Golden Rice has become a flashpoint for campaigners despite its health benefits. IRRI

As a scientist who has spent the past 32 years using genetic modification to improve crops and make biological discoveries, the report published by the House of Commons Science & Technology Committee on GM technology is a joy to read. Others, particularly campaigners against the technology, will be dismayed at their failure to convince ten independent-minded MPs of their concerns.


The report is a carefully written assessment of the arguments for and against a controversial method, with many sensible recommendations for what should happen next. It’s a good read for anybody with an interest in new technologies to improve crops, or in how public misunderstanding (often encouraged by campaigners) can result in disproportionate regulation that can hamper innovation.


Reviewing regulations


The report starts by remarking on the scale of the food security challenge, notes that GM has already been widely adopted and points to published findings regarding its safety. The question is whether UK and EU regulations regarding GM food are fit for purpose – and what changes, if any, are required.


Crucially, the MPs endorsed the view that it is wrong to think of GM as a single, generic technology – as the government’s chief scientific adviser Sir Mark Walport said:



Whether GM technology is a good or bad thing is not a sensible question; it depends on how it is applied. The question in every case is: what gene, what organism and for what purpose?



The report quite rightly, and strongly, recommends that the government reframe the debate away from an overly simple notion of “GM”.


Land area in 2011 used for GM crops was 160m hectares, or 1.6m sq km. Fafner, CC BY-SA


Reasoning with the opposition


Those opposed to GM tend to claim it is represented as a “silver bullet” that could alone provide food security, or that it is a technology that could lock its users into a method that “cannot peacefully co-exist with other methods”, or is one that has “squeezed out” other approaches to agricultural innovation. After careful examination of the evidence, the committee found none of these criticisms to be valid.


Paul Burrows, the executive director of BBSRC, told the committee that, of BBSRC’s nearly £500m annual research budget (including £70m spent on plant science), only £4m is allocated to GM research. By this measure GM, far from displacing other research, accounts for only a tiny fraction of it.


The report highlights more complex concerns over intellectual property rights. Industry representatives argued that the long regulatory process and large costs involved in the EU meant that without a competition-free period (through patents) to exploit the inventions, nobody would invest. Such investment is indispensable if we want to meet the food security challenge. On the other hand, the absolute position of campaign group GM Freeze is that “genetic resources are a public good and should not be owned by anybody”. The report is right to recommend that this issue should be examined in depth after the election.


Sidestepping the ‘precautionary principle’


European regulation and the “precautionary principle” (which can be paraphrased as “look before you leap”) have had a major influence on the import and cultivation of GM crops. The committee urges the European Commission to “clearly and publicly state when it has drawn on the precautionary principle in the policy formation process” since there is lack of clarity on this issue. The report is right to “remind the commission that any legislation guided by the precautionary principle must allow for an exit from precautionary measures once there is strong scientific consensus that any risks are low”.


There have been many studies of GM crops in the past. Chris Young/PA


Not fit for purpose


Among the evidence cited in the report is that from Eric Poudelet, the safety director of the European Commission, about the influence of politics on whether the European Commission and the Council of Ministers decide to act on the recommendations of the European Food Safety Authority. “Dysfunctional” EU regulation has led major companies to abandon GM-based crop improvement in Europe. Professor Joyce Tait of the University of Edinburgh pointed out that “the more onerous the regulatory system, the more difficult it is for small companies to get through to the market”. This only reinforces the tendency towards domination of the sector by a few large companies.


A crucial finding of the committee is that:



A regulatory system under which it takes many years – sometimes decades – to reach a decision cannot possibly be considered fit for purpose.



The recommendations include several very important points. For example, those campaigning against the technology, such as Greenpeace with its opposition to pro-vitamin A-enhanced Golden Rice, should “review their public communication materials to ensure that they are evidence-based and honest in setting out the reasons for opposition to this technology”.


The Committee considered alleged health and other concerns about GM crops and concluded that:



The current EU legislative framework for novel plants is founded on the premise that genetically modified plants pose inherently greater risk than their conventional counterparts. The weight of peer-reviewed scientific evidence, collected over many years, has shown this to be unjustified. Where genetically modified crops have been shown to pose a risk, this has invariably been a result of the trait displayed – for example, herbicide tolerance – rather than the technology itself.


We are disappointed that the government has not more publicly argued this fact. We recommend that the government publicly acknowledge that genetically modified crops pose no greater inherent risk than their conventional counterparts.



Bravery in controversy


In summary, ten MPs from three parties currently seeking re-election have written a brave report on a controversial technology. Their recommendations are indisputable. There is nothing intrinsically risky about GM. Current regulation is not fit for purpose; we should regulate specific traits, not the method by which they are delivered, in each member state.


As they themselves conclude: “Regulatory reform is no longer merely an option, it is a necessity.” The report recommends the government makes a commitment to argue for major reform of EU regulation of genetically enhanced novel crops. Legislators must grasp this nettle and remove the regulations that prevent science and technology from improving our crops and providing solutions to longstanding crop problems of weeds, pests and disease.



Baby sea turtles starved of oxygen by beach microbes

Gasp. Magnus Manske, CC BY

On a small stretch of beach at Ostional in Costa Rica, hundreds of thousands of sea turtles nest simultaneously in events known as arribadas. Because there are so many eggs in the sand, nesting females frequently dig up previously laid nests, leaving the beach littered with broken eggs. But these endangered sea turtles are facing a new threat: sand microbes encouraged by the decomposing eggs.


Results from a new study we’ve published in PLOS ONE show how these sand microbes cause low levels of oxygen in the nests that interfere with the embryonic development of the sea turtles.


Despite the large number of nesting females, hatching success at Ostional beach is particularly low. Scientists have long thought that the problem is due to high microbial activity in the sand caused by the decomposing eggs. In a previous study, we found that nests at Ostional have lower oxygen levels than other sea turtle nests do. This suggested that microbial activity did indeed impact nest oxygen – but it wasn’t until now that this was tested and confirmed. It means that we can now use the results to aid the conservation of these endangered turtle species.


Head in the sand


To understand how microbes affect sea turtle nests, we used different treatments to alter the number of microbes within the nest sand. We monitored nest temperature and oxygen levels throughout the incubation period. We also quantified the number of microbes in the sand and the microbial decomposition of organic matter.


Our results allowed us to look at how all of these factors were associated with sea turtle hatching success. We found that removing and replacing the sand, much like the agricultural practice of tilling, was the most successful treatment for increasing hatching success and decreasing microbial numbers. As we suspected, higher numbers of microbes in the sand were associated with lower hatching success as well as lower oxygen and higher temperatures in the nest.


Essentially, microbial activity in the sand at Ostional beach is so high that microbes are taking up all of the oxygen that the sea turtle embryos need to develop. Additionally, just like a compost pile, the microbial decomposition also increases nest temperatures.


Is it safe to come out yet? Author provided


Hatching success of sea turtles is a primary conservation concern, given the current threatened and endangered status of these species. In addition to the disruption of nest oxygen levels by microbial activity, human activity above ground also affects hatching success.


Conservation implications


Increasing temperatures due to climate change could increase microbial decomposition rates even more, further impacting sea turtle hatching success. Temperature increase is a particularly important factor because it determines the sex of the sea turtle and high temperatures are also lethal to embryos. Another human impact on sea turtle hatching success comes from the use of fertilisers and beach re-nourishment programmes, which introduce extra organic materials to the beach. These could fuel higher microbial activity, thus also impacting sea turtle hatching success.


While arribadas occur at very few beaches, sea turtle conservation programmes around the world use hatcheries to protect nests from threats such as beach erosion and poaching. These enclosures into which nests are relocated often have problems with microbial infestations, just like those studied in Costa Rica. Our results will therefore help these conservation programmes by providing sand treatment options to manage microbial infestations and increase hatching success.



Behaviour study shows rats know how to repay kindness

"Remember, we're all in this together." Kuttelvaserova Stuchelova/Shutterstock

If I scratch your back and you scratch mine, then we’re both better off as a result – so goes the principle of reciprocity, one of the most popular explanations for how co-operative behaviour has evolved. But what if one partner provides a better service than another? A paper by Dolivo and Taborsky shows that Norway rats will only give as good as they get.


As humans, we are familiar with the concept of helping those who help us, whether it is by buying rounds of drinks or expelling diplomats. But demonstrating reciprocity in other species has proved more challenging. Part of the reason for this may be that reciprocity is rarer than might be imagined. But a major factor is the difficulty of establishing an objective means of measuring the costs and benefits of apparently helpful behaviour in the field.


Do as you would have done to you


This is where the laboratory rat comes in. If the economics of behaviour elude field measurement, an attractive alternative is to perform controlled lab experiments. Dolivo and Taborsky trained rats to pull on a stick that drew a food item within reach of a rat in an adjoining cage separated from them by wire mesh.


They then introduced a further treatment in which an experimental rat was placed in a cage with other caged rats on either side. On one side the rat pulled a stick that provided pieces of carrot to the rat in the central cage, while the other pulled a stick that produced banana pieces. In subsequent trials the focal rat had the opportunity to repay the other rats using the same stick apparatus to deliver food items.


Now, the rats had typically turned their noses up at the carrot and showed a strong preference for the more desirable banana. On the basis that the banana-providing rat should therefore be remembered as the superior partner, the authors predicted that in the test phase the focal rat would more readily help provide for banana-purveying rats than for carrot-offering rats. This proved to be the case, so it did seem that the rats that had provided better help in the past received greater rewards, as expected if they were behaving reciprocally.


Behavioural scientists have questioned the extent to which non-human animals have the capacity to engage in reciprocity without being exploited by “cheats” who take advantage of their kindness. It seems that this is cognitively demanding, in terms of bringing together the memories of who did what and judging how to respond.


Dolivo and Taborsky’s latest results show that rats can recall the quality of help provided and by which rat, and adjust their subsequent behaviour so as to invest more time and energy in helping those that helped them. Taken together with the Taborsky group’s prior findings that rats are more likely to help a partner that had helped them before than one that had not helped them at all, these results provide interesting insight into how animals are able to manage the challenges of conditional co-operation.


Not just rats


It is increasingly apparent that we shouldn’t underestimate the ability of animals to engage in reciprocity. For example, a 2006 study by Alicia Melis and colleagues reported that chimpanzees took into account their experience with potential partners when choosing which to recruit for a collaborative venture.


A similar effect is seen among client fish – those species that are co-operatively served by other species of cleaner fish – which will preferentially associate with cleaner fish they have observed behaving in a co-operative manner. So there is evidence that other animals can assess the quality of partners and behave conditionally – a requirement for reciprocity to work.


The latest paper fits within a resurgence of interest in reciprocity, as researchers take up the challenge laid down by critiques questioning its occurrence in non-human animals. For example, the classic case of blood donation among vampire bats has been revisited with a demonstration that the best predictor of donations received was whether donations had been made.


Meanwhile recent experiments with pied flycatchers also appear to demonstrate that birds will help those that have helped them mob owls in their territories.


Good examples of reciprocity in non-human animals may be uncommon but Dolivo and Taborsky’s work supports the view that, where reciprocity does pay, animals can make it work through co-operating conditionally and favouring those partners which provide the best quality help.



Earth's other 'moon' and its crazy orbit could reveal mysteries of the solar system

Cruithne's wacky orbit around the sun YouTube, CC BY-SA

We all know and love the moon. We’re so assured that we only have one that we don’t even give it a specific name. It is the second-brightest object in the night sky, and amateur astronomers take great delight in mapping its craters and seas. To date, it is the only other heavenly body with human footprints.


What you might not know is that the moon is not the Earth’s only natural satellite. As recently as 1997, we discovered that another body, 3753 Cruithne, is what’s called a quasi-orbital satellite of Earth. This simply means that Cruithne doesn’t loop around the Earth in a nice ellipse in the same way as the moon, or indeed the artificial satellites we loft into orbit. Instead, Cruithne scuttles around the inner solar system in what’s called a “horseshoe” orbit.


Cruithne’s orbit


To help understand why it’s called a horseshoe orbit, let’s imagine we’re looking down at the solar system, rotating at the same rate as the Earth goes round the sun. From our viewpoint, the Earth looks stationary. A body on a simple horseshoe orbit around the Earth moves toward it, then turns round and moves away. Once it’s moved so far away it’s approaching Earth from the other side, it turns around and moves away again.


Cruithne from a stationary Earth position


Horseshoe orbits are actually quite common for moons in the solar system. Saturn has a couple of moons in this configuration, for instance.


What’s unique about Cruithne is how it wobbles and sways along its horseshoe. If you look at Cruithne’s motion in the solar system, it makes a messy ring around Earth’s orbit, swinging so wide that it comes into the neighbourhood of both Venus and Mars. Cruithne orbits the sun about once a year, but it takes nearly 800 years to complete this messy ring shape around the Earth’s orbit.
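For the curious, the kind of motion described here can be reproduced numerically. Below is a minimal sketch of the standard approach: integrating a test particle in the Sun–Earth circular restricted three-body problem, viewed in the frame co-rotating with the Earth. The mass ratio is the real Sun–Earth value, but the starting position, speed and time span are illustrative rather than Cruithne’s actual orbital elements (which are also inclined and eccentric), so treat it as a demonstration of the method rather than a model of Cruithne itself.

```python
# Sketch: a test particle in the Sun-Earth circular restricted three-body problem,
# integrated in the frame that co-rotates with the Earth. Initial conditions are
# illustrative only; Cruithne's real orbit is inclined and eccentric.
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.003e-6                      # Earth/(Sun+Earth) mass ratio
# In these units the Sun sits at (-MU, 0) and the Earth at (1 - MU, 0).

def crtbp(t, s):
    """Equations of motion in the rotating frame (lengths in AU, time in years/2pi)."""
    x, y, vx, vy = s
    r1 = np.hypot(x + MU, y)       # distance to the Sun
    r2 = np.hypot(x - 1 + MU, y)   # distance to the Earth
    ax = x + 2 * vy - (1 - MU) * (x + MU) / r1**3 - MU * (x - 1 + MU) / r2**3
    ay = y - 2 * vx - (1 - MU) * y / r1**3 - MU * y / r2**3
    return [vx, vy, ax, ay]

# Start on the far side of the Sun from Earth, just outside Earth's orbital radius,
# moving at roughly the local circular speed as seen in the rotating frame.
r0 = 1.0005
vy0 = r0 * (1.0 - np.sqrt(1.0 / r0**3))
s0 = [-r0, 0.0, 0.0, vy0]

sol = solve_ivp(crtbp, (0.0, 1000 * 2 * np.pi), s0, rtol=1e-9, atol=1e-12, max_step=0.1)

# Plotting sol.y[0] against sol.y[1] shows the particle creeping along Earth's orbit
# in the rotating frame; followed for long enough, the drift reverses as it nears
# the Earth, tracing out the horseshoe described above.
```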


Cruithne close up


So Cruithne is our second moon. What’s it like there? Well, we don’t really know. It’s only about five kilometres across, which is not dissimilar to the dimensions of the comet 67P/Churyumov-Gerasimenko, which is currently playing host to the Rosetta orbiter and the Philae lander.


The surface gravity of 67P is very weak – walking at a spirited pace is probably enough to send you strolling into the wider cosmos. This is why it was so crucial that Philae was able to use its harpoons to tether itself to the surface, and why their failure meant that the lander bounced so far away from its landing site.


Given that Cruithne isn’t much more to us at this point than a few blurry pixels on an image, it’s safe to say that it sits firmly in the middling size range for non-planetary bodies in the solar system, and any human or machine explorers would face similar challenges as Rosetta and Philae did on 67P.


Possible clash: Venus J.Gabás Esteban, CC BY-SA


If Cruithne struck the Earth, though, that would be an extinction-level event, similar to what is believed to have occurred at the end of the Cretaceous period. Luckily it’s not going to hit us anytime soon – its orbit is tilted out of the plane of the solar system, and astrophysicists have shown using simulations that while it can come quite close, it is extremely unlikely to hit us. The point where it is predicted to get closest is about 2,750 years away.


Cruithne is expected to undergo a rather close encounter with Venus in about 8,000 years, however. There’s a good chance that that will put paid to our erstwhile spare moon, flinging it out of harm’s way, and out of the Terran family.


It’s not just Cruithne


The story doesn’t end there. Like a good foster home, the Earth plays host to many wayward lumps of rock looking for a gravitational well to hang around near. Astronomers have actually detected several other quasi-orbital satellites that belong to the Earth, all here for a little while before caroming on to pastures new.


Secrets: solar system Tashal


So what can we learn about the solar system from Cruithne? Quite a lot. Like the many other asteroids and comets, it contains forensic evidence about how the planets were assembled. Its kooky orbit is an ideal testing ground for our understanding of how the solar system evolves under gravity.


As I said before, it wasn’t until the end of the 20th century that we even realised that bodies would enter such weird horseshoe orbits and stay there for such a long time. The fact they do shows us that such interactions will have occurred while the solar system was forming. Because we think terrestrial planets grow via collisions of bodies of Cruithne-size and above, this is a big new variable.


One day, Cruithne could be a practice site for landing humans on asteroids, and perhaps even mining them for the rare-earth metals our new technologies desperately crave. Most importantly of all, Cruithne teaches us that the solar system isn’t eternal – and by extension, neither are we.



Lenovo's security debacle reveals blurred boundary between adware and malware

Who's looking after your keys? kris krüg, CC BY-SA

A widely disliked habit of PC vendors is their bundling of all manner of unwanted software into brand new computers – demo software, games, or part-functional trials. Faced with shrinking margins vendors have treated this as an alternative income stream, going so far as to include adware that generates revenue through monitoring users' surfing habits, for example.


While some software such as virus scanners can be useful, Lenovo, the world’s biggest computer seller, has discovered just how badly it can backfire when including insufficiently tested – or just plain malicious – software.


With vendors often doing little in the way of due diligence, third-party software can include programs with backdoors, privacy problems, or ways to trick users into paying for subscriptions. More often the focus is on pushing content and advertising based on tracking the user’s web browsing habits, or on targeted marketing, where search results from trusted sites such as Google are tampered with before they’re presented to the user.


SSL redirect


Lenovo’s own goal was to include Superfish: adware that alters search results in order to inject its own, and offers competing products whenever the user hovers over keywords in the page.


Encrypted communications require a private and a public key, separate but mathematically linked. The public key, which is published and available, is used by others to encrypt messages and send them to the owner of the public key. The public key’s owner uses their secret, private key to decrypt them.
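As a concrete (and deliberately toy-sized) illustration of that asymmetry, here is the textbook RSA example with tiny numbers. Real keys are thousands of bits long; the point is simply that anyone can encrypt with the public values, but only the holder of the private exponent can decrypt.

```python
# Textbook RSA with toy numbers, purely to illustrate the public/private split.
# Never use numbers this small in practice.
p, q = 61, 53
n = p * q                    # 3233: part of the public key
e = 17                       # public exponent
d = 2753                     # private exponent: e*d = 1 mod (p-1)*(q-1)

message = 65                 # a message encoded as a number smaller than n

ciphertext = pow(message, e, n)      # anyone can do this with the public key
print(ciphertext)                    # 2790

recovered = pow(ciphertext, d, n)    # only the private-key holder can undo it
print(recovered)                     # 65
```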


In order to be sure that public keys really belong to whom they claim, they are verified by certificates signed by trusted authorities. Superfish, however, in order to intercept encrypted search requests made over HTTPS (typically used by Google), installs a self-signed root certificate on the system. This, despite offering no checking or verification of keys, allows Superfish to take control of encrypted traffic by masquerading as the site’s own certificate. So, for example, when connecting to the Bank of America, the Superfish certificate would claim to be from the Bank of America.
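

The check a browser performs rests entirely on the local store of trusted root certificates, which is why installing a rogue root is so damaging. A minimal Python sketch of that check, using the standard ssl module (the hostname is just an example), might look like this:

```python
import socket
import ssl

def fetch_verified_certificate(host: str, port: int = 443) -> dict:
    """Connect to `host` over TLS and return its certificate, but only if that
    certificate chains back to a root in the local trust store - the same
    check a browser performs before showing the padlock."""
    context = ssl.create_default_context()   # loads the system's trusted roots
    # A rogue root certificate installed into that store (as Superfish's was)
    # would make a forged certificate for any site pass this check.
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

cert = fetch_verified_certificate("www.example.com")
print("Issued to:", dict(field[0] for field in cert["subject"]).get("commonName"))
print("Issued by:", dict(field[0] for field in cert["issuer"]).get("commonName"))
```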



This is called a man-in-the-middle attack, where one site impersonates another in order to fool other parties into communicating with it. The user thinks they are connecting to a valid site as the browser reports it has checked the site’s identity via its certificate, but in fact traffic is going to another site, using another connection.


Can you see the problem? In an effort to pry into users’ searches in order to show more adverts, Superfish created a security hole through which others can get in too: the private key securing the data sent to Superfish has since been cracked. That allows intruders to see search queries or any other traffic, even though it appears to the user that they are communicating securely with Google.


A man-in-the-middle attack, as created by Superfish. owasp, CC BY-SA


Bad software used for bad ends


At the core of this problem is the use of SSL hijacker software developed by a firm called Komodia. As their website states:



The SSL hijacker uses Komodia’s Redirector platform to allow you easy access to the data and the ability to modify, redirect, block, and record the data without triggering the target browser’s certification warning.



So we have a piece of software that can trick the user into connecting to a website that is not necessarily what it seems or claims to be, bypassing the browser’s built-in security that would otherwise alert them.


As if this wasn’t bad enough, Superfish embedded the private key used to secure the traffic sent over the encrypted link alongside its public key in the certificate. This should never happen, as a private key should not be shared. Not only does the certificate contain both keys, but the private key’s password has been cracked (it’s “komodia”, would you believe) and is the same for each one of the millions of computers on which Superfish is installed. And it’s not just Superfish: the same weak certificates are bundled with much other software too.
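

Recovering a password-protected private key from a weak passphrase is not sophisticated work. The sketch below shows the general shape of such a dictionary attack using Python’s cryptography package; the file name and word list are purely illustrative, not the actual procedure used against Superfish.

```python
from cryptography.hazmat.primitives.serialization import load_pem_private_key

def crack_passphrase(pem_path: str, candidates: list[str]):
    """Try each candidate passphrase against an encrypted PEM private key."""
    pem_data = open(pem_path, "rb").read()
    for word in candidates:
        try:
            key = load_pem_private_key(pem_data, password=word.encode())
            return word, key          # decryption succeeded: passphrase found
        except ValueError:
            continue                  # wrong passphrase, try the next one
    return None, None

# "superfish.pem" and the word list are placeholders for this illustration.
passphrase, key = crack_passphrase("superfish.pem", ["password", "letmein", "komodia"])
print("Recovered passphrase:", passphrase)
```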


Overview of the SSL redirect


This is a spectacular security risk, meaning any intruder can access the data passing between any user with the certificate installed and any encrypted website they’re connected to. It’s like finding the best locks to secure your home, and then putting the keys under a plant pot outside the front door.


This wouldn’t be the first time that security has failed in this way – not by defeating the encryption, but through a flawed set-up and a weak, easily guessable password. Antivirus software firms and Microsoft are already rolling out patches to detect and remove this software and its certificate.


Lenovo sold over 16m Windows computers in the last quarter of 2014 alone – and many of these are vulnerable. Not only that, but every one of those computers could potentially eavesdrop on the secure communications of every other, as the certificate password is the same for all.


This is likely to prove extremely costly for Lenovo, in brand reputation but also in the legal actions that have already begun. Although the issue was raised on the Lenovo forums in January, the firm claims to have had no idea of the problem it represented – which is bad enough in itself.


The Conversation

There need not be a digital dark age -- how to save our data for the future

Floppies: storage that's about as reliable as a CD used as a frisbee. orangejack, CC BY-NC-SA

“The internet is forever.” So goes a saying regarding the impossibility of removing material – such as stolen photographs – permanently from the web. Yet paradoxically the vast and growing digital sphere faces enormous losses. Google has been criticised for failing to ensure access to its archive of Usenet newsgroup postings that stretch back to the early 1980s. And now internet pioneer Vint Cerf has warned of a “digital dark age” that would result if decades of data – emails, photographs, website postings – were to become lost or unreadable.


Millions of paper records more than 500 years old exist today. But your entire family photo collection could be lost forever with just a single hard drive failure. Stone tablets, parchment, paper and printed photographs have all lasted through the centuries; some of our data may not. What do we do about preserving the digital deluge?


Cost v value


Technical solutions already exist, but they’re not well known and they are relatively expensive. How much are we prepared to pay to ensure that today’s digital stuff is usable in the future? Because if there’s a cost involved, we inevitably have to think about what has enough value to be worth keeping.


How can we calculate that value? As an example, the holdings of the UK Data Archive include machine-readable versions of all of the General Household Surveys (GHS) carried out between 1971 and 2011 – a continuous national survey of people living in private households, conducted annually. The cost of the GHS in 2001 was reported as £1.43m, making the value of the survey and its data at least that much. And since 2001 marked the thirtieth year of the series, its value was arguably higher still – the survey was worth more than it cost.


The Office for National Statistics transferred the 2001 data to the UK Data Archive in 2002, where we prepared them for preservation and access and published them. To date this survey data has been downloaded by 426 people working in government departments, 759 staff working in education, 1,331 students and 109 others for various uses. So benefits accrue from making the data available even after its creators have extracted its primary value – re-use is a significant benefit of preserving data and adds value.


But there are cultural and intellectual, not just economic, arguments for preserving data. Survey data like these and their supplementary materials provide a window onto the concerns of survey designers and, by extension, society at the time. True, cultural arguments for preservation can be expressed more forcefully for artefacts such as images, films or written works than for survey data. But these data stand a good chance of being included within Britain’s cultural and intellectual heritage precisely because they have been carefully managed and preserved.


Making digital as long-lasting as paper


How can we improve the chances of something being preserved? Professor Michael Clanchy, writing in his seminal From Memory to Written Record, discusses how the concept of records developed. Owing to the media available to scribes in the Middle Ages, they made a conscious choice between creating an ephemeral document (on a wax tablet) and a permanent record (on parchment). Today digital media proliferates mainly because it provides the easiest means to transmit a work, and so that distinction has to a point disappeared.


Documents and records are now both digital, but the question remains as to what should be kept for posterity and why. These are hard questions which lead to hard choices, because by their nature digital materials can be much more expensive to preserve than their analogue counterparts. You can’t just put them in a box and walk away – the effort and tools required to read a 100-year-old letter are considerably less than those required to read a 30-year-old LocoScript file, a word-processing format popular on Amstrad computers in the 1980s and 90s.


Most born-digital material is, with the right resources, recoverable. However, the chances of born-digital material being usable in, say, 100 years are considerably improved by actively taking steps to ensure that it will be – just as medieval scribes made similar decisions in centuries past. Effective digital preservation relies, to some extent, on the activities of the creator as well as the archivist. Today those decisions include providing context, using standard and open file formats, organising material sensibly, and making provision for rights issues to avoid the problem of orphan works.
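

One small, concrete example of such a step – offered here as an illustration rather than a prescription – is recording a fixity manifest: a list of checksums that lets a future custodian verify that files have not silently corrupted. A minimal Python sketch (the folder name is hypothetical):

```python
import hashlib
from pathlib import Path

def fixity_manifest(folder: str) -> dict[str, str]:
    """Return {relative path: SHA-256 digest} for every file under `folder`."""
    root = Path(folder)
    return {
        str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(root.rglob("*")) if path.is_file()
    }

# Run this when material is deposited, store the output alongside it, and
# re-run it periodically: any changed digest signals a corrupted or altered file.
for name, digest in fixity_manifest("family_photos").items():
    print(digest, name)
```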


The future starts now


Organisations can do a better job than individuals, but they require a business model and a mandate to do so. Asking someone to pay for something a long time before its value can be realised (if at all) is not an attractive business proposition. What we can do, at a minimum, is try to convince people that it is possible.


Of course neither creator nor archivist can fully understand how future users may approach digital information preserved over time. Social and cultural historians have, by necessity, used records for purposes for which they were not created and often in inventive and interesting ways. Historians are often helped by context, and the digital material we’re creating today needs the same contextual information to ensure its usefulness.


The Conversation

We're all mammals – so why do we look so different?

Family values. Mammals by Shutterstock

It is easy to distinguish a mouse from a cow. But for members of the same class of mammal, where do such differences begin? In 2011, scientists discovered there were differences in cow and mice blastocysts, the tiny hollow spheres of cells which precede the development of the embryo.


So while adult mammals are easily distinguishable, it was remarkable that the researchers were still able to tell the difference at this extremely early stage of development. This early difference was largely due to the crucial process of gene regulation.


Mammalian species are all quite different in look and size, and have colonised all ecological niches – they can be terrestrial (like humans and mice), aquatic (dolphins and whales) and even aerial (bats). Like humans, all mammals have large, complex genomes – the DNA sequences in our cells. These contain the instructions which are used to construct our bodies and brains. However, the best-understood functional units in our DNA – our genes – take up only 2% of our genome sequence, and are extremely similar across non-marsupial mammals. So what makes us so different?


Classical studies, such as those by geneticists Mary-Claire King and Allan Wilson, have shown that the major differences between mammalian species lie not in the genes themselves but in where genes are switched on and off – that is, in gene regulation.


She’s the regulatory element. Off on switch by Shutterstock


Understanding gene regulation in mammals is very challenging. The DNA sequences that regulate our genes – so-called regulatory elements – are painstaking to identify. These sequences are spread across our vast genome, and are largely different for each of our tissues. To decipher gene regulation in mammals, we need to locate them and understand how they change as the animal evolves.


Gene regulation evolves


As evolution progresses and mammalian species diverge, various genes are switched on and off. So which aspects of our genome stay the same, and where are the changes taking place?


New experimental and computational tools for DNA sequencing are now making it possible to identify regulatory elements and their activity with unprecedented accuracy and speed. These tools allow us to study gene regulation across mammalian genomes, as has been done for humans and mice, but much less so for recently sequenced genomes, such as those of species with unique adaptations – dolphins or subterranean, cancer-resistant naked mole rats among them.


Not your average looker. Buffenstein/Barshop Institute/UTHSCSA, CC BY


In a recent study published in Cell, we found the extent of gene regulation differences – the “on/off” switching – across mammals was astonishing. It is rare that the DNA sequences that regulate our genes show similar activities across mammals. More commonly, gene regulatory activities change rapidly as mammals evolve (though still over millions of years – for example, humans and chimps are separated by 6m years of evolution), and such differences probably lead to different genes switching on and off.


In fact, a good fraction of the regulatory elements that we identified in each mammalian genome were active in a single mammal (out of the 20 analysed), which suggests that these regulatory elements may be associated with recent evolutionary adaptations unique to a few species.
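

As an illustration of what that tally involves – using made-up data rather than the study’s actual pipeline – the following Python sketch takes a presence/absence table of regulatory elements across 20 species and counts how many are active in just one of them:

```python
import numpy as np

rng = np.random.default_rng(0)
n_elements, n_species = 10_000, 20

# Toy presence/absence matrix: True means the element shows regulatory
# activity in that species. Real tables come from sequencing experiments.
activity = rng.random((n_elements, n_species)) < 0.15

species_per_element = activity.sum(axis=1)
detected = species_per_element[species_per_element > 0]   # ignore empty rows

print(f"Elements active in exactly one species: {np.mean(detected == 1):.1%}")
print(f"Elements active in all {n_species} species: {np.mean(detected == n_species):.1%}")
```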


Repurposing


So how do such vast numbers of newly active regulatory sequences arise? Our findings suggest that, rather than acquiring wholly new DNA sequences that regulate genes, mammals derive most regulatory innovations from existing DNA – sequences shared to some extent by all mammals today and likely present in the ancestral species from which they evolved – but repurposed in a particular species.


This process resembles evolutionary tinkering, where continuous tweaking of existing DNA sequences can result in new patterns of gene regulation. The prevalence of this mechanism, as opposed to the generation of regulatory elements from newly acquired DNA, could in part explain the rapid evolution we see in mammals, and may have been pivotal in allowing mammals to efficiently colonise Earth’s ecosystems. Essentially, continuous modifications in vast mammalian genomes within relatively small populations likely contributed to new evolutionary paths that allowed mammal species to diverge.


Many questions remain. Our results indicate how rapidly gene regulation can change in mammalian genomes, but further work will be required to fully understand the relative importance of the retained and new DNA sequences that regulate our genes, and how they cooperate to create species diversity while maintaining the organ functions found across vertebrates. And our findings could have profound implications for our understanding of human disease – in particular, the mechanisms by which rapidly evolving pathologies, such as cancer, hijack normal gene regulation and alter it to their advantage.


The Conversation

Vikings were pioneers of craft and international trade, not just pillaging

"Yes it's a new thing we're trying out, it's called 'international trade'." Anna Gowthorpe/PA

The connections between technology, urban trading, and international economics which have come to define modern living are nothing new. Back in the first millennium AD, the Vikings were expert at exploring these very issues.


While the Vikings are gone, their legacy is remembered, such as at the annual Jorvik Viking Festival in York. The Norsemen’s military prowess and exploration are more often the focus of study, but of course the Vikings were more than just bloodthirsty pirates: they were also settlers, landholders, farmers, politicians and merchants.


Between the 8th and 11th century (the Viking Age), Europe saw significant technological advances, not all of them Scandinavian – the Anglo-Saxons, Frisians and Franks were equal players. To understand these changes, we have to see them in the context of increasing contact between Scandinavia, the British Isles, and continental Europe – in which the Vikings were key players. Technological innovations such as the potter’s wheel and the vertical loom transformed not only the types of products being manufactured in Viking settlements, but also the scale on which they were produced.


Technological developments emerged as people came together in growing coastal trading centres and market towns. The world was rapidly becoming more joined-up during this period than at any time since the heyday of the Roman Empire. Trade fostered international links across the North Sea, Baltic and beyond, and similar developments were happening as far afield as the Middle East, Africa, and Asia. This was a period in which people began to live and work in entirely new ways, and technological change was both a cause and an effect of this.


While many Viking artefacts of the period are familiar, the complex methods that lay behind their manufacture are less well-known. Each involved a specialised set of skills, tools and raw materials, which meant craftspeople were reliant not only on a market for sale, but also on a well-organised supply chain. This is why the development of specialist crafts, of growing urbanisation, and of long-distance trade are intimately connected.


The Vikings were expert shipbuilders and navigators, and while evidence of their shipwrights’ skills survives to the present day, there is little detail of how they navigated their huge journeys. What is clear is that between the 8th and 11th century, Viking shipping underwent significant development, beginning with the appearance of the sail, and leading not only to specialist warships but also to prototypes for the large cargo vessels that would come to dominate the waters of later medieval Europe. But Viking technology had more to offer than ships and swords.


Viking brooches were ornate, beautiful, and mass-produced. British Museum


Brooches


Among the most recognisable Viking artefacts are their brooches. Long studied by archaeologists, they signified gender, status, and ethnicity. Work is ongoing to reveal the advanced technology used in their manufacture.


Evidence for brooch manufacture in Viking towns includes the remains of moulds and crucibles. The crucibles are often found complete with residues of the metals melted down in them. Brooches were cast by pouring this metal into moulds, which were produced by pressing existing pieces of jewellery or lead models into clay, followed by minor artistic modification. This resulted in a sort of mass-production. As this craft was dependent on high-quality brass ingots from continental Europe, specialist jewellery production centres arose at ports associated with long-distance trade routes.


Glass bead jewellery


Strings of ornate glass beads are another common sight in Viking museum displays. Beads were made in Scandinavian towns by carefully manipulating coloured glass as it melted. Waste deposits prove that the raw glass used in this process came in the form of coloured tesserae: small, square blocks from the Mediterranean, where they were used to produce mosaics. Whether they were bought and sold in south-eastern Europe before travelling west, or ripped from Byzantine churches during raids in the region, is unclear.


Combmaking


Animal bones were among the most important materials in pre-modern technology: a durable, flexible, readily available raw material used for everything from knife handles to ice skates. Many such objects could be made quickly, with little training – but not the Vikings' hair combs.


These large, ornate, over-engineered objects took days to manufacture and required a trained hand. Specialised tools such as saws, rasps and polishers were needed, and deer antler in particular was the material of choice.


Viking combs ranged from the practical to the ornate. British Museum


Combs of this type go back to the Late Roman period, but they really came into their own in the Viking Age, where they became a symbol of status and aspiration. Combmakers tended to work in towns, where they had access to periodic markets and a supply network that brought in deer antler from the local countryside, and reindeer antler from the Arctic north. They may also have moved around from town to town, in order to maximise their sales. It’s a great example of the way town, countryside, and long-distance travel were tied together in order to support the technology that was important to the everyday life of Viking-Age people.


These examples of craftsmanship and technical tool work – and there are many more – demonstrate that the Vikings should be seen as more than just raiders, and as more than simple traders or merchants too. With their outward-looking society and cutting-edge techniques, they were among the earliest investors in global technologies in a post-Roman world that, even then, was increasingly international. And today, as a modern recreation of a Viking vessel embarks for the first ever Viking exhibition in China, it’s clear their appeal is truly global.


The Conversation

We need to rethink the relationship between forensic science and the law

Advances in science are causing problems in courtrooms Petretei

Despite what we see on television, forensic science is not always easy to understand or simple to convey to a jury, many of whom may not have studied science since they were in school. When a case fails in the courtroom, maybe because the scientist was inexperienced, or there were flaws in the science presented, it creates the potential for a miscarriage of justice – something to be avoided at all costs.


This was illustrated recently in a violent crime case in the US when a court refused to grant admissibility to a particular type of DNA evidence because its interpretation had not yet been agreed within the scientific community and it was too complex for the jury to understand.


The judge told the court:



To have a technique that is so controversial that the community of scientists who are experts in the field can’t agree on it and then to throw it in front of a lay jury and expect them to be able to make sense of it, is just the opposite of what the [rules on admissibility of evidence are] all about.



Indeed, why should we expect lawyers or the public to understand science? The courtroom is a place where language can become severely challenging, where what is said may be at odds with what is heard. This is a particular issue for some types of evidence that rely, for example, on complex statistical analysis.
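

To give a flavour of what such statistics look like in the simplest possible case – and the figures here are invented for illustration, far simpler than a mixed-DNA profile – a forensic statistician might present a likelihood ratio along these lines:

```python
def likelihood_ratio(p_if_suspect: float, p_if_unknown: float) -> float:
    """How much more probable the evidence is if the suspect is the source
    than if an unrelated member of the population is."""
    return p_if_suspect / p_if_unknown

# Invented figures: the profile is certain to match if it is the suspect's,
# and has a one-in-a-million random-match probability in the population.
lr = likelihood_ratio(1.0, 1e-6)
print(f"The evidence is {lr:,.0f} times more likely if the DNA came from the suspect.")
```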


Both the scientist and the court have a duty to do their utmost to ensure that the jury understands the capabilities and limitations of any science presented to them. The scientist must be able to convey their often complex subject as simply as possible. Only then will the lawyers and judge be able to guide the jury to reach a secure and informed decision.


The limits of scientific influence


One core problem is that the scientist and the lawyer rarely meet before any courtroom confrontation. And the idea that a scientist might offer advice to a judge outside of the courtroom is almost uncharted territory in the UK.


Yet it is the trial judge who must decide whether scientific evidence has a sufficiently robust underpinning to be heard by the jury. They have to be sufficiently confident that the science establishes the fact in question and will withstand reasonable cross-examination, so that it assists the triers of fact.


Without training, how comfortable can the judge be to adopt this role – especially in complex cases such as those involving the interpretation of mixed-DNA profiles?


If the judiciary feel unable to do this, perhaps the scientist must assume the responsibility of teacher to convey the complexity of their science in a way that will be understood.


A better way forward


The reality is that the courtroom is the place where lawyers should be examining the case-specific science and not the basic underpinning value of the overarching scientific subject. The courtroom is not the classroom, so the time for teaching is during the preparatory stages before the business of testimony and evidence gets underway.


If all the scientific limitations could be agreed beforehand, this would leave only the details that relate to the case and the interpretation of the case-specific evidence to be addressed in the court.


The Lord Chief Justice of England and Wales last autumn called for a set of judicial primers: “plain English” documents that would relay core scientific principles in a way that is understandable by lawyer, judge and jury. He reiterated this call recently at a meeting hosted by the Royal Society in London, where the primers were agreed as a priority first step towards breaking the communication logjam.


Another issue is our understanding of the scientific limitations. The US National Academy of Sciences published a report in 2009 that was a damning indictment of the lack of investment in forensic research and of the shaky basic scientific underpinning of most forensic sciences.


The National Academy of Sciences has bemoaned the state of forensic science. NAS


In the past 30 years the lion’s share of funding has been consumed by advances in DNA, while other subjects have suffered, be they trace evidence (such as hairs and fibres), ballistics, blood patterns, or fires and explosions. This has left core gaps in our research knowledge.


A global strategic approach aiming to improve basic scientific underpinning must also lie at the core of any future advance to provide better science to the courts. This is vital for the health of the subject and in turn can only benefit justice in the long term.


In short, scientists must come together in partnership with the law and with funders to ensure a product that is fit for purpose. This requires greater co-ordination and understanding between two ancient academic disciplines that have rarely been easy bedfellows: law and science.


Lifetimes of misunderstanding have built up around their gladiatorial arena and they no longer seem to speak a common language. It is time for a paradigm shift in their relationship, geared towards addressing areas of common and competing ground, talking about science in plain English and agreeing where the current research gaps exist and how we are best placed to fill them.


The Conversation