Thursday, April 9, 2026

Tiny cell superheroes are suiting up to give bone cancer the boot!

Tiny glowing biological superheroes with neon capes patrolling a bone-like landscape and zapping dark purple cancer clusters with golden energy beams.

Imagine your body is a sprawling, high-tech kingdom, and usually, your immune system is the elite police force keeping everything in tip-top shape. But every now and then, a real heavy-hitter villain shows up—a bone-crunching monster called osteosarcoma. This isn’t your garden-variety cold or a pesky scrape; this is a tough-as-nails bone cancer that likes to play hide-and-seek in the skeletons of young adventurers. For a long time, our internal security guards struggled to even see this monster, let alone take it down. But hold onto your lab goggles, because scientists have just unveiled a brand-new upgrade for our microscopic defenders that’s essentially the biological equivalent of giving a knight a lightsaber and a jetpack.

In the world of modern medicine, we have these incredible things called CAR-T cells. Think of them as regular immune cells that have gone through a "Super Soldier" program. Scientists take these cells out of the body, give them a pep talk and some high-tech genetic modifications, and then send them back in to hunt down cancer. While this has worked wonders for blood-based baddies, solid tumors like osteosarcoma are a different story. These tumors create a sort of "No-Go Zone," a swampy, toxic environment where regular CAR-T cells get tired, run out of breath, and eventually just give up. It’s like sending a superhero into a room filled with sleeping gas; they might start the fight, but they’ll be snoring before the first punch is thrown.

This is where the brilliant "Bio-Engineers" come in. They realized that if they wanted to win this bone-deep battle, they couldn’t just give the T-cells a map; they had to give them stamina that never quits. The latest breakthrough involves a double-whammy upgrade. First, they taught the T-cells to recognize a specific marker on the surface of the bone cancer cells, a little red flag known as GD2. But the real magic is the "Energy Boost" secret sauce. By tweaking the cells to produce their own specialized fuel—a protein called IL-15—these tiny warriors don't get sleepy anymore. They stay awake, stay angry, and keep swinging until the job is done.

In the laboratory trials, which are basically the "Pre-Season" for medical breakthroughs, the results were nothing short of a fireworks show. Usually, when T-cells face off against a solid bone tumor, they hit a wall. But these "Turbo-Charged" T-cells acted like they had infinite extra lives. They didn't just attack the main tumor; they patrolled the body like a high-tech security drone, making sure that any runaway cancer cells trying to set up camp elsewhere were dealt with immediately. In mouse models—our brave little research pioneers—the survival rates shot up significantly. It was like watching a video game character with an invincibility star cruise through the hardest level in the game.

What makes this so exciting isn't just the fact that the cancer is being beaten; it’s how the T-cells are learning to handle the "Toxic Swamp" of the tumor. Normally, the environment around a bone tumor is designed to shut down the immune system. It’s a chemical fortress of gloom. But these new and improved T-cells come equipped with their own internal "Air Conditioning" and "Snack Pack." By producing their own survival signals, they can ignore the tumor’s attempts to make them quit. They stay fresh, vibrant, and ready for action, turning what used to be a one-sided defeat into a glorious victory for the home team.

But wait, the science gets even cooler! The researchers found that these upgraded cells also have a better memory. In the world of biology, a "Memory T-Cell" is like a veteran soldier who remembers exactly what the villain looks like. If the cancer ever tries to make a comeback, these memory cells are already on the scene, shouting "Not on my watch!" and shutting things down before the villain can even finish its monologue. This long-term protection is the holy grail of cancer treatment, moving us away from temporary fixes and toward a world where the body can defend itself indefinitely.

Of course, we aren’t quite at the finish line yet. Moving from successful lab trials to human hospitals is a big jump, like moving from a flight simulator to flying an actual rocket ship. Doctors need to make sure these super-powered cells don’t get a little too over-excited and accidentally bump into healthy tissues. But the roadmap is clearer than ever before. This new approach provides a blueprint for tackling all sorts of stubborn solid tumors, not just the ones in our bones. We are essentially learning how to build a smarter, faster, and more resilient immune system that refuses to take "no" for an answer.

The future of medicine is looking less like a sterile pharmacy and more like a high-octane superhero comic book. With every breakthrough like this, we are giving hope to kids and families who have been waiting for a miracle. We are moving toward a day where a diagnosis of osteosarcoma isn't a terrifying roadblock, but just a really big boss fight that we already know how to win. So, let’s hear it for the scientists, the tiny cellular heroes, and the incredible technology that’s helping us turn the tide in the greatest battle of all—the fight for a long, healthy, and cancer-free life!

Wednesday, April 8, 2026

Skinny Shots and Botox Pots: The Ultimate Beauty Business Glow Up!

Welcome to the era of the Shrinking Society! If you have looked around lately, you might have noticed that everyone and their neighbor seems to be sipping on some metaphorical magic tea—or rather, getting a little help from the latest wave of weight-loss wonders. It is the talk of every brunch, the secret behind many a sudden transformation, and the hottest topic in the world of wellness. But as the pounds vanish faster than a magician’s rabbit, a brand-new mystery has emerged: what happens to the skin that used to hold all that extra joy? Enter the aesthetic wizards, who have realized that while the scales are going down, the demand for a bit of sparkle and plumping is going way, way up!

You see, when the body decides to drop weight at warp speed, it sometimes forgets to tell the skin to keep up. This has led to a phenomenon that the internet has playfully dubbed the "deflated look." Imagine a balloon that was full and bright, but then someone let just a little bit of air out. The balloon is still great, but it might have a few more wrinkles than it did before. This is exactly where the beauty industry is stepping in with a cape and a syringe, ready to save the day. They have discovered that their next big growth engine is not just about changing faces, but about being the ultimate partner to the weight-loss journey.

A sparkling aesthetic treatment room with glowing skin products and a modern vibe

This is where the magic of fillers and skin-tightening comes into play. Think of it like tailoring a high-end suit. You have done the hard work of losing the weight, and now your birthday suit is a few sizes too big. You cannot just throw it in the dryer on high heat and hope for the best! You need the experts to nip, tuck, and add a little volume back into those hollowed-out cheeks or sagging jawlines. The big players in the beauty world are leaning into this trend like a supermodel against a wind machine, realizing that the journey to a new you is a multi-step process that involves both biology and artistry.

It is a match made in heaven, really. On one hand, you have these incredible medications that help people reach their health goals and feel lighter on their feet. On the other hand, you have a toolkit full of glow-inducing treatments, dermal fillers, and body-sculpting gadgets that act as the ultimate finishing touch. It is like painting a masterpiece; the medication prepares the canvas, making it smooth and ready, and the aesthetic treatments provide the vibrant colors, the fine details, and the frame that makes everything pop.

We are seeing a massive shift in how people view self-care. It is no longer just about a quick spa day or a fancy moisturizer you bought because the packaging was pretty. It has become a sophisticated, multi-layered strategy that involves chemistry, physics, and a little bit of sparkle. The industry is buzzing with excitement because it has found a whole new audience: millions of people who are successfully shedding weight but want to ensure they keep their youthful glow and structural integrity. It is not about vanity; it is about matching the outside to the vibrant, energetic person on the inside.

Let’s talk about the body side of things too, because it is not just about the face. When the weight drops, the muscles that were hiding underneath might need a little encouragement to make their grand debut. This has led to a surge in high-tech treatments that tone and strengthen muscles using fancy electromagnetic energy. It is basically like doing a thousand crunches while lying down and listening to your favorite podcast. Who wouldn’t want to outsource their gym session to a machine while catching up on celebrity gossip? It is the ultimate life hack for the modern age.

The beauty of this trend is that it is creating a more holistic approach to transformation. People are becoming much more educated about how their bodies work, from their metabolic health to the collagen production in their skin. It is a full-body renovation project, and the contractors are armed with cooling lasers and hydrating serums. The goal is no longer just to be thin; it is to be snatched, glowing, and radiant from head to toe.

Looking ahead, the synergy between health-focused medications and aesthetic procedures is only going to get stronger. We might soon see the rise of beauty bundles where your health journey is mapped out from the very first dose to the final filler touch-up. It is an exciting time to be alive if you are a fan of looking and feeling your absolute best. The industry is pivoting from just fixing problems to being a constant companion in a person’s total wellness evolution.

So, if you see someone looking suspiciously radiant and toned after a big life change, don't just ask about their new diet. They might just be the lucky beneficiaries of this new golden age of aesthetics. It is a brave new world where we can sculpt our bodies and faces with more precision than ever before, turning a weight-loss milestone into a total glow-up. The rise of the slim-down shots isn't a challenge for the beauty world—it is the ultimate booster shot, proving that as we change how we manage our health, our desire to sparkle remains as strong as ever!

Tuesday, April 7, 2026

Your Internal Clock Is Riding An Aging Roller Coaster All Day Long!

Imagine if your body was a giant, invisible mood ring, but instead of showing if you're grumpy or happy, it showed how old you were in that exact moment. You might wake up feeling like a fresh-faced toddler, full of potential and bouncy cells, but by the time you’ve dealt with a broken toaster and a mountain of emails, your biological clock has decided you’re actually a hundred-year-old wizard who needs a long nap in a cave. This isn’t science fiction; it’s the wild, daily rollercoaster of our internal clocks. While your birthday only happens once a year, your cells are basically having a mid-life crisis every single Tuesday afternoon.

Our bodies aren't static statues; they are more like bubbling cauldrons of chemical soup that changes flavor every hour. Scientists have discovered that the clocks inside our cells—the ones that measure things like stress, inflammation, and energy use—don't just tick forward in a straight line. Instead, they bounce around like a toddler on a sugar high. One minute you’re biologically younger than your driver's license says, and a few hours later, your cells are acting like they’ve seen several centuries of history. It is a form of biological bungee jumping where we stretch toward old age and then snap back to youth while we sleep.

Think of your biological age as a cellular backpack. When you wake up after a glorious night of sleep, that backpack is empty. Your cells have spent the night cleaning up the metaphorical glitter from the previous day's party. They’ve repaired the little tears, flushed out the toxins, and reset the dial. In this state, you are at your peak youthfulness. You’re fast, you’re efficient, and your DNA is tucked in neatly. If a scientist took your biological temperature at 7:00 AM, they might find you’re actually a few years younger than you were when you went to bed. You are essentially a brand-new version of yourself every single morning.

But then, life happens. You drink that first cup of coffee, which kickstarts your metabolism but also sends a tiny "get moving" shock through your system. You rush to work, deal with a colleague who eats loud snacks, and navigate the emotional gauntlet of social media. Each of these tiny stresses adds a little pebble to your cellular backpack. By lunchtime, your internal clock is starting to feel the weight. The markers of age—things like how your genes are expressed and how much "rust" is on your cellular machinery—start to creep up. You are chronologically the same age, but your internal chemistry is starting to look a bit more "vintage."

By the time the sun starts to set, you’ve hit the peak of your daily aging cycle. This doesn't mean you’ve actually grown gray hair in eight hours, but your internal chemistry is mimicking the environment of an older body. Your cells are tired, your inflammatory markers might be a bit higher, and your biological readout says you’ve been through the wringer. It’s a completely normal, daily cycle of wear and tear. You aren't actually losing years of your life; you’re just experiencing the ebb and flow of being a living, breathing creature in a fast-paced world. Your body is just reflecting the hard work it did to get you through the day.

The real magic happens when we hit the hay. Sleep isn't just for dreaming about flying or being back in high school without pants; it’s a high-tech car wash for your biology. While you’re snoring away, your body is frantically working to reverse the aging that happened during the day. It’s scrubbing those pebbles out of your backpack and resetting the clock. This nightly reset is crucial. It’s the reason why a bad night’s sleep makes you look and feel ten years older the next day—you simply didn't finish your nightly rejuvenation cycle! Your cells didn't get the memo that the workday was over, so they stayed in their "older" state.

Understanding this daily yo-yo of biological age is like having a secret map to your own body. It reminds us that we aren't just getting older; we are constantly renewing ourselves. Every single day is a fresh chance to be young again. This discovery also gives us a great excuse to be kinder to ourselves. If our biological age is this flexible, it means our habits really do matter in the short term. A quick walk, a moment of meditation, or even a good laugh can act like a tiny fountain of youth, nudging that internal clock back toward the young and sprightly side of the scale. We are basically time travelers, jumping back and forth through our own biological history every twenty-four hours.

So, celebrate your inner toddler in the morning and forgive your inner grumpy grandparent in the evening. It’s all part of the magnificent, rhythmic dance of life. You are a biological masterpiece that knows how to age and un-age with the grace of a sunset. The stress of the day might make your cells feel a bit antique, but the promise of rest means you'll be back to your fresh self before you know it. Just remember to give your internal clock the rest and care it needs, and you’ll keep ticking along beautifully, no matter what the calendar says. You aren't just growing older; you are participating in a daily symphony of renewal.

A groovy new victory for treating brain-based weight gain is finally here

The Brainy Breakthrough in Weight Management

A Massive Win for the Master Controller: The FDA Just Gave the Green Light to a Brainy Weight-Loss Hero!

Imagine for a second that your brain has its own tiny, high-tech command center. This little hub, tucked deep inside your noggin, is called the hypothalamus. It’s the ultimate master of ceremonies for your body, managing everything from your internal thermostat to your sleep schedule and—most importantly—your appetite. It’s the "Smart Home" system of your biology. But what happens when that control panel gets a bit of a glitch? For folks dealing with acquired hypothalamic obesity, it’s like the "I’m hungry" button gets stuck in the "on" position, and the "I’m full" signal goes completely MIA.

For the longest time, this specific type of weight struggle was a bit of a medical mystery that was notoriously hard to solve. It wasn’t about willpower or just "eating your greens." It was a physical hardware issue! But hold onto your hats, because there is some seriously sparkly news on the horizon. Rhythm Pharmaceuticals has officially stepped into the winner's circle, securing the very first FDA approval for a treatment specifically designed to help people living with acquired hypothalamic obesity. It’s a total game-changer, and honestly, it’s worth a little happy dance!

Stylized glowing brain representing the hypothalamus and energy control

So, how did we get here? To understand the victory, we have to look at the villain of the story. Acquired hypothalamic obesity usually shows up after the hypothalamus takes a bit of a hit—maybe from a tumor, or perhaps as an unintended side effect of surgery or radiation meant to fix something else. When this command center is damaged, the body loses its ability to regulate energy. It’s like a car where the fuel gauge is broken and the engine thinks it’s always running on empty, even when the tank is full. This leads to rapid weight gain and a hunger that feels impossible to satisfy.

Enter our hero: a specialized medication that acts like a skilled technician for that broken control panel. Instead of trying to bypass the problem, this treatment goes straight to the source. It targets something called the MC4R pathway. Think of the MC4R pathway as a critical bridge in the brain. When it’s working, it tells the body to burn energy and stop looking for snacks. When it’s broken, the bridge is out. This new treatment acts like a temporary patch that restores the connection, allowing those "hey, we’re good on food!" signals to finally cross over and do their job.

The road to FDA approval wasn't just a walk in the park; it was backed by some pretty impressive science. Clinical trials showed that patients using this treatment didn’t just lose a little bit of weight—they saw significant, life-altering changes. We’re talking about a noticeable drop in body mass index (BMI) and, perhaps more importantly, a massive reduction in that constant, nagging hunger known as hyperphagia. Imagine finally being able to finish a meal and actually feel content. That’s the kind of magic we’re talking about here!

This approval is a major milestone for the longevity and wellness community because it proves that we are getting smarter about how we treat obesity. We are moving away from the "one size fits all" approach and moving toward precision medicine. By understanding the specific pathways in the brain that govern our weight, scientists can create "key-and-lock" solutions that fix the actual underlying issue rather than just treating the symptoms. It’s a win for science, a win for Rhythm, and a massive win for patients who have felt unheard for years.

But the fun doesn’t stop there! This approval opens the door for even more research into how we can fine-tune our internal chemistry to live longer, healthier lives. If we can fix a "broken" hypothalamus, what else can we optimize? The future of metabolic health is looking brighter and leaner than ever before. It’s a reminder that even when the brain’s "Smart Home" system gets a little wonky, there’s usually a brilliant team of scientists working on a software update to get everything back in tip-top shape.

As we celebrate this medical milestone, let’s take a moment to appreciate the sheer coolness of the human body and the persistence of those working to understand it. Here’s to fewer "hungry" signals and more "happy" ones! With this new tool in the medical toolkit, the journey toward better health just got a whole lot more exciting. So, cheers to the scientists, cheers to the FDA for the green light, and most importantly, cheers to the patients who now have a brand-new reason to smile!

Google is playing a busy game of bug Whac-A-Mole to keep Chrome safe!

The Great Digital Bug Hunt

Oops, They Did It Again: The Great Chrome Bug Squashing Extravaganza!

A friendly robot holding a giant wrench over a glowing computer screen

Welcome back to the wild, wacky, and sometimes slightly terrifying world of the World Wide Web! If you’ve been clicking around the internet lately, you might have noticed that your trusty sidekick, Google Chrome, has been acting a little bit like a housecat that accidentally swallowed a bumblebee. It turns out, our favorite shiny browser has been playing a high-stakes game of hide-and-seek with some digital gremlins. And not just once, not twice, but three times in a single month! It’s like a summer blockbuster movie where the monsters just keep coming back for the sequel before the first one is even out of theaters.

Now, don’t panic and throw your laptop into the nearest swimming pool just yet. In the tech world, we call these little surprises "zero-day vulnerabilities." It sounds like something out of a spy thriller, doesn't it? "Zero-Day: The Reckoning." But in reality, a zero-day just means the naughty hackers found a hole in the digital fence before the clever folks who build the browser knew it existed, leaving the defenders with zero days of warning to prepare a fix. It’s a race against the clock where the prize isn't a gold medal, but rather making sure your private data doesn't end up on a billboard in the middle of nowhere.

Imagine your browser is a giant, majestic castle. You’ve got high walls, a deep moat filled with digital alligators, and a shiny gate. Usually, this keeps all the internet ruffians out while you’re busy looking at pictures of capybaras or shopping for neon-colored socks. But every now and then, a sneaky little termite finds a tiny crack in the foundation. This month, it seems the termites have been particularly busy, finding three separate secret tunnels into the castle. It’s like a digital game of Whac-A-Mole, where Google’s engineers are the ones holding the big foam hammers.

So, what exactly is happening behind the scenes? Well, the digital wizards at Google HQ have been working overtime, fueled by gallons of coffee and probably some very high-quality snacks. When a third major bug popped up recently, they didn't just sit around and sigh. They leaped into action, coding at lightning speed to brew up a magical potion—otherwise known as a security patch. This patch is essentially a very high-tech band-aid that covers up the hole and tells the hackers, "Not today, friends! Move along!"

You might be wondering why this is happening so much lately. Is the internet getting scarier? Are the browsers getting tired? Not exactly. It’s more like a game of cat and mouse that has evolved into a game of cyborg-cat and laser-mouse. As our browsers become more powerful and capable of doing incredible things—like running 3D games or managing your entire life—they also become more complex. And in the world of code, complexity is like a big, beautiful mansion with a thousand windows; occasionally, someone is going to forget to lock one of them.

The good news is that you, the brave internet explorer, have a superpower. It’s a small, unassuming button that often pops up in the top right corner of your screen. It’s the "Update" button! Clicking that button is like giving your browser a suit of shiny new armor and a fresh sword. When you see that little green, orange, or red circle pleading for your attention, don't ignore it. It’s not just Chrome trying to be annoying; it’s Chrome asking for a quick nap and a makeover so it can keep protecting you from the spooky stuff lurking in the shadows of the web.

When you hit that update button, the browser does a quick "relaunch." It’s like a digital "Etch A Sketch"—it shakes everything up, clears out the cobwebs, and starts fresh with all the newest defenses. It only takes a few seconds, which is a small price to pay for the peace of mind that comes with knowing your digital castle is secure once again. Think of it as a spa day for your software. It comes back refreshed, rejuvenated, and ready to tackle another million tabs of research, shopping, and cat videos.

While the engineers are busy playing defense, it's a good reminder for all of us to stay sharp. The internet is a wonderful place, but it's always good to have your wits about you. Beyond just keeping your browser updated, remember to keep your passwords unique—no, "password123" is not a fortress—and maybe don't click on links that promise you’ve won a free private island from a long-lost cousin you’ve never heard of. A little bit of common sense goes a long way in keeping the digital gremlins at bay.

In the end, the fact that these bugs are being found and fixed so quickly is actually a good sign. It means the people who build our tools are watching over us like digital guardian angels. They are constantly scanning for trouble, even when we’re sound asleep. So, let's raise a metaphorical glass to the bug hunters, the code-smiths, and the security experts who keep the internet spinning. And remember, the next time you see that update notification, give it a click. Your browser will thank you, your data will thank you, and those sneaky digital termites will have to go find somewhere else to hang out!

Stay safe, stay curious, and keep those browsers shiny and chrome!

Anthropic Just Had a Giant Whoopsie With Their Secret Claude Code Reveal

The Great AI Sneak Peek

Oopsie Daisy! The Day the AI Secret Slipped Out of the Bag

A playful robot accidentally dropping a glowing blue cube

Artist’s rendition: When your top-secret AI project decides to take a public stroll.

Imagine you are working at one of the world’s most prestigious AI laboratories. You are surrounded by literal geniuses, mountains of high-end server hardware, and enough coffee to power a small European nation. You are building the future—a tool so powerful and sleek that it will change how every programmer on the planet interacts with their computer. You have the code name, you have the hype, and you have a very strict launch schedule. Then, you click one button, and suddenly, the whole world is staring at your unwashed laundry.

This is exactly the kind of "facepalm" moment that just rocked the tech world. One of the industry's biggest players, known for their helpful and ethical AI models, accidentally let their newest secret out of the garage before the paint was even dry. It wasn't a sophisticated heist by a group of international super-hackers, nor was it a calculated corporate leak designed to stir up buzz. Instead, it was something much more human and much more relatable: a simple developer error.

The star of this accidental show is a tool currently being whispered about as "Claude Code." For those who aren't deep in the trenches of software development, think of this as a super-powered sidekick for people who write software. While standard AI can help you write a poem or summarize a long meeting, this new tool is designed to live right inside the programmer's terminal. It’s meant to look at entire folders of code, understand how they all fit together, and help fix bugs or build new features with the speed of a caffeinated squirrel.

For a few glorious, chaotic hours, the digital gates were left wide open. Tech enthusiasts and internet sleuths stumbled upon the project documentation and internal files that were never meant for public eyes. It was like finding the blueprint for a secret spaceship left on a park bench. People started poking around, taking screenshots, and sharing the news faster than a viral cat video. The excitement was palpable because this tool represents a massive leap forward in making AI a truly integrated partner in the creative process of coding.

When the dust settled and the "Private" settings were frantically toggled back on, the company had to offer an explanation. They didn't point fingers at a malicious outsider or blame a glitch in the Matrix. They simply admitted that a developer made a mistake. In the high-stakes world of Silicon Valley, where companies spend billions of dollars on security and public relations, there is something incredibly refreshing about such a grounded excuse. It reminds us that behind every world-changing algorithm is a human being who might just be having a "Monday" on a Tuesday.

The leak gave us a juicy preview of what the future holds. We saw hints of how the tool manages complex tasks, how it remembers previous instructions, and how it navigates the labyrinthine structures of modern software projects. It’s clear that the goal is to move beyond simple chat boxes and into a world where the AI is an active participant in the work environment. It’s the difference between asking a librarian where a book is and having a dedicated research assistant who has already read the book and highlighted the best parts for you.

Of course, the internet did what the internet does best: it speculated. Was this a "happy accident" intended to build hype? Most experts think not. In the world of high-end AI development, keeping your trade secrets under wraps is vital for maintaining a competitive edge. Accidental leaks can lead to security headaches and might even give competitors a roadmap of where to focus their own efforts. For the company involved, this was likely a day of frantic meetings and very red faces.

However, for the rest of us, it’s a moment of levity. It’s a reminder that no matter how advanced our machines become, the people building them are still susceptible to the same little blunders we all are. We’ve all sent an email to the wrong person or forgotten to attach a file. When an AI giant does the equivalent of walking out of the restroom with toilet paper stuck to their shoe, it makes the whole industry feel a little more approachable.

As we wait for the official, non-accidental release of this new coding companion, the hype train has officially left the station. The leak has actually served as a massive endorsement of the tool's potential. If people are this excited about a version that wasn't even ready for showtime, imagine the reaction when it finally hits the market in its full, polished glory. Until then, we can all take a deep breath and remember: if a world-class AI developer can accidentally leak their biggest project, then it’s probably okay if you forgot to save that spreadsheet today.

So, here’s to the developers, the innovators, and especially the ones who occasionally click the wrong button. You’ve given us a glimpse into a very cool future, and you’ve given the tech community a great story to tell over our next round of coffees. Just maybe double-check those permissions next time, okay?

Thursday, October 1, 2015

Fossils help to reveal the true colours of extinct mammals for the first time

Jay Matternes/Wikimedia Commons

The animal kingdom is full of colour. Animals use it for camouflage, to advertise themselves and even as various forms of protection. But we haven’t been paying as much attention to what colours now-extinct mammals might have had – until now.

By matching samples of organic material to their chemical make-up, we have been able to determine the colour of extinct bats. Our novel research, published in PNAS, has the potential to work out colours in lots of other organisms.

Fossils usually only leave us information about the harder parts of an animal, such as bones and shells. Occasionally, however, soft tissues, such as feathers, skin or hair, are left behind.

Palaeontologists have previously discovered dark, organic residues in fossils that for decades were thought to be remnants of decaying bacteria from the surface of the dead bodies. However, in 2008 it was suggested that these little bacteria-like structures were in fact preserved melanosomes, the special sub-units of a cell that carry the pigment melanin. This is the primary source of pigment for feathers, hair and skin across the animal kingdom.

Palaeontology in black and white. Yale

Looking at a fossilised feather from the Cretaceous period (roughly 105m years old) with an alternating black and white pattern revealed that the microscopic structures were only present in the black bands. If these structures were bacteria as originally thought, they would have covered the entire feather. The fact that the structures were missing from the white areas, which would lack pigment, suggested the organic matter was actually melanosomes. What’s more, the structures were aligned along the fine branches of the feather (barbs and barbules), another characteristic feature of melanosomes.

Colour clues

Different melanosomes have different shapes. Of the two main types, reddish brown pheomelanosomes are shaped like tiny little meatballs (500 nanometres in diameter). Black eumelanosomes, meanwhile, are shaped like little narrow sausages and are about twice the size at one micrometre in length.

Subsequent studies have used these facts to reconstruct colour patterns of dinosaurs, with the shape of melanosomes found in different places of a fossil indicating its pigment colour and even iridescence. But until now, little work has been done to characterise the chemistry of the pigment in these fossil melanosomes and there is little evidence to prove that the melanosome shape actually reflects the original colour in fossils.

Bacteria or colour carriers? Jakob Vinther

Using a combination of techniques, we have been able to describe melanin and melanosomes in animals ranging from fish to birds to squids, and for the first time, frogs, tadpoles and mammals. We looked at the shape of the melanosomes under a scanning electron microscope. We also analysed the molecules directly associated with these structures and found that their chemical signature resembled modern melanin samples. However, there were also some clear differences.

We speculated that perhaps the melanin had changed its chemical composition over millions of years buried in the ground under high pressure and temperature. In order to test this, we subjected melanin to even higher pressures and temperatures to replicate within 24 hours the conditions it would have experienced over millions of years. The chemical signature from our cooked melanin then looked more similar to the fossils.

Furthermore, we found that we could quantify the difference between red and black melanin in both fresh and fossil samples. This meant we could test the idea that melanosome shape correlated to chemical colour in the skin of the now fossilised animal – and we found that it did.

Secret in the bones. A. Vogel, Senckenberg Institution, Messel Research

Most excitingly, this also meant that we could for the first time determine the colour of long-extinct mammals just by studying their fossils. We looked at two fossilised bat species from Messel in Germany that lived in the Eocene period (around 49m years ago). Based on the small spherical melanosomes – which are indicative of pheomelanosomes – and the chemical signature associated with the related pigment, we were able to infer that these bats originally sported a reddish brown coat. This means they did not look much different from modern bats.

The study of fossil melanin and other pigments is a blooming research area. Knowing something about fossilised creatures' original colours will not only make Jurassic Park sequels more realistic, but will also inform us about the whole ecology of dinosaurs and other extinct animals.

The Conversation

Wednesday, September 30, 2015

It's not just Facebook that goes down: the cloud isn't as robust as we think

Josemaria Toscano/shutterstock.com

The computing cloud we have created supports much of our day-to-day office and leisure activity, from office email to online shopping and sharing holiday photos. Even health, social care and government functions are moving towards digital delivery over the internet.

However, we should be wary that as we become more dependent on it, the cracks will show. The systems are often a patchwork of interconnected services provided by various companies and industry partnerships. A failure of one can lead to a failure in others.

For example, Skype recently went down for almost an entire day, while Facebook was down for more than an hour – the second time in a week – meaning that many sites that depend on Facebook accounts as authentication were locked out too.

Losing Facebook is an annoyance, but interruptions to major health and social care services or energy supply management systems can lead to real damage to the economy and people’s lives.

A few weeks ago Google’s data centres in Belgium (europe-west1-b) lost power after the local power grid was struck by lightning four times. While most servers were protected by battery backup and redundant storage, there was still an estimated 0.000001% loss of disk space – which for Google’s huge data stores meant a few gigabytes of data.
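To put that tiny-sounding percentage in perspective, here is a back-of-envelope calculation. The exabyte-scale store size is a hypothetical figure chosen for illustration, not Google's actual capacity; only the 0.000001% loss rate comes from the report.

```python
# Even a vanishingly small loss rate adds up at data-centre scale.
# ASSUMED_STORE_BYTES is hypothetical; 0.000001% is the reported figure.
PERCENT_LOST = 0.000001            # reported loss: 0.000001% of disk space
ASSUMED_STORE_BYTES = 10**18       # assume a 1-exabyte store for illustration

lost_bytes = ASSUMED_STORE_BYTES * (PERCENT_LOST / 100)
lost_gb = lost_bytes / 10**9
print(f"Roughly {lost_gb:.0f} GB lost")
```

On those assumed numbers the loss comes out at around 10 GB, which squares with the "few gigabytes" reported.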

The lesson is not to trust cloud providers alone to store and provide backups for your data – your backups need backups too. What it also shows is our dependence on power supply systems which, as long runs of conductive metal, are more prone to lightning strikes than you might imagine.

Facebook response graph, showing outage. Bill Buchanan

When the lights go out

Former US secretary of defence, William Cohen, recently outlined how the US power grid was vulnerable to a large-scale outage: “The possibility of a terrorist attack on the nation’s power grid — an assault that would cause coast-to-coast chaos,” he said, “is a very real one.”

As a former electrical engineer, I understand well the need for a safe and robust power supply, and that control systems can fail. It’s not uncommon to have alternative or redundant power supplies for important equipment. Single points of failure are accidents waiting to happen. Back up your backup.

The electrical supply grid will try to provide alternative power whenever any part of it fails. The power supply system needs to be built with redundancy in case of problems, and monitoring and control systems that can respond to failures and keep the electricity supply balanced.

Cohen fears a major power outage could lead to civil unrest. Janet Napolitano, former Department of Homeland Security secretary, said a cyber-attack on the power grid was a case of “when,” not “if”. And former senior CIA analyst Peter Vincent Pry went so far as to say that an attack on the US electrical power supply network could take the lives of nine out of ten Americans. The damage that an electromagnetic pulse (EMP) could cause, such as from a nuclear weapon air-burst, is well known. But many now think the complex and interconnected nature of industrial control systems, known as SCADA, could be the major risk.

An example of the potential problem is the north-east US blackout on August 14 2003, which affected 508 generating units at 265 separate power plants, cutting off power to 45m people in eight US states and 10m people in Ontario. It was caused by a software flaw in an alarm system in an Ohio control room which failed to warn operators about an overload, leading to a domino effect of failures. It took two days to restore power.

As the world becomes increasingly internet-dependent, we have created a network that provides redundant routes to carry traffic from point to point, but electrical supply failures can still take out core routing systems.

Control systems - the weakest link

Often it’s the less obvious elements of infrastructure that are most open to attack. For example, air conditioning failures in data centres can cause overheating sufficient to melt equipment, especially the tape drives used to store vast amounts of data. This could affect anything from banking transactions worth billions to the routing of traffic around a busy city or an emergency services call centre.

As we become more dependent on data and data-processing, so we are more vulnerable to their loss. Safety-critical systems are built with failsafe control mechanisms, but those mechanisms can also be attacked and compromised.

The cloud we have created and upon which we increasingly depend is not as hardy as we think. The internet itself, and the way we use it, is not as distributed as it was designed to be. We still rely too heavily on key physical locations where data and network interconnections are concentrated, creating unacceptable points of failure that could lead to a domino-effect collapse. The DNS infrastructure is a particular weak point, where just 13 root servers worldwide act as master lists for the entire web’s address book.
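It is worth noting that the "13 root servers" are really 13 named identities, a.root-servers.net through m.root-servers.net, each replicated at many sites worldwide via anycast – which softens, but does not remove, this point of failure. A minimal sketch of the naming scheme:

```python
# The 13 DNS root server identities are named a.root-servers.net through
# m.root-servers.net; each name is served by many anycast instances
# worldwide, not by a single physical machine.
import string

root_servers = [f"{letter}.root-servers.net"
                for letter in string.ascii_lowercase[:13]]

print(len(root_servers))                    # 13 named identities
print(root_servers[0], root_servers[-1])
```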

I don’t think governments have fully thought this through. Without power, without internet connectivity, there is no cloud. And without the cloud we have big problems.

The Conversation

Mars: why contamination and planetary protection are key to any search for life

The dark streaks on Mars' hills will be a good place to look for life. Credit: NASA/JPL/University of Arizona

It has been over 400 years since Galileo put humankind in its right place in the solar system. By looking at how Jupiter’s moons revolve about the gas giant, he came to the conclusion that Earth was not at the centre but one of many planets revolving around the sun. Similarly, recent evidence that water is likely to flow on Mars means facing the idea that Earth is not the only planet in the solar system to harbour life.

While Galileo’s heliocentric views were met by fierce opposition, finding life on Mars would today spark an unprecedented global scientific revolution on Earth. The immediate (and sensible) response will be a likely boost to the exploration of the red planet. But how should we go about it in an ethical and scientifically considered way – without bringing biological contamination from Earth to the unspoilt environment of Mars?

Where there’s water, there could be life. NASA’s recent discovery of salty traces, thought to come from seasonal water flows, means the race is now on to see actual water flowing on the surface. The salty traces were seen by the Mars Reconnaissance Orbiter – a satellite overlooking the surface of the planet – so the evidence comes from orbit rather than the ground. Current ground missions, including the Opportunity and Curiosity rovers, have so far found no evidence of liquid water on the surface, so future ground missions will now certainly focus on looking for water and testing for the presence of microbial life harboured by liquid water.

Artist’s concept of Mars Reconnaissance Orbiter. NASA/JPL

NASA’s plans for a manned mission, part of the Journey to Mars programme, could start as early as the 2030s. These could directly confirm, or reject, the possibility of a Martian biosphere within our lifetime. But it may be more difficult than it sounds.

Surviving the extreme

Back in the 1970s, experiments carried out by the Viking landers looked for signatures of biological activity in dust samples from the Martian surface. These famously led to tantalising positive results that were later disproved, so any new evidence of life on Mars will have to be thoroughly scrutinised.

The new evidence suggests liquid briny water can exist at temperatures as low as -23°C. This raises important questions about whether biochemical processes can take place in such exotic environments. One possibility could be Martian extremophile organisms, ones that are hardy enough to survive the most extreme environments and could withstand the harsh conditions of the red planet. This might motivate testing for subtler “proto” life forms – organisms similar to viruses, enzymes and prions – similar to those that may have existed on Earth before bacteria and archaea.

Plans will certainly include integrated tests, for example using lab-on-a-chip devices, to search for signature biochemical substances. But perhaps most importantly, newly devised tests will have to consider the effect that native Martian conditions, such as chemistry, radiation levels and temperature, could have on the biochemistry of any lifeforms.

New technologies should be adapted to test for life in areas of Mars of special interest. In fact, the National Academy of Sciences of the USA and the European Space Sciences Committee have already produced a report foreseeing potential “special regions” of interest apart from sources of briny water, including methane-rich areas, shallow ice-rich deposits and subsurface cavities such as caves.

Terraforming and contamination

But we need to proceed with care. Mars is a pristine environment and we would need to take into account the potential fragility of Martian life. Earth extremophiles could, in principle, accidentally make the whole journey to Mars as microscopic stowaways and survive on the Martian surface. This could already be the case with current land missions such as the Opportunity and Curiosity rovers, which might be deemed unfit to travel to biologically promising areas due to the hazard of microbial contamination from Earth.

With its thin atmosphere and plummeting temperatures, Mars is a very inhospitable environment for humans. However, the existence of water could open up opportunities for terraforming, a process to modify a planet to have Earth-like conditions. Air and soil humidity are key factors for plant growth and human sustenance and attempts to create a more hospitable environment could start with small, artificially enclosed areas of Earth-like soil pockets immersed in Earth-like atmospheres. Building such structures would pose several engineering challenges to ensure a protective shield against radiation, and to prevent leaks.

Colonisation plans would have to include extensive tests on the viability of organisms from Earth within the extreme Martian environment – for example, their resistance to lower gravity and higher radiation levels. However, there are subtler ramifications that might arise from constrained genetic and ecological diversity, such as genetic disorders caused by inbreeding.

The prospect of a potential biochemical and ecological clash between Earth and Martian organisms would be the most complex problem so far seen by biologists. Introducing alien species to an indigenous environment could lead to significant adverse effects on the stability of the ecosystem and, much like conservation work on Earth, we would have to address the issue of planetary protection.

Incoming organisms might also be susceptible to pathogenic infections from native lifeforms, something we would need to mitigate and plan for.

Beyond Galileo

In a famous letter to Kepler, Galileo complained that sceptic scholars of his celestial observations would not even look through his telescope, thus “shutting their eyes to the light of truth”. Sadly, Galileo’s work supporting heliocentrism was eventually banned and the man himself subjected to house arrest by the Inquisition.

This time around nobody will look away. The consistent progress made over the past decades to understand Mars is a signature of a much more cooperative and ideologically open society.

Much like the light that bounced off Jupiter’s moons and came through Galileo’s telescope, the images captured by the Mars Reconnaissance Orbiter have already started unveiling new and exciting information. The truth about Martian life is out there, and it is just a matter of time before we go and find it.

The Conversation

Tuesday, September 29, 2015

Why it hurts to see others suffer: pain and empathy linked in the brain

Study suggests the ability to experience pain may be the key to having empathy for others in pain. www.shutterstock.com

The human brain processes the experience of empathy – the ability to understand another person’s pain – in a similar way to the experience of physical pain. This was the finding of a paper that specifically investigated the kind of empathy people feel when they see others in pain – but it could apply to other forms of empathy too. The results raise a number of intriguing questions, such as whether painkillers or brain damage could actually reduce our ability to feel empathy.

The researchers used a complicated experimental set-up, which included functional magnetic resonance imaging (fMRI), a technique that measures blood-flow changes in the brain. However, brain imaging alone can’t prove a link between pain and “pain empathy”. This is because the same brain areas are activated in each case, partly because there is a lot of overlap generally between the brain areas used for feelings and emotion. Another factor is that fMRI is not a direct measure of brain activity – the blood-flow measure is instead something that we infer to accompany brain activity.

Brain waves: using fMRI as one of their tools, scientists have tracked how empathy is processed in the brain. www.shutterstock.com

The authors therefore took a new approach. They investigated whether the way a drug changes how the brain processes pain and empathy for those in pain can be used to understand the similarities and differences between these two experiences.

The study is based on two experiments on a total of about 150 participants – an unusually large number for this kind of study. The financial expense and general inconvenience of running fMRI studies mean that scientists usually involve only some 20 or 30 people.

The painkiller trick

All the participants in the study were given a tablet that they were told was an approved, highly effective, expensive, over-the-counter painkiller (to ensure it had the maximum chance of working). In fact, none of the participants received a real painkiller – all were given a placebo. This effect, called “placebo analgesia”, has been shown to be highly effective at reducing the amount of pain a person perceives. However, the authors wanted to know whether it affected how pain and pain empathy are processed in the brain.

A second group of people were also given this placebo analgesia, and 15 minutes later a second tablet – a drug that reverses the action of a painkiller. However, the participants were told this tablet would enhance the action of the painkiller, so they weren’t expecting it to counteract any previous drug they were given. The authors wanted to know whether the “placebo analgesia” could be reversed in the same way real painkillers can.

After waiting for the placebo painkiller to “take effect”, and checking that it had “worked” in all people, participants underwent various experiments. These involved receiving a short painful electrical shock to the back of the hand (the strength of this had previously been matched for differences in individual levels of pain threshold – we’ll call this self pain) and watching a picture of someone they had earlier met receive the painful stimulus (we’ll call this pain empathy).

Participants were then split into two groups: some received a real and painful shock (or watched someone receive it), while others received a painless stimulus. The painless stimulus was administered in the same way as the electrical stimulus, but at a lower current.

Participants were asked to rate the amount of pain they felt during self pain and were asked to rate the level of unpleasantness they felt while watching another person receive pain (pain empathy). And they also underwent fMRI during self pain and pain empathy.

The results?

In the first experiment with the one tablet only (placebo painkiller), 53 people received real pain and 49 people received (pretend) pain stimuli. The placebo painkiller reduced the amount of pain the participants reported feeling and also reduced the amount of unpleasantness they reported feeling while watching someone else experience pain. At the same time, the fMRI scan revealed that the network of regions that usually process pain showed a reduction in activity for placebo (pretend) pain compared to real pain.

In the second experiment, 50 participants took an additional tablet: 25 received the real drug that reverses the action of a painkiller and the other 25 a placebo. The real drug was found to reverse the effects of the placebo analgesia on self pain and also on pain empathy, each by a similar amount. This confirms that the effect of the “pretend” painkiller can be reversed in the same way that a real (drug) painkiller can.

Placebo or reality? We can feel other people’s pain. www.shutterstock.com

This means that empathy for pain is likely to be processed very similarly (in the brain) to first-hand pain. We can infer that this is because both self pain and pain empathy are changed in the same way by the painkiller-reversing drug, and because placebo analgesia also reduces pain empathy in the same way as it reduces pain. The fMRI results add further evidence that this is indeed what is going on.

Exploring empathy further

This is therefore consistent with the theory that empathy for pain occurs as a result of simulating another person’s feelings within one’s own brain. It also provides further evidence that the feelings of pain and pain empathy occur as a result of similar processes within the brain.

Further, patients who have damage and/or disease in the parts of the brain that fall within this network of pain-processing areas often experience a reduced ability to feel empathy for pain. This suggests that the ability to feel pain is necessary in order to experience empathy for pain.

Going forward, the research could be useful to explore empathy in other contexts. For example, the researchers suggest addressing the question of whether the pain from other events – for example social rejection – is processed in a similar way. This study certainly provides a new angle to investigate the feelings of pain and empathy – namely by manipulating two experiences to see if they are processed in similar ways.

Another suggestion is that taking painkillers may decrease one’s feeling of empathy for pain – but that topic needs further research. A way to do this could be to compare the results of this study using placebo painkillers with a similar design using real painkillers.

The Conversation

Can digging up 100-year-old bodies help crack unsolved murders?

Wikimedia Commons

Imagine the untold misery caused by telling the wrong family that their loved one is dead while another family is left in blissful ignorance. That’s why accurately identifying bodies is of paramount importance.

Identification is usually based on simple criteria. Visual recognition or distinctive tattoos are often enough. But as time passes and the body deteriorates, these methods become less reliable or impossible. This will certainly be the case for the bodies alleged to be those of Russia’s Crown Prince Alexei and Grand Duchess Maria, which are due to be re-examined in an attempt to determine if they are the real royals killed during the Russian Revolution.

Forensic analysis has developed apace in the last century, and DNA technology in particular has opened ways of analysing bodies that were previously unthought of even relatively recently. Such technology is increasingly being applied to cases from the past, and the media are always quick to report stories where high profile mysteries are finally “solved” using modern forensics. The cynic would note that some cases (I’m particularly thinking of the “Jack the Ripper” murders) have been “definitively solved” several times with different outcomes.

DNA evidence

So what can forensic science actually bring to these old cases? Certainly DNA can often be extracted from the body, often in teeth and bones. But a DNA profile isn’t just a printout of who someone is. It has to be compared to a known profile. It’s unlikely that we still have a hairbrush or toothbrush from Crown Prince Alexei, but if we have a known sample of DNA from a relation (such as bloodstains on a uniform from his great-grandfather Emperor Alexander II) then familial similarities can be used.

Our knowledge of DNA can do more than just identify someone. An old DNA sample can reveal any genetic diseases a subject may have been prone to. Similarly, advances in technology allow us to look at the chemical composition of bones and determine what kinds of things a person ate, and so where in the world they probably came from.

Analysis of pollen from the sinuses can tell us about what plants were around the person. Carbon dating may tell us how old someone actually is, although – as with most forensic techniques – only a range of dates can be given rather than a definitive answer. All of this information can help us work out whose body is (or isn’t) being examined.

Richard III: picking out the detail. Darren Staples/Reuters

But we also shouldn’t forget the simpler techniques. Just looking at a body may yield information depending on how well it has been preserved. Old or perimortem (from the time of death) injuries or bone deformities may be apparent. The shape of the skull and teeth may point to gender and ethnicity. CT scanning can show us inside the body without having to open it, helpful when dissection, which is an invasive and destructive process, is not an option.

This battery of tests can tell us an awful lot about how someone lived, how they died, and who they may have been. The most publicised example of this in the UK was the discovery of King Richard III, whose remains were identified by my colleagues at Leicester University last year.

It’s not just the body itself that the forensic investigators can examine. If someone is buried, what is the grave like – deep or shallow? What does the soil tell us? If the body is in a shroud, how was that made and of what? The possibilities are only limited by the imagination of the investigators.

Unsolved mysteries

But before we get carried away, we must bear in mind that few things in this field are completely certain. By way of example, you’d think that attempts to carbon date the Shroud of Turin – the cloth claimed to have covered Jesus’s dead body – would allow us to finally decide whether or not it really dates to Biblical times. But when the results came out suggesting not, an argument arose as to whether the sample tested was from the original weave or part of a medieval repair.

As time passes, the possibility of deterioration, contamination, alteration or outright fraud of a sample increases. The more people who handle a body, the more foreign DNA can be introduced. Time changes the body and changes the environment.

Finally, there is always the issue of interpretation. For example, was Palestinian leader Yasser Arafat poisoned with radiation or not? Different interpretations of the test results can lead to different conclusions. It was originally suggested that Arafat had been poisoned with polonium-210, and an exhumation of his body produced samples showing unusually high levels of this element, but later analysis suggested that this was environmental in nature.

So can modern forensic science reveal secrets from the past? Yes, but not necessarily as definitively as excited headlines may wish us to believe.

The Conversation

Ad industry may gripe about adblockers, but they broke the contract – not us

madpixblue/shutterstock.com

The latest version of Apple’s operating system for phones and tablets, iOS 9, allows the installation of adblocking software that removes advertising, analytics and tracking within Apple’s Safari browser. While Apple’s smartphone market share is only around 14% worldwide, this has prompted another outpouring from the mobile and web advertising industry on the effects of adblockers, and discussion as to whether a “free” web can exist without adverts.

It’s not a straightforward question: advertising executives and publishers complain that ads fund “free” content and adblockers break this contract. Defenders of adblocking point out that the techniques used to serve ads are underhand and that the ads themselves are intrusive. Who is right?

Why we use adblockers

There are good reasons for using adblockers. People are usually prompted to do so by online advertising techniques that they find intrusive. These include pop-ups, pop-unders, blinking ads, being forced to watch videos before getting to the content, and ads that contravene the Acceptable Ads Manifesto.

Adverts and trackers can be loaded from multiple third-party websites, inserted into the web page by advertising networks rather than by the site’s publishers. While this saves publishers the hassle of finding advertisers and negotiating rates, it means they often have little say over what ads appear, which can lead to ads that are irrelevant, dubious, even offensive. The additional load on the browser from connecting to multiple sites at once also drains battery and bandwidth and slows down the page load – all for something we don’t want and which scours our devices to collect information about us for further use.

The UK’s Internet Advertising Bureau (IAB) believe that 15% of British adults use adblockers. The IAB study found that people blocked adverts because they were intrusive (73%), ugly or annoying (55%), slowed down web browsers (54%), were irrelevant (46%), or over privacy concerns (31%). What this suggests is that users don’t reject advertising per se, but intrusive advertising specifically.

Advertising, ethics and the web

The advertising industry argues that adblockers undermine the revenue model for publishers that relies upon behaviourally targeted advertising. They claim adblockers stifle start-ups that are dependent on advertising as a means of generating revenue. The theory goes that without advertising revenue all that’s left is subscription services, something which generally only large corporations are good at building.

While there is some truth to this, the argument assumes that digital start-ups (whether this be an app, a new social media service, or a news website) have access to a large user base from which to generate ad revenue. But of course this isn’t the case when firms are only just getting going. Start-ups rely on investment to grow and be self-sustaining: only then can advertising assist.

It is reasonable to argue that content has to be paid for. We might try to ignore the adverts that subsidise printed newspapers and magazines, but we cannot remove them. However, in respect of mobile devices – which have now become the primary means through which the world gets online – we must also consider the data plan that we pay for as part of our mobile phone contract. The firm behind one mobile adblocker, Shine, estimates that depending on where we live, ads can use up 10-50% of a user’s data allowance.
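Shine's 10-50% range is easy to translate into concrete numbers. The 2 GB monthly allowance below is a hypothetical plan size chosen for illustration, not a figure from the article:

```python
# Applying Shine's 10-50% estimate to an assumed 2 GB monthly allowance.
# PLAN_MB is hypothetical; the 10-50% range is the cited estimate.
PLAN_MB = 2048                         # assumed 2 GB monthly data allowance
AD_SHARE_LOW, AD_SHARE_HIGH = 0.10, 0.50

ad_low_mb = PLAN_MB * AD_SHARE_LOW
ad_high_mb = PLAN_MB * AD_SHARE_HIGH
print(f"Ads could eat {ad_low_mb:.0f}-{ad_high_mb:.0f} MB of a 2 GB plan")
```

On that assumed plan, ads alone would burn through roughly 200 MB to 1 GB of paid-for data every month.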

Annoying mobile ads make for unhappy phones and users. ronbennetts, CC BY-ND

Browsers as consent mechanisms

So the case for mobile is different, in that ads represent a cost to the user. Europeans living in EU member states have the right to refuse to be tracked by third parties. This comes under Article 5(3) of the EU ePrivacy Directive, which in 2012 was altered so that people have to be asked upfront whether they consent to cookies.

The aim of this was to shift third-party cookies from being opt-out to being opt-in. The ad industry argued that people’s web browser settings were sufficient to indicate consent to interest-based advertising and tracking – but of course, many people do not know how to alter browser settings. Seen in this way, adblockers are a means of expressing (or rather, denying) consent – something made clear by the need to find and install an adblocking programme or browser extension.

The problem with the implied contract of advertising-for-content is that it is opaque and built upon questionable terms. It’s disingenuous to blame people for using adblockers: we accept adverts in magazines, newspapers and cinemas and on radio, billboards and television. The good ones make us smile. The best we fondly remember. We mostly stick to the deal that we get content free or at reduced cost in exchange for being exposed to ads.

But the growth of adblocking demonstrates that parts of the advertising industry have overstepped the mark with their creepy tracking mechanisms and deliberately confusing or irritating formats. The ad industry broke the contract, not us. How does anyone think that irritating people is the way forward? Which brand, large or small, would want to be associated with annoying their customers?

The growing number of people using desktop and mobile adblockers leaves the online advertising industry two options: fight web users and ad-blocking firms by lobbying for legal change or protection, or the more interesting route of trying to create a model that works for everyone. Rather than fighting the tide, advertising and publishing need to find a way to swim with it.

The Conversation

Monday, September 28, 2015

The Martian: a perfect balance of scientific accuracy and gripping fiction

Matt Damon is feeling lonely on Mars. 20th Century Fox

“I’m going to have to science the shit out of this,” says astronaut Mark Watney, played by Matt Damon, after being stranded on Mars. That pretty much sums up the tone in Ridley Scott’s new film The Martian, adapted from Andy Weir’s novel, which appears in cinemas this week. Many have already commended the movie for its scientific rigour and Scott has said himself that it is as “accurate as we can possibly get it”.

So does the movie live up to expectations? Well, the mission design and the hardware are based on actual NASA capabilities and an existing plan to get humans to Mars known as Mars Direct. However, there are parts that are less scientifically accurate. But what the story lacks in scientific rigour, it makes up for with great fiction that could inspire new interest in science.

Growing food in space

The main challenge for Watney is to find a way to grow food on the planet in order to stay alive for the four years until NASA’s next planned mission to Mars. While this has of course never been done in real life, it is not entirely unrealistic. In August 2015 astronauts on the International Space Station (ISS) ate lettuce that they had grown in space. This was the first time that humankind had grown and eaten food away from home.

In these so-called “VEGGIE” experiments the crew had been provided with everything they needed: soil, seeds, specific lamps tuned to the requirements of the plants. In The Martian, however, Watney had none of this specially-prepared equipment and, crucially, no soil.

The vegetable production system aboard the ISS. NASA/wikipedia

The technical term for loose material covering rock is regolith, which includes the soil that we all know on Earth. Even regolith on Mars is familiar: we have been studying its properties since the 1970s, starting with the Mars Viking missions. NASA’s Phoenix lander has found evidence that the regolith contains crucial minerals for growing plants and is slightly alkaline, suitable for a range of crops – including asparagus and green beans.

Normally potatoes are grown in an acidic soil as this suppresses the effect of pathogens, such as common scab, but also because alkaline soils have a negative effect on the yield of potatoes. Our hero could easily account for this in his calculations for the number of plants required to grow enough food for a set number of days.

But Martian regolith may also contain perchlorates, which are harmful to human bodies. Somewhat ironically, however, they are used as markers for the presence of water. Watney needs additional water for his crops and sets about making it by combining oxygen with hydrogen. To get the hydrogen, he catalyses a type of rocket fuel known as hydrazine, in a somewhat dangerous experiment that would be even more dangerous in real life – as you’d end up with some toxic leftovers.

But exciting new results suggest that water sometimes runs openly on the surface of Mars. So in reality, it would have been safer for Watney to just go and extract it from the regolith itself.

We are seeing the early days of growing food in space. Eventually, if humans are to start living for extended periods on the moon, and eventually Mars, we need to be able to do experiments generating raw materials directly on their surfaces. There are already ideas to test our ability to grow food on the moon in small canisters, including basil and turnips.

Stormy weather

What plunges Watney into peril in the movie is an aborted mission in strong winds. Here on Earth, we use the Beaufort scale to measure wind strength. Gale force winds have speeds up to 74km per hour. To get a sense of what that’s like, imagine putting your head out of a car window while moving at 50 miles per hour. Then try to imagine what it would be like at 100 miles per hour, as experienced by Watney and his fellow astronauts on Mars.

On Earth this would be a devastating storm, but not on Mars. The pressure that you feel on your skin when out on a windy day is known as the dynamic pressure. It depends not only on how fast the air is moving, but also on its density. In gale force winds on Earth this pressure is about 250 Pascals, and the force it exerts on an average person is about one-third of their body weight. This is why you have trouble walking about in gale force winds.

But on Mars, the atmosphere is just 1% of the density of that on Earth, meaning the dynamic pressure is much smaller. Even in Watney’s storm, the force on a human being would be tiny – less than one-tenth of their weight under Mars’ gravity. The storm that Watney and his crew encounter would only feel like a gentle breeze – not the devastating storm shown in the film.
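
The comparison is easy to check with the formula for dynamic pressure, q = ½ρv². A quick sketch, in which the densities and the person’s frontal area are rounded assumptions:

```python
# Dynamic pressure q = 0.5 * rho * v**2, and the resulting force on a person.
# Densities and the frontal area are rounded assumptions.

RHO_EARTH = 1.2      # kg/m^3, sea-level air density
RHO_MARS = 0.02      # kg/m^3, roughly 1% of Earth's
FRONTAL_AREA = 0.7   # m^2, rough frontal area of a standing adult

def dynamic_pressure(rho, v):
    """Pressure in Pascals on a surface facing wind of speed v (m/s)."""
    return 0.5 * rho * v ** 2

earth_gale = dynamic_pressure(RHO_EARTH, 20.6)   # ~74 km/h gale on Earth
mars_storm = dynamic_pressure(RHO_MARS, 45.0)    # ~160 km/h Martian storm

print(f"Earth gale: {earth_gale:.0f} Pa, {earth_gale * FRONTAL_AREA:.0f} N on a person")
print(f"Mars storm: {mars_storm:.0f} Pa, {mars_storm * FRONTAL_AREA:.0f} N on a person")
```

The Earth gale comes out at roughly 250 Pascals, matching the figure above, while the far faster Martian storm musters only about 20 Pascals – around 14 Newtons on a person, a small fraction of the roughly 260 Newtons they would weigh on Mars.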

Dust storms of 2001 observed on Mars by Mars Global Surveyor. NASA/JPL/Malin Space Science Systems

Despite this, the wind and the sound it produces does actually have an important function in the film – it creates tension and allows us to empathise with Watney and feel his fear.

Finding Pathfinder

Even though this is a work of fiction, as a follower of Mars exploration I felt a tingle of excitement as Watney recovered the Mars Pathfinder, buried under a huge pile of dust. Just as NASA follow Watney’s exploits using imaging from orbit in the film, space scientists have also been monitoring the landing sites of Mars spacecraft, including Pathfinder.

Measurements at the landing site of the Mars lander Phoenix have shown that dust settles out of the atmosphere at a rate of about 0.1 – 1 thousandth of a millimetre per Martian day. Over the 20 years Pathfinder has been on Mars, that only amounts to between 1 mm and 10 mm of accumulated dust. So, in reality, Watney wouldn’t really have needed to do much digging at all. But this dramatic unearthing of Pathfinder pulls at the heart-strings of our exploration of Mars.
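
That dust arithmetic can be reproduced in a few lines. The only extra ingredient is the length of a Martian day, or sol, of roughly 1.03 Earth days:

```python
# Dust accumulation over Pathfinder's stay, using the settle-out rate
# measured at the Phoenix landing site. A Martian day (sol) lasts about
# 1.0275 Earth days.

SOL_IN_EARTH_DAYS = 1.0275

years_on_mars = 20
sols = years_on_mars * 365.25 / SOL_IN_EARTH_DAYS

for rate_mm_per_sol in (0.0001, 0.001):   # 0.1-1 thousandth of a millimetre
    depth_mm = rate_mm_per_sol * sols
    print(f"{rate_mm_per_sol} mm per sol -> {depth_mm:.1f} mm of dust")
```

That gives roughly 0.7mm to 7mm of dust – of the order of the 1mm-10mm quoted above, and nothing like the drift Watney digs through.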

Pathfinder’s landing site imaged by Mars Reconnaissance Orbiter NASA/JPL/University of Arizona

All too often in science fiction the characters are placed in impossible situations from which they can only escape by resorting to a kind of scientific deus ex machina. This is certainly not so in The Martian, in which the story has a logically and physically possible resolution.

The Martian is one of an increasing number of Hollywood films that explore the human soul and spirit of humanity while still grounded in science. Another example is how Christopher Nolan and Kip Thorne used Einstein’s theory of General Relativity to tremendous effect in Interstellar. However, The Martian uses science in a different way. It shows what it is to be a scientist. It shows Watney building scientific arguments, doing calculations, facing the outcome of making an error in reasoning – his answers aren’t in the back of the book. This engages audiences with compelling science.

One could easily be critical of the science shown in fiction. But in a push to reflect “real science” in the cinema we shouldn’t surrender strong narratives for the sake of scientific accuracy. To do so denies us the opportunity to tell stories and to show science in action and in unfamiliar settings.

The Conversation

NASA: streaks of salt on Mars may mean flowing water, and new hopes of life

There's finally evidence that salty water could be behind the mysterious ephemeral dark streaks on Mars. NASA/JPL-Caltech/UArizona

Salty streaks have been discovered on Mars, which could be a sign that salty water is seeping seasonally to the surface. Scientists have previously observed dark streaks (see image above) on the planet’s slopes which are thought to have resulted from seeps of water wetting surface dust. Salts left behind in these streaks as the water dried up are the best evidence for this yet. The discovery is important – not least as it raises the tantalising prospect of a viable habitat for microbial life on Mars.

I have lost track of how many times water has been “discovered” on Mars. In this case, the researchers have detected hydrated salts rather than salty water itself. But the results, published in Nature Geoscience, are an important step to finding actual, liquid water. So how close are we? Let’s take a look at what we know so far and where the new findings fit in.

Ice versus liquid water

Back in the 18th century, William Herschel suggested that Mars’s polar caps, which even a small telescope can detect, were made of ice or snow – but he had no proof. It wasn’t until the 1950s that data from telescopes fitted with spectrometers, which analyse reflected sunlight, was interpreted as showing frozen water (water-ice). However, the first spacecraft to Mars found this difficult to confirm, as the water-ice is in most places covered by a layer of carbon dioxide ice.

Part of Nirgal Vallis, a valley on Mars first seen on this image by Mariner 9 in 1972. This image is 120km from side to side. NASA

In the 1970s attention turned to the much juicier topic of liquid water on Mars, with the discovery by Mariner 9 of ancient river channels that must have been carved by flowing water. These channel systems were evidently very ancient (billions of years old), so although they showed an abundance of liquid water in the past they had no bearing on the occurrence of water at the present time.

‘Canals’ on Mars drawn by Percival Lowell in 1896. Percival Lowell/wikipedia

Gullies and droplets

Things became more interesting in 2000, with the announcement that high-resolution images from the Mars Orbiter Camera on board Mars Global Surveyor showed gullies several metres deep and hundreds of metres long running down the internal slopes of craters.

It was suggested that they were carved by water that had escaped from underground storage. Such small and sharp features had to be young. They could still have been thousands of years old but annual changes were soon noticed in a few gullies which appeared to suggest that they were still active today.

Gullies inside a crater in Noachis Terra, 47 degrees south. NASA/JPL/Malin Space Science Systems

Are gullies really evidence of flowing water? Some probably are, but there are other explanations such as dry rock avalanches or slabs of frozen carbon dioxide scooting downhill. Some gullies start near the tops of sand dunes where an underground reservoir of water is very improbable.

In 2008 the lander Phoenix actually saw water on Mars. When it scraped away at the dirt, it found water-ice a few centimetres down, but more excitingly droplets that could hardly be anything other than water were seen to form on the lander’s legs. It was suggested that the water had condensed around wind-blown grains of calcium perchlorate, a salt mineral whose properties enable it to scavenge water from the air and then dissolve it. Moreover, whereas pure water would freeze at the local temperature at the time (between -10°C and -80°C), water containing enough dissolved salts could stay liquid.

Water droplets on the leg of the Phoenix lander in 2008. Arrow points to the relevant leg. NASA/JPL-Caltech/University of Arizona/Max Planck Institute

Water seeps?

In 2011 a new phenomenon was recognised on high resolution images from orbit by the Mars Reconnaissance Orbiter. These are “recurrent slope lineae” or RSLs, dark downhill streaks that come and go with the seasons (which last about twice as long as seasons on Earth).

They are usually between 0.5m and 5m wide, and not much more than 100m long. These could mark avalanches of dry dust, but the favoured explanation has always been – and the new NASA findings support this – that water is seeping from the ground and wetting the surface enough to darken it, though without flowing in sufficient volume to erode a gully.

Artificial perspective view of the streaks. NASA/JPL/University of Arizona

What is most noteworthy about the new research is that it is the first determination of the composition of the streaks. The researchers used an instrument called CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) on board the orbiter to analyse the light reflected off the surface of these streaks. In this way, they could show that the streaks contain salts that are most likely to be magnesium perchlorate, magnesium chlorate and sodium perchlorate. These kinds of salts have antifreeze properties that would keep water flowing in the cold temperatures, which tallies with what Phoenix suggested in 2008.

There are no signs that liquid water was present when the NASA measurements were made. Scientists will surely keep looking in the same spot in the hope of finding the features that would indicate liquid water instead of those indicative of salts left behind after the water has dried up. However, few can doubt that the salts were put there by flowing water.

Importantly, with liquid water comes the prospect of life on Mars. The researchers cannily conclude by pointing out that in the most arid parts of Earth’s Atacama desert the only source of water for microbes is what they can get from salts dissolved in water. If it can happen on Earth, maybe it can happen on Mars too.

The Conversation

In the future, your internet connection could come through your lightbulb

mightyohm, CC BY-SA

The tungsten lightbulb has served well over the century or so since it was introduced, but its days are numbered with the arrival of LED lighting, which consumes a tenth of the power of incandescent bulbs and lasts 30 times longer. Potential uses of LEDs are not limited to illumination: smart lighting products are emerging that offer various additional features, including linking your laptop or smartphone to the internet. Move over Wi-Fi, Li-Fi is here.

Wireless communication with visible light is, in fact, not a new idea. Everyone knows about using smoke signals on a desert island to try to capture attention. Perhaps less well known is that in the time of Napoleon much of Europe was covered with optical telegraphs, otherwise known as the semaphore.

The photophone, with speech carried over reflected light. Amédée Guillemin

Alexander Graham Bell, inventor of the telephone, actually regarded the photophone as his most important invention, a device that used a mirror to relay the vibrations caused by speech over a beam of light.

In the same way that interrupting (modulating) a plume of smoke can break it into parts that form an SOS message in Morse code, so visible light communications – Li-Fi – rapidly modulates the intensity of a light to encode data as binary zeros and ones. But this doesn’t mean that Li-Fi transceivers will flicker; the modulation will be too fast for the eye to see.
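
The principle can be sketched in a few lines of Python: bytes become a stream of light-intensity samples, and a receiver reading the light level recovers them. A real Li-Fi modem uses far more sophisticated modulation; this toy on-off keying is just to show the idea of bits as fast intensity changes:

```python
# Toy on-off keying: each bit of each byte becomes one light-intensity
# sample (1 = bright, 0 = dim). The receiver groups samples back into bytes.

def to_intensity_stream(message: bytes) -> list:
    """Expand each byte into 8 samples, most significant bit first."""
    samples = []
    for byte in message:
        for bit in range(7, -1, -1):
            samples.append((byte >> bit) & 1)
    return samples

def from_intensity_stream(samples: list) -> bytes:
    """Reassemble bytes from groups of 8 samples (the photodiode's view)."""
    out = bytearray()
    for i in range(0, len(samples), 8):
        byte = 0
        for sample in samples[i:i + 8]:
            byte = (byte << 1) | sample
        out.append(byte)
    return bytes(out)

stream = to_intensity_stream(b"SOS")
assert from_intensity_stream(stream) == b"SOS"
print(stream[:8])   # the letter 'S' (0x53) as pulses: [0, 1, 0, 1, 0, 0, 1, 1]
```

At millions of such samples per second, the flicker is far too fast for the eye to register – the room simply looks lit.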

Wi-Fi vs Li-Fi

The enormous and growing user demand for wireless data is placing huge pressure on existing Wi-Fi technology, which uses the radio and microwave frequency spectrum. With exponential growth of mobile devices, by 2019 more than ten billion devices are expected to exchange around 35 quintillion (10¹⁸) bytes of information each month. This won’t be possible using existing wireless technology due to frequency congestion and electromagnetic interference. The problem is most acutely felt in public spaces in urban areas, where many users try to share the limited capacity available from Wi-Fi transmitters or mobile phone network cell towers.

A fundamental communications principle is that the maximum data transfer possible scales with the electromagnetic frequency bandwidth available. The radio frequency spectrum is heavily used and regulated, and there just isn’t enough additional space to satisfy the growth in demand. So Li-Fi has the potential to replace radio and microwave frequency Wi-Fi.
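
That scaling is captured by the Shannon limit, C = B log₂(1 + SNR): capacity grows linearly with the bandwidth B. A quick sketch, in which the signal-to-noise ratio is an arbitrary assumed value – the contrast between the two bandwidths is the point:

```python
# Shannon's limit: channel capacity C = B * log2(1 + SNR) grows linearly
# with bandwidth B. The SNR here is an arbitrary assumed value (20 dB).

from math import log2

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * log2(1 + snr_linear)

SNR = 100   # assumed signal-to-noise ratio (20 dB)

radio = shannon_capacity_bps(20e6, SNR)     # a 20 MHz Wi-Fi channel
light = shannon_capacity_bps(300e12, SNR)   # ~300 THz of visible spectrum

print(f"20 MHz radio channel: about {radio / 1e6:.0f} Mb/s ceiling")
print(f"300 THz of light:     about {light / 1e12:.0f} Tb/s ceiling")
```

Real systems fall well short of the Shannon ceiling, but the million-fold difference in available bandwidth is why the visible spectrum is so attractive.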

Light frequencies on the electromagnetic spectrum are underused, while to either side is congested. Philip Ronan, CC BY-SA

The visible light spectrum has huge, unused and unregulated capacity for communications. The light from LEDs can be modulated very quickly: data rates as high as 3.5Gb/s using a single blue LED or 1.7Gb/s with white light have been demonstrated by researchers in our EPSRC-funded Ultra-Parallel Visible Light Communications programme.

Unlike Wi-Fi transmitters, optical communications are well-confined inside the walls of a room. This confinement might seem to be a limitation for Li-Fi, but it offers the key advantage that it is very secure: if the curtains are drawn then nobody outside the room can eavesdrop. An array of light sources in the ceiling could send different signals to different users. The transmitter power can be localised, more efficiently used and won’t interfere with adjacent Li-Fi sources. Indeed the lack of radio frequency interference is another advantage over Wi-Fi. Visible light communications is intrinsically safe, and could end the need for travellers to switch devices to flight mode.

A further advantage of Li-Fi is that it can use the existing power lines that feed LED lighting, so no new infrastructure is needed.

How a Li-Fi network would work. Boston University

Lightening the burden of the internet of things

The internet of things is an ambitious vision of a hyper-connected world of objects autonomously communicating with each other. For example, your fridge might inform your smartphone that you have run out of milk, and even order it for you. Sensors in your car will directly alert you through your smartphone that your tyres are too worn or have low pressure.

Given the number of “things” that can be fitted with sensors and controllers then network-enabled and connected, the bandwidth needed for all these devices to communicate is vast. Industry monitor Gartner predicts that 25 billion such devices will be connected by 2020, but given that most of this information needs only to be transferred a short distance, Li-Fi is an attractive – and perhaps the only – solution to making this a reality.

Several companies are already offering products for visible light communications. The Li-1st from PureLiFi, based in Edinburgh, offers a simple plug-and-play solution for secure wireless point-to-point internet access with a capacity of 11.5 Mbps – comparable to first generation Wi-Fi. Another is Oledcomm from France, which exploits the safe, non-radio frequency nature of Li-Fi with installations in hospitals.

There are still many technological challenges to tackle but already the first steps have been taken to make Li-Fi a reality. In the future your light switch will turn on much more than just illumination.

The Conversation

Simpler, smaller, cheaper? Alternatives to Britain's new nuclear power plant

na0905/flickr, CC BY-SA

Britain appears to finally be on the way to building its first new nuclear power station for 20 years. The chancellor of the exchequer, George Osborne, recently announced a £2 billion loan guarantee linked to the development of the Hinkley Point C power plant, signalling that the final decision to build cannot be far behind. But the plans from French firm EDF have drawn criticism from an array of experts and commentators for being too expensive and relying on an as yet unproven technology that is already being redesigned.

Although the basic principles of nuclear energy are relatively simple, the specific designs of different reactors can vary considerably. The two other companies hoping to build new nuclear plants in the UK, for example, each favour alternatives to EDF’s model. So are we in danger of backing the wrong technology with the current plans for Hinkley Point?

Nuclear reactors generate heat from uranium using a reaction known as fission. This is a process where atomic nuclei split into two fragments, releasing energy in the form of heat. Fission of one atom also releases several neutrons that can spark the same process in neighbouring atoms, leading to a chain reaction throughout the uranium fuel within the reactor core. The chain reaction can be slowed or stopped by inserting control rods into the core to absorb the excess neutrons.

The heat from the reaction is used to create steam, which generates electricity via a turbine. The heat is carried away from the core by a coolant substance, which can also be used as a moderator to slow down the neutrons and increase the chances that they induce fission in other fuel atoms (although some designs use separate moderators).
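
The chain reaction described above can be caricatured with a single number, the multiplication factor k: each generation of fissions produces k times as many neutrons as the last. A toy sketch – the numbers are illustrative, not reactor physics:

```python
# Toy chain reaction: each generation of fissions produces k times as many
# neutrons as the last. k > 1 grows exponentially; control rods absorb
# neutrons, push k below 1, and the population dies away. Illustrative only.

def neutron_population(k, generations, start=1000):
    """Neutron count per generation under multiplication factor k."""
    population = [start]
    for _ in range(generations):
        population.append(population[-1] * k)
    return population

runaway = neutron_population(k=1.05, generations=10)      # rods withdrawn
controlled = neutron_population(k=0.95, generations=10)   # rods inserted

print(f"k=1.05: {runaway[-1]:.0f} neutrons after 10 generations")
print(f"k=0.95: {controlled[-1]:.0f} neutrons after 10 generations")
```

A power reactor is operated so that k sits at exactly 1, with the control rods trimming the balance either way.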

Overdue, over-budget, over-engineered

The reactor EDF wants to use at Hinkley Point C is a type of pressurised water reactor (PWR) that uses water as both the moderator and coolant. The specific design is known as a European pressurised reactor (EPR) and evolved from earlier French models with innovations such as a concrete-ceramic core catcher to prevent the molten core of the reactor escaping in the case of a meltdown. If built, it will deliver 3.2GW of electrical power, roughly equivalent to 7% of the UK’s electricity.

Power stations featuring this enhanced EPR design are being built in France, Finland and China, but none are yet online and the first two are billions of pounds over budget and years overdue. The Chinese projects are only delayed by around two years, perhaps due to experience gained in the European projects.

The predicted cost of Hinkley Point C has steadily risen from £14bn to £24.5bn. The complexity of the project is enormous, due to what many believe to be an over-engineered design. There are also reported issues regarding the manufacture of the reactor pressure vessel for the EPR, associated with anomalies in the composition of the steel.

Proven technology in Japan Toach japan/Wikimedia Commons, CC BY-SA

Simpler reactor

EDF has admitted that Hinkley Point C will not start operating in 2023 as originally predicted. As a result, the first new nuclear plant to come online in the UK may actually be an entirely different type: the advanced boiling water reactor (ABWR), a proven Japanese design from Hitachi-GE that has been used in nuclear power stations since the 1990s.

This reactor is simpler than a PWR because the water is allowed to boil in the reactor, creating steam directly. In PWRs, on the other hand, two stages are required to create the steam and the water in the core is kept under pressure to prevent boiling. The ABWR is also self-compensating. This means it can maintain a stable temperature simply through normal operation. The hotter it gets, the more steam it produces; steam moderates neutrons less effectively than liquid water, so the reaction slows down, diminishing the amount of heat again.
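
This self-compensating behaviour is a classic negative feedback loop, which can be caricatured in a few lines – the feedback coefficient here is invented purely for illustration:

```python
# Toy negative feedback: excess power makes more steam voids, which
# moderate fewer neutrons and pull the power back down. The feedback
# coefficient is invented purely for illustration.

def step(power, setpoint=1.0, feedback=0.5):
    """One update: void feedback pulls power back towards the setpoint."""
    error = power - setpoint          # excess power -> extra steam voids...
    return power - feedback * error   # ...which damp the chain reaction

power = 1.4   # start 40% above equilibrium, e.g. after a transient
for _ in range(6):
    power = step(power)

print(f"power settles near {power:.3f} of nominal")
```

Each pass halves the excess, so the toy reactor glides back towards its nominal power without any external intervention – the essence of a self-compensating design.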

On top of this, the ABWR has advantages from a manufacturing point of view. It has a modular design (it is built in sections assembled in factories rather than in one big piece), so its construction is more straightforward and therefore cheaper. This means the electricity price the government will need to guarantee to the plant’s operator, Horizon, is likely to be lower than the £92.50/MWh agreed with EDF for Hinkley Point C.

New generation

Looking further into the future, the NuGen proposal, backed by Toshiba, to bring the Westinghouse AP1000 design to the UK is another promising prospect. This advanced passive 1GW reactor is actually a PWR, but it is highly simplified compared to the EPR, with far fewer components and so far fewer things that could go wrong. It also employs a large number of passive safety features that work even without an external power source: natural processes such as gravity-induced flow and convection are used to drive the circulation of cooling water.

Unfortunately, the rather blinkered focus of the government on delivering the Hinkley Point project without recognising what is coming in the near future is a significant point of weakness for UK nuclear energy policy. An approach that gave greater recognition to the potential of other designs could avoid future embarrassment, as well as saving money for the taxpayer and energy bill payer.

The Conversation
