It takes more than water vapour to make an alien latte

We need to talk about K2-18b.

You know why.

You 👏are 👏having 👏fun 👏wrong.

We shall begin with a plot re-cap.

K2-18b is an extrasolar planet, orbiting a dim star known as a red dwarf. As the planet slid across its star, light from that central ball of stellar fusion passed through the planet’s atmosphere. This was detected with the Hubble Space Telescope, which noted that wavelengths of light typically munched up by water molecules were missing. Thus occurred the first detection of water vapour in the atmosphere of an exoplanet that is smaller than Neptune… and that orbits in the habitable zone.

Is this exciting? HELLA YES.

Should we all be focussed on the last five words of that re-cap? HELLA NO.


Because the habitable zone means absolutely nothing for this planet.

And you would die there.

Now at this juncture, I feel you burning to stop this planetary crusade. “Wait a minute!” —you shout— “We’ve heard this spiel from you before. The habitable zone is just a region around a star where the Earth could support liquid water. Any old Bob, Dick or Henrietta of a planet can saunter in and orbit without any other Earth-like properties whatsoever.”

Average temperatures of your favourite habitable zone worlds. The moon doesn’t really do average temperature, as it lacks an atmosphere to do the averaging. So lunatics get a scorching day and frozen night. They die. Just like you would.


That is true. Well done for remembering.

“Buuut this time, we’ve detected water vapour!” —you persist— “So we know that the planet HAS ITSELF SOME SEAS! And everyone knows, that makes for some sweeeeeet alien lattes!”

No. NO. NOO! I try to inject, but you blaze on unperturbed:

“Sure, habitability is complex and we need more than water to thrive. But the detection of water on a habitable zone planet makes K2-18b the absolutely scrumptious best candidate for a world teeming with small furry creatures we’ve ever ever evvvvvvveeeeerrrr seen!”

At this stage, I post a cute meme to the internet and wait for you to collapse in an exhausted heap of alien world ecstasy.

Now that we’re settled, let me tell you how you are guaranteed to die on K2-18b (if you want a cup of tea, go and get it now).

K2-18b weighs in at 8 times heavier than the Earth, with a size 2.3 times that of our planet. This gives the planet an average density around 3.3 g/cm3.

This density is tricky. It’s similar to Mars (3.9 g/cm3) which initially seems promising; Mars being a rocky world that we believe may have been habitable in the past. But Mars is a squiffy excuse for a planet. It only has 1/10 of the mass of the Earth. As planets beef up, gravity exerts a greater squeeze that ups the density. The Earth has an average density of about 5.5 g/cm3 and if K2-18b had a similar composition, we’d be looking at a density of ballpark 10 g/cm3 [1] <-- these are references at the bottom of the article, see how professional we're getting?.
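If you fancy checking that arithmetic yourself, here’s a minimal Python sketch. (The exact answer wobbles with the adopted mass and radius, which is why you’ll see values between roughly 3 and 4 g/cm3 quoted for K2-18b.)

```python
# Back-of-envelope density check for K2-18b, using ~8 Earth masses
# and ~2.3 Earth radii as round numbers.
EARTH_DENSITY = 5.51  # g/cm^3, Earth's mean density

def mean_density(mass_earths: float, radius_earths: float) -> float:
    """Mean density in g/cm^3 for a planet given in Earth masses and radii.

    Density scales as mass / radius^3, so we can work entirely in
    Earth-relative units and multiply by Earth's mean density.
    """
    return EARTH_DENSITY * mass_earths / radius_earths**3

print(round(mean_density(8.0, 2.3), 1))   # K2-18b: ~3.6 g/cm^3
print(round(mean_density(1.0, 1.0), 1))   # sanity check: Earth, 5.5 g/cm^3
```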

To lower that density, we’ve got to mix our Earth-y silicate existence with something light and fluffy. For this planetary cookery class, there are three main options:

  • Fatal Option #1: A thick atmosphere of light gases such as hydrogen and helium.

  • Fatal Option #2: A truck load of water.

  • Fatal Option #3: A funky hybrid mix of the fatal hydrogen and fatal water.

Excited? Who wouldn’t be?! Let’s start with Bachelor Planet #1.

While the Earth doesn’t have a strong enough gravitational pull to hold onto light gases such as hydrogen and helium, there’s no real question that K2-18b has a hefty stockpile in its atmosphere. The detection of water vapour was published in two independent studies [2] and both find the data is best matched by a hydrogen-dominated atmosphere. Moreover, previous empirical eyeballing of exoplanets [3] has found that once you start approaching 1.5 Earth radii, planets swell in size but not in mass, which is conducive to acquiring a thick cloak of light gases. If we claim that the low density of K2-18b is entirely due to these light and fancy-free elements, then a mass fraction of about 0.7% in hydrogen and helium should give us the required density [4].

So less than one percent? I bet you’re thinking that’s no big deal! What feeble life couldn’t handle that?!

You. You couldn’t handle that.

It turns out a splash of hydrogen goes rather a long way. Writing in the Astrophysical Journal, Eric Lopez and Jonathan Fortney offer a particularly delicious analogy [4]:

A 0.5% Hydrogen/Helium atmosphere leads to a surface pressure twenty times higher than that at the bottom of the Marianas Trench (deep. You can’t live there), and the temperature would be more than 2700°C (hot. You can’t live there).

“WAIT A MINUTE!” —you shout— “The planet is in the habitable zone! How did we get to surface temperatures of thousands of degrees?!”

The problem is that the habitable zone (as used for exoplanet discoveries) is defined by the amount of starlight the Earth needs to keep temperate surface conditions. Our planet can adjust the surface temperature (on rather long geological timescales) by altering the level of carbon dioxide in the atmosphere through the carbon-silicate cycle. As carbon dioxide is a greenhouse gas that traps heat, lowering its level cools the planet while letting it accumulate gives the planet more of a cosy blanket. Within the habitable zone, this thermostat works well. But beyond its edges, the carbon-silicate cycle can’t manage its job and the planet either boils or freezes.

Within the habitable zone, the Earth can adjust the level of carbon dioxide in the atmosphere to keep surface conditions comfortable.


So the habitable zone is the region where the carbon-silicate cycle can put the right amount of carbon dioxide into the Earth’s atmosphere to keep the surface temperature comfy.

Got a planet with no carbon-silicate cycle?
Or a world with a different atmosphere composition?

Then that habitable zone don’t mean jot. (And you all really know this as Mars and the Moon are in the habitable zone but don’t offer lakeside retreats.)

The fact that the atmosphere of K2-18b is dominated by hydrogen therefore utterly invalidates our habitable zone ticket. Like carbon dioxide, hydrogen is a greenhouse gas, giving K2-18b an extra thick thermal coat that it can’t shrug off. So even if the rest of the planet were hypothetically entirely Earth-like, carbon-silicate cycle and all, the planet’s surface would be far, far warmer than the Earth’s within the habitable zone.

Hydrogen is also a greenhouse gas, making planets too hot in the classical habitable zone.


In short, you’re squashed flat and roasted. Got it? Excellent. Let’s move on to Bachelor Planet #2: the truck load of water.

…. where we are going to use the same logic to die horribly. Again.

If we ignore the hydrogen detection in the two discovery papers (or assume it’s somehow suuuuuuper low), then we can match the low density of K2-18b by employing a thinner, more Earth-like atmosphere but mixing a whole load of water into the silicate rock. The problem is that the amount we need… is rather more than what’s in your kitchen sink.

To match the density of K2-18b, the planet’s mass would need to be as much as 50% water [2]. By contrast, the Earth has less than 0.1% water by mass. And while all life on Earth needs water, too much can geologically murder the entire planet.

The carbon-silicate cycle works best when there’s exposed land. Before we reach 1% water, the planet will likely become an ocean world and the land sinks below the waves. If the sea is shallow enough, a more pathetic carbon-silicate cycle can still work with the sea floor. Ramp that up by pouring more water into the planet, and the weight of the ocean on the seabed will trigger the production of deep sea ices. These ices seal off the silicate rocks from the water, shutting down the carbon-silicate cycle as carbon dioxide can no longer be stashed away in the ground. Not only does that mean the habitable zone shrinks to a thin strip as the planet becomes unable to adapt to different levels of starlight, but it also prevents the cycling of nutrients such as phosphorus from the planet interior to the ocean. Even with perfect positioning, life therefore gets throttled due to lack of nosh.

Up the water content on the planet to several percent of its mass, and the pressure of the water can shut down plate tectonics. The exact role of the motion of our crustal plates in the habitability of the Earth is not fully understood, but it is thought to be a key player in nutrient cycling and magnetic field generation that protects our atmosphere. In short, by the time you have shut down plate tectonics, water has rendered your world well and truly geologically dead.

There is a possibility that an ocean world could develop alternative mechanisms outside the carbon-silicate cycle for temperature modulation [5]. But different mechanisms require different levels of starlight, resulting in a new habitable zone with different boundaries from the classical carbon-silicate cycle definition. An ocean world K2-18b in the regular carbon-silicate habitable zone is therefore not a mecca for alien lattes.

Could we rescue this situation with hybrid Bachelor Planet #3? Mishmash a splash of hydrogen atmosphere with mega ocean but avoid the pitfalls of both?


Because you can’t minimise both the ocean and the hydrogen atmosphere, and even much smaller abundances than that suggested above would kill the planet.

The bottom line is that K2-18b is not a potentially habitable world and it is not where we would focus our resources in hunting for biosignatures. Measurements of the planet’s density require a thick hydrogen atmosphere and/or a deep ocean. These un-Earth-like environments mean that the habitable zone does not apply to K2-18b and moreover, they suggest a world too hot and geologically dead to support life.


So is the discovery of K2-18b notable at all?

ABSOLUTELY. For three main reasons:

  1. We’ve detected water in the habitable zone. While K2-18b is not a potentially habitable world itself, the presence of water suggests one of life’s key ingredients may be easy to come by on more Earth-like rocky worlds.

  2. Being able to detect a planet’s atmosphere is crazy hard. But it’s this information that tells us what a planet is truly like, from surface conditions to geology to potential biology. And we’ve just done it for a planet in the habitable zone. It’s going to be the start of an amazing slew of information, not just about potentially habitable worlds but ones that might be far more alien than anything we’ve dreamed about. And on that note…

  3. Planets like K2-18b that are larger than the Earth but smaller than Neptune are called ‘super Earths’ or ‘mini Neptunes’. They appear to be the most common size of planet in our galactic neighbourhood but we… ain’t got one. Are these large worlds more like giant terrestrial planets or teeny gas giants? Do they form in-situ or migrate from somewhere else? Can they have an active geology or are they all crushed by water or gas? K2-18b orbits a bright but small star and has an atmosphere not hidden by clouds. It will be the perfect planet for observations with upcoming instruments such as the JWST, which will be able to detect far more molecules in the planet’s atmosphere and shed some light on this most mysterious of planet classes. Frankly, this excites me the most. After all, for Earth 2.0… well, we already got one.


The water vapour on K2-18b is an awesome discovery: it’s a huge step to discovering what planets beyond our own solar system are really like. But it’s not the habitable world you’ve been searching for.

You would die there. Stop thinking about going.

Unfashionable facts:

  1. Estimate based on Weiss & Marcy, 2014 for a rocky planet without a thick atmosphere.

  2. Detection papers [Benneke et al, 2019] and [Tsiaras et al, 2019].

  3. A bunch of papers have noticed this break around 1.5 Earth radii where rocky planets seem to acquire deep atmospheres, including Weiss & Marcy (2014), Rogers (2015), Chen & Kipping (2016) and Fulton et al (2018).

  4. Based on Figure 9 in Lopez & Fortney, 2013.

  5. For example, ‘the ice cap zone’ by Ramirez & Levi, 2018.

Want to measure habitability? You can't.

People. We need to have another chat.

We have all these exoplanets. Some of them are deliciously Earth-sized. And you want to know which are the most habitable. But here’s the thing…

You can’t.

NOT POSSIBLE. CAN’T BE DONE. DENIED. I don’t even care what you read in the Daily Mail last week.

There’s no way of creating a quantitative scale that actually measures how capable a planet is of supporting life. No matter how these habitability metrics are constructed, They. Just. Don’t. Measure. Habitability.

Why? I’m so glad you asked. Take a seat. This ain’t going to be difficult.

First let’s crush the ultimate temptation: Why isn’t an Earth-sized planet definitely habitable? Because this:


Our Solar System has two Earth-sized planets: the Earth and Venus. Venus is 95% the size of the Earth, yet its surface would melt lead. And you.

Every time you see “Earth-sized planet” and feel tempted to pack a suitcase, remember this is equivalent to saying “Venus-sized” and see if you still feel like quaffing down an out-of-this-world Starbucks coffee.

Buuuuuuut (I hear you scream) there are so many more planet properties we can measure to calculate the perfect exo-real estate!


While we’ve discovered nearly 4,000 exoplanets, the amount known about each world is pretty sparse. For most of the planets out there, we only know radius and the amount of radiation the planet receives from the star. Just two properties.

There are three important points to realise at this juncture:

  1. Two is a small number. If you had only two jelly beans, that would be sad.

  2. No matter how complex you make your habitability metric, it can only depend on two measurements. Any other quantities will need to be estimated from those two.

    This is like designing a skin-tight superhero suit with exact measurements for torso length, foot size, knee height, shoulder width, arm and leg length, waist, hip and head size but actually just measuring your waist and height and estimating the rest from those two measurements. Sounds doable? Ask any woman about getting jeans to fit.

  3. The two properties you can measure are not directly related to habitability. Whether a planet can support life depends on the surface environment (or potentially sub-surface for life underground). Venus is proof that the radius and level of starlight really tells you jack. Not only does the planet have nearly the same radius as the Earth, but it receives only about twice the amount of starlight. If Venus had an Earth-like atmosphere, then the surface temperature would be somewhere in the 50°C (~ 120°F) range. Certainly a little toasty, but a far cry from the true surface temperature of 460°C (fuck no°F). The difference is Venus’s thick carbon dioxide atmosphere, which traps heat like…

    … seriously, nothing on Earth comes close. That’s the point.
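For the curious, here is the Venus estimate as a crude blackbody scaling sketch in Python. This is absolutely not a climate model (it ignores albedo and cloud changes, which is why it lands near 60°C rather than the ~50°C from fuller modelling), but it shows how little the starlight level alone tells you:

```python
# Crude estimate of Venus's surface temperature *if* it had an Earth-like
# atmosphere. Equilibrium temperature scales as (starlight flux)^(1/4),
# and we bolt Earth's ~33 K of greenhouse warming on top, unchanged.
T_EQ_EARTH = 255.0       # K, Earth's equilibrium (no-atmosphere) temperature
GREENHOUSE_EARTH = 33.0  # K, warming from Earth's actual atmosphere
FLUX_RATIO_VENUS = 1.9   # Venus receives roughly twice Earth's starlight

t_surface = T_EQ_EARTH * FLUX_RATIO_VENUS**0.25 + GREENHOUSE_EARTH
print(f"{t_surface - 273.15:.0f} C")  # roughly 60 C: toasty but survivable-ish
print("Actual Venus: 460 C, thanks to its CO2 blanket")
```

The gap between that crude answer and the real 460°C is entirely down to the atmosphere, which is exactly the property we cannot measure for most exoplanets.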

OKAY OKAY OKAY. WHAT IF we could measure more? While our current telescopes have been focussed on finding these planets, the next generation may be able to discover properties such as if they have surface water or identify gases in their atmospheres. If we could measure a few basic properties about the surface environment, could we create an accurate habitability scale?

Here is your problem: this is an accurate diagram of all the planets we know support life:


The only planet we know that supports life is the Earth. That gives us just one data point. And you can’t make any kind of meaningful scale with just one point.

Don’t believe me? Let’s give that a go.

We know a planet can be habitable if it is the same size as the Earth (even if that isn’t always true). But does a planet become more or less able to host life if you change the size? Without more habitable planets of different sizes to compare with, the scale could go up or down as you increase in size.


But but but (you shout quite reasonably) we do know of many uninhabitable planets. Can’t we use this to constrain the radius? You’re absolutely right and we can… a bit.

Based on a handful of planets that have both radius and mass measurements, astronomers spotted that planets more than about 50% larger than the Earth were typically quite low density. This suggests those planets have very very thick atmospheres, like Neptune.

Neptune is definitely not habitable. Let’s use “50% larger” as a size limit. The problem is that y’all wanted a habitability scale. So these two points have to be connected by…. what?


The Earth is the largest rocky planet in our Solar System and Neptune is the smallest gaseous world. We don’t have any data points to tell us how the environment would change as the Earth increases in mass. A slight increase in mass might not change habitability at all. Or the extra gravity might flatten our topography, immersing the land under water and cutting off weathering, to result in the planet freezing into a snowball of death while the Sun was still young. Whichever. We’ve absolutely no way of knowing.

The best we can do is say 50% larger = probably bad. Below that…. pray to your gods.

What this means is when you see a fancy-schmancy habitability law that looks all like:


You need to be thinking like the embittered, twisted and cynical soul you know you really are and say coldly:

  1. R, M, D and T are likely based on only two actual measurements, neither of which directly relates to surface properties.

  2. The values a, b, c and d supposedly tell you how the habitability (H) changes as you vary that property (R, M, D and T). But there’s no way we have a damned clue what they should be.

  3. The resultant H is as meaningful as my star sign. Now go get this cancer a beer.
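To see how hollow such a law is, here’s a toy Python example with a hypothetical two-parameter index (the planets and exponents are entirely made up). With no data to pin down the exponents, even their signs are guesses, and flipping a sign flips the ranking:

```python
# A hypothetical habitability index of the power-law form H = R**a * S**b,
# built from the only two things we typically measure: radius R and
# starlight S (both in Earth units). The exponents a and b are exactly
# the unknowable "a, b, c, d" values from the text.

def h_index(radius: float, starlight: float, a: float, b: float) -> float:
    return radius**a * starlight**b

planet_x = (1.4, 0.8)  # made-up planet: 1.4 Earth radii, 80% of Earth's starlight
planet_y = (0.9, 1.1)  # made-up planet: 0.9 Earth radii, 110% starlight

# Guess 1: bigger and brighter is better (a, b > 0) -> planet X "wins"
print(h_index(*planet_x, a=1, b=1) > h_index(*planet_y, a=1, b=1))   # True
# Guess 2: bigger is worse (a < 0) -> the ranking reverses
print(h_index(*planet_x, a=-1, b=1) > h_index(*planet_y, a=-1, b=1)) # False
```

Same two planets, same two measurements, opposite verdicts. That is the one-data-point problem in four lines.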

When hunting for planets that might be habitable, the best that can be done is to slam down some limits. A planet more than 50% larger than the Earth will probably have choked its surface under a Neptune-like fug. A planet that receives far less starlight than us risks being too cold for talking. Or living. Not good options for scoping out alien neighbours.

What we can’t do is develop a scalable index for habitability and expect a planet with H = 0.7 to be less likely to support life than a planet with H = 0.75.

Can’t be done on just two measurements and one example of life.

Pretty obvious when you think on it, right?

Seven (TRAPPIST-1 planets) for the Dwarf-lords in halls of stone

“Hey Elizabeth, do you know what the NASA press conference is about tomorrow?”

I hadn’t a clue. Having stepped off a plane from Japan the night before, I was twirling around in a swivel chair in one of the student offices at McMaster University while I tried to bully my brain into action. Until that moment, I wasn’t aware NASA had announced a press conference.

The NASA vintage travel poster for one of the Trappist-1 planets.


The NASA site did not reveal much. Tomorrow’s event was to “present new findings on planets that orbit stars other than our sun.” It was exoplanet news, but the lack of details left us speculating. 

“It’s an atmosphere detection for Proxima Centauri-b!” 

“Can’t be. The planet doesn’t transit.”

This fact made our nearest exoplanet something of a disappointment. Proxima Centauri-b had been found by detecting the slight wobble in the position of the star due to the planet’s gravity. However, without an orbit that took the planet between star and Earth, there was no opportunity to examine starlight passing through the planet’s atmospheric gases. Such a technique is known as ‘atmospheric spectroscopy’ and can uncover which molecules are in the air to reveal processes that must be occurring on the planet’s surface — the location relevant to habitability. The next generation of telescopes including NASA’s JWST and ESA’s Ariel are focussed on using this method to finally probe planet surface conditions. The uselessly orientated orbit of Proxima Centauri-b, however, removes it from the target selection lists.

This took us back to the problem of what NASA were about to announce. 

“It can’t just be another planet.”

“It could be a possible biosignature?”

“… do we have anything that could measure that yet?”

This was the crux of the mystery. It is amazing that in the scant 25 years since the first exoplanet discoveries, finding a new world beyond our solar system has become insufficient to warrant a press conference. We now know of nearly 3,500 exoplanets, roughly a third of which are less than twice the size of the Earth. The news had to be bigger than a simple additional statistic. 

However, a discovery of alien life seemed to be too premature. It is true that the presence of biological organisms may be detected by their influence on a planet’s atmosphere. It is also true that the Hubble Space Telescope (HST) can do atmospheric spectroscopy, although not nearly at the resolution of the future instruments. As far as I am aware, HST has examined the atmospheres of three super-Earth-sized planets and only seen features in 55 Cancri-e, which orbits so close to its star that a year is done in hours. So … a biosignature was not impossible. It just would have meant we had got very very very very very lucky.

Nobody’s that lucky. Especially not in 2017.

We were evidently not alone in our speculation, since the news was leaked later that day. Seven Earth-sized planets had been discovered orbiting the ultracool dwarf star, TRAPPIST-1. It was a miniature solar system and NASA were about to infuriate me by gabbling non-stop about the prospect of life.

Let’s make something clear:

Apart from roughly the same number of planets (by astronomer standards, 7 basically equals 8. Or 9.) the TRAPPIST-1 system is very unlike our own. 

That is what makes it cool.

Also, the system takes its name from a Belgian beer. 

Trappist beer. Oh yes.


Last year, three planets were discovered around TRAPPIST-1. The star was named for the telescope that was used in the discovery, the robotic Belgian 60cm ‘TRAnsiting Planets and PlanetesImals Small Telescope’. It sounds like a perfectly reasonable acronym until you learn that Trappist is a famous style of Belgian beer brewed in monasteries. Astronomers have no shame. It’s all kinds of wonderful.

The news was that further inspection of the system had added another four planets. The fresh observations had used a number of telescopes around the world and finished with an intensive stint on NASA’s infrared Spitzer Space Telescope. 

(Interesting fact: Launched in 2003, Spitzer was never designed to be able to see planets. Some swanky engineering tricks from the ground allowed a 1,000-fold improvement in measuring star brightness that led to the tiny dip from a transiting planet being detectable. Cool stars like TRAPPIST-1 are around 1,000 times brighter in the infrared than at optical wavelengths, making Spitzer a kick-ass planet-grabbing machine.)

What was still more exciting is that all the planets transit, leaving the door wide open for some rocky planet atmosphere spectroscopy rock n’ roll.

Were alien climates ammonia cloudy with a chance of methane meatballs? The next five years might reveal the answer to that question. 

The planets were all on short orbits, with years lasting between 1.5 and 13 days. This close-packed system meant that neighbouring planets would appear larger than the Moon in the night sky. The in-your-face sibling-ness also allowed for the planet masses to be measured.

While transit observations normally yield only the planet radius, the gravitational tugs from planets in the same system can vary the time between successive transits. These ‘transit timing variations’ can be used to estimate the size of the tug, and thereby measure the mass of the planets. With the exception of the outermost planet —whose single transit measurement is only enough for a radius estimate— the TRAPPIST-1 planets got both radius and mass measurements. 

And you know what that means.

(Density. It means density.)

In fact, the mass measurements were not particularly accurate, leading to error bars as large as the measured value except in the case of planet TRAPPIST-1f. However, all measurements hinted (and Ms Accurate TRAPPIST-1f agreed) that these planets were on the fluffy side.

With sizes less than the empirical threshold value 1.6 Earth radii, the planets were unlikely to be Neptune-like gas worlds. But their low density suggests they do have a much higher fraction of volatiles than the Earth. They could even be downright watery.

This possibility is backed up in a less obvious way by the planet orbits. The inner six worlds are in resonance, meaning that the ratio between their orbital times can be expressed as two small integers. So while the innermost world orbits the star 8 times, the outer planets orbit 5, 3 and 2 times.

Well… almost. And since we declared above that 7 basically equalled 8 or 9, I’d say we were good. 
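You can check the “almost” yourself. Using the approximate discovery periods (rounded here, which is why the ratios come out near-resonant rather than exact), a few lines of Python recover the small-integer ratios:

```python
from fractions import Fraction

# Approximate orbital periods (days) of the six inner TRAPPIST-1 planets,
# rounded from the discovery announcement.
periods = {"b": 1.51, "c": 2.42, "d": 4.05, "e": 6.10, "f": 9.21, "g": 12.35}

names = list(periods)
for inner, outer in zip(names, names[1:]):
    ratio = periods[outer] / periods[inner]
    # Snap to the nearest small-integer fraction (denominators up to 6)
    frac = Fraction(ratio).limit_denominator(6)
    print(f"{outer}/{inner}: {ratio:.3f} ~ {frac.numerator}:{frac.denominator}")
```

The neighbouring pairs come out close to 8:5, 5:3, 3:2, 3:2 and 4:3, i.e. a resonant chain.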

Strings of planets in resonance are completely unsurprising and utterly predictable.

… so long as you formed somewhere entirely different.

Transits of the seven TRAPPIST-1 worlds. The orbits shown in the first few seconds show the resonance between the planets.

Resonant orbits between neighbouring planets occur when young planets migrate through the planet-forming gas disc. This gas migration can occur once the growing planet reaches the size of Mars and its gravity begins to pull on the surrounding gas, which pulls back. The net force usually sees the planet move towards the star. If multiple planets take this sight-seeing tour of their system, their mutual gravity will pull on one another. These tugs only balance out when the orbital times form integer ratios, producing a resonance. The predicted result is a series of planets in resonant orbits close to the star — exactly what is seen in the TRAPPIST-1 system.

If the planets formed in cold outer reaches far from the star,  then a substantial part of their mass would be in ice. As the planets moved towards the (ultracool but still a nuclear furnace and way hotter than Colin Firth in Pride and Prejudice) star, the ice would melt into water or vapour. This would explain the low densities compared to the Earth’s predominantly silicate composition. 

Three of the TRAPPIST-1 planets stopped their mooch inwards within the star’s temperate zone (or ‘habitable zone’ if you must). This is the region around a star where an exact Earth clone could support liquid water on the surface. 

Once more for the cheap seats at the back?


If you’re not an exact Earth clone, then the temperate zone guarantees as much as one of Nigel Farage’s Brexit bus adverts. 


So how Earth-like are these temperate zone wannabes? On the plus side, they likely have plenty of water. On the down side, it’s quite likely too much.

While the majority of the Earth’s surface is covered with oceans, water makes up less than 0.1% of our planet’s mass. If we had formed further out where water freezes into ices (i.e. past the ‘ice line’), then that fraction could be nearer 50%. This would create huge oceans as the planet warmed, enveloping all land under a sea a bajillion fathoms deep (exact measurement. Prove me wrong.) 

The bottom of such a monstrous ocean would be at such high pressure that a thick layer of ice would separate the water from the rocky core. This would scupper the carbon-silicate cycle, preventing the quantity of carbon dioxide in the air from responding like a thermostat to global changes in climate. This would mean anything other than the absolutely perfect amount of stellar heat would render the planet uninhabitable. The temperate zone would shrink to a thin slice and any slight ellipticity in the planet orbit, or variation in the star’s heat, would fry or freeze everything in sight.

It ain’t impossible for life, but it ain’t promising. It also ain’t Earth.

Even if the oceans were shallow enough to avoid this, the icy composition of the planet might burp out a crazy atmosphere. Our atmosphere was outgassed in volcanic eruptions during the Earth’s early years. But if the planet was made not of silicates, but of comet-like ices, then the gases emerging from the volcanoes would likely be mainly ammonia or methane. Not yummy. Also strong greenhouse gases, so they could end up roasting planets within the temperate zone.

Since we’ve no analogue of such planets in our own solar system, it’s hard to speculate on their surface conditions. Could such a rocky ice mix produce a magnetic field? The icy Jovian moon, Ganymede, has a weak field, so it could be possible. If it is not, then any atmosphere might be stripped by the stellar wind.

Google doodle celebrating the Trappist-1 system


The fact we’ve not the foggiest idea of what these worlds would really be like is why they’re so exciting. Here we have 7 prime candidates for atmospheric studies and we’re hoping to see not the same thing as beneath our feet, but something entirely new. This would tell us everything from how planets form (really migration? really ices?) to how a completely non-terrestrial geology behaves. It’s going to be so much more awesome than Ewoks.

So are we going to give these planets better names than TRAPPIST-1b, c, d, e, f, g and h? Speaking at the NASA press conference, lead author Michaël Gillon admitted,

“We have plenty of possibilities that are all related to Belgian beers, but we don’t think that they’ll become official!”  


In better news, NASA has designed a new travel poster to mark the occasion. And there’s a Google doodle. Yay.

Media, get your shit together and read

My weekend slid downhill when I began an article that started:

"The discovery of alien life could be a step closer after scientists found a newly discovered planet is ‘likely' to harbour life forms."

My friends, that be pretty big talk for a planet for which we only know the minimum mass.

The planet in question is Proxima-b, whose discovery around our closest star was announced in August. You may remember its name from my previous editions of OMG-PLANET-NEWS-GET-UR-SHIT-TOGETHER.

This article (in the UK newspaper, the Independent) covered recent research published in the scientific journal, MNRAS. Unfortunately, it represents the work more poorly than the Hollywood adaptation of your favourite novel. One you really, really liked.

Now, I too find reading research papers a drag: they’re dry, gloss over the exciting scrumptious bits in favour of a parameter space study and sometimes the graphs aren’t even in colour. But this particular paper was less than five pages. FIVE. And that includes the plots. And those five pages do not discuss the chances of Proxima-b being inhabited by anything.

Let’s take a look at “The innards of Proxima-b: how the movie differs from the book”.

The research paper asks a simple question: If we assume Proxima-b is a rocky planet, what might it be like?

Did you notice that summary started with an “if”?

Because Proxima-b has not been observed passing in front of its star, we don’t know the planet’s radius. Instead, astronomers have measured the slight wobble in the star’s position due to the tug from the planet’s gravity. This tells us how much the planet is pulling the star towards the Earth and that gives us a handle on its mass. However, since we don’t know the angle of the planet’s orbit, we can’t tell whether the planet is pulling the star directly towards the Earth, or if only part of its tug is in our direction. The upshot is we know only the lowest possible mass for the planet, with its true value being potentially much higher.

Minute Physics runs through how to find exoplanets (explaining the transit and radial velocity techniques).

Proxima-b’s minimum mass is ~1.3 Earth masses: a value that suggests (but still doesn’t guarantee) a rocky surface. The maximum value would make the planet a gaseous world like Neptune.
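To see how much the unknown tilt matters, here is a minimal Python sketch of the sin(i) degeneracy. Only the 1.3 Earth-mass minimum comes from the measurements; the inclination angles below are purely illustrative.

```python
import math

M_MIN = 1.3  # Proxima-b's measured *minimum* mass, in Earth masses

def true_mass(m_min, inclination_deg):
    """Radial velocities only measure m*sin(i), so the true mass is the
    minimum mass divided by sin(i); i = 90 degrees means edge-on."""
    return m_min / math.sin(math.radians(inclination_deg))

# Illustrative tilts only -- Proxima-b's actual inclination is unknown.
for i in (90, 60, 30, 10):
    print(f"i = {i:2d} deg -> at least {true_mass(M_MIN, i):.1f} Earth masses")
```

Edge-on recovers the minimum value; tilt the orbit towards face-on and the same stellar wobble demands an ever heavier planet.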

For this paper, the researchers are only interested in the outcomes for a rocky composition. This leads them to consider only the following situation:

(1) The MINIMUM MASS of the planet is the TRUE mass. Since every measurement has errors, they actually consider the planet to have a mass between 1.1 and 1.46 Earth masses.

(2) The planet has a thin Earth-like atmosphere, not a thick envelope like Neptune.

(3) The rock composition is similar to that found in the solar system, with the planet having an iron core, silicate mantle and ice or water top layer.

There is no observational evidence at all for any of these points. A familiarly solid base for the planet is assumed, and then the research asks what permutations are possible. To say this suggests Proxima-b is like Earth is akin to filling a pen with red ink and then claiming this proves all pens write in red. It’s nonsensical and it’s not the point of the paper.

The paper considers three possible masses for Proxima-b, within the error bars that surround the minimum possible value: (1) a 1.1 Earth-mass planet, (2) a 1.27 Earth-mass planet and (3) a 1.46 Earth-mass planet. The authors then tweak the relative amounts of core, mantle and water to see what worlds result.

To put limits on the possibilities, the research assumes a mix of silicate, iron and water typical of the planets, asteroids and comets found in the solar system. Planet models that have more water than most solar system objects, or huge cores, are dismissed as implausible.

The authors placed down these boundaries because the solar system is the only place where we have data on what ranges are reasonable. However, Proxima-b’s star is not like our sun. Instead, it’s a dim red dwarf with a different mix of elements. It could therefore be that the rocks available to build planets have a very different blend from those around our own sun. Such differences can lead to drastic changes in planet conditions, such as producing carbon worlds with diamond mantles and seas of tar.

However —again— we have to work with the data we have. Which is very little for the Proxima system.

The result of the paper is not a single favoured model, but a range of possibilities for a rocky Proxima-b. A Proxima-b with a mass of 1.1 Earths but a radius between 1.2 and 1.3 Earth radii could contain 60 - 70% water, compared to our own Earth’s minute 0.05%. On the other hand, an Earth-sized planet of that mass could contain no water but a fat iron core. The total composition range (for conditions 1 - 3 above) runs from a planet made from 65% iron / 35% rocky silicates (matching a radius of 0.94 x Earth) to a 50% silicate / 50% water world (radius 1.4 x Earth) with a 200 km deep liquid ocean.

While the inspiration of the paper was Proxima-b, there’s nothing really particular about this calculation that applies only to this planet. The results are true for any world around 1.3 Earth masses.

Should the radius of Proxima-b ever be measured, these models could help narrow down possible planet conditions or even rule out the planet being rocky at all. However, it’s worth noting that even for an exact radius and mass, different combinations of water, silicate and iron are still possible. At present, there is no way of selecting a more probable model amongst the options.

So do these possibilities say anything about habitability? Not a jot.

When you only know the minimum mass of a planet, even an artist’s impression endorsed by ESO isn’t based on a whole lot.


Should water be present, life would get a helpful medium for some biochemistry action. But this is only one of many many (many many) factors. The changing iron core size is liable to affect the magnetic field: a likely essential component of any planet orbiting a red dwarf. These dim stars may sound benign, but they are prone to violent outbursts of energy that could strip a planet’s atmosphere without the protection of some heavy duty magnetics. A thick mantle will have a bearing on plate tectonics and is liable to determine the gases in the atmosphere. The deep water world may also have a thick layer of ice that cuts off the silicate surface from the ocean, preventing a carbon-silicate cycle of elements that helps control planet temperature on Earth.

We cannot be any more quantitative about these properties, since we don’t know the surface conditions on any exoplanet. The next generation of instruments is just beginning to be able to sniff the atmospheres of these new worlds. This may provide us with the first clue of what the surfaces could be like.

This paper was a neat modelling experiment that drives home how varied a planet could be, even with a huge number of assumptions. So how did the news article get the message quite so wrong?

My guess is that the writer did not read the journal paper at all (despite quoting it as the source) but took the information from the very beginning of the press release by CNRS, the ‘Centre National de la Recherche Scientifique’ in France and home institute of the research authors. The press release overall isn’t bad, but the opening paragraphs are misleadingly phrased and contain the statement: “[Proxima-b] is likely to harbour liquid water at its surface and therefore to harbour life forms."

No dude, that just ain’t true. Water is a possibility for the planet’s composition, but the research doesn’t promise that any is there.

Taking this as the full research, the news article then quotes the lead author seemingly corroborating this statement. While it’s hard to know without hearing the interview verbatim, I suspect this is an example of poor editing. The author apparently told the newspaper:

"Among the thousands of exoplanets we have already discovered, Proxima-b is one of the best candidates to sustain life."

With only the minimum mass measured, there’s no reason Proxima-b is more likely to harbour life than many other exoplanet discoveries. However, it is true the proximity of the planet makes it an excellent candidate for more detailed observations.

"It is in the habitable zone of its star, [and] even if it is really close to the star the fact that Proxima Centauri is a red dwarf allows the planet to have a lower temperature and maybe liquid water."

Note, the author said “maybe” here. Like the Earth, Proxima-b is unlikely to have formed with liquid water: its location close to the star would have been too warm for ice to be incorporated into its body. Instead, the planet would need to form further out and move inwards, or receive a delivery of icy meteorites from further from the star. Both are possible; neither is certain.

"The fact there could still be life on the planet today, not only during its formation, is huge."

I guess this is true, but it’s unsubstantiated based on the research paper, so I don’t understand the motivation behind the comment. I’m inclined to blame selective editing once again.

"The interesting thing about Proxima-b is it is the closest exoplanet to Earth. It is really exciting to have the possibility that there is life just at the gates of our solar system."

Yes. This is the main reason Proxima-b is exciting. We don’t yet know if the planet is rocky. We certainly don’t know if its surface conditions are similar to Earth. But while the world is too far to visit with current technology, its relative proximity gives the next generation of telescopes the best chance at finding out more.

Research paper: Brugger, Mousis, Deleuil, Lunine, 2016  

How do you vote?

Spotlight on Research is the research blog I author for Hokkaido University, highlighting different topics being studied at the University each month. These posts are published on the Hokkaido University website.

Your pen hovers above the list of names printed on the ballot slip. Do you choose your favourite candidate, or opt for your second choice because they stand a stronger chance of victory?

It is this thought process that drives the curiosity of Assistant Professor Kengo Kurosaka in the Graduate School of Economics.

“When I first started school, we often had to vote for choices in our homeroom,” he explains. “I felt at the time this was not always done fairly! Perhaps that inspired me.”

Assistant Professor Kengo Kurosaka


According to the theorem developed by the American professors Allan Gibbard and Mark Satterthwaite, it is impossible to design a reasonable voting system in which everyone will simply declare their first choice [1]. Instead, people base their selection not only on their own preferences, but on how they believe other voters will act.

Such ‘strategic voting’ can take a number of forms. Voters may opt to help secure a lower choice candidate if they believe their top choice has little chance of success. Alternatively, they may abstain from voting altogether, if they perceive their first choice has ample support and their contribution is not needed. Voters can also be influenced by the existence of future polls, when the topic they are voting on is part of a sequence of ballots for a single event.

One example of sequential balloting was the construction of the Shinkansen line on Japan’s southern island of Kyushu. The extension of the bullet train from Tokyo was performed in three sections: (1) Tokyo to Hakata, (2) Hakata to Shin Yatsushiro and (3) Shin Yatsushiro to Kagoshima. However, rather than voting for the segments sequentially as (1) -> (2) -> (3), the northernmost segment (1) was first proposed, followed by segment (3) and then finally segment (2). Kengo can explain the choice of this seemingly illogical ordering by considering the effect of strategic voting.

The Shinkansen line through Kyushu


In his hypothesis, Kengo made three reasonable assumptions: Firstly, that the purpose of the Shinkansen line is the connection to Tokyo. Without this, residents would not gain any benefit from the line’s construction. The second assumption was that if the Shinkansen line was not built, the money would be spent on other worthwhile projects. Finally, that the order of the voting for each segment of line was known in advance and voted for individually by the Kyushu population.

If the voting occurred on segments running north to south, (1) -> (2) -> (3), Kengo argues that none of the Shinkansen line would have been constructed. The issue is that the people who have a connection to Tokyo have no reason to vote for the line extending further south. This means that once the line has been constructed as far as Shin Yatsushiro in segment (2), there would not be enough votes to secure the construction of the final extension to Kagoshima. The residents living in the Kagoshima area will anticipate this problem. They therefore will vote against the construction of line segments (1) and (2), knowing that these will never connect them to Tokyo. Without their support, segment (2) will also not get built. This in turn will be anticipated by the Shin Yatsushiro residents, who will then also not vote for segment (1), knowing that it cannot result in the capital connection. The result is that none of the three line segments secure enough votes to be constructed.

The only way around this, Kengo explains, is to vote on the middle section (2) last. The people living around Shin Yatsushiro know that unless they vote for segment (3), the Kagoshima population will not support their line in segment (2). They therefore vote for section (3), and then both they and the Kagoshima population vote for the final middle piece, (2). Predicting the success of this strategy, everyone votes for segment (1). The people who do vote against the line are therefore the ones who genuinely do not care about the connection to Tokyo.
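The backward-induction logic above can be sketched in a few lines of Python. This is a toy model rather than Kengo’s actual formalism: each region is collapsed into a single voter, and the payoffs (a benefit of 1 for a completed Tokyo connection, minus a small assumed tax per segment built) are my own invention.

```python
# Hypothetical model: each region needs a set of segments for its Tokyo link.
NEEDS = {"Hakata": {1}, "ShinYatsushiro": {1, 2}, "Kagoshima": {1, 2, 3}}
TAX = 0.1  # assumed cost to every region per segment built

def payoff(region, built):
    """A region scores 1 if its Tokyo connection is complete, minus tax."""
    connected = NEEDS[region] <= built
    return (1.0 if connected else 0.0) - TAX * len(built)

def play(order, built=frozenset()):
    """Resolve the votes by backward induction; return the segments built."""
    if not order:
        return built
    seg, rest = order[0], order[1:]
    if_passed = play(rest, built | {seg})
    if_failed = play(rest, built)
    yes = sum(payoff(r, if_passed) > payoff(r, if_failed) for r in NEEDS)
    return if_passed if yes * 2 > len(NEEDS) else if_failed

print(play((1, 2, 3)))  # north-to-south order: nothing gets built
print(play((1, 3, 2)))  # the actual order: the whole line is approved
```

With segment (2) voted last, Kagoshima’s support for it can be credibly anticipated, so every region votes the whole line through.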

Kengo’s theory works well for explaining why the voting order for the Shinkansen line was the best way to create a fair ballot. However, it is hard to scientifically test universal predictions for such strategic voting, since it would be unethical to ask voters to reveal how they voted after a ballot. To circumvent this problem, Kengo has been designing laboratory experiments that mimic the voting process. His aim is to understand not just how sequential balloting affects results, but the overall impact of strategic voting.

In 8 sessions attended by 20 students, Kengo presents the same problem 40 times in succession. The students are divided into groups of five, denoted by the colours red, blue, yellow and green.


Voting experiment: students are assigned to a group and gain different point scores depending on which ‘candidate’ wins.

They are offered the chance to vote for one of four candidates, A, B, C or D. Students in the red group will receive 30 points if candidate A wins, 20 points if candidate B wins, 10 for candidate C and nothing if candidate D is selected. The other groups each have different combinations of these points, with candidate B being the 30 point favourite for the blue group and candidate C and D being the highest scorers respectively for the yellow and green groups. If each student simply voted for the candidate which would give them the highest point number, the poll would be a draw, with each candidate receiving five votes. But this is not what happens.
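The sincere-voting tie is easy to reproduce. Only the red group’s full row and the other groups’ 30-point favourites are stated above, so the remaining point values below are made-up placeholders.

```python
# Payoff table: red's row is from the experiment; the other rows are
# placeholders that preserve each group's stated 30-point favourite.
POINTS = {
    "red":    {"A": 30, "B": 20, "C": 10, "D": 0},
    "blue":   {"B": 30, "A": 20, "D": 10, "C": 0},
    "yellow": {"C": 30, "D": 20, "A": 10, "B": 0},
    "green":  {"D": 30, "C": 20, "B": 10, "A": 0},
}
GROUP_SIZE = 5  # 20 students split across four colour groups

# Sincere voting: everyone backs their own 30-point candidate.
tally = {}
for group, payoffs in POINTS.items():
    favourite = max(payoffs, key=payoffs.get)
    tally[favourite] = tally.get(favourite, 0) + GROUP_SIZE

print(tally)  # every candidate receives exactly five votes
```

Five votes apiece: a deadlock, which is exactly what pushes the students towards strategic choices instead.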

When confronted with the four options, the students opt for different schemes to attempt to maximise their point score. One choice is simply to vote for the highest point candidate. However, a red group student may instead vote for the 20 point candidate B, in the hope that this would break the tie and promote this candidate to win. While candidate B is not as good as the 30 point candidate A, it is preferable to either of the lower scoring candidates C or D winning.

Since the voting is conducted multiple times, students will also be influenced by their past decisions. If a vote for candidate A was successful, then the student is more likely to repeat this choice for the next round. Then there are the students who attempt to allow for all the above scenarios, and make their choice based on a more complex set of assumptions.

This type of poll mimics that used in political voting and interestingly, the outcome in that case is predicted by ‘Duverger’s Law’, a principle put forward by the French sociologist Maurice Duverger. Duverger claimed that the case where a single winner is selected via majority vote strongly favours a two-party system. So no matter how many candidates are in the poll initially, most of the votes will go to only two parties. To support a multi-party political system, a structure such as proportional representation needs to be introduced, where votes for non-winning candidates can still result in political influence.

Duverger’s Law appears to be supported by political situations such as those in the United States, but can it be explained by the strategic behaviours of the voters? By constructing the poll in the laboratory, Kengo can produce a simplified system where each voter’s first choice is clear and influenced only by their strategic selections. What he found is that the result followed Duverger’s Law, with the four candidates reduced to two clear choices. Kengo is clear that this does not prove Duverger’s Law is definitely correct: the laboratory situation, with the voters drawn from a very specific demographic, does not necessarily translate accurately to the real world. However, if the principle had failed in the laboratory, it would have shown that strategic voting alone cannot be responsible for it.

An overall goal for Kengo’s work is to predict the effect of small rule changes in the voting process, such as the order of voting for segments of a Shinkansen line or the ability to vote for multiple candidates in an election. This allows such adjustments to be assessed, along with a look at who would most likely benefit. Such information can be used to make a system fairer or indeed, influence the result.

So next time you are completing a ballot paper, remember the complex calculation that your decision is about to join.

[1] The word ‘reasonable’ here is loaded with formal properties that the voting system must have for the Gibbard-Satterthwaite theorem to apply. However, these are standard in most situations.

Living on an edge: how much does a planetary system’s tilt screw up our measurements?

Journal Juice: summary of the research paper, 'On the Inclination and Habitability of the HD 10180 System',  Kane & Gelino, 2014. (Shared also on Google+)

This animation shows an artist's impression of the remarkable planetary system around the Sun-like star HD 10180. Observations with the HARPS spectrograph, attached to ESO's 3.6-metre telescope at La Silla, Chile, have revealed the definite presence of five planets and evidence for two more in orbit around this star.

HD 10180 is a sun-like star with a truck load of planets. The exact number in this ‘truck load’, however, is slightly more uncertain. The problem is that we haven’t seen these brave new worlds cross directly in front of their star, but have found them by looking at the wobble they produce in the star’s motion. As a planet orbits its stellar parent, its gravity pulls on the star to make it move in a small, regular oscillation as the planet alternates from one side of the star to the other. By fitting this periodic wobble, scientists can estimate the planet’s mass and location.

The problem is that when there is a truck load of planets, there are multiple possible fits to the star’s motion, and each of these yield different answers for the properties of the planetary system. 

In 2011, a paper by Lovis et al. determined that there were 7 planets orbiting HD 10180, labelled HD 10180b through to HD 10180h. However, they were uncertain about the existence of the innermost ‘b’ planet and their model made a number of inflexible assumptions about the planet motions.

In this paper, authors Kane & Gelino revisit the system. One of the constraints they remove from the Lovis model is the insistence that a number of the planets sit on circular orbits. Rather, the authors allow the planets to potentially all follow squished elliptical paths around their star. In our own Solar System, the Earth has an almost circular orbit around the sun, but Pluto does not, moving in a squashed circle of an orbit. 

The result of this new model is a six planet system, where the dubious planet ‘b’ is removed. Two other planets, ‘g’ and ‘h’, also have their orbits changed, with ‘g’ now moving on a more elliptical path. This is particularly interesting since planet ‘g’ was thought to be in the habitable zone: the region around the star where the radiation levels would be right to support liquid water. How does g’s new orbit change things?

Before charging off in that direction, the authors ask a second important question: what is the inclination of the planetary system? Are we looking at it edge-on (inclination angle, i = 90 degrees),  face-on (i = 0) or something in between (i = ??) ?

This question is important since it affects the estimated mass of the planets. The more face-on the planetary system is with respect to our view on Earth, the larger the mass of the planets must be to produce the observed star wobble. Unfortunately, inclination is devilishly hard to determine without being able to watch the planets pass in front of their star. 

While it was not possible to view the inclination directly, the authors ran simulations of the planet system’s evolution to test out different options. Since the planets’ masses change with the assumed inclination, the gravitational interactions between the worlds also changes. Some of these combinations are not stable, kicking a planet out of its orbit. Unless we got very lucky with regards to when we observed the planets, the chances are these unstable configurations are not correct. This allows scientists to limit the inclination possibilities. The simulations suggested that an angle less than 20 degrees was not looking at all good for planet ‘d’. 

With the new model and a set of different inclinations, the authors then returned to the most exciting (from the point of view of a new Starbucks chain) planet ‘g’. Even with its new squished-circle orbit, planet ‘g’ spends 89% of its time in the main habitable zone. The remaining 11% is within the more optimistic inner edge of the habitable zone, whereby the temperatures might be able to support water for a limited epoch of the planet’s history. This is a pretty promising orbit, since there is evidence that a planet’s atmosphere might be able to redistribute heat to save it during the time it spends in a rather too toasty location.

However, planet ‘g’ has other problems.

If the planet system is edge-on, the mass of planet ‘g’ is 23.3 x Earth’s mass with a rather massive radius of 0.5 x Jupiter. Tilting round towards face-on at i = 10 degrees (bye bye planet ‘d’) increases that still more to 134 Earth masses and the same radius as Jupiter.

All options, then, suggest promising planet ‘g’ is not a rocky world like the Earth, but a gas giant. The best hope for a liveable location would therefore be an Ewok-infested moon. Yet even here the authors have doubts. Conditions on a moon are affected both by the central star and also by heat and gravitational tides from the planet itself. Normally, these are small enough to forget compared to the star’s influence, but with the planet skirting so close to the star for 11% of its orbit, this may be sufficient to give a moon some serious dehydration problems.

The upshot is that the inclination of the planetary system’s orbit is vitally important for determining the masses of the member planets and that HD 10180g is worth watching for moons, but probably not ready for a Butlins holiday resort. 

The microscope that can follow the fundaments of life

Spotlight on Research is the research blog I author for Hokkaido University, highlighting different topics being studied at the University each month. These posts are published on the Hokkaido University website.


Professors Bi-Chang Chen and Peilin Chen describe their research. Left: (anti-clock-wise from bottom) myself, Professor Peilin Chen, Professor Bi-Chang Chen and Professor Nemoto. Right: Professor Peilin Chen (left) and Bi-Chang Chen.

“Everyone wants to see things smaller, faster, for longer and on a bigger scale!” Professor Bi-Chang Chen exclaims. 

It sounds like an impossible demand, but Bi-Chang may have just the tool for the job.

Professor Bi-Chang Chen and his colleague, Professor Peilin Chen, are from Taiwan’s Academia Sinica. Their visit to Hokudai this month was part of a collaboration with Professors Tomomi Nemoto and Tamiki Komatsuzaki in the Research Institute for Electronic Science. The excitement surrounds Bi-Chang’s microscope design: a revolutionary technique that can take images so fast and so gently, it can be used to study living cells.

The building blocks of all living plants and animals are their biological cells. However, many aspects of how these essential life-units work remain a mystery, since we have never been able to follow individual cells as they evolve.

The problem is that cells are changing all the time. Like photographing a fast moving runner, an image of a living cell must be taken very quickly or it will blur. However, while a photographer would use a camera flash to capture a runner, increasing the intensity of light on the cells knocks them dead. 

Bi-Chang’s microscope avoids these problems. The first fix is to reduce unnecessary light on the parts of the cell not being imaged. When you look down a traditional microscope, the lens is adjusted to focus at a given distance, allowing you to see different depths in the cell clearly. A beam of light then travels through the lens parallel to your eye and illuminates the sample. The problem with this system is that if you are focusing on the middle of a cell, the front and back of the cell also get illuminated. This both increases the blur in the image and also drenches those extra parts of the cell in damaging light. With Bi-Chang’s microscope, the light is sent at right-angles to your eye, illuminating only the layer of the cell at the depth where your microscope has focused.

This is clever, but it is not enough for the resolution Bi-Chang had in mind. The shape of a normal light beam is known as a ‘Gaussian beam’ and is actually too fat to see inside a cell. It is like trying to discover the shape of peanuts by poking in the bag with a hockey stick. Bi-Chang therefore changed the shape of the light so it became a ‘Bessel beam’. A cross-section of a Bessel beam looks like a bullseye dart board: it has a narrow bright centre surrounded by dimmer rings. The central region is like a thin chopstick and perfect for probing the inside of a cell, but the outer rings still swamp the cell with extra light. 

Bi-Chang fixed this by using not one Bessel beam, but around a hundred. Where the beams overlap, the resultant light is found by adding the beams together. Since light is a wave with peaks and troughs, Bi-Chang was able to arrange the beams so the outer rings cancelled one another, a process familiar to physics students as ‘destructive interference’. This left only the central part of the beams which could combine to illuminate a thin layer of the cell at the focal depth of the microscope. 
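The cancellation itself is simple wave physics. This toy example uses plain sine waves (a real Bessel beam profile is far more involved) to show two half-wavelength-shifted waves destructively interfering:

```python
import math

# Sample two identical waves, the second shifted by half a wavelength.
xs = [i * 0.1 for i in range(100)]
wave1 = [math.sin(x) for x in xs]
wave2 = [math.sin(x + math.pi) for x in xs]

# Peaks of one wave line up with troughs of the other, so the sum vanishes.
combined = [a + b for a, b in zip(wave1, wave2)]
print(max(abs(c) for c in combined))  # effectively zero
```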

Not only does this produce a sharp image with minimal unnecessary light damage, but the combination of many beams allows a wide region of the sample to be imaged at one time. A traditional microscope must move point-by-point over the sample, taking images that will all be at slightly different times. Bi-Chang’s technique can take a snap-shot of a plane covering a much wider area at a single moment.

To his surprise, Bi-Chang also found that this lattice of light beams (known as a lattice light sheet microscope) made his cells healthier. In splitting the light into multiple beams, the intensity of the light in each region was reduced, causing less damage to the cells. 

The net result is a microscope that can look inside cells and leave them unharmed, allowing it to take repeated images of a cell changing and dividing. By rapidly imaging each layer, a three dimensional view of the cell can be put together. Such a dynamical view of a living cell has never been achieved before, and opens the door to a far more detailed study of the fundamental workings of cells. Applications include understanding the triggering of cell divisions in cancers, how cells react to external stimuli and message passing in the brain.

“We don’t know how powerful this technique is yet,” explains Peilin Chen. “We don’t know how far we can go.”

This is a question Tomomi Nemoto’s group are eager to help with. In collaboration with Hokudai, Bi-Chang and Peilin want to see if they can scale up their current view of a few cells to a larger system. 

“We’d like to extend the field of view and if possible, look at a mouse brain and the neuron activity,” Bi-Chang explains. “That is our next goal!”

It is an exciting possibility and one that may be supported by a new grant Hokudai has received from the Japanese Government. Last summer, Hokudai became part of the ‘Top Global University Project’, with a ten year annual grant to increase internationalisation at the university. Part of this budget will be used in research collaborations to allow ideas such as Bi-Chang’s microscope to be combined with projects that can put this new technology to use. Students at Hokudai will also get the opportunity to take courses offered by guest lecturers from around the world. These are connections that will make 2015 the best year yet for research. 

Why you’re wired to make poor choices

Let’s play a game. I am going to flip a coin. If you guess correctly which side it will land, I will give you $1. If you get it wrong, you will give me $1. We will play this game 100 times. 

Since we are using my coin, it may be weighted to preferentially land on one side. I am not going to tell you which way the bias works, but after 20 - 30 flips, you will notice that the coin lands with ‘heads’ facing up three times out of four. 

What are you going to do?

This was the question posed by Professor Andrew Lo from the Massachusetts Institute of Technology, during his talk last week for the Origins Institute Public Lecture Series. Lo’s background is in economics. It is a research field with a problem. 

In common with subjects such as anthropology, archaeology, biology, history and psychology, economics is about understanding human behaviour. While the findings in these areas may not always be relevant to one another, they should not be contradictory. For example, studying mating behaviours in anthropology should be consistent with the biological mechanics of human reproduction. The problem is that economics is full of contradictions. 

In the example of the weighted coin, the financially sound solution made by a hypothetical ’Homo-economicus’ would be to always choose ‘heads’. This would correctly predict the coin toss 75% of the time and net a tidy profit. However, when this choice is given to real Homo-sapiens, people randomise their choices. In fact, humans match the probability of the coin weighting, selecting heads 75% of the time and tails 25% of the time. Moreover, if the coin is surreptitiously switched during the game to one with a weighting for 60% heads, 40% tails, then the contestants change their guesses to also select heads 60% of the time. 
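Those two strategies are easy to compare with a quick Monte-Carlo check (the flip count and seed are arbitrary): always guessing heads earns $0.50 per flip on average, while probability matching only manages $0.25.

```python
import random

random.seed(1)
P_HEADS = 0.75   # the coin's bias
FLIPS = 100_000  # enough rounds for stable averages

def winnings(p_guess_heads):
    """Average dollars per flip when guessing heads with this probability."""
    total = 0
    for _ in range(FLIPS):
        coin_heads = random.random() < P_HEADS
        guess_heads = random.random() < p_guess_heads
        total += 1 if coin_heads == guess_heads else -1
    return total / FLIPS

print(winnings(1.0))   # always heads: close to +0.50
print(winnings(0.75))  # probability matching: close to +0.25
```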

Economics tells us this is not the choice we should be making. So why do we do it?

This selection choice holds even in much more serious situations. During World War II, the U.S. Army Air Force organised bombing missions over Germany. These were incredibly dangerous flights with two main risks: the plane could receive a direct hit and be shot down, or it could be pelted with shrapnel from anti-aircraft artillery. In the former case, the best chance of survival would be a parachute. In the latter, it would be safest to wear a metal plated flak jacket. Due to the weight of the flak jacket, it was not possible for pilots to select both pieces of equipment. The army therefore gave them a choice of items.

The chances of being hit by shrapnel were roughly three times higher than being shot down. Despite this, pilots selected the flak jackets only 75% of the time. When the army realised this was delivering a high death toll to their pilots, they tried to mandate the flak jacket selection. However, the pilots then refused to volunteer for such missions, claiming that if they were taking their lives in their own hands, they ought to be able to pick the kit they wanted. 

What is perhaps even stranger is that humans are not the only species that does this probability matching. Similar behaviour is seen in ants, fish, pigeons, mice and any other species that can select between option A and B. This can be tested on fish surprisingly simply. If you feed a tank of fish from the left-hand corner 75% of the time and from the right-hand corner 25% of the time, then 25% of the fish will swim hopefully to the right side of the tank when it is time to be fed.

Lo’s feeling was that this common probability matching trait must have stemmed from a survival advantage. He therefore turned from economics to evolution. 

Lo admitted that crossing from one field of research to another can be daunting. He humorously recalled an attempt by his university to bring together researchers in different areas during an organised dinner.

“We were speaking different languages! I never went back!” he told us.

Despite this disastrous first attempt, Lo has found himself drawn into projects that span fields. 

“Interdisciplinary work comes about naturally when you’re trying to solve a problem,” he explained. “Problems don’t care about traditional field boundaries.”

To explore the possible survival advantages of a probability matching instinct, Lo considered a simple choice for a creature: where to build a nest? 

In Lo’s system there are two choices for nest location: a valley floor by a river or high up on a plateau. While it is sunny, the valley floor provides the best choice. It is shaded with a water source, allowing the creature to successfully breed. Meanwhile, the plateau is too hot and any offspring will die. However, when it rains, the situation is reversed: the valley floods and kills all the creatures, while the plateau provides a safe nest.

If it is sunny 75% of the time, which is the best choice? 

Lo ran this experiment as a computer model for five sets of imaginary creatures. One group (the ‘Homo-economist’ creatures) always picked the valley. The second group selected the valley 90% of the time, the third 75% of the time, and the fourth and fifth groups 50% and 20% of the time.

At the beginning of the simulation, the ‘Homo-economists’ had a population boom, successfully producing more children than any other group. However, then it finally rained in the valley. All of this group were killed, leaving only the groups which had some creatures on the plateau. After 25 generations of creatures (corresponding to 25 chances of rain or shine), the group which selected the valley 75% of the time was in the lead. In short, while the valley was the better bet for any one individual, the species did better if it probability matched.
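The heart of this result can be reproduced in a few lines of code. The sketch below is my own illustration (not Lo's actual model): it computes each group's long-run growth rate as the expected log-growth per generation, the standard measure of geometric-mean fitness, assuming the breeding fraction doubles each round.

```python
import math

def growth_rate(p_valley, p_sun=0.75):
    """Expected log-growth per generation for a group nesting in the
    valley with probability p_valley. When sunny, only the valley
    fraction breeds (doubling); when rainy, only the plateau fraction."""
    if p_valley in (0.0, 1.0):
        return float("-inf")  # one bad-weather event wipes the group out
    return (p_sun * math.log(2 * p_valley)
            + (1 - p_sun) * math.log(2 * (1 - p_valley)))

# The five groups from the simulation, from all-valley to 20% valley.
for p in (1.0, 0.9, 0.75, 0.5, 0.2):
    print(f"valley {p:>4.0%}: {growth_rate(p):+.3f}")
```

The growth rate peaks exactly at a valley probability of 0.75: matching the weather probability is the best strategy for the species, even though the valley is the better bet in any single generation, and the all-valley group scores minus infinity because one rain ends its lineage.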

Lo’s model is simple, but it does offer an explanation for why humans may not make the most logical decision when confronted with two choices: it is an instinct that helped the human race survive. Perhaps more interestingly, it may even explain the origins of intelligence.

Lo next posed the question: What is intelligence?

He admitted this is a difficult quantity to assess, and suggested the reverse is much easier to recognise: What is stupidity?

To demonstrate this, Lo presented three pictures. The first showed two men listening to the radio in a swimming pool. Said radio was attached to the electricity supply via a cable held out of the water on a floating flip-flop shoe. In the second image, a man was holding open a crocodile’s jaws so he could take a photograph within its gaping maw. The final photograph depicted a man working underneath his truck, which was raised at a precarious 120 degrees using two planks of wood. 

This caused laughter from the audience. We recognise this as stupid, Lo suggested, because we know these people are about to take themselves out of the gene pool. If intelligence is the opposite of stupidity, it must mean intelligent behaviour is linked with our chances of survival. By evolving to maximise the probability of our species surviving, we are selecting for our own impressive IQ. 

The Origins Institute is based at McMaster University in Canada. Their public lectures are free to attend and live recordings can be found on their website.

The search for our beginnings

Spotlight on Research is the research blog I author for Hokkaido University, highlighting different topics being studied at the University each month. These posts are published on the Hokkaido University website. This post was also featured on Physics Focus.


“Volcanologists go to volcanoes for their field work, but meteoriticists never go to asteroids!”

Shogo Tachibana is an associate professor in the Department of Natural History Sciences and one of the key scientists on the team behind Japan’s new Hayabusa2 mission. As he talked, he plucked from a shelf in his office a small grey rock embedded in a container filled with cotton wool. It was a fragment of a meteorite that had fallen to Earth in Australia in 1969.

Appearing as a bright fireball near the town of Murchison, Victoria, the later-named ‘Murchison meteorite’ became famous not just for its size, but because embedded in its rocky matrix were the basic building blocks for life. The amino acids found in the Murchison meteorite were the first extra-terrestrial examples of these biological compounds, and their discovery threw open the door to the possibility that the seeds for life on Earth came from space.

“Meteoriticists can only pick up stones, observe the asteroids and imagine what happened,” Tachibana pointed out, tapping the meteorite. “But by visiting the asteroids we can get a much clearer idea of our Solar System’s evolution.”

It is this goal that drives the Hayabusa2 mission. Translating as ‘peregrine falcon’ in English, Hayabusa2 will chase down an asteroid, before performing three surface touchdowns to gather material to return to Earth. It is a project that recalls the recent Rosetta mission by the European Space Agency, and with good reason, since the missions have a common goal.

All life on Earth requires water, yet in the most favoured planetary formation models, our world forms dry. To produce the environment for life, water must have been delivered to Earth from another source. One possible source is the comets, which Rosetta visited. A second is the asteroids.

Situated primarily in a band between Mars and Jupiter, asteroids are left-over parts that never made it into a planet. Broadly speaking, they come in two main classes. The ‘S-type’ asteroids match the most common meteorites that fall to Earth. During their lifetimes, they have been heated and their original material has changed. The second type are the ‘C-type’ asteroids and these are thought to have altered very little since the start of the Solar System 4.56 billion years ago. It is this early remnant of our beginning that Hayabusa2 is going to visit.

The asteroid target is ‘1999 JU3’: an alphabet-soup name derived from its date of discovery on the 10th May 1999. Observations of 1999 JU3 have tantalisingly hinted at the presence of clays that could only have formed with water. Since 1999 JU3 has a meagre size of approximately 1 km across, it does not have the gravity to hold liquid water. Instead, it must have formed from parts of a larger asteroid. Such a history fits with 1999 JU3’s orbit. The asteroid is termed a ‘near-Earth asteroid’, on an unstable orbit between Mars and the Earth. Since it cannot maintain its current orbit for more than about 10 million years, it must have originated from a different location.

One of Hayabusa2’s missions is to investigate this possibility by examining the presence of radioactive elements in the asteroid’s body. Radioactive atoms decay into other elements over time, but rays from the Sun may trigger their formation. As the asteroid has changed its path, the abundance of radioactive elements will shift depending on whether it moved closer to or further from the Sun. By measuring the quantities of these elements, Hayabusa2 can glean a little of the asteroid’s past.

The delivery of water is not the only reason why asteroids may be important. Discoveries from meteorites, such as the one that fell in Murchison, have revealed a connection between the evidence of water and the presence of amino acids. Even more excitingly, these reactions with water appear to be correlated with the production of left-handed amino acids. Like your left and right hand, amino acids can exist as mirror images of one another. While laboratory experiments produce an equal number of left- and right-handed molecules, life on Earth strongly favours the left-handed version. How this preference came about is not clear, but it is possible the selection began on the asteroid that brought this organic material to Earth. When the samples from Hayabusa2 are analysed, scientists will be holding the final evolution of organic matter on the asteroid and, possibly, the beginning of our own existence.

Hayabusa2 launched from the Japanese Aerospace Exploration Agency’s (JAXA) launch pad on the southern tip of Japan at the start of this month. It will reach 1999 JU3 by mid-2018, and spend the next 18 months examining the asteroid before returning to Earth in December 2020. During its visit, three touchdowns to the asteroid’s surface are planned to gather material to return to Earth. The first touchdown will occur after a thorough examination to search for the presence of hydrated clays. Sweeping down to the surface, Hayabusa2 will fire a 5 g bullet at the asteroid to kick up material to be gathered by the spacecraft. It will also dispatch a lander called ‘MASCOT’ that, in turn, will deploy three small rovers. MASCOT (Mobile Asteroid Surface SCout) was developed by the German Aerospace Center and the French space agency: the team also behind the successful Philae lander for Rosetta. MASCOT has a battery designed to last for 15 hours and the ability to make at least one move to a second location. The destination for the trio of rovers will be decided once the initial data from Hayabusa2 reaches Earth.

After its first landing, Hayabusa2 will drop to the surface another two times. Since the asteroid is thought to be formed from fragments of a larger body, different surface locations may have very different properties. Prior to the third descent, Hayabusa2 will launch an impactor carrying 4.5 kg of explosives at the asteroid’s surface. During the collision, Hayabusa2 will hide behind the asteroid to avoid damage, but send out a small camera to watch the impact. While the resultant crater size will depend on the structure of 1999 JU3, the maximum extent is expected to be around 10 m in diameter. Hayabusa2 will then drop down to gather its final sample from this area, which will consist of material from below the surface of the asteroid.

The gathering of sub-surface material is particularly important for investigating 1999 JU3’s formation history. Exposure to cosmic and solar rays can cause ‘space weathering’ that changes the surface of the asteroid. This is something Japanese researchers are well aware of, since this incredibly ambitious mission has actually been performed before.

As the name implies, Hayabusa2 is the second mission of its kind. In 2003, the first Hayabusa spacecraft began its seven-year round-trip journey to the asteroid Itokawa. Unlike 1999 JU3, Itokawa is an S-type asteroid and the evidence from Hayabusa suggests it had previously been heated to temperatures of 800 degrees Celsius. While the mission to Itokawa was not designed to unravel the Solar System’s early formation, it did reveal a large amount of information about the later evolution of space rocks, including the effects of weathering. In addition to the science, Hayabusa’s journey was also a lesson in what was needed to pull off such an ambitious project.

While Hayabusa was ultimately successful, not everything went according to plan. During its landing sequence to the asteroid’s surface, one of Hayabusa’s guidance lasers found an obstacle at the landing site. This should have terminated the operation, but rather than returning to space, Hayabusa continued with its descent. In a problem that the Rosetta lander, Philae, would later mimic, Hayabusa bounced. It returned to the asteroid’s surface but stopped, staying there for half an hour rather than the intended few seconds needed to gather a sample.

“This was really dangerous,” Tachibana explains. “On the surface of the asteroid, the temperature is very high and it’s not good for the spacecraft to be there long.”

Despite this, Hayabusa survived and was able to make a second landing. Yet even here there were issues, since neither landing had deployed the bullets needed to stir up the surface grains. This left scientists unsure how much material had been successfully gathered. Hayabusa’s lander also met an unfortunate end when an automated altitude-keeping sequence mistakenly activated during the lander’s deployment. Missing the asteroid’s surface, the lander could not be held by the rock’s weak gravity and it tumbled into space.

While Hayabusa2’s design strongly resembles its predecessor’s, a number of key alterations have been implemented to avoid these problems happening again.

“We are sure Hayabusa2 will shoot the bullets,” Tachibana says confidently. Then he pauses, “… but just in case, there is a back-up mechanism.”

The back-up mechanism uses teeth on the end of the sample horn that can dig into the asteroid. As the teeth raise the grains on the surface, Hayabusa2 will decelerate, allowing the now faster-moving grains to rise up inside the sample chamber.

The sample jar itself also has a different design from Hayabusa’s, since 1999 JU3 is likely to contain volatile molecules that will be mixed with terrestrial air on the return to Earth if the jar seal is not perfect. Tachibana is the Principal Investigator for the sample analysis once Hayabusa2 returns to Earth. In preparation for this, a new vacuum chamber must be set up at JAXA to ensure no terrestrial pollution when Hayabusa2’s precious payload is finally opened.

“It was not easy to get support for a second mission,” Tachibana explains. “JAXA have a much smaller budget than NASA and there are many people who want to do different missions. We had to convince the community and the researchers in various fields how important this project was.”

It is a view shared by the international scientific community and JAXA have no intention of completing this work alone. In addition to the MASCOT lander’s international origins, NASA’s network of ground stations will help track the spacecraft to ensure continual contact with Earth. When Hayabusa2 returns to Earth, the analysis of the samples will be performed by a fully international team. Hayabusa2 will also be able to compare its results with those from NASA’s new OSIRIS-REx mission, which will launch in 2016 to visit and sample the asteroid ‘Bennu’. Such comparisons are vitally important when samples are this difficult to collect.

“International collaboration is very important for space missions,” Tachibana concludes. “For the future, we don’t want to be closed. We want to make the best team to analyse the samples.”

How not to catch a fish



At the University of the Algarve, Professor Teresa Cerveira Borges is trying to help fishermen stop catching unwanted fish.

The Algarve stretches along the southern coastline of Portugal. With a warm climate and beautiful beaches, it is one of the most popular holiday destinations in Europe. The region is also famous for fishing, with a lagoon system formed between the mainland and a string of barrier islands creating an ideal natural fish hatchery.

As in Japan, fish are a major part of the local diet. Within the European Union, Portugal is number one for fish consumption, and the sea zone controlled by Portugal (more technically, its ‘Exclusive Economic Zone’ for exploration and marine resource management) is the third largest in the European Union and 11th in the world. Import and export of fish is also a major business, so much so that the country’s most traditional fish —the cod— is not found in Portuguese waters but is imported from Norway and Iceland.

However, acquiring the Christmas Bacalhau (the Portuguese word for ‘cod’) is not Teresa’s concern. What she is worried about is the fish that are caught, but then thrown away.

Portuguese fishermen use a variety of different fishing gear in their work. Some —like the seine or trammel nets— surround the fish and then draw closed around the catch. Others, like trawl nets, scrape the bottom of the sea and sweep up everything in their path. Longlines consist of a row of hooks baited with a small fish like a sardine, while traps and pots entice prey into an enclosed space to be easily gathered up.

The problem is that not everything caught is used. As well as the species the fishermen are trying to catch, a large number of other species will be pulled out of the water. This accidental acquisition is known as the ‘by-catch’ and while some of it can be sold, a large fraction has no economic value and is thrown back into the sea.

This so called ‘discard’ is huge. In the Algarve, studies identified more than 1,200 species in the trawl catch but around 70% are thrown away since they lack any commercial value. World data from 1995 estimates the discard amounts to as much as 27 million tonnes of fish per year.

Since the process of being caught is not gentle on the fish, almost all discards are dead when they are tossed overboard. This results in serious ecological impacts, since food chains are strongly affected by the removal of some species and the addition of dead matter from others. A photograph of Teresa in the 1980s shows her struggling to lift a hake half the length of her body. Such large hake are no longer found in Portugal’s fishing hauls. So much discard also has a more direct economic impact, as fishermen are forced to waste valuable time and boat space sorting through their catch.

Teresa’s work has focussed on minimising the quantity of the discard. Part of this research is technological, investigating better net designs that allow the targeted species to be trapped while other species can escape. One example of this has been to introduce sparsely placed square-mesh holes into a diamond-mesh net. While the diamond net holes draw closed when pulled from the top, the square holes remain open to allow small, unwanted fish to swim free.

Another part of this work is exploring new ways to use the fish accidentally caught. Teresa has been visiting Hokkaido University these last few weeks in the Department of Natural History Sciences. From one fish-loving nation to another, she has been examining the fish and preparation methods used in Japan and whether these could be used to minimise discards back home. During her stay, she shared her experiences in the ‘Sci-Tech’ talk series; general interest science seminars held in English in the Faculty of Science.

Teresa explained that fish are discarded when there is no commercial value, but the reasons a fish cannot be sold vary. In some cases, the fish is inedible. The boarfish, for example, is small and has a hard outer skin with spines that makes it impossible to eat. For this species, the only possible use is for animal meal, but the value would be very low, which makes it economically unattractive.

In other cases, the fish is edible but there is no dietary tradition for that fish locally. Without people being in the habit of eating a particular fish, there is no demand in the market place. This can be remedied by finding new ways to prepare different fish or new markets where catch could be sold further afield.

When new ways of preparing fish are found, the next step is to educate and entice the consumers. Campaigns focussed around selling a particular species send chefs into markets and schools to demonstrate how to prepare the fish, while web and Facebook pages are set up with recipe ideas. Books are also being produced to showcase the fish in Portuguese waters. These publications come out both in Portuguese and English to attract the attention of the Algarve’s 10 million spring and summer visitors. The result is a world-wide fish-eating initiative!

Teresa also emphasises the importance of working with the fishermen. With a craft steeped in tradition, many fishermen are reluctant to try new techniques or are wary of interference from scientists. Teresa’s department in the Centre of Marine Sciences works hard to encourage a good relationship with the local fishermen, both by explaining their work and listening to what they say. The result has been a strong cooperation in favour of a cleaner catch with less laborious sorting; a win for both fishermen and fish.

So next time you see a sample of fish you’ve never tried before: why not give it a go?

How to stop the Earth slamming into the Sun


Associate Professor Hidekazu Tanaka (right) with graduate student Kazuhiro Kanagawa


4.56 billion years ago, our solar system was a dusty, gaseous disc encircling a young star. Buffeted by the gas, grains smaller than sand began to collide and stick together. Like a massive Lego assembly project, steadily bigger structures continued to connect until they had enough mass to pull in an atmosphere from the surrounding gas. Eventually, the sun’s heat evaporated away the remaining disc gas and our solar system of planets was born.

There is just one issue: As the baby planet reaches the size of Mars (roughly 1/10th of Earth’s mass), the gas disc tries to throw it into the sun.

The problem is gravity. When the juvenile planet is small, its low mass results in a weak gravitational pull. This has very little effect on the surrounding disc of gas and neighbouring rocks. However, once the planet reaches the size of Mars, its gravity starts to tug on everything around it.

Rather like a circular running track, gas closer to the Sun travels around a small orbit, while gas further out has to move a greater distance to perform one complete circuit. This makes the gas between the planet and sun draw ahead of the planet’s position, while the gas further away from the sun is left behind. Once the planet’s gravity becomes large, it pulls on the gas either side of it, trying to slow down the inner gas and speed up the outer gas. In turn, this gas also pulls back on the planet, attempting to both slow it down and speed it up. The disc’s structure gives the outside gas the edge, and it slows the planet down. In more technical language, this exchange of speed is called ‘angular momentum transfer’ and it gives the planet a big problem.

The planet’s position from the sun is determined by the balance between the sun’s gravitational pull and the centrifugal effect: the outward push you feel riding on a spinning roundabout. When the planet loses speed, this outward push decreases and the sun’s gravitational force pulls it inwards until the two cancel again. As the gas disc continues to push on the planet, it begins a death spiral towards the Sun.
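In symbols, the balance just described is the standard circular-orbit relation (a textbook formula, not one quoted in the original post): the sun’s gravity provides exactly the inward pull needed to keep a planet of mass m moving at speed v on a circle of radius r,

```latex
\underbrace{\frac{G M_\odot m}{r^2}}_{\text{sun's gravity}}
= \underbrace{\frac{m v^2}{r}}_{\text{needed to stay on the circle}}
\qquad\Longrightarrow\qquad
v = \sqrt{\frac{G M_\odot}{r}}
```

When the disc saps the planet’s speed, gravity wins and the planet settles onto a smaller orbit; each further push from the gas repeats the process, producing the inward spiral.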

It is stopping the planet’s incineration that interests Associate Professor Hidekazu Tanaka in the Institute for Low Temperature Science. Dubbed ‘planetary migration’, Hidekazu explains that when this problem was first proposed back in the 1980s, nobody believed it.

“The migration happens at very high speed,” he says. “So most scientists thought there must be some mistake.”

Scientists were right to be suspicious. The migration time for an Earth-sized planet is about 100,000 years, far shorter than the time needed to build the planet. For the more massive Jupiter, the problem is even more severe, implying that migration should kill any planet forming in the disc.

The discovery of the first planets outside the solar system shut down this debate. In 1995, the first exoplanet orbiting a regular star was discovered. It was the size of Jupiter, but so close to its stellar parent that its orbit took a mere 4.2 days. Since the original gas disc would not have had enough material at close distances to birth such a massive planet, this hot Jupiter must have formed further out and moved inwards.

With migration accepted as a real process, the task became how to stop it.

“This is still a very difficult problem!” Hidekazu exclaims.

Since migration involves interaction with the gas disc, it will stop once the sun’s heat has evaporated the last of the gas. But this process cannot have happened too early, or giant planets like Jupiter would not have been able to gain their massive atmospheres.

One possibility Hidekazu has proposed is that there could be a secondary force to stop the planet due to the different temperatures in the gas disc.

As the planet’s gravity drags on the disc, it pulls nearby gas towards it. On a larger orbit and therefore lagging behind the planet, the cold outer gas is dragged in to pile up in front of the planet like slow-moving traffic. Its mass pulls the planet forward, causing it to move at a faster speed. This acceleration increases the planet’s outward push, which means it must move outwards to balance the gravitational pull of the sun; such an outward shift could stop or slow the planet’s deathly migration.

This is just one idea that might turn the tables on the death spiral of our young solar system. One problem with fully understanding this process is the lack of data. It has only been in the last 20 years that we have discovered any planets outside our solar system, and we are still working on observing complete systems around other stars. When we do, Hidekazu’s research group will have much more information with which to match their models.

Meanwhile, we can be grateful that such a catastrophe only occurs during the early planet formation process and by this method or another, the Earth got lucky.

The vacuum cleaner that can tweet it can't fly




Nestled under the desk in the Graduate School of Information Science and Technology is a Roomba robotic vacuum cleaner with a Twitter account. When there is a spillage in the laboratory, one tweet to ‘Twimba’ will cause her to leap into life and clean up the mess.

Twimba’s abilities may not initially seem surprising. While the Twitter account is a novel way of communicating (the vacuum cleaner is too loud when operating to respond to voice commands), robots that we can interact with have been around for a while. Apple iPhone users will be familiar with ‘Siri’, the smartphone’s voice-operated assistant, and we have likely all dealt with telephone banking, where an automatic answering system filters your enquiry.

Yet, Twimba is not like either of the above two systems. Rather than having a database of words she understands, such as ‘clean’, Twimba looks for her answers on the internet. 

The idea that the internet holds the solution for successful artificial intelligence was proposed by Assistant Professor Rafal Rzepka for his doctoral project at Hokkaido University. The response was not encouraging.

“Everybody laughed!” Rafal recalls. “They said the internet was too noisy and used too much slang; it wouldn’t be possible to extract anything useful from such a garbage can!”

Rafal was given the go-ahead for his project, but was warned it was unlikely he would have anything worth publishing in a journal during the degree’s three year time period: a key component for a successful academic career. Rafal proved them wrong when he graduated with the highest number of papers in his year. Now faculty at Hokkaido, Rafal heads his own research group in automatic knowledge acquisition.

“Of course, I’m still working on that problem,” he admits. “It is very difficult!”

When Twimba receives a tweet, she searches Twitter for the appropriate response. While not everyone has the same reaction to a situation (a few may love nothing better than a completely filthy room), Twitter’s huge number of users ensures that Twimba’s most common find is the expected human response. 

For instance, if Twimba receives a tweet saying the lab floor is dirty, her search through other people’s tweets would uncover that dirty floors are undesirable and the best action is to clean them. She can then go and perform this task.

This way of searching for a response gives Twimba a great deal of flexibility in the language she can handle. In this example, it was not necessary to specifically say the word ‘clean’. Twimba deduced that this was the correct action from people’s response to the word ‘dirty’. A similar result would have appeared if the tweet contained words like ‘filthy’, ‘grubby’ or even more colloquial slang. Twimba can handle any term, provided it is wide enough spread to be used by a significant number of people. 
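In spirit, Twimba’s look-up can be sketched in a few lines. The tweets, action list and helper function below are invented for illustration; this is not Rafal’s actual code, but it shows the idea of letting the most common human response pick the action:

```python
from collections import Counter

# A toy 'corpus' standing in for a Twitter search (invented examples).
tweets = [
    "ugh the floor is dirty, time to clean",
    "dirty kitchen again... cleaning tonight",
    "my room is dirty but whatever",
    "finally cleaned that dirty hallway",
]

# Candidate actions the robot knows how to look for (also invented).
ACTIONS = ["clean", "ignore", "paint"]

def most_common_action(state_word, corpus):
    """Return the action people most often mention alongside state_word."""
    counts = Counter()
    for tweet in corpus:
        if state_word in tweet:
            for action in ACTIONS:
                if action in tweet:
                    counts[action] += 1
    return counts.most_common(1)[0][0] if counts else None

print(most_common_action("dirty", tweets))  # -> clean
```

With a real corpus of millions of tweets, the same majority-vote principle means slang and synonyms fall out for free: any word that enough people pair with ‘clean’ will point Twimba to the same action.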

Of course, there are only so many problems that one poor vacuum cleaner is equipped to handle. If Twimba receives a tweet that the bathtub is dirty, she will discover that it ought to be cleaned but she will also search and find no examples of robotic Roombas like herself performing this task.

“A Roomba doesn’t actually understand anything,” Rafal points out. “But she’s very difficult to lie to. If you said ‘fly’, she’ll reply ‘Roombas don’t fly’ because she would not have found any examples of Roombas flying on Twitter.”

The difficulty of lying to one of Rafal’s machines, due to the quantity of information at its disposal, is the key to its potential. By being able to sift rapidly through a vast number of different sources, the machine can produce an informed and balanced response to a query more easily than a person.

A recent example where this would be useful was during the airing of a Japanese travel program on Poland, Rafal’s home country. The television show implied that Polish people typically believed in spirits and regularly performed exorcisms to remove them from their homes; a gross exaggeration. A smart phone or computer using Rafal’s technique could swiftly estimate the correct figures from Polish websites to provide a real-time correction for the viewer. 

Then there are more complicated situations where tracking down the majority viewpoint will not necessarily yield the best answer. A UFO conspiracy theory is likely to result in many more excitable people agreeing with its occurrence than those discussing more logically why it is implausible. To produce a balanced response, the machine must be able to assess the worth of the information it receives.

A simple way to assess or ‘weight’ sources is to examine their location. Websites with academic addresses ending ‘.edu’ are more trustworthy than those ending ‘.com’. Likewise, .pdf files or those associated with peer reviewed journals have greater standing. Although these rules will have plenty of exceptions, rogue points should be overwhelmed by their correctly weighted counterparts.
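As a rough sketch of how such weighting might work (the scores, URLs and helper names here are my own invention, not part of Rafal’s system):

```python
def source_weight(url: str) -> float:
    """Assign a trust weight from simple address heuristics."""
    weight = 1.0
    if ".edu" in url or ".ac." in url:
        weight *= 3.0   # academic addresses count for more
    if url.endswith(".pdf"):
        weight *= 2.0   # papers and reports outrank ordinary web pages
    return weight

def weighted_verdict(sources):
    """sources: (url, supports_claim) pairs. Positive score -> believe it."""
    score = sum(source_weight(url) * (1 if supports else -1)
                for url, supports in sources)
    return score > 0

# One excitable .com post is outvoted by a single academic source.
print(weighted_verdict([
    ("https://ufo-fans.com/sighting", True),
    ("https://astro.example.edu/report.pdf", False),
]))  # -> False
```

The exceptions the text mentions are handled statistically rather than individually: a rogue .edu page still gets a high weight, but it is swamped once enough correctly weighted sources point the other way.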

A computer can also check the source of the source, tracing the origin of an article to discover if it is associated with a particular group or company. A piece discussing pollution, for example, might be less trustworthy if the author is employed by a major oil company. These are all tasks that can be performed by a human, but limited time means this is not often practical and can unintentionally result in a biased viewpoint.

With the number of examples available, one question is whether the machine can go a step further than sorting information and actually predict human behaviour. Rafal cites the example of ‘lying with tears’, where a person might cry not through sorrow, but from an ulterior motive. Humans are aware of such deceptions and frequently respond suspiciously to politicians or other people in power when they show public displays of emotion. Yet can a machine tell the difference?

Rafal’s research suggests it is possible when the computer draws parallels between similar, but not identical, situations that occur around the world. While the machine cannot understand the reasons behind the act, it can pick out the predicaments in which a person is likely to cry intentionally. 

This ability to apply knowledge across related areas allows a more human-like flexibility in the artificial intelligence, but it can come at a cost. If the machine is able to correlate events too freely, it could come to the conclusion that it would be perfectly acceptable to make a burger out of a dolphin. 

While eating dolphins is abhorrent in most cultures, a computer would discover that eating pigs was generally popular. In Japanese, the Kanji characters for dolphin are ‘海’ meaning ‘sea’ and ‘豚’ meaning ‘pig’. A machine examining the Japanese word might therefore decide that dolphins and pigs were similar, and thus that dolphins were a good food source. This means it is important to control how the machine reaches out of context to find analogies.
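The pitfall is easy to reproduce. The toy Python function below judges word similarity purely by shared characters, which is exactly the kind of shallow analogy that links 海豚 (‘dolphin’) to 豚 (‘pig’). It illustrates the failure mode only, and is not Rafal’s method:

```python
# Naive similarity: the fraction of characters two words share.
# This is the shallow analogy a machine must be stopped from making.
def char_overlap(a: str, b: str) -> float:
    """Jaccard overlap of the character sets of two words."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

print(char_overlap("海豚", "豚"))   # dolphin vs pig: 0.5, looks 'similar'
print(char_overlap("海豚", "犬"))   # dolphin vs dog: 0.0
```

By this measure a dolphin is half a pig, so a machine correlating too freely on word forms could happily conclude dolphins are food.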

Such problems highlight the issue of ‘machine ethics’ where a computer must be able to identify a choice based more on morals than pure logic. 

One famous ethical quandary is laid out by the ‘Trolley problem’, first conceived by Philippa Foot in 1967. The situation is of a runaway train trolley barrelling along a railway line towards five people tied to the tracks. You are standing next to a lever that will divert the trolley, but the diversion will send it onto a side line where one person is also tied. Do you pull that lever?

Such a predicament is one that Google’s driverless car may soon face. The question of how the computer makes that decision, and why, is therefore central to the success of future technology.

“Knowing why a machine makes a choice is paramount to trusting it,” Rafal states. “If I tell the Roomba to clean and it refuses, I want to know why. If it is because there is a child sleeping and it has discovered that noise from the Roomba wakes a baby, then it needs to tell you. Then you can build up trust in its decisions.” 

If we are able to get the ethics right, a machine’s extensive knowledge base could be applied to a huge range of situations.

“Your doctor may have seen a hundred cases similar to yours,” Rafal suggests. “A computer might have seen a million. Who are you going to trust more?”

Tracking the past of the ocean’s tiny organisms

Spotlight on Research is the research blog I author for Hokkaido University, highlighting different topics being studied at the University each month. These posts are published on the Hokkaido University website.


While many people head to the ocean to spot Japan’s impressive marine life, graduate student Norico Yamada had a more unusual goal when she joined the research vessel, ‘Toyoshio Maru’ at the end of May. Norico was after samples of sea slime and she’s been collecting these from around the world. 

As part of Hokkaido University’s Laboratory of Biodiversity II, Norico’s research focusses on the evolution of plankton: tiny organisms that are a major food source for many ocean animals.

Plankton, Norico explains, are a type of ‘Eukaryote’, one of the three major super groups into which all forms of life on Earth can be divided. To make it into the Eukaryote group, the organism’s cells must contain DNA bound up in a nucleus. In fact, the esoteric group name is just an allusion to the presence of the nucleus, since ‘karyon’ comes from the Greek meaning ‘nut’.

The Eukaryote group is so large, it contains most of what we think of as ‘life’. Humans belong to a branch of Eukaryotes called ‘Opisthokonta’, a category we share with all forms of fungus. This leads to the disconcerting realisation that to a biodiversity expert like Norico, the difference between yourself and a mushroom is small.

Of the five other branches of Eukaryote, one contains all forms of land plants and the sea-dwelling green and red algae. Named ‘Archaeplastida’, these organisms all photosynthesise, meaning that they can convert sunlight into food. To do this, their cells contain ‘chloroplasts’, which capture the sunlight’s energy and change it into nutrients for the plant.

This seems very logical until we reach Norico’s plankton. These particular organisms also photosynthesise, but they do not belong to the Archaeplastida group. Instead, they are part of a third group called ‘SAR’, whose members originally did not have this ability, though many acquired it later in their evolution. So how did Norico’s plankton gain the chloroplasts in their cells to photosynthesise?

Norico explains that chloroplasts were initially found only in single-celled bacteria named ‘cyanobacteria’. Several billion years ago, these cyanobacteria began to be engulfed by other cells, living on inside their outer walls. The two cells would initially exist independently in a mutually beneficial relationship: the engulfing cell provided nutrients and protection for the cyanobacterium, which in turn provided the ability to use sunlight as an energy source. Over time, DNA from the cyanobacterium became incorporated into the engulfing cell’s nucleus, making a single larger organism that could photosynthesise. The result was the Archaeplastida group.

To form the photosynthesising plankton in the SAR group, this process was repeated at a later point in history. This time, the cell being engulfed was not a simple cyanobacterium but an Archaeplastida such as a red alga. The engulfing cell was already part of the SAR group, but it then gained the Archaeplastida ability to photosynthesise.

To understand this process in more detail, Norico has been studying a SAR group organism that seems to have undergone a relatively recent merger. Dubbed a ‘dinotom’, this plankton takes its name from its most recent heritage: a photosynthesising ‘diatom’ plankton engulfed by a ‘dinoflagellate’ plankton. The merger of the two algae is so recent that the different components of the two cells can still be seen inside the dinotom, although they can no longer be separated to live independently.

From samples collected from the seas around Japan, South Africa and the USA, Norico identified multiple species of dinotoms. Every dinotom was the product of a recent merger between a diatom and dinoflagellate, but the species of diatom engulfed varied to give different types of dinotoms. As an analogy, imagine if it were possible to merge a rodent and an insect to form a ‘rodsect’. One species of rodsect might come from a gerbil and a bee, while another might be from a gerbil and a beetle. 

Norico identified the species by looking at the dinotom’s cell surface, which is covered by a series of plates that act as a protective armour. By comparing the position and shape of the plates, Norico could match the dinotoms in her sample to species already known. However, when she examined the cells more closely, she found some surprises.

During her South Africa expedition, Norico had collected samples from two different locations: Marina Beach and Kommetjie. Both places are on the coast, but separated by approximately 1,500 km. An examination of the plates suggested that Norico had found the same species of dinotom in both locations, but when she examined the genes, she discovered an important difference. The engulfed diatoms that had provided the cells with the ability to photosynthesise were not completely identical. In our ‘rodsect’ analogy, Norico had effectively found two gerbil-beetle hybrids, but one was a water beetle and the other a stag beetle. Norico therefore concluded that the Marina Beach dinotom was an entirely separate species from the Kommetjie dinotom.

This discovery did not stop there. Repeating the same procedure with a dinotom species collected in Japan and the USA, Norico found again that the engulfed diatom was different. In total, she found six species of dinotoms in her sample, three of which were entirely new. 

From this discovery, Norico concluded that the engulfing process to acquire the ability to photosynthesise likely happens many times over the evolution of an organism. This means that the variety of species that can mix is much greater, since the merger does not happen at a single point in history. Multiple merging events also mean that an organism can sometimes gain, lose and then re-gain this ability during its evolution.

Next year, Norico will graduate from Hokkaido and travel to Osaka prefecture to begin her postdoctoral work at Kwansei Gakuin University, where she hopes to uncover still more secrets of these minute lifeforms that provide so much of the ocean’s diversity.

Fighting disease with computers


When it comes to students wishing to study the propagation of diseases, Heidi Tessmer is not your average new intake.

“I did my first degree in computer science and a masters in information technology,” she explains. “And then I went to work in the tech industry.”

Yet it is this background with computing that Heidi wants to meld with her new biological studies to tackle questions that require the help of some serious data crunching.

Part of the inspiration for Heidi’s change in career came from her time working in the UK, where she lived on a sheep farm. It was there she witnessed first-hand the devastating results of an outbreak of the highly infectious ‘foot and mouth’ disease. This particular virus affects cloven-hoofed animals, and outbreaks result in the mass slaughter of farm stock to stem the disease’s spread.

“The prospect of culling animals was very hard on the farmers,” she describes. “I wanted to know if something could be done to save animal lives and people’s livelihoods. That was when I began to look at whether computers could be used to solve problems in the fields of medicine and disease.”

This idea drove Heidi back to the USA, where she began taking classes in biological and genetic sciences at the University of Wisconsin-Madison. After improving her background knowledge, she came to Hokkaido last autumn to begin her PhD program at the School of Veterinary Medicine in the Division of Bioinformatics.

“Bioinformatics is about finding patterns,” Heidi explains.

Identifying trends in data seems straightforward enough until you realise that the data sets involved can be humongous. Heidi explains this by citing a recent example she has been studying, involving the spread of the influenza virus. While often no more than a relatively brief sickness in a healthy individual, the ease with which influenza spreads and mutates gives it the ongoing potential to become a global pandemic, bringing with it a mortality figure in the millions. Understanding and controlling influenza is therefore a high priority across the globe.

Influenza appears in two main types, influenza A and B. The ‘A’ type is the more common of the two, and its individual variations are named based on the types of the two proteins that sit on the virus’ surface. For example, H1N1 has a subtype 1 HA (hemagglutinin) protein and subtype 1 NA (neuraminidase) protein on its outer layer while H5N1 differs by having a subtype 5 HA protein.

The inner region of each influenza A virus contains 8 segments of the genetic encoding material, RNA. Similar to DNA, it is this RNA that forms the virus genome and allows it to harm its host. When it multiplies, a virus takes over a normal cell in the body and injects its own genome, forcing the cell to begin making the requisite RNA segments needed for new viruses. However, the process which gathers the RNA segments up into the correct group of 8 has been a mystery to researchers studying the virus’ reproduction.

In the particular case study Heidi was examining (published by Gog et al. in the journal Nucleic Acids Research in 2007), researchers proposed that this assembly process could be performed using a ‘packaging signal’ incorporated into each RNA segment. This packaging signal would be designed to tell other RNA segments whether they wished to be part of the same group. If this signalling could be disrupted, proposed the researchers, then the virus would form incorrectly, potentially rendering it harmless.

This, explains Heidi, is where computers come in. Each RNA segment is made up of organic molecules known as ‘nucleotides’, which bunch together in groups of three called ‘codons’. The packaging signal was expected to be a group of one or more codons that were always found in the same place on the RNA segment, marking a crucial part of the encoding. To find this, codon positions had to be compared across thousands of RNA segments. That job is far too laborious to do by hand, but it is a trivial calculation for the right bit of computer code. Analysing massive biological data sets efficiently in this way is the basis of bioinformatics.
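To see why this is a job for code, here is a toy version of the conserved-codon search in Python. The sequences and the conservation threshold are made up for illustration; the real analysis by Gog et al. was far more sophisticated:

```python
# Toy search for codon positions that stay the same across many
# aligned RNA sequences. Real analyses compare thousands of segments;
# the sequences and threshold here are invented for illustration.
from collections import Counter

def codons(seq: str):
    """Split an aligned sequence into successive triplets (codons)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

def conserved_positions(segments: list[str], threshold: float = 0.95):
    """Return (position, codon) pairs where one codon dominates
    across all segments: candidate packaging-signal sites."""
    positions = []
    for pos, column in enumerate(zip(*[codons(s) for s in segments])):
        codon, count = Counter(column).most_common(1)[0]
        if count / len(column) >= threshold:
            positions.append((pos, codon))
    return positions

# Three toy 'segments' sharing the codon AUG at position 0:
segs = ["AUGGCAUUA", "AUGCCAUUA", "AUGGCGUAA"]
print(conserved_positions(segs, threshold=1.0))  # [(0, 'AUG')]
```

Scaling this loop from three toy strings to thousands of real segments is trivial for a computer, which is precisely the point.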

In addition to the case above, Heidi cites the spread of epidemics as another area that is greatly benefiting from bioinformatics analysis. By using resources as common as Google, new cases of disease can be compared with historical data and even weather patterns.

“The hardest part about bioinformatics is knowing what questions to ask,” Heidi concludes. “We have all this data which contains a multitude of answers, but you need to know what question you’re asking to write the code.”

It does sound seriously difficult. But Heidi’s unique background and skill set is one that just might turn up some serious answers.

Reactions in the coldest part of the galaxy

Professor Naoki Watanabe with graduate student, Kazuaki Kuwahata.


Within the coldest depths of our galaxy, Naoki Watanabe explores chemistry that is classically impossible.

At a staggering -263°C, gas and dust coalesce into clouds that may one day birth a new population of stars. The chemistry of these stellar nurseries is therefore incredibly important for understanding how stars such as our own Sun were formed.

While the cloud gas is made primarily from molecular (H2) and atomic (H) hydrogen, it also contains tiny dust grains suspended like soot in chimney smoke. It is on these minute surfaces that the real action can begin.

Most chemical reactions require heat. In a similar way to being given a leg-up to get over a wall, a burst of energy allows two molecules to rearrange their bonds and form a new compound. This is known as the ‘activation energy’ for a reaction. The problem is that at these incredibly low temperatures, there’s very little heat on offer and yet mysteriously reactions are still happening.

It is this problem that Professor Naoki Watanabe in the Institute of Low Temperature Science wants to explore and he has tackled it in a way different from traditional methods.

“Most astronomers want to work from the top down,” Naoki describes. “They prepare a system that is as similar to a real cloud as possible, then give it some energy and examine the products.”

The problem with this technique, Naoki explains, is that it is hard to see exactly which molecule has contributed to forming the new products: there are too many different reactions that could be happening. Naoki’s approach is therefore to simplify the system to consider just one possible reaction at a time. This allows his team to find out how likely the reaction is to occur and thereby judge its importance in forming new compounds.

This concept originates from Naoki’s background as an atomic physicist, the field in which he did his graduate and postdoctoral work before moving into astrochemistry. Bringing together expertise from different areas is a primary goal for the Institute of Low Temperature Science, and Naoki’s group is the foremost of only three in the world working in this field.

Back in the laboratory, Naoki shows the experimental apparatus that can lower temperatures down to those of the galactic clouds. Dust grains in the clouds are observed to be covered with molecules such as water ice, carbon monoxide (CO) and methanol (CH3OH). Some of these can be explained easily: for instance, carbon monoxide can form from a positively charged carbon atom known as an ion (C+). The charge on the ion makes it want to bond to other atoms and become neutral, so no activation energy is required. A second possible mechanism is a carbon atom (C) attaching to a hydroxyl molecule (OH): a particularly reactive compound known as a ‘radical’ due to its free bond. Like the charge on the C+, this loose bond wants to be used, allowing the reaction to proceed without an energy input. These paths allow CO to form in the cloud even at incredibly low temperatures and freeze onto the dust.

However, methanol is more of a mystery. Since the molecule contains carbon, oxygen and hydrogen (CH3OH), it is logical to assume that initially CO forms and then hydrogen atoms join in the fun. The problem is that to add a hydrogen atom and form HCO requires a significant activation energy, and there is no heat in the cloud to trigger that reaction. So how does the observed methanol get formed?

Naoki discovered that while the low temperature prevented HCO forming through a thermal (heat initiated) reaction, it was that same coldness that opened the door to a different mechanism: that of quantum tunnelling. According to quantum mechanical theory, all matter can behave as both a particle and a wave. As a wave, your position becomes uncertain. Your most likely location is at the wave peak, but there is a small chance you will be found to either side of that, in the wave’s wings.

Normally, we do not notice the wave nature of objects around us: their large mass and energy makes their wavelength so small this slight uncertainty goes unnoticed. Yet in the incredibly cold conditions inside a cloud, the tiny hydrogen atom starts to show its wave behaviour. This means that when the hydrogen atom approaches the carbon monoxide molecule, the edge of its wave form overlaps the activation energy barrier, giving a chance that it may be found on its other side. In this case, the HCO molecule can form without having to first acquire enough heat to jump over the activation barrier; the atoms have tunnelled through.
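For the mathematically curious, the chance of tunnelling can be sketched with the standard WKB formula from textbook quantum mechanics (a general result, not something specific to Naoki’s measurements). A particle of mass m and energy E meeting a barrier V(x) between the points x_1 and x_2 passes through with a probability of roughly

```latex
T \approx \exp\!\left( -\frac{2}{\hbar} \int_{x_1}^{x_2} \sqrt{2m\,\bigl(V(x) - E\bigr)}\; dx \right)
```

The exponent grows with the particle’s mass and the width of the barrier, which is why the featherweight hydrogen atom can tunnel where heavier atoms cannot, and why countless repeated encounters on a dust grain eventually let one attempt succeed.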

Such reactions are completely impossible in classical physics, and the chance of them occurring in quantum physics is not high, even at such low temperatures. This is where the dust grain becomes important. Stuck to the surface of the grain, reactions between the carbon monoxide and hydrogen atom can occur many, many times, increasing the likelihood that a hydrogen atom will eventually tunnel and create HCO.

After measuring the frequency of such reactions in the lab, Naoki was then able to use simulations to predict the abundance of the molecules over the lifetime of a galactic cloud. The results agreed so well with observations that it seems certain that this strangest of mechanisms plays a major role in shaping the chemistry of the galactic clouds.

Are you likely to be eaten by a bear?


“The Hokkaido Prefectural Government…” the article published on the Mainichi newspaper’s English website at the end of September announced, “…has predicted that brown bears will make more encroachments on human settlements than usual this fall.”

While written in calm prose, it was a headline to make you hesitate stepping outside to visit the convenience store: exactly how many bears were you prepared to face down to get that pint of milk?

Yet exactly how serious was this threat and what can be done to protect both the bears and the humans from conflict?

Professor Toshio Tsubota in the Department of Environmental Veterinary Sciences is an expert on the Hokkaido brown bear population. He explains that bears normally live in the mountains and only come into residential areas if there is a shortage of food. Sapporo City’s high altitude surroundings make it a popular destination when times are tough and bears have been sighted as far in as the city library in the Chuo Ward.

The problem this year came down to acorns. Wild water oak acorns are a major source of food for bears in the Autumn months, and this year the harvest has been low. Despite the flurry of concern, a low acorn crop is not an uncommon phenomenon. The natural cycle of growth produces good years with a high yield of nuts and bad years with a lower count in rough alternation. However, a bad yield year does increase the risk of bear appearances in the city.

With food being the driving desire, bears entering residential streets typically raid trees and garden vegetables. The quality of your little allotment notwithstanding, bears will return to the mountains once the food supply improves, meaning their presence in the city is a short-lived, intermittent issue.

That is, unless the bears get into the garbage.

Garbage is high in nutrients and calories, and bears that begin to eat it will not return to their normal feeding habits. This is when more serious problems between humans and bears start to develop. Since yellow-bagged waste does not strongly resemble the bears’ normal food, they will not initially know that garbage is worth eating. It is therefore essential, Toshio says, to ensure your garbage is properly stored before collection.

In Japan, brown bears are found exclusively in Hokkaido with Honshu hosting a black bear population. The brown bear is the larger of the two species, with the females weighing in around 150 kg and the males between 200 – 300 kg. The exact population number in Hokkaido is unknown, but estimates put it around 3000 – 3500 bears on the island.

On the far eastern shore of Hokkaido is the Shiretoko World Heritage Site. Free from hunting, bears living in this area have little to fear from humans, making it the ideal location to study bear behaviour and habitat. Able to approach within 10 – 20 m of the bears, it is here that Toshio collects data for his research into bear ecology.

The acorn feast in the autumn is the final meal the bears will eat before hibernation. Hokkaido’s long winters are passed in dens, with bears beginning their deep sleep near the end of November and not emerging until April or May. In warmer climates, this hibernation time is greatly reduced, with black bears on Honshu sleeping for a shorter spell and bears in southern Asia avoiding hibernation altogether.

For bears that do hibernate, this sleepy winter is also the time when expectant mothers give birth. Despite this rather strenuous sounding activity, the mother bear’s metabolic rate remains low during this period, even while she suckles her cubs. This is in stark contrast to the first few months after a human baby’s arrival where the mother typically gets very little sleep at all.

Another feature human expectant mothers may envy is the bear’s ability to put her pregnancy on hold for several months. Bears mate in the summer but the cubs are born in the middle of winter, making the official bear pregnancy clock in at 6 – 7 months. However, the foetus only actually takes about two months to develop; roughly the same timescale as for a cat or dog. This pregnancy pausing is known as ‘delayed implantation’: it stops the pregnancy at a very early stage, waiting until the mother has gained weight and begun her hibernation before allowing the cubs to develop.

Brown bears typically have two cubs per litter and the babies stay with their mother during the first 1.5 – 2.5 years of their life. With an infant mortality of roughly 40 – 50%, this means that drops in the bear population are difficult to replenish. Toshio’s work in understanding brown bear ecology is therefore particularly important to preserving the species in Hokkaido.

As hunting drops in popularity, Hokkaido’s bear population has improved, yet this could be seriously damaged if humans and bears come into conflict.

“A bear does not want to attack a person,” Toshio explains. “He just wants to find food. But sometimes bears may encounter people and this leads to a dangerous situation.”

To date, bears venturing into Sapporo have not resulted in serious accidents, although people have been killed by bears in the mountains during the spring. These attacks occur when both people and bears seek out the wild vegetables, producing a conflict over a food source.

Toshio believes that education is of key importance to keeping both bear and human parties safe, and advocates that a wildlife program be introduced in schools.

“If there are no accidents, people love bears! They are happy with their life and population,” Toshio points out. “But if people see bears in residential areas they will become scared and want to reduce their number.”

If you do see a bear in the street, Toshio says that keeping quiet and not running are key. In most cases, the bear will ignore you, allowing you to walk slowly away and keep your distance.

“We talk about ‘one health’,” Toshio concludes. “Human health, animal health and ecological health. These are important and we need to preserve them all.”

Baby Galaxies


Hold an American dime (a coin slightly smaller than a 1 yen piece) at arm’s length and you can just make out the eye of Franklin D. Roosevelt, the 32nd President of the USA, whose profile is etched on the silver surface. Lift the eye on that coin up to the sky and you’re looking at a region containing over 10,000 galaxies.

In 1995, the Hubble Space Telescope was trained on a small patch of space in the constellation Ursa Major. So tiny was this region that astronomers could only make out a few stars belonging to our Milky Way. To all intents and purposes, the space telescope was pointing at nothing at all.

After 10 days, scientists examined the results. Instead of a pool of empty blackness, they saw the light of 3,000 galaxies packed into a minute one 24-millionth of the entire sky. In 2003, the observation was repeated for an equally tiny region of space in the constellation Fornax. The result was over 10,000 galaxies. These images are known as the ‘Hubble Deep Field’ and ‘Hubble Ultra Deep Field’ and became some of the most famous results in the space telescope’s legacy. 

For their second public Sci-Tech talk, the Faculty of Science’s Office of International Academic Support invited Professor John Wise from the Georgia Institute of Technology in the USA to talk about his search to understand the formation of the very first galaxies in the Universe. 

John explained that when you look at the Hubble Deep Field, you do not see the galaxies as they are now. Rather, you are viewing galaxies born throughout the Universe’s life, stretching back billions of years. The reason for this strange time alignment is that light travels at a fixed, finite speed. This means that we see an object not as it is now, but how it was at the moment when light left its surface. 

Since light travels 300,000 km every second, the delay between light leaving an object and reaching your eyes is infinitesimally small for daily encounters. Light from a corner shop 100 meters away takes a tiny 0.0000003 seconds to reach you. However, step into the vastness of space and this wait becomes incredibly important.

Light travelling to us from the moon takes 1.3 seconds. From the Sun, 8 minutes. For light to reach us from the Andromeda Galaxy, we have to wait 2.4 million years. The Andromeda we see is therefore the galaxy as it was 2.4 million years ago, before humans had ever evolved on Earth.
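These delays are nothing more than distance divided by the speed of light; a few lines of Python reproduce them (the distances are rounded, illustrative values, not precise ephemeris figures):

```python
# Light travel time = distance / speed of light.
C = 299_792_458  # speed of light in m/s

def light_delay_seconds(distance_m: float) -> float:
    """Seconds light takes to cross the given distance in metres."""
    return distance_m / C

corner_shop = 100                # 100 m down the road
moon        = 384_400_000        # ~384,400 km
sun         = 149_600_000_000    # ~1 astronomical unit

print(f"shop: {light_delay_seconds(corner_shop):.7f} s")  # 0.0000003 s
print(f"moon: {light_delay_seconds(moon):.1f} s")         # 1.3 s
print(f"sun : {light_delay_seconds(sun) / 60:.0f} min")   # 8 min
```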

At a mere 24,000,000,000,000,000,000 km away, Andromeda is one of our nearest galactic neighbours. Look further afield, and we start to view galaxies living in a much younger universe; a universe that has only just begun to build galaxies.

“At the start of the Universe,” Wise explains, “there were no stars or galaxies, only a mass of electrons, protons and particles of light. But when it was only 380,000 years old, the Universe had cooled enough to allow the electrons and protons to join and make atoms.”

This point in the Universe’s 13.8 billion year history is known as ‘Recombination’; a descriptive name to mark electrons and protons combining to make the lightest elements: Hydrogen, Helium and a sprinkling of Lithium. 

Then, the Universe goes dark. 

For the next 100 million years, the Universe consists of neutral atoms that neither produce light nor scatter the existing population of light particles. Astrophysicists, who rely on these light scatterings to see, refer to this time as the ‘Dark Ages’, during which they are blind to the Universe’s evolution.

Then, something big happens.

An event big enough that it sweeps across the Universe, separating electrons from their atom’s nucleus in an event called ‘Reionisation’. 

“It was like a phase transition,” Wise describes. “Similar to when you go from ice to water. Except the Universe went from neutral to ionised and we were now able to see it was full of young galaxies.”

But what caused this massive phase transition of the Universe? Wise suspected that the first galaxies were the cause, but with nothing visible during the Dark Ages, how could this be proved?

To tackle this problem, Wise turned to computer models to map the Universe’s evolution through its invisible period. Before he could model the formation of the first galaxy, however, Wise had to first ask when the first stars appeared.

“Before any galaxies formed, the first stars had to be created,” Wise elaborates. “These were individual stars, forming alone rather than in galaxies as they do today.”

As the pools of electrons and protons joined at ‘Recombination’ to create atoms, small differences in density were already in place from the first fraction of a second after the Big Bang. The regions of slightly higher density began to exert a stronger gravitational pull on the atoms, drawing the gas towards them. Eventually, these clouds of gas grew massive enough to begin to collapse, and a star was born in their centre.

Their isolated location wasn’t the only thing that made the first stars different from our own Sun. With only the light elements of hydrogen, helium and lithium from which to form, these stars lacked the heavier atoms astrophysicists refer to as ‘metals’. Due to their more complicated atomic structure, metals are excellent at cooling gas. Without them, the gas cloud was unable to collapse as effectively, producing very massive stars with short lifetimes.

“We don’t see any of these stars in our galaxy today because their lifetimes were so short,” Wise points out. “Our Sun will live for about 10 billion years, but a star 8 times heavier would only live for 20 million years; hundreds of times shorter. That’s a huge difference! The first star might be ten times as massive with an even shorter lifetime.”

In the heart of each newly formed star, a key process then began: the creation of the heavier elements. Compressed by gravity, hydrogen and helium fuse to form carbon, nitrogen and oxygen in a process known as ‘stellar nucleosynthesis’. As they reached the end of their short lives, the first stars exploded in events called ‘supernovae’, spewing the heavier metals cooked within their centres out into the Universe.

The dense regions of space that formed the first stars now begin to merge with one another. Multiple pools of gas come together to produce a larger reservoir from which the first galaxies begin to take shape. 

Consisting of dense, cold gas, filled with metals from the first stars, these first galaxies turn star formation into a business. More stars form, now in much closer proximity, and the combined heat they produce radiates out through the galaxy. This new source of energy hits the neutral atoms and strips their electrons, ionising the gas. 

As the fields of ionised gas surrounding the first galaxies expand, they overlap and merge, resulting in the whole Universe becoming ionised. Wise’s simulations show this in action, with pockets of ionised gas and metals spreading to fill all of space. 

While this uncovers the source of the Universe’s ionisation, the impact of the first galaxies does not stop there. In the same way that the regions of first star formation had combined to form galaxies, further mergers pull the galactic siblings together. The result is the birth of a galaxy like our own Milky Way. 

These first galaxies, Wise concludes, therefore form a single but important piece of the puzzle that ultimately leads to our own existence here in the Milky Way. 

Can you catch your dog's illness?

Spotlight on Research is the research blog I author for Hokkaido University, highlighting different topics being studied at the University each month. These posts are published on the Hokkaido University website.


The Research Center for Zoonosis Control is concerned with the probability that you will catch a disease from your dog. Or –to put it more generally– zoonosis looks at the transmission of diseases between humans and animals. 

While we do not normally consider an illness in an animal a risk to humans, you can almost certainly name a number of exceptions: Severe Acute Respiratory Syndrome (SARS), avian influenza (bird flu), rabies, Escherichia coli (E. coli) and Creutzfeldt-Jakob disease (CJD), to offer a few examples. 

Diseases can be produced by different types of agents: viruses, bacteria, prions (a type of protein), fungi and parasites. Postdoctoral researcher Jung-Ho Youn is interested in the second of these: diseases caused by bacteria. 

Bacteria, Jung-Ho explains, are everywhere and most are harmless or even essential to our health. For the bacteria which do cause disease, treatment usually involves antibiotics. 

In 1928, Scottish scientist Alexander Fleming discovered that a sample of the disease-producing bacterium, Staphylococcus aureus (S. aureus), had become contaminated by a mould which was inhibiting its growth. This fungus turned out to be producing a substance that would later be known as the antibiotic, penicillin. So successful was penicillin at treating previously fatal bacterial diseases that it was hailed as a ‘miracle drug’. 

Yet, 85 years later we are far from celebrating the demise of all bacterial illness. Indeed, a variant of the very bacteria Fleming was studying can be a serious problem in hospitals due to its strong resistance to many antibiotics. 

So what went wrong?

The problem, Jung-Ho explains, is that bacteria are continuously changing. Able to divide in minutes, bacteria can multiply exponentially, and these microorganisms are not always identical. Random mutations to a bacterium’s genetic structure usually have little effect, but a small number can produce spontaneous resistance to the drug designed to kill it. For example, the bacterium’s surface may change to prevent the antibiotic from binding, or it may reduce the build-up of the drug by creating a pump to expel it from its system. These changes follow the same principle as evolution in mammals, but on a much faster timescale thanks to the bacteria’s rapid reproduction. This means that when a sick person takes a course of antibiotics such as penicillin, the drug kills most, but not all, of the bacteria. 
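To see how fast a tiny resistant minority can take over, here is a toy calculation. The one-in-a-million resistance rate and the 20-minute doubling time are illustrative assumptions, not measurements from Jung-Ho’s work:

```python
# Toy illustration of why rare resistant mutants matter: start with a
# large population, assume one in a million bacteria carry a resistance
# mutation (an illustrative rate), then apply an antibiotic that kills
# only the susceptible ones.

population = 10_000_000
resistance_rate = 1e-6               # assumed mutation frequency
resistant = int(population * resistance_rate)

# The antibiotic wipes out the susceptible bacteria; the few resistant
# bacteria survive...
survivors = resistant

# ...and with a 20-minute doubling time (typical of fast-growing
# bacteria), the survivors can rebound within hours.
doubling_time_minutes = 20
hours = 8
doublings = hours * 60 // doubling_time_minutes
rebound = survivors * 2 ** doublings

print(f"Survivors after treatment: {survivors}")
print(f"Population after {hours} hours of regrowth: {rebound}")
```

Ten survivors out of ten million become a population of over a hundred million by the next morning, every one of them carrying the resistance.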

“It’s like people wearing bullet-proof vests,” Jung-Ho described. “The antibiotics are the guns. They kill most of the bacteria but not all because a few are wearing these bullet-proof vests. This is why bacteria have survived until now.” 

But the problems with bacteria do not stop at the ability to develop resistance once exposed to a drug. Not only can a resistant bacterium multiply to produce more bacteria with the same protection, it can also share this information with different types of bacteria. The bacteria that gain this protection may never have been exposed to the antibiotic and –a key issue for zoonosis– they may not even cause disease in the same species. 

Jung-Ho’s research is with Staphylococcus pseudintermedius (S. pseudintermedius), a bacterium commonly found in dogs. While you play with your pet, this bacterium can transfer onto your skin, where it can meet its human counterpart, S. aureus: the same bacterium Fleming was studying when he discovered penicillin. 

“Normally there are no problems,” Jung-Ho assures animal lovers. “You don’t need to get rid of your pet. It’s just in some cases…”

In some cases, the strain of S. pseudintermedius on your dog may be resistant to an antibiotic and pass this resistance on to the S. aureus. The result is a disease-spreading human bacterium that is resistant to an antibiotic it has never been exposed to. Likewise, the reverse may occur, where a resistant human bacterium passes its resistance on to an animal disease.

How can this spread of resistance between animals and humans be prevented? The first step, Jung-Ho explains, is to find out where these antibiotic resistant bacteria are and how they are spreading. 

Jung-Ho describes an experiment conducted in a veterinary waiting room. Swabs are taken from the patient pets, the veterinary surgeons and the environment, such as the waiting room chairs and computers. The bacteria are then analysed to see if the same microorganism variant (or ‘strain’) is found in more than one place. For example, if the same bacterial strain is found on the vet’s skin and the dog, this suggests transmission is occurring between the two parties. 

The direction of this transmission (vet to dog or dog to vet) is not possible to determine, but in the case of S. pseudintermedius, Jung-Ho knows it is predominantly found on dogs, so it must be moving from animal to human. If large quantities of these bacteria are also found on the furnishings of the clinic, it is likely that significant extra transmission is occurring as people touch their keyboard or chair, even if they have never interacted with an animal. This kind of spread can then be prevented by sterilising the area more thoroughly: a finding that can reduce disease and the spread of resistant bacteria. 

This approach sounds straightforward until the sheer number of bacteria is considered. Even when narrowed down to a specific strain, it is not immediately obvious whether the bacteria found share the same source, or originated from entirely different locations. Jung-Ho describes two main methods for checking whether the samples are the result of a single transmitted population:

(1) The identification of ‘housekeeping genes’. These genes are essential to a bacterium’s survival, so they are present in every cell of the microorganism. For Staphylococcus bacteria, seven housekeeping genes are used, and their exact sequences can differ slightly between bacterial populations. By comparing these gene sequences in bacteria found in different locations, scientists can establish whether transmission occurred between the two places. This technique is known as ‘multilocus sequence typing’, or MLST. 

(2) The use of an enzyme that cuts the bacterium’s DNA into pieces. Known as ‘pulsed-field gel electrophoresis’ (PFGE), this technique is also used in genetic fingerprinting to identify an individual from their DNA. In this case, the bacterium’s DNA is sliced with a gene-cutting enzyme that produces the same set of fragments only if the bacterial populations are identical. 
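The matching logic behind MLST (method 1) can be sketched in a few lines: each sample is reduced to a profile of variants for the seven housekeeping genes, and two samples are only called the same strain when every entry matches. The gene names below follow the S. aureus scheme, and the allele numbers are invented for illustration:

```python
# Minimal sketch of the MLST idea: each sample is reduced to a profile
# of allele variants for seven housekeeping genes, and two samples are
# called the same strain only if every allele matches. The allele
# numbers below are invented for illustration.

HOUSEKEEPING_GENES = ["arcC", "aroE", "glpF", "gmk", "pta", "tpi", "yqiL"]

def same_strain(profile_a, profile_b):
    """True only if all seven allele variants are identical."""
    return all(profile_a[g] == profile_b[g] for g in HOUSEKEEPING_GENES)

dog_sample   = {"arcC": 1, "aroE": 4, "glpF": 1, "gmk": 4, "pta": 12, "tpi": 1, "yqiL": 10}
vet_sample   = {"arcC": 1, "aroE": 4, "glpF": 1, "gmk": 4, "pta": 12, "tpi": 1, "yqiL": 10}
chair_sample = {"arcC": 1, "aroE": 4, "glpF": 2, "gmk": 4, "pta": 12, "tpi": 1, "yqiL": 10}

print(same_strain(dog_sample, vet_sample))    # identical profiles
print(same_strain(dog_sample, chair_sample))  # one allele differs
```

A match between the dog and the vet, but not the chair, would suggest direct transmission between the two rather than via the furnishings.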

Jung-Ho points out that such methods –while effective– are cumbersome compared with the prospect of being able to sequence the whole DNA of a bacteria. He hopes that with the new generation of machines capable of quickly finding complete gene sequences, it will become easier to track the antibiotic resistant bacteria. This will not only allow scientists to minimise infection but also warn doctors that a disease may be resistant to certain drugs. Such an early detection of the bacteria type can save vital time when treating a patient. 

One of the difficulties with this research is that it sits between human and animal medicine. Jung-Ho’s own background is in veterinary science, yet to understand the bacterial spread across species he must now gain expertise on the medical side. 

“It is a big question in zoonosis research,” he explains. “Who should do it? Doctors or vets? Someone has to, I think.”


Were dinosaurs adaptable enough to survive the meteor?



Based on the third floor of the Hokkaido University museum, Assistant Professor Yoshitsugu Kobayashi is a dinosaur hunter. His searches for the fossilised remains of Earth’s previous rulers have taken him on a pan-Pacific sweep of countries that has most recently landed him in Alaska. 

With its freezing winters and long nights, Alaska is not the first place one might expect to find the remains of beasts resembling giant lizards. Nor has the landmass moved from more comfortable tropics over the 66 million years since the dinosaurs became extinct. To understand how these creatures could have thrived in such a hostile environment, Yoshitsugu suggests that our view of dinosaurs needs to change. 

Dinosaurs have long been pictured as enormous cold blooded reptiles. Unable to control their own body temperature, such creatures would need to live in warm climates to maintain a reasonable metabolism. Finding fossils in Alaska, therefore, presents scientists with a problem. 

“To survive winters in Alaska, the dinosaurs would need to be warm blooded,” Yoshitsugu explains. “This allows them to adapt more easily to different environments.”

Which brings us to Yoshitsugu’s next big question: to what extent were the dinosaurs able to flourish in different living conditions? 

To investigate this, Yoshitsugu and his team examined the fossils found in Alaska and compared them with known dinosaur groups elsewhere in the world. There were three main possibilities for their findings:

(1) The Alaskan population of dinosaurs could match that of North American dinosaurs at lower latitudes, implying that the same group had migrated north.

(2) Rather than North American, the dinosaurs could be kin to those found in Asia. This would require a migration from Russia to Alaska, and its likelihood is determined by the condition of the Bering Strait: an 82 km stretch between Russia’s Cape Dezhnev and Cape Prince of Wales in Alaska. Today this gap is a sea passage, but in the past a natural land bridge existed, and is thought to be responsible for the first human migration into America. Could the dinosaurs have taken the same path millions of years before? 

(3) The Alaskan dinosaurs could resemble neither their North American nor their Russian cousins, meaning a different kind of dinosaur inhabited these frozen grounds.  

Examining their finds from this region, Yoshitsugu’s team concluded that what they had were North American dinosaurs. This meant that dinosaurs –like humans– were capable of living across an incredibly wide range of environments, with different climates, food sources and dangers.  

“People often picture the dinosaurs as having small brains,” Yoshitsugu amends. “But the same dinosaurs lived in very different places. They were intelligent and adaptable.”

This adaptability has one very important consequence: the meteor that hit the Earth 66 million years ago is unlikely to have been solely responsible for the dinosaurs’ demise. 


As it crashed into the Earth, the meteor raised a huge cloud of dust that blotted out the sun, creating a cold and dark new world. Had the dinosaurs been cold blooded, such a massively extended winter would have proved fatal, but Yoshitsugu’s research supports suggestions that not only were the dinosaurs warm blooded, they were also equipped to deal with this new harsh environment. 

This doesn’t mean the meteor failed to have an impact on the dinosaur population. With a drop in sunlight, the vegetation would have decreased, reducing the food source for herbivores and subsequently, the carnivores who fed on them. Yet, it does appear that the dinosaurs could have survived if this were the only disaster.  

So what else could have happened to destroy this species? 


Yoshitsugu pointed to two other possible causes. The first is major volcanic activity in the Deccan Traps, a vast volcanic region that covers large parts of India. In a series of prolonged eruptions that persisted for thousands of years, lava poured over hundreds of miles of land, and the ejected carbon dioxide and sulphur dioxide combined to acidify the oceans. The impact on life was catastrophic and longer lasting than that of the meteor.

The second possible cause of dinosaur extinction is linked with the drop in sea levels that occurred towards the end of the dinosaurs’ reign. As the water receded, land bridges formed between the continents, allowing dinosaurs to travel freely across the globe. The result was a drop in genetic diversity among the dinosaur species. The same effect is seen today when a foreign breed of animal is introduced into a new ecosystem. In the UK, the introduction of the eastern grey squirrel has driven the native red squirrel to the point of extinction: the grey squirrel can digest local food sources more effectively and also transmits a disease fatal to red squirrels, driving down their numbers. This uniformity in dinosaur genetics made them highly susceptible to eradication, since a physiological or environmental impact would equally affect the entire, near-identical, population. 

It was most likely a combination of these three catastrophes that caused the death of the dinosaurs, Yoshitsugu concludes. 


However, dinosaurs were not the only giants walking the Earth 230 million years ago. Living alongside these warm blooded creatures were genuine cold blooded reptiles: the giant crocodiles. With limbs that stuck out sideways from the body rather than straight beneath it, these huge beasts were a group distinct from the dinosaurs. Exactly how these monsters dwelt together is currently being explored in the new exhibit at the Hokkaido University Museum, ‘Giant Crocodiles vs Dinosaurs’, which opened last Friday. The exhibit includes full-sized casts of both adult and juvenile crocodiles and dinosaurs, describing where they lived and where they overlapped. There is also a movie sequence Yoshitsugu was involved in creating that shows a simulated fight between a giant crocodile and a T. rex over prey. 

This unique view of the world of the dinosaurs is open until October 27th. We hope to see you there!


[Photo captions from top to bottom: (1) Yoshitsugu Kobayashi standing in front of the skeleton of an Afrovenator dinosaur. (2) The skull of a Triceratops. (3) The carnivorous Afrovenator dinosaur. (4) The herbivorous Camarasaurus. (5) The skull of a giant crocodile that lived alongside the dinosaurs.]