How do you vote?

Spotlight on Research is the research blog I author for Hokkaido University, highlighting different topics being studied at the University each month. These posts are published on the Hokkaido University website.

Your pen hovers above the list of names printed on the ballot slip. Do you choose your favourite candidate, or opt for your second choice because they stand a stronger chance of victory?

It is this thought process that drives the curiosity of Assistant Professor Kengo Kurosaka in the Graduate School of Economics.

“When I first started school, we often had to vote for choices in our homeroom,” he explains. “I felt at the time this was not always done fairly! Perhaps that inspired me.”

Assistant Professor Kengo Kurosaka

According to the theorem developed by the American professors Allan Gibbard and Mark Satterthwaite, it is impossible to design a reasonable voting system in which everyone will simply declare their first choice [1]. Instead, people base their selection not only on their own preferences, but on how they believe other voters will act.

Such ‘strategic voting’ can take a number of forms. Voters may opt to help secure a lower-choice candidate if they believe their top choice has little chance of success. Alternatively, they may abstain from voting altogether if they perceive their first choice has ample support and their contribution is not needed. Voters can also be influenced by the existence of future polls, when the topic they are voting on is part of a sequence of ballots for a single event.

One example of sequential balloting was the construction of the Shinkansen line on Japan’s southern island of Kyushu. The extension of the bullet train from Tokyo was performed in three sections: (1) Tokyo to Hakata, (2) Hakata to Shin Yatsushiro and (3) Shin Yatsushiro to Kagoshima. However, rather than voting for the segments sequentially as (1) -> (2) -> (3), the northernmost segment (1) was proposed first, followed by segment (3) and then finally segment (2). Kengo can explain this seemingly illogical ordering by considering the effect of strategic voting.

The Shinkansen line through Kyushu

In his hypothesis, Kengo made three reasonable assumptions: Firstly, that the purpose of the Shinkansen line is the connection to Tokyo. Without this, residents would not gain any benefit from the line’s construction. The second assumption was that if the Shinkansen line was not built, the money would be spent on other worthwhile projects. Finally, that the order of the voting for each segment of line was known in advance and voted for individually by the Kyushu population.

If the voting occurred on segments running north to south, (1) -> (2) -> (3), Kengo argues that none of the Shinkansen line would have been constructed. The issue is that people who already have a connection to Tokyo have no reason to vote for the line extending further south. This means that once the line had been constructed as far as Shin Yatsushiro in segment (2), there would not be enough votes to secure the construction of the final extension to Kagoshima. The residents living in the Kagoshima area will anticipate this problem. They will therefore vote against the construction of line segments (1) and (2), knowing that these will never connect them to Tokyo. Without their support, segment (2) will not get built. This in turn will be anticipated by the Shin Yatsushiro residents, who will then also not vote for segment (1), knowing that it cannot result in a connection to the capital. The result is that none of the three line segments secures enough votes to be constructed.

The only way around this, Kengo explains, is to vote on the middle section (2) last. The people living around Shin Yatsushiro know that unless they vote for segment (3), the Kagoshima population will not support their line in segment (2). They therefore vote for section (3), and then both they and the Kagoshima population vote for the final middle piece, (2). Predicting the success of this strategy, everyone votes for segment (1). The people who do vote against the line are therefore the ones who genuinely do not care about the connection to Tokyo.
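The backward-induction logic above can be sketched in a few lines of Python. This is a toy model of my own construction, not Kengo's actual formulation: one voter per region (Hakata, Shin-Yatsushiro, Kagoshima), a shared cost per built segment, a benefit B for being connected to Tokyo with B greater than the total cost, and voters who look ahead to the final outcome before casting each vote.

```python
B, C = 10, 1   # benefit of a Tokyo connection; shared tax cost per built segment
NEEDS = {"Hakata": {1}, "Shin-Yatsushiro": {1, 2}, "Kagoshima": {1, 2, 3}}

def utility(region, built):
    # A region gains B only if every segment linking it to Tokyo is built;
    # everyone pays the cost of each segment that gets constructed.
    return (B if NEEDS[region] <= built else 0) - C * len(built)

def outcome(order, built=frozenset()):
    """Segments built when each vote is cast with perfect foresight."""
    if not order:
        return built
    seg, rest = order[0], order[1:]
    if_pass = outcome(rest, built | {seg})   # final result if this vote passes
    if_fail = outcome(rest, built)           # final result if it fails
    yes = sum(utility(r, if_pass) > utility(r, if_fail) for r in NEEDS)
    return if_pass if 2 * yes > len(NEEDS) else if_fail

print(outcome((1, 2, 3)))  # voting north-to-south: nothing gets built
print(outcome((1, 3, 2)))  # all three segments built when (2) is voted on last
```

Run with these numbers, the north-to-south ordering (1), (2), (3) unravels exactly as described in the text, while the ordering actually used, (1), (3), (2), builds the whole line.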

Kengo’s theory works well for explaining why the voting order for the Shinkansen line was the best way to create a fair ballot. However, it is hard to scientifically test universal predictions for such strategic voting, since it would be unethical to ask voters to reveal how they voted after a ballot. To circumvent this problem, Kengo has been designing laboratory experiments that mimic the voting process. His aim is to understand not just how sequential balloting affects results, but the overall impact of strategic voting.

In eight sessions, each attended by 20 students, Kengo presents the same problem 40 times in succession. The students are divided into groups of five, denoted by the colours red, blue, yellow and green.

Voting experiment: students are assigned to a group and gain different point scores depending on which ‘candidate’ wins.

They are offered the chance to vote for one of four candidates: A, B, C or D. Students in the red group will receive 30 points if candidate A wins, 20 points if candidate B wins, 10 for candidate C and nothing if candidate D is selected. The other groups each have different combinations of these points, with candidate B being the 30-point favourite for the blue group, and candidates C and D being the highest scorers for the yellow and green groups respectively. If each student simply voted for the candidate that would give them the highest score, the poll would be a draw, with each candidate receiving five votes. But this is not what happens.
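The four-way tie under sincere voting is easy to verify. In the sketch below, the red group's points come from the article; the other three rows are my assumption of a simple cyclic shift, consistent only with B, C and D being the favourites of the blue, yellow and green groups:

```python
# Hypothetical payoff tables: points each group earns per winning candidate.
# Red is given in the article; the other rows are an assumed cyclic shift.
payoffs = {
    "red":    {"A": 30, "B": 20, "C": 10, "D": 0},
    "blue":   {"A": 0,  "B": 30, "C": 20, "D": 10},
    "yellow": {"A": 10, "B": 0,  "C": 30, "D": 20},
    "green":  {"A": 20, "B": 10, "C": 0,  "D": 30},
}

votes = {"A": 0, "B": 0, "C": 0, "D": 0}
for group, table in payoffs.items():
    favourite = max(table, key=table.get)  # each student's top-scoring candidate
    votes[favourite] += 5                  # five students per group vote sincerely

print(votes)  # every candidate ends up with exactly five votes
```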

When confronted with the four options, the students opt for different schemes to attempt to maximise their point score. One choice is simply to vote for the highest-point candidate. However, a red group student may instead vote for the 20-point candidate B, in the hope that this would break the tie and promote this candidate to win. While candidate B is not as good as the 30-point candidate A, it is preferable to either of the lower-scoring candidates C or D winning.

Since the voting is conducted multiple times, students will also be influenced by their past decisions. If a vote for candidate A was successful, then the student is more likely to repeat this choice for the next round. Then there are the students who attempt to allow for all the above scenarios, and make their choice based on a more complex set of assumptions.

This type of poll mimics that used in political elections and, interestingly, the outcome in that case is predicted by ‘Duverger’s Law’: a principle put forward by the French political scientist Maurice Duverger. Duverger claimed that the case where a single winner is selected via majority vote strongly favours a two-party system. So no matter how many candidates are in the poll initially, most of the votes will go to only two parties. To support a multi-party political system, a structure such as proportional representation needs to be introduced, where votes for non-winning candidates can still result in political influence.

Duverger’s Law appears to be supported by political situations such as those in the United States, but can it be explained by the strategic behaviours of the voters? By constructing the poll in the laboratory, Kengo can produce a simplified system where each voter’s first choice is clear and influenced only by their strategic selections. What he found is that the result followed Duverger’s Law, with the four candidates reduced to two clear choices. Kengo is clear that this does not prove Duverger’s Law is definitely correct: the laboratory situation, with the voters drawn from a very specific demographic, does not necessarily translate accurately to the real world. However, if the principle had failed in the laboratory, it would have shown that strategic voting cannot be the only process at work.

An overall goal for Kengo’s work is to predict the effect of small rule changes in the voting process, such as the order of voting for segments of a Shinkansen line or the ability to vote for multiple candidates in an election. This allows such adjustments to be assessed, revealing who would be most likely to benefit. Such information can be used to make a system fairer or indeed, to influence the result.

So next time you are completing a ballot paper, remember the complex calculation that your decision is about to join.

[1] The word ‘reasonable’ here stands for a set of formal properties that the voting system must have for the Gibbard-Satterthwaite theorem to apply. However, these are standard in most situations.

Living on an edge: how much does a planetary system’s tilt screw up our measurements?

Journal Juice: summary of the research paper, 'On the Inclination and Habitability of the HD 10180 System',  Kane & Gelino, 2014. (Shared also on Google+)

This animation shows an artist's impression of the remarkable planetary system around the Sun-like star HD 10180. Observations with the HARPS spectrograph, attached to ESO's 3.6-metre telescope at La Silla, Chile, have revealed the definite presence of five planets and evidence for two more in orbit around this star.

HD 10180 is a Sun-like star with a truckload of planets. The exact number in this ‘truckload’, however, is slightly more uncertain. The problem is that we haven’t seen these brave new worlds cross directly in front of their star, but have found them by looking at the wobble they produce in the star’s motion. As a planet orbits its stellar parent, its gravity pulls on the star to make it move in a small, regular oscillation as the planet alternates from one side of the star to the other. By fitting this periodic wobble, scientists can estimate the planet’s mass and location.

The trouble is that when there is a truckload of planets, there are multiple possible fits to the star’s motion, and each of these yields different answers for the properties of the planetary system.

In 2011, a paper by Lovis et al. determined that there were 7 planets orbiting HD 10180, labelled HD 10180b through to HD 10180h. However, they were uncertain about the existence of the innermost ‘b’ planet, and their model made a number of inflexible assumptions about the planets’ motions.

In this paper, authors Kane & Gelino revisit the system. One of the constraints they remove from the Lovis model is the insistence that a number of the planets sit on circular orbits. Instead, the authors allow the planets to potentially all follow squished elliptical paths around their star. In our own Solar System, the Earth has an almost circular orbit around the Sun, but Pluto does not, moving in a squashed circle of an orbit.

The result of this new model is a six planet system, where the dubious planet ‘b’ is removed. Two other planets, ‘g’ and ‘h’, also have their orbits changed, with ‘g’ now moving on a more elliptical path. This is particularly interesting since planet ‘g’ was thought to be in the habitable zone: the region around the star where the radiation levels would be right to support liquid water. How does g’s new orbit change things?

Before charging off in that direction, the authors ask a second important question: what is the inclination of the planetary system? Are we looking at it edge-on (inclination angle, i = 90 degrees),  face-on (i = 0) or something in between (i = ??) ?

This question is important since it affects the estimated mass of the planets. The more face-on the planetary system is with respect to our view on Earth, the larger the mass of the planets must be to produce the observed star wobble. Unfortunately, inclination is devilishly hard to determine without being able to watch the planets pass in front of their star. 

While it was not possible to view the inclination directly, the authors ran simulations of the planet system’s evolution to test out different options. Since the planets’ masses change with the assumed inclination, the gravitational interactions between the worlds also change. Some of these combinations are not stable, kicking a planet out of its orbit. Unless we got very lucky with regards to when we observed the planets, the chances are these unstable configurations are not correct. This allows scientists to limit the inclination possibilities. The simulations suggested that an angle less than 20 degrees was not looking at all good for planet ‘d’.

With the new model and a set of different inclinations, the authors then returned to the most exciting (from the point of view of a new Starbucks chain) planet ‘g’. Even with its new squished-circle orbit, planet ‘g’ spends 89% of its time in the main habitable zone. The remaining 11% is within the more optimistic inner edge for the habitable zone, whereby the temperatures might be able to support water for a limited epoch of the planet’s history. This is a pretty promising orbit, since there is evidence that a planet’s atmosphere might be able to redistribute heat to save it during the time it spends in a rather too toasty location.

However, planet ‘g’ has other problems.

If the planet system is edge-on, the mass of planet ‘g’ is 23.3 x Earth’s mass with a rather large radius of 0.5 x Jupiter’s. Tilting round towards face-on at i = 10 degrees (bye bye planet ‘d’) increases that still further to 134 Earth masses and the same radius as Jupiter.
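The scaling behind these numbers is worth making explicit: a wobble measurement only pins down the minimum mass m sin i, so the true mass grows as 1/sin i as the system tilts away from edge-on. A quick check against the article's figures for planet ‘g’:

```python
import math

m_sin_i = 23.3   # planet g's minimum (edge-on) mass, in Earth masses

def true_mass(inclination_deg):
    """True mass implied by a measured m*sin(i), for a given inclination."""
    return m_sin_i / math.sin(math.radians(inclination_deg))

print(round(true_mass(90), 1))  # 23.3 -> edge-on: the minimum mass
print(round(true_mass(10)))     # 134  -> nearly face-on, matching the article
```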

All options, then, suggest promising planet ‘g’ is not a rocky world like the Earth, but a gas giant. The best hope for a liveable location would therefore be an Ewok-infested moon. Yet, even here the authors have doubts. Conditions on a moon are affected both by the central star and by heat and gravitational tides from the planet itself. Normally, these are small enough to forget compared to the star’s influence, but with the planet skirting so close to the star for 11% of its orbit, this may be sufficient to give a moon some serious dehydration problems.

The upshot is that the inclination of the planetary system’s orbit is vitally important for determining the masses of the member planets and that HD 10180g is worth watching for moons, but probably not ready for a Butlins holiday resort. 

The microscope that can follow the fundamentals of life


Professors Bi-Chang Chen and Peilin Chen describe their research. Left: (anti-clock-wise from bottom) myself, Professor Peilin Chen, Professor Bi-Chang Chen and Professor Nemoto. Right: Professor Peilin Chen (left) and Bi-Chang Chen.

“Everyone wants to see things smaller, faster, for longer and on a bigger scale!” Professor Bi-Chang Chen exclaims. 

It sounds like an impossible demand, but Bi-Chang may have just the tool for the job.

Professor Bi-Chang Chen and his colleague, Professor Peilin Chen, are from Taiwan’s Academia Sinica. Their visit to Hokudai this month was part of a collaboration with Professors Tomomi Nemoto and Tamiki Komatsuzaki in the Research Institute for Electronic Science. The source of the excitement is Bi-Chang’s microscope design: a revolutionary technique that can take images so fast and so gently, it can be used to study living cells.

The building blocks of all living plants and animals are their biological cells. However, many aspects of how these essential life-units work remain a mystery, since we have never been able to follow individual cells as they evolve.

The problem is that cells are changing all the time. Like photographing a fast-moving runner, an image of a living cell must be taken very quickly or it will blur. However, while a photographer would use a camera flash to capture a runner, increasing the intensity of light on the cells knocks them dead.

Bi-Chang’s microscope avoids these problems. The first fix is to reduce unnecessary light on the parts of the cell not being imaged. When you look down a traditional microscope, the lens is adjusted to focus at a given distance, allowing you to see different depths in the cell clearly. A beam of light then travels through the lens parallel to your eye and illuminates the sample. The problem with this system is that if you are focusing on the middle of a cell, the front and back of the cell also get illuminated. This both increases the blur in the image and also drenches those extra parts of the cell in damaging light. With Bi-Chang’s microscope, the light is sent at right-angles to your eye, illuminating only the layer of the cell at the depth where your microscope has focused.

This is clever, but it is not enough for the resolution Bi-Chang had in mind. The shape of a normal light beam is known as a ‘Gaussian beam’ and is actually too fat to see inside a cell. It is like trying to discover the shape of peanuts by poking in the bag with a hockey stick. Bi-Chang therefore changed the shape of the light so it became a ‘Bessel beam’. A cross-section of a Bessel beam looks like a bullseye dart board: it has a narrow bright centre surrounded by dimmer rings. The central region is like a thin chopstick and perfect for probing the inside of a cell, but the outer rings still swamp the cell with extra light. 

Bi-Chang fixed this by using not one Bessel beam, but around a hundred. Where the beams overlap, the resultant light is found by adding the beams together. Since light is a wave with peaks and troughs, Bi-Chang was able to arrange the beams so the outer rings cancelled one another, a process familiar to physics students as ‘destructive interference’. This left only the central part of the beams which could combine to illuminate a thin layer of the cell at the focal depth of the microscope. 

Not only does this produce a sharp image with minimal unnecessary light damage, but the combination of many beams allows a wide region of the sample to be imaged at one time. A traditional microscope must move point-by-point over the sample, taking images that will all be at slightly different times. Bi-Chang’s technique can take a snapshot of a plane covering a much wider area at a single moment.

To his surprise, Bi-Chang also found that this lattice of light beams (known as a lattice light sheet microscope) made his cells healthier. In splitting the light into multiple beams, the intensity of the light in each region was reduced, causing less damage to the cells. 

The net result is a microscope that can look inside cells and leave them unharmed, allowing it to take repeated images of a cell changing and dividing. By rapidly imaging each layer, a three-dimensional view of the cell can be put together. Such a dynamical view of a living cell has never been achieved before, and it opens the door to a far more detailed study of the fundamental workings of cells. Applications include understanding the triggering of cell division in cancers, how cells react to external stimuli, and message passing in the brain.

“We don’t know how powerful this technique is yet,” explains Peilin Chen. “We don’t know how far we can go.”

This is a question Tomomi Nemoto’s group are eager to help with. In collaboration with Hokudai, Bi-Chang and Peilin want to see if they can scale up their current view of a few cells to a larger system. 

“We’d like to extend the field of view and if possible, look at a mouse brain and the neuron activity,” Bi-Chang explains. “That is our next goal!”

It is an exciting possibility and one that may be supported by a new grant Hokudai has received from the Japanese Government. Last summer, Hokudai became part of the ‘Top Global University Project’, with a ten year annual grant to increase internationalisation at the university. Part of this budget will be used in research collaborations to allow ideas such as Bi-Chang’s microscope to be combined with projects that can put this new technology to use. Students at Hokudai will also get the opportunity to take courses offered by guest lecturers from around the world. These are connections that will make 2015 the best year yet for research. 

Why you’re wired to make poor choices

Let’s play a game. I am going to flip a coin. If you guess correctly which side it will land, I will give you $1. If you get it wrong, you will give me $1. We will play this game 100 times. 

Since we are using my coin, it may be weighted to preferentially land on one side. I am not going to tell you which way the bias works, but after 20-30 flips, you will notice that the coin lands with ‘heads’ facing up three times out of four.

What are you going to do?

This was the question posed by Professor Andrew Lo from the Massachusetts Institute of Technology, during his talk last week for the Origins Institute Public Lecture Series. Lo’s background is in economics. It is a research field with a problem. 

In common with subjects such as anthropology, archaeology, biology, history and psychology, economics is about understanding human behaviour. While the findings in these areas may not always be relevant to one another, they should not be contradictory. For example, studying mating behaviours in anthropology should be consistent with the biological mechanics of human reproduction. The problem is that economics is full of contradictions. 

In the example of the weighted coin, the financially sound solution made by a hypothetical ‘Homo economicus’ would be to always choose ‘heads’. This would correctly predict the coin toss 75% of the time and net a tidy profit. However, when this choice is given to real Homo sapiens, people randomise their choices. In fact, humans match the probability of the coin weighting, selecting heads 75% of the time and tails 25% of the time. Moreover, if the coin is surreptitiously switched during the game to one weighted 60% heads, 40% tails, then the contestants change their guesses to also select heads 60% of the time.
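The cost of this strategy is easy to work out. Always calling heads on a 75%-heads coin is right 75% of the time, while probability matching (calling heads 75% of the time, independently of the flip) is right only 0.75² + 0.25² = 62.5% of the time. Over the 100 one-dollar flips of the game above:

```python
p = 0.75       # probability the coin lands heads
flips = 100

# Always call heads: correct whenever the coin shows heads.
always_heads = p
# Probability matching: call heads with probability p, independently of the flip.
matching = p * p + (1 - p) * (1 - p)

def expected_winnings(p_correct):
    # win $1 when correct, lose $1 when wrong
    return flips * (p_correct - (1 - p_correct))

print(expected_winnings(always_heads))  # 50.0 dollars
print(expected_winnings(matching))      # 25.0 dollars
```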

Economics tells us this is not the choice we should be making. So why do we do it?

This selection behaviour holds even in much more serious situations. During World War II, the U.S. Army Air Forces organised bombing missions over Germany. These were incredibly dangerous flights with two main risks: the plane could receive a direct hit and be shot down, or it could be pelted with shrapnel from anti-aircraft artillery. In the former case, the best chance of survival would be a parachute. In the latter, it would be safest to wear a metal-plated flak jacket. Due to the weight of the flak jacket, it was not possible for pilots to carry both pieces of equipment. The army therefore gave them a choice of items.

The chances of being hit by shrapnel were roughly three times higher than being shot down. Despite this, pilots selected the flak jackets only 75% of the time. When the army realised this was delivering a high death toll to their pilots, they tried to mandate the flak jacket selection. However, the pilots then refused to volunteer for such missions, claiming that if they were taking their lives into their own hands, they ought to be able to pick the kit they wanted.

What is perhaps even stranger is that humans are not the only species that does this probability matching. Similar behaviour is seen in ants, fish, pigeons, mice and any other species that can select between options A and B. This can be tested on fish surprisingly simply. If you feed a tank of fish from the left-hand corner 75% of the time and from the right-hand corner 25% of the time, then 25% of the fish will swim hopefully to the right side of the tank when it is time to be fed.

Lo’s feeling was that this common probability matching trait must have stemmed from a survival advantage. He therefore turned from economics to evolution. 

Lo admitted that crossing from one field of research to another can be daunting. He humorously recalled an attempt by his university to bring together researchers in different areas during an organised dinner.

“We were speaking different languages! I never went back!” he told us.

Despite this disastrous first attempt, Lo has found himself drawn into projects that span fields. 

“Interdisciplinary work comes about naturally when you’re trying to solve a problem,” he explained. “Problems don’t care about traditional field boundaries.”

To explore the possible survival advantages of a probability matching instinct, Lo considered a simple choice for a creature: where to build a nest? 

In Lo’s system there are two choices for nest location: a valley floor by a river or high up on a plateau. While it is sunny, the valley floor provides the best choice. It is shaded with a water source, allowing the creature to successfully breed. Meanwhile, the plateau is too hot and any offspring will die. However, when it rains, the situation is reversed: the valley floods and kills all the creatures, while the plateau provides a safe nest.

If it is sunny 75% of the time, which is the best choice? 

Lo ran this experiment as a computer model for five sets of imaginary creatures. One group (the ‘Homo-economist’ creatures) always picked the valley. The second group selected the valley 90% of the time, the third 75% of the time, and the fourth and fifth groups 50% and 20% of the time.

At the beginning of the simulation, the ‘Homo-economists’ had a population boom, successfully producing more children than any other group. But then it finally rained in the valley. All of this group were killed, leaving only the groups which had some creatures on the plateau. After 25 generations of creatures (corresponding to 25 chances of rain or shine), the group which selected the valley 75% of the time was in the lead. In short, while the valley was the better bet for any one individual, the species did better if it probability matched.
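Lo's result has a neat closed form. If a group nests in the valley with probability p, only that fraction reproduces in a sunny generation and only 1 − p in a rainy one, so (in a simplified version of the model, my own reduction rather than Lo's exact code) the long-run growth factor per generation is proportional to p^0.75 · (1 − p)^0.25 when the sun shines 75% of the time. Among the five strategies, this is maximised exactly at p = 0.75, the probability-matching choice:

```python
def growth_factor(p, p_sun=0.75):
    """Long-run per-generation growth factor (up to a constant) for a group
    that nests in the valley with probability p."""
    return p ** p_sun * (1 - p) ** (1 - p_sun)

strategies = [1.0, 0.9, 0.75, 0.5, 0.2]   # the five groups in Lo's model
for p in strategies:
    print(p, round(growth_factor(p), 3))

best = max(strategies, key=growth_factor)
print(best)  # 0.75: probability matching wins in the long run
```

Note that the always-valley group scores exactly zero: one rainy generation wipes it out, just as in the simulation.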

Lo’s model is simple, but it does offer an explanation for why humans may not make the most logical decision when confronted with two choices: it is an instinct that helped the human race survive. Perhaps more interestingly, it may even explain the origins of intelligence.

Lo next posed the question: What is intelligence?

He admitted this is a difficult quantity to assess, and suggested the reverse is much easier to recognise: What is stupidity?

To demonstrate this, Lo presented three pictures. The first showed two men listening to the radio in a swimming pool. Said radio was attached to the electricity supply via a cable held out of the water on a floating flip-flop shoe. In the second image, a man was holding open a crocodile’s jaws so he could take a photograph within its gaping maw. The final photograph depicted a man working underneath his truck, which was raised at a precarious 120 degrees using two planks of wood. 

This caused laughter from the audience. We recognise this as stupid, Lo suggested, because we know these people are about to take themselves out of the gene pool. If intelligence is the opposite of stupidity, it must mean intelligent behaviour is linked with our chances of survival. By evolving to maximise the probability of our species surviving, we are selecting for our own impressive IQ. 

The Origins Institute is based at McMaster University in Canada. Their public lectures are free to attend and live recordings can be found on their website.

The search for our beginnings

This post was also featured on Physics Focus.


“Volcanologists go to volcanoes for their field work, but meteoriticists never go to asteroids!”

Shogo Tachibana is an associate professor in the Department of Natural History Sciences and one of the key scientists on the team behind Japan’s new Hayabusa2 mission. As he talked, he plucked from a shelf in his office a small grey rock embedded in a cotton-wool-filled container. It was a fragment of a meteorite that had fallen to Earth in Australia in 1969.

Appearing as a bright fireball near the town of Murchison, Victoria, the later named ‘Murchison meteorite’ became notorious not just for its size, but because embedded in its rocky matrix were the basic building blocks for life. The amino acids found in the Murchison meteorite were the first extra-terrestrial examples of these biological compounds, and their discovery threw open the door to the possibility that the seeds for life on Earth came from space.

“Meteoriticists can only pick up stones, observe the asteroids and imagine what happened,” Tachibana pointed out, tapping the meteorite. “But by visiting the asteroids we can get a much clearer idea of our Solar System’s evolution.”

It is this goal that drives the Hayabusa2 mission. Translating as ‘peregrine falcon’ in English, Hayabusa2 will chase down an asteroid, before performing three surface touchdowns to gather material to return to Earth. It is a project that recalls the recent Rosetta mission by the European Space Agency, and with good reason, since the missions have a common goal.

All life on Earth requires water, yet in the most favoured planetary formation models, our world forms dry. To produce the environment for life, water must have been delivered to Earth from another source. One possible source is the comets, which Rosetta visited. A second is the asteroids.

Situated primarily in a band between Mars and Jupiter, asteroids are left-over parts that never made it into a planet. Broadly speaking, they come in two main classes. The ‘S-type’ asteroids match the most common meteorites that fall to Earth. During their lifetimes, they have been heated and their original material has changed. The second type are the ‘C-type’ asteroids and these are thought to have altered very little since the start of the Solar System 4.56 billion years ago. It is this early remnant of our beginning that Hayabusa2 is going to visit.

The asteroid target is ‘1999 JU3’: an alphabet-soup name derived from its date of discovery on the 10th May 1999. Observations of 1999 JU3 have tantalisingly hinted at the presence of clays that could only have formed with water. Since 1999 JU3 has a meagre size of approximately 1 km across, it does not have the gravity to hold liquid water. Instead, it must have formed from parts of a larger asteroid. Such a history fits with 1999 JU3’s orbit. The asteroid is termed a ‘near-Earth asteroid’, on an unstable orbit between Mars and the Earth. Since it cannot maintain its current orbit for more than about 10 million years, it must have originated from a different location. One of Hayabusa2’s missions is to investigate this possibility by examining the presence of radioactive elements in the asteroid’s body. Radioactive atoms decay into other elements over time, but radiation from the Sun can also trigger their formation. As the asteroid has changed its path, the abundance of radioactive elements will have shifted depending on whether it moved closer to or further from the Sun. By measuring the quantities of these elements, Hayabusa2 can glean a little of the asteroid’s past.

The delivery of water is not the only reason why asteroids may be important. Discoveries from meteorites such as the one that fell in Murchison have revealed a connection between the evidence of water and the presence of amino acids. Even more excitingly, these reactions with water appear to be correlated with the production of left-handed amino acids. Like your left and right hands, amino acids can exist as mirror images of one another. While laboratory experiments produce an equal number of left- and right-handed molecules, life on Earth strongly favours the left-handed version. How this preference came about is not clear, but it is possible the selection began on the asteroid that brought this organic material to Earth. When the samples from Hayabusa2 are analysed, scientists will be holding the final evolution of organic matter on the asteroid and possibly, the beginning of our own existence.

Hayabusa2 launched from the Japanese Aerospace Exploration Agency’s (JAXA) launch pad on the southern tip of Japan at the start of this month. It will reach 1999 JU3 by mid-2018 and spend the next 18 months examining the asteroid before returning to Earth in December 2020. During its visit, three touchdowns on the asteroid’s surface are planned to gather material to return to Earth. The first touchdown will occur after a thorough examination to search for the presence of hydrated clays. Sweeping down to the surface, Hayabusa2 will fire a 5 g bullet at the asteroid to kick up material to be gathered by the spacecraft. It will also dispatch a lander called ‘MASCOT’ along with a trio of small rovers. MASCOT (Mobile Asteroid Surface SCout) was developed by the German Aerospace Center and the French space agency, the team also behind the successful Philae lander for Rosetta. MASCOT has a battery designed to last for 15 hours and the ability to make at least one move to a second location. The destination for the trio of rovers will be decided once the initial data from Hayabusa2 reaches Earth.

After its first landing, Hayabusa2 will drop to the surface another two times. Since the asteroid is thought to be formed from fragments of a larger body, different surface locations may have very different properties. Prior to the third descent, Hayabusa2 will launch an impactor carrying 4.5 kg of explosives at the asteroid’s surface. During the collision, Hayabusa2 will hide behind the asteroid to avoid damage, but will send out a small camera to watch the impact. While the resultant crater size will depend on the structure of 1999 JU3, the maximum extent is expected to be around 10 m in diameter. Hayabusa2 will then drop down to gather its final sample from this area, which will consist of material from below the surface of the asteroid.

The gathering of sub-surface material is particularly important for investigating 1999 JU3’s formation history. Exposure to cosmic and solar rays can cause ‘space weathering’ that changes the surface of the asteroid. This is something Japanese researchers are well aware of, since this incredibly ambitious mission has actually been performed before.

As the name implies, Hayabusa2 is the second mission of its kind. In 2003, the first Hayabusa spacecraft began its seven-year round-trip journey to the asteroid Itokawa. Unlike 1999 JU3, Itokawa is an S-type asteroid, and the evidence from Hayabusa suggests it had previously been heated to temperatures of 800 degrees Celsius. While the mission to Itokawa was not designed to unravel the Solar System’s early formation, it did reveal a large amount of information about the later evolution of space rocks, including the effects of weathering. In addition to the science, Hayabusa’s journey was also a lesson in what was needed to pull off such an ambitious project.

While Hayabusa was ultimately successful, not everything went according to plan. During its landing sequence to the asteroid’s surface, one of Hayabusa’s guidance lasers found an obstacle at the landing site. This should have terminated the operation, but rather than returning to space, Hayabusa continued with its descent. In a problem that the Rosetta lander, Philae, would later mimic, Hayabusa bounced. It returned to the asteroid’s surface and stayed there for half an hour, rather than the seconds intended to gather a sample.

“This was really dangerous,” Tachibana explains. “On the surface of the asteroid, the temperature is very high and it’s not good for the spacecraft to be there long.”

Despite this, Hayabusa survived and was able to make a second landing. Yet even here there were issues, since neither landing had deployed the bullets needed to stir up the surface grains. This left scientists unsure how much material had been successfully gathered. Hayabusa’s lander also met an unfortunate end when an automated altitude-keeping sequence mistakenly activated during the lander’s deployment. Missing the asteroid’s surface, the lander could not be held by the rock’s weak gravity and it tumbled into space.

While Hayabusa2’s design strongly resembles its predecessor’s, a number of key alterations have been implemented to avoid these problems happening again.

“We are sure Hayabusa2 will shoot the bullets,” Tachibana says confidently. Then he pauses, “… but just in case, there is a back-up mechanism.”

The back-up mechanism uses teeth on the end of the sample horn that can dig into the asteroid. As the teeth raise the grains on the surface, Hayabusa2 will decelerate, allowing the now faster-moving grains to rise up inside the sample chamber.

The sample jar itself also has a different design from Hayabusa’s, since 1999 JU3 is likely to contain volatile molecules that will be mixed with terrestrial air on the return to Earth if the jar’s seal is not perfect. Tachibana is the Principal Investigator for the sample analysis once Hayabusa2 returns to Earth. In preparation for this, a new vacuum chamber must be set up at JAXA to ensure no terrestrial contamination when Hayabusa2’s precious payload is finally opened.

“It was not easy to get support for a second mission,” Tachibana explains. “JAXA have a much smaller budget than NASA and there are many people who want to do different missions. We had to convince the community and the researchers in various fields how important this project was.”

It is a view shared by the international scientific community, and JAXA have no intention of completing this work alone. In addition to the MASCOT lander’s international origins, NASA’s network of ground stations will help track the spacecraft to ensure continual contact with Earth. When Hayabusa2 returns to Earth, the analysis of the samples will be performed by a fully international team. Hayabusa2 will also be able to compare its results with those from NASA’s new OSIRIS-REx mission, which will launch in 2016 to visit and sample the asteroid ‘Bennu’. Such comparisons are vitally important when samples are this difficult to collect.

“International collaboration is very important for space missions,” Tachibana concludes. “For the future, we don’t want to be closed. We want to make the best team to analyse the samples.”

How not to catch a fish



At the University of the Algarve, Professor Teresa Cerveira Borges is trying to help fishermen to stop catching unwanted fish.

The Algarve stretches along the southern coastline of Portugal. With a warm climate and beautiful beaches, it is one of the most popular holiday destinations in Europe. The region is also famous for fishing, with a lagoon system formed between the mainland and a string of barrier islands creating an ideal natural fish hatchery.

As in Japan, fish are a major part of the local diet. Within the European Union, Portugal is number one for fish consumption, and the sea zone controlled by Portugal (more technically, its ‘Exclusive Economic Zone’ for exploration and marine resource management) is the third largest in the European Union and the 11th largest in the world. The import and export of fish is also a major business, so much so that the country’s most traditional fish —cod— is not found in Portuguese waters but is imported from Norway and Iceland.

However, acquiring the Christmas Bacalhau (the Portuguese word for ‘cod’) is not Teresa’s concern. What she is worried about is the fish that are caught, but then thrown away.

Portuguese fishermen use a variety of different fishing gear in their work. Some —like seine or trammel nets— surround the fish and then draw closed around the catch. Others, like trawl nets, scrape the bottom of the sea and sweep up everything in their path. Longlines consist of a row of hooks baited with a small fish such as a sardine, while traps and pots entice prey into an enclosed space to be easily gathered up.

The problem is that not everything caught is used. As well as the species the fishermen are trying to catch, a large number of other species will be pulled out of the water. This accidental acquisition is known as the ‘by-catch’ and while some of it can be sold, a large fraction has no economic value and is thrown back into the sea.

This so-called ‘discard’ is huge. In the Algarve, studies have identified more than 1,200 species in the trawl catch, but around 70% are thrown away since they lack any commercial value. World data from 1995 estimates the discard amounts to as much as 27 million tonnes of fish per year.

Since the process of being caught is not gentle on the fish, almost all discards are dead when they are tossed overboard. This results in serious ecological impacts, since food chains are strongly affected by the removal of some species and the addition of dead ones. A photograph of Teresa in the 1980s shows her struggling to lift a hake half the length of her body. Such large hake are no longer found in Portugal’s fishing hauls. So much discard also has a more direct economic impact, as fishermen are forced to waste valuable time and boat space sorting through their catch.

Teresa’s work has focussed on minimising the quantity of the discard. Part of this research is technological, investigating better net designs that allow the targeted species to be trapped while other species can escape. One example has been to introduce sparsely placed square-mesh holes into a diamond-mesh net. While the diamond holes draw closed when the net is pulled from the top, the square holes remain open, allowing small, unwanted fish to swim free.

Another part of this work is exploring new ways to use the fish accidentally caught. Teresa has been visiting Hokkaido University these last few weeks in the Department of Natural History Sciences. From one fish-loving nation to another, she has been examining the fish and preparation methods used in Japan and whether these could be used to minimise discards back home. During her stay, she shared her experiences in the ‘Sci-Tech’ talk series; general interest science seminars held in English in the Faculty of Science.

Teresa explained that fish are discarded when there is no commercial value, but the reasons a fish cannot be sold vary. In some cases, the fish is inedible. The boarfish, for example, is small and has a hard outer skin with spines that makes it impossible to eat. For this species, the only possible use is animal meal, but its value would be very low, which makes it economically unattractive.

In other cases, the fish is edible but there is no dietary tradition for that fish locally. Without people being in the habit of eating a particular fish, there is no demand in the marketplace. This can be remedied by finding new ways to prepare different fish, or new markets further afield where the catch could be sold.

When new ways of preparing fish are found, the next step is to educate and entice the consumers. Campaigns focussed around selling a particular species send chefs into markets and schools to demonstrate how to prepare the fish, while web and Facebook pages are set up with recipe ideas. Books are also being produced to showcase the fish in Portuguese waters. These publications come out both in Portuguese and English to attract the attention of the Algarve’s 10 million spring and summer visitors. The result is a world-wide fish-eating initiative!

Teresa also emphasises the importance of working with the fishermen. With a craft steeped in tradition, many fishermen are reluctant to try new techniques or are wary of interference from scientists. Teresa’s department in the Centre of Marine Sciences works hard to encourage a good relationship with the local fishermen, both by explaining their work and listening to what they say. The result has been a strong cooperation in favour of a cleaner catch with less laborious sorting; a win for both fishermen and fish.

So next time you see a fish you’ve never tried before, why not give it a go?

How to stop the Earth slamming into the Sun


Associate Professor Hidekazu Tanaka (right) with graduate student Kazuhiro Kanagawa


4.56 billion years ago, our solar system was a dusty, gaseous disc encircling a young star. Buffeted by the gas, grains smaller than sand began to collide and stick together. Like a massive Lego assembly project, steadily bigger structures continued to connect until they had enough mass to pull in an atmosphere from the surrounding gas. Eventually, the sun’s heat evaporated away the remaining disc gas and our solar system of planets was born.

There is just one issue: As the baby planet reaches the size of Mars (roughly 1/10th of Earth’s mass), the gas disc tries to throw it into the sun.

The problem is gravity. When the juvenile planet is small, its low mass results in a weak gravitational pull. This has very little effect on the surrounding disc of gas and neighbouring rocks. However, once the planet reaches the size of Mars, its gravity starts to tug on everything around it.

Rather like a circular running track, gas closer to the Sun travels around a small orbit, while gas further out has to move a greater distance to perform one complete circuit. This makes the gas between the planet and sun draw ahead of the planet’s position, while the gas further away from the sun is left behind. Once the planet’s gravity becomes large, it pulls on the gas either side of it, trying to slow down the inner gas and speed up the outer gas. In turn, this gas also pulls back on the planet, attempting to both slow it down and speed it up. The disc’s structure gives the outside gas the edge, and it slows the planet down. In more technical language, this exchange of speed is called ‘angular momentum transfer’ and it gives the planet a big problem.

The planet’s distance from the sun is determined by the balance between the sun’s gravitational pull and the centrifugal effect of its orbital motion. The latter is the cause of the outwards push you feel riding on a spinning roundabout. When the planet loses speed, the centrifugal effect weakens and the sun’s gravity pulls it inwards until the two balance again. As the gas disc continues to push on the planet, it begins a death spiral towards the Sun.
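As a simplified sketch of this balance (not taken from the article itself), a circular orbit requires gravity to supply exactly the pull needed for the orbital speed, with $G$ the gravitational constant, $M_\odot$ the sun’s mass, and $m$, $v$, $r$ the planet’s mass, orbital speed and distance:

```latex
\frac{G M_\odot m}{r^2} = \frac{m v^2}{r}
\qquad\Longrightarrow\qquad
v = \sqrt{\frac{G M_\odot}{r}}
```

When the outer gas robs the planet of orbital speed, this balance can only be restored at a smaller $r$, so each loss nudges the orbit inwards, producing the death spiral described above.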

It is stopping the planet’s incineration that interests Associate Professor Hidekazu Tanaka in the Institute for Low Temperature Science. Dubbed ‘planetary migration’, Hidekazu explains that when this problem was first proposed back in the 1980s, nobody believed it.

“The migration happens at very high speed,” he says. “So most scientists thought there must be some mistake.”

Scientists were right to be suspicious. The migration time for an Earth-sized planet is about 100,000 years, far shorter than the time needed to build the planet. For the more massive Jupiter, the problem is even more severe, implying that migration should kill any planet forming in the disc.

The discovery of the first planets outside the solar system shut down this debate. In 1995, the first exoplanet orbiting a regular star was discovered. It was the size of Jupiter, but so close to its stellar parent that its orbit took a mere 4.2 days. Since the original gas disc would not have had enough material at close distances to birth such a massive planet, this hot Jupiter must have formed further out and moved inwards.

With migration accepted as a real process, the task became how to stop it.

“This is still a very difficult problem!” Hidekazu exclaims.

Since migration involves the interaction with the gas disc, it will stop once the sun’s heat has evaporated the last of the gas. But this process cannot happen too early, or giant planets like Jupiter would not have been able to gain their massive atmospheres.

One possibility Hidekazu has proposed is that there could be a secondary force to stop the planet due to the different temperatures in the gas disc.

As the planet’s gravity drags on the disc, it pulls nearby gas towards it. On a larger orbit and therefore lagging behind the planet, the cold outer gas is dragged in to pile up in front of the planet like slow-moving traffic. Its mass pulls the planet forward, causing it to move at a faster speed. This acceleration increases the centrifugal effect, which means the planet must move outwards to balance the gravitational pull of the sun; such an outward push could stop or slow the planet’s deathly migration.

This is just one idea that might turn the tables on the death spiral in our young solar system. One problem with fully understanding this process is the lack of data. It has only been in the last 20 years that we have discovered any planets outside our solar system, and we are still working on observing complete systems around other stars. When we do, Hidekazu’s research group will have much more information with which to match their models.

Meanwhile, we can be grateful that such a catastrophe only occurs during the early planet formation process and by this method or another, the Earth got lucky.

The vacuum cleaner that can tweet it can't fly




Nestled under a desk in the Graduate School of Information Science and Technology is a Roomba robotic vacuum cleaner with a Twitter account. When there is a spillage in the laboratory, one tweet to ‘Twimba’ will cause her to spring to life and clean up the mess.

Twimba’s abilities may not initially seem surprising. While the Twitter account is a novel way of communicating (the vacuum cleaner is too loud when operating to respond to voice commands), robots that we can interact with have been around for a while. Apple iPhone users will be familiar with ‘Siri’, the smartphone’s voice-operated assistant, and we have likely all dealt with telephone banking, where an automatic answering system filters your enquiry.

Yet, Twimba is not like either of the above two systems. Rather than having a database of words she understands, such as ‘clean’, Twimba looks for her answers on the internet. 

The idea that the internet holds the solution for successful artificial intelligence was proposed by Assistant Professor Rafal Rzepka for his doctoral project at Hokkaido University. The response was not encouraging.

“Everybody laughed!” Rafal recalls. “They said the internet was too noisy and used too much slang; it wouldn’t be possible to extract anything useful from such a garbage can!”

Rafal was given the go-ahead for his project, but was warned it was unlikely he would have anything worth publishing in a journal during the degree’s three year time period: a key component for a successful academic career. Rafal proved them wrong when he graduated with the highest number of papers in his year. Now faculty at Hokkaido, Rafal heads his own research group in automatic knowledge acquisition.

“Of course, I’m still working on that problem,” he admits. “It is very difficult!”

When Twimba receives a tweet, she searches Twitter for the appropriate response. While not everyone has the same reaction to a situation (a few may love nothing better than a completely filthy room), Twitter’s huge number of users ensures that Twimba’s most common find is the expected human response. 

For instance, if Twimba receives a tweet saying the lab floor is dirty, her search through other people’s tweets would uncover that dirty floors are undesirable and the best action is to clean them. She can then go and perform this task.

This way of searching for a response gives Twimba a great deal of flexibility in the language she can handle. In this example, it was not necessary to specifically say the word ‘clean’. Twimba deduced that this was the correct action from people’s response to the word ‘dirty’. A similar result would have appeared if the tweet contained words like ‘filthy’, ‘grubby’ or even more colloquial slang. Twimba can handle any term, provided it is wide enough spread to be used by a significant number of people. 
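The popular-vote idea behind this flexibility can be sketched in a few lines. This is a toy illustration, not Rafal’s actual system: the tweets and the action vocabulary below are invented, and a real Twimba would search Twitter itself rather than a hard-coded list.

```python
import re
from collections import Counter

# Toy sketch of the majority-response idea: given a trigger word such as
# 'dirty', count which action people most often mention alongside it.
# The tweet corpus and ACTIONS vocabulary are invented for illustration.

ACTIONS = {"clean", "wash", "scrub", "ignore"}

def most_common_action(trigger, tweets):
    """Return the action word most often co-occurring with `trigger`."""
    counts = Counter()
    for tweet in tweets:
        words = re.findall(r"[a-z]+", tweet.lower())
        if trigger in words:
            counts.update(w for w in words if w in ACTIONS)
    return counts.most_common(1)[0][0] if counts else None

tweets = [
    "This floor is so dirty, time to clean it",
    "Dirty dishes everywhere, need to wash and clean up",
    "My room is dirty but I will just ignore it",
    "Dirty carpet? Clean it before the guests arrive",
]

print(most_common_action("dirty", tweets))  # 'clean' wins the popular vote
```

Because the lookup keys on whatever word appears in the tweet, the same code handles ‘filthy’ or ‘grubby’ without any change, provided enough people use those words.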

Of course, there are only so many problems that one poor vacuum cleaner is equipped to handle. If Twimba receives a tweet that the bathtub is dirty, she will discover that it ought to be cleaned but she will also search and find no examples of robotic Roombas like herself performing this task.

“A Roomba doesn’t actually understand anything,” Rafal points out. “But she’s very difficult to lie to. If you said ‘fly’, she’ll reply ‘Roombas don’t fly’ because she would not have found any examples of Roombas flying on Twitter.”

The difficulty of lying to one of Rafal’s machines, thanks to the quantity of information at their disposal, is the key to their potential. By being able to sift rapidly through a vast number of different sources, the machine can produce an informed and balanced response to a query more easily than a person.

A recent example where this would be useful was during the airing of a Japanese travel program on Poland, Rafal’s home country. The television show implied that Polish people typically believed in spirits and regularly performed exorcisms to remove them from their homes; a gross exaggeration. A smart phone or computer using Rafal’s technique could swiftly estimate the correct figures from Polish websites to provide a real-time correction for the viewer. 

Then there are more complicated situations where tracking down the majority viewpoint will not necessarily yield the best answer. A UFO conspiracy theory is likely to attract many more excitable people agreeing with its occurrence than those discussing more logically why it is implausible. To produce a balanced response, the machine must be able to assess the worth of the information it receives.

A simple way to assess or ‘weight’ sources is to examine their location. Websites with academic addresses ending ‘.edu’ are more trustworthy than those ending ‘.com’. Likewise, .pdf files or those associated with peer reviewed journals have greater standing. Although these rules will have plenty of exceptions, rogue points should be overwhelmed by their correctly weighted counterparts.
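A rough sketch of such weighting might look like the following. The domains, weights, bonus value and example URLs here are all invented for illustration; they are not taken from Rafal’s work.

```python
# Hypothetical source-weighting scheme: score each source by its address,
# then measure a claim's support as the weighted sum of its sources.
# All weights and URLs below are made up for this sketch.

DOMAIN_WEIGHTS = {".edu": 3.0, ".gov": 2.5, ".org": 1.5, ".com": 1.0}
PDF_BONUS = 0.5  # documents resembling peer-reviewed material get extra standing

def source_weight(url):
    """Assign a trust weight to a URL based on its domain and file type."""
    weight = 1.0
    for suffix, w in DOMAIN_WEIGHTS.items():
        if suffix + "/" in url or url.endswith(suffix):
            weight = w
            break
    if url.endswith(".pdf"):
        weight += PDF_BONUS
    return weight

def weighted_support(sources):
    """Sum the weights of all sources supporting a claim."""
    return sum(source_weight(u) for u in sources)

pro = ["http://ufo-fans.com", "http://truthiness.com"]
con = ["http://astronomy.example.edu/report.pdf", "http://science.example.org"]
print(weighted_support(pro), weighted_support(con))
```

Under this scheme two enthusiastic ‘.com’ pages are outweighed by a single academic PDF, mirroring the idea that rogue points are overwhelmed by their correctly weighted counterparts.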

A computer can also check the source of the source, tracing the origin of an article to discover if it is associated with a particular group or company. A piece discussing pollution, for example, might be less trustworthy if the author is employed by a major oil company. These are all tasks that can be performed by a human, but limited time means this is often not practical and can unintentionally result in a biased viewpoint.

With the number of examples available, one question is whether the machine can go a step further than sorting information and actually predict human behaviour. Rafal cites the example of ‘lying with tears’, where a person might cry not through sorrow, but from an ulterior motive. Humans are aware of such deceptions and frequently respond suspiciously to politicians or other people in power when they show public displays of emotion. But can a machine tell the difference?

Rafal’s research suggests it is possible when the computer draws parallels between similar, but not identical, situations that occur around the world. While the machine cannot understand the reasons behind the act, it can pick out the predicaments in which a person is likely to cry intentionally. 

This ability to apply knowledge across related areas allows a more human-like flexibility in the artificial intelligence, but it can come at a cost. If the machine is able to correlate events too freely, it could come to the conclusion that it would be perfectly acceptable to make a burger out of a dolphin. 

While eating dolphins is abhorrent in most cultures, a computer would discover that eating pigs is generally popular. In Japanese, the Kanji characters for dolphin are ‘海’ meaning ‘sea’ and ‘豚’ meaning ‘pig’. A machine examining the Japanese word might therefore decide that dolphins and pigs were similar, and thus that dolphins were a good food source. This means it is important to control how the machine reaches out of context to find analogies.

Such problems highlight the issue of ‘machine ethics’ where a computer must be able to identify a choice based more on morals than pure logic. 

One famous ethical quandary is laid out by the ‘Trolley problem’, first conceived by Philippa Foot in 1967. The situation is of a runaway train trolley barrelling along a railway line towards five people tied to the tracks. You are standing next to a lever that will divert the trolley, but such a diversion will send it onto a side line where one person is also tied. Do you pull the lever?

Such a predicament is one that Google’s driverless car may soon face. How the computer makes that decision, and why, is therefore central to the success of future technology.

“Knowing why a machine makes a choice is paramount to trusting it,” Rafal states. “If I tell the Roomba to clean and it refuses, I want to know why. If it is because there is a child sleeping and it has discovered that noise from the Roomba wakes a baby, then it needs to tell you. Then you can build up trust in its decisions.” 

If we are able to get the ethics right, a machine’s extensive knowledge base could be applied to all manner of situations.

“Your doctor may have seen a hundred cases similar to yours,” Rafal suggests. “A computer might have seen a million. Who are you going to trust more?”

Tracking the past of the ocean’s tiny organisms



While many people head to the ocean to spot Japan’s impressive marine life, graduate student Norico Yamada had a more unusual goal when she joined the research vessel, ‘Toyoshio Maru’ at the end of May. Norico was after samples of sea slime and she’s been collecting these from around the world. 

As part of Hokkaido University’s Laboratory of Biodiversity II, Norico’s research focusses on the evolution of plankton: tiny organisms that are a major food source for many ocean animals.

Plankton, Norico explains, are a type of ‘Eukaryote’, one of the three major super groups into which all forms of life on Earth can be divided. To make it into the Eukaryote group, an organism’s cells must contain DNA bound up in a nucleus. In fact, the esoteric group name is just an allusion to the presence of the nucleus, since ‘karyon’ comes from the Greek word for ‘nut’.

The Eukaryote group is so large, it contains most of what we think of as ‘life’. Humans belong to a branch of Eukaryotes called ‘Opisthokonta’, a category we share with all forms of fungus. This leads to the disconcerting realisation that to a biodiversity expert like Norico, the difference between yourself and a mushroom is small.

Of the five other branches of Eukaryote, one contains all forms of land plants and the sea-dwelling green and red algae. Named ‘Archaeplastida’, these organisms all photosynthesise, meaning that they can convert sunlight into food. To do this, their cells contain ‘chloroplasts’, which capture the sunlight’s energy and change it into nutrients for the plant.

This seems very logical until we reach Norico’s plankton. These particular organisms also photosynthesise, but they do not belong to the Archaeplastida group. Instead, they are part of a third group called ‘SAR’, whose members originally did not have this ability but many acquired it later in their evolution. So how did Norico’s plankton gain the chloroplasts in their cells to photosynthesise?

Norico explains that chloroplasts were initially found only in single-celled bacteria named ‘cyanobacteria’. Several billion years ago, these cyanobacteria began to be engulfed by other cells, living inside their outer walls. The two cells would initially exist independently in a mutually beneficial relationship: the engulfing cell provided nutrients and protection for the cyanobacteria, which in turn provided the ability to use sunlight as an energy source. Over time, DNA from the cyanobacteria became incorporated into the engulfing cell’s nucleus to make a single, larger organism that could photosynthesise. The result was the Archaeplastida group.

To form the photosynthesising plankton in the SAR group, the above process must be repeated at a later point in history. This time, the cell being engulfed is not a simple cyanobacterium but an Archaeplastida member such as a red alga. The cell doing the engulfing was already part of the SAR group, but it then gains the Archaeplastida ability to photosynthesise.

To understand this process in more detail, Norico has been studying a SAR group organism that seems to have had a relatively recent merger. Dubbed ‘dinotom’, the name of this plankton combines its most recent heritage of a photosynthesising ‘diatom’ plankton being engulfed by a ‘dinoflagellate’ plankton. Mixing the names ‘dinoflagellate’ and ‘diatom’ gives you the new name ‘dinotom’. This merger of the two algae is so recent that the different components of the two cells can still be seen inside the dinotom, although they cannot be separated to live independently.

From samples collected from the seas around Japan, South Africa and the USA, Norico identified multiple species of dinotoms. Every dinotom was the product of a recent merger between a diatom and dinoflagellate, but the species of diatom engulfed varied to give different types of dinotoms. As an analogy, imagine if it were possible to merge a rodent and an insect to form a ‘rodsect’. One species of rodsect might come from a gerbil and a bee, while another might be from a gerbil and a beetle. 

Norico identified the species by looking at the dinotom’s cell surface which is covered by a series of plates that act as a protective armour. By comparing the position and shape of the plates, Norico could match the dinotoms in her sample to those species already known. However, when she examined the cells closer, she found some surprises. 

During her South African expedition, Norico had collected samples from two different locations: Marina Beach and Kommetjie. Both places are on the coast, but separated by approximately 1,500 km. An examination of the plates suggested that Norico had found the same species of dinotom in both locations, but when she examined the genes, she discovered an important difference. The engulfed diatoms that had provided the cells with the ability to photosynthesise were not completely identical. In our ‘rodsect’ analogy, Norico had effectively found two gerbil-beetle hybrids, but one was a water beetle and the other a stag beetle. Norico therefore concluded that the Marina Beach dinotom was an entirely separate species from the Kommetjie dinotom.
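The logic of that comparison can be illustrated with a toy example. The gene fragments and the similarity threshold below are entirely invented; they are not Norico’s data or her actual species criteria, only a sketch of how two samples with identical plates can still be told apart genetically.

```python
# Toy sketch: two dinotoms whose plates look identical can still differ in
# the gene sequence of their engulfed diatom, marking them as separate
# species. Sequences and the 98% threshold are hypothetical.

def identity(seq_a, seq_b):
    """Fraction of matching positions between two equal-length sequences."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

marina_beach = "ATGGCTTACGATCGTACGGA"  # invented gene fragment
kommetjie    = "ATGGCATACGTTCGAACGGA"  # invented gene fragment

score = identity(marina_beach, kommetjie)
same_species = score >= 0.98  # 17 of 20 positions match: below threshold,
                              # so the two are classed as distinct species
print(score, same_species)
```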

This discovery did not stop there. Repeating the same procedure with a dinotom species collected in Japan and the USA, Norico found again that the engulfed diatom was different. In total, she found six species of dinotoms in her sample, three of which were entirely new. 

From this discovery, Norico concluded that the engulfing process to acquire the ability to photosynthesise likely happened many times over the evolution of an organism. This means that the variety of species that can mix is much greater, since the merger does not happen at a single point in history. Multiple merging events also mean that an organism can sometimes gain, lose and then re-gain this ability during its evolution.

Next year, Norico will graduate from Hokkaido and travel to the Kansai region to begin her postdoctoral work at Kwansei Gakuin University, where she hopes to uncover still more secrets of these minute lifeforms that provide so much of the ocean’s diversity.

Fighting disease with computers



When it comes to students wishing to study the propagation of diseases, Heidi Tessmer is not your average new intake.

“I did my first degree in computer science and a masters in information technology,” she explains. “And then I went to work in the tech industry.”

Yet it is this background in computing that Heidi wants to meld with her new biological studies to tackle questions that require the help of some serious data crunching.

Part of the inspiration for Heidi’s change in career came from her time working in the UK, where she lived on a sheep farm. It was there she witnessed first-hand the devastating results of an outbreak of the highly infectious ‘foot and mouth’ disease. This particular virus affects cloven-hoofed animals, and outbreaks result in the mass slaughter of farm stock to stem the disease’s spread.

“The prospect of culling the animals was very hard on the farmers,” she describes. “I wanted to know if something could be done to save animal lives and people’s livelihoods. That was when I began to look at whether computers could be used to solve problems in the fields of medicine and disease.”

This idea drove Heidi back to the USA, where she began taking classes in biological and genetic sciences at the University of Wisconsin-Madison. After improving her background knowledge, she came to Hokkaido last autumn to begin her PhD program at the School of Veterinary Medicine in the Division of Bioinformatics.

“Bioinformatics is about finding patterns,” Heidi explains.

Identifying trends in data seems straightforward enough until you realise that the data sets involved can be humongous. Heidi explains this by citing a recent example she has been studying that involves the spread of the influenza virus. While often no more than a relatively brief sickness in a healthy individual, the ease with which influenza spreads and mutates gives it the ongoing potential to become a global pandemic, bringing with it a mortality figure in the millions. Understanding and controlling influenza is therefore a high priority across the globe.

Influenza appears in two main types, influenza A and B. The ‘A’ type is the more common of the two, and its individual variations are named based on the types of the two proteins that sit on the virus’ surface. For example, H1N1 has a subtype 1 HA (hemagglutinin) protein and subtype 1 NA (neuraminidase) protein on its outer layer while H5N1 differs by having a subtype 5 HA protein.

The inner region of each influenza A virus contains 8 segments of the genetic encoding material, RNA. Similar to DNA, it is this RNA that forms the virus genome and allows it to harm its host. When it multiplies, a virus takes over a normal cell in the body and injects its own genome, forcing the cell to begin making the requisite RNA segments needed for new viruses. However, the process which gathers the RNA segments up into the correct group of 8 has been a mystery to researchers studying the virus’ reproduction.

In the particular case study Heidi was examining (published by Gog et al. in the journal Nucleic Acids Research in 2007), researchers proposed that this assembly process could be performed using a ‘packaging signal’ incorporated into each RNA segment. This packaging signal would be designed to tell other RNA segments whether they wished to be part of the same group. If this signalling could be disrupted, proposed the researchers, then the virus would form incorrectly, potentially rendering it harmless.

This, explains Heidi, is where computers come in. Each RNA segment is made up of organic molecules known as ‘nucleotides’, which bunch together in groups of three called ‘codons’. The packaging signal was expected to be a group of one or more codons that were always found in the same place on the RNA segment, signalling a crucial part of the encoding. In order to find this, codon positions had to be compared across thousands of RNA segments. This job is far too laborious to do by hand, but it is a trivial calculation for the right bit of computer code. Analysing massive biological data efficiently in this way is the basis of bioinformatics.
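The comparison Heidi describes can be sketched in a few lines of code. The example below is purely illustrative (toy RNA sequences, not the study’s real data or pipeline): it flags codon positions where every aligned segment carries the same codon.

```python
# Illustrative sketch only: flag codon positions that are identical across
# a set of aligned RNA segments (toy data, not the actual influenza analysis).
from collections import Counter

def conserved_codons(sequences, threshold=1.0):
    """Return indices of codon positions whose most common codon appears
    in at least `threshold` fraction of the sequences."""
    n_codons = min(len(s) for s in sequences) // 3
    conserved = []
    for i in range(n_codons):
        codons = [s[3 * i : 3 * i + 3] for s in sequences]
        top_count = Counter(codons).most_common(1)[0][1]
        if top_count / len(sequences) >= threshold:
            conserved.append(i)
    return conserved

# Three toy aligned segments: codons 0 and 2 are perfectly conserved.
segments = ["AUGGCAUAA", "AUGGGAUAA", "AUGCCAUAA"]
print(conserved_codons(segments))  # -> [0, 2]
```

A real analysis would read thousands of sequences from a database and allow for partial conservation, but the core loop is exactly this kind of trivially automated comparison.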

In addition to the case above, Heidi cites the spread of epidemics as another area that is greatly benefiting from bioinformatics analysis. By using resources as common as Google, new cases of disease can be compared with historical data and even weather patterns.

“The hardest part about bioinformatics is knowing what questions to ask,” Heidi concludes. “We have all this data which contains a multitude of answers, but you need to know what question you’re asking to write the code.”

It does sound seriously difficult. But Heidi’s unique background and skill set is one that just might turn up some serious answers.

Reactions in the coldest part of the galaxy


Professor Naoki Watanabe with graduate student, Kazuaki Kuwahata.


Within the coldest depths of our galaxy, Naoki Watanabe explores chemistry that is classically impossible.

At a staggering -263°C (just ten degrees above absolute zero), gas and dust coalesce into clouds that may one day birth a new population of stars. The chemistry of these stellar nurseries is therefore incredibly important for understanding how stars such as our own Sun were formed.

While the cloud gas is made primarily from molecular (H2) and atomic (H) hydrogen, it also contains tiny dust grains suspended like soot in chimney smoke. It is on these minute surfaces that the real action can begin.

Most chemical reactions require heat. In a similar way to being given a leg-up to get over a wall, a burst of energy allows two molecules to rearrange their bonds and form a new compound. This is known as the ‘activation energy’ for a reaction. The problem is that at these incredibly low temperatures, there’s very little heat on offer and yet mysteriously reactions are still happening.

It is this problem that Professor Naoki Watanabe in the Institute of Low Temperature Science wants to explore and he has tackled it in a way different from traditional methods.

“Most astronomers want to work from the top down,” Naoki describes. “They prepare a system that is as similar to a real cloud as possible, then give it some energy and examine the products.”

The problem with this technique, Naoki explains, is that it is hard to see exactly which molecule has contributed to forming the new products: there are too many different reactions that could be happening. Naoki’s approach is therefore to simplify the system to consider just one possible reaction at a time. This allows his team to find out how likely the reaction is to occur and thereby judge its importance in forming new compounds.

This concept originates from Naoki’s background as an atomic physicist, where he did his graduate and postdoctoral work before moving fields into astrochemistry. Bringing together expertise in different areas is a primary goal for the Institute of Low Temperature Science, and Naoki’s group is the foremost of only three in the world working in this field.

Back in the laboratory, Naoki shows the experimental apparatus that can lower temperatures down to those of the galactic clouds. Dust grains in the clouds are observed to be covered with molecules such as water ice, carbon monoxide (CO) and methanol (CH3OH). Some of these can be explained easily: for instance, carbon monoxide can form from a positively charged carbon atom known as an ion (C+). The charge on the ion makes it want to bond to other atoms and become neutral, so no activation energy is required. A second possible mechanism is a carbon atom (C) attaching to a hydroxyl molecule (OH): a particularly reactive compound known as a ‘radical’ due to it possessing a free bond. Like the charge on the C+, this loose bond wants to be used, allowing the reaction to proceed without an energy input. These paths allow CO to form in the cloud even at incredibly low temperatures and freeze onto the dust.

However, methanol is more of a mystery. Since the molecule contains carbon, oxygen and hydrogen (CH3OH), it is logical to assume that initially CO forms and then hydrogen atoms join in the fun. The problem is that to add a hydrogen atom and form HCO requires a significant activation energy, and there is no heat in the cloud to trigger that reaction. So how does the observed methanol get formed?

Naoki discovered that while the low temperature prevented HCO forming through a thermal (heat initiated) reaction, it was that same coldness that opened the door to a different mechanism: that of quantum tunnelling. According to quantum mechanical theory, all matter can behave as both a particle and a wave. As a wave, your position becomes uncertain. Your most likely location is at the wave peak, but there is a small chance you will be found to either side of that, in the wave’s wings.

Normally, we do not notice the wave nature of objects around us: their large mass and energy makes their wavelength so small this slight uncertainty goes unnoticed. Yet in the incredibly cold conditions inside a cloud, the tiny hydrogen atom starts to show its wave behaviour. This means that when the hydrogen atom approaches the carbon monoxide molecule, the edge of its wave form overlaps the activation energy barrier, giving a chance that it may be found on its other side. In this case, the HCO molecule can form without having to first acquire enough heat to jump over the activation barrier; the atoms have tunnelled through.
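The mass dependence of this effect can be illustrated with the textbook formula for tunnelling through a square barrier, T = exp(-2d√(2mE)/ħ). The barrier height and width below are illustrative guesses, not the measured values for the CO + H reaction, but they show why a light hydrogen atom can tunnel while a heavier atom effectively cannot.

```python
import math

# Square-barrier tunnelling probability, T = exp(-2*d*sqrt(2*m*E)/hbar).
# Barrier values are illustrative (0.17 eV high, 1 angstrom wide), not
# measured parameters for the CO + H reaction.
HBAR = 1.054571e-34   # reduced Planck constant, J*s
EV = 1.602177e-19     # joules per electronvolt
AMU = 1.660539e-27    # kilograms per atomic mass unit

def tunnel_probability(mass_amu, barrier_ev=0.17, width_m=1e-10):
    """Chance that a particle of the given mass tunnels through the barrier."""
    m = mass_amu * AMU
    return math.exp(-2 * width_m * math.sqrt(2 * m * barrier_ev * EV) / HBAR)

print(tunnel_probability(1))   # hydrogen: ~1e-8 per attempt
print(tunnel_probability(12))  # carbon: vastly smaller, effectively never
```

A probability of roughly one in a hundred million per encounter sounds hopeless, but repeated billions of times on a grain surface it becomes a viable chemical pathway.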

Such reactions are completely impossible in classical physics, and even in quantum physics the chance of them occurring is not high, even at such low temperatures. This is where the dust grain becomes important. Stuck to the surface of the grain, the carbon monoxide and hydrogen atom can collide many, many times, increasing the likelihood that a hydrogen atom will eventually tunnel and create HCO.

After measuring the frequency of such reactions in the lab, Naoki was then able to use simulations to predict the abundance of the molecules over the lifetime of a galactic cloud. The results agreed so well with observations that it seems certain that this strangest of mechanisms plays a major role in shaping the chemistry of the galactic clouds.

Are you likely to be eaten by a bear?



“The Hokkaido Prefectural Government,” announced an article published on the Mainichi newspaper’s English website at the end of September, “has predicted that brown bears will make more encroachments on human settlements than usual this fall.”

While written in calm prose, it was a headline to make you hesitate before stepping outside to visit the convenience store: exactly how many bears were you prepared to face down to get that pint of milk?

Yet how serious was this threat, and what can be done to protect both the bears and the humans from conflict?

Professor Toshio Tsubota in the Department of Environmental Veterinary Sciences is an expert on the Hokkaido brown bear population. He explains that bears normally live in the mountains and only come into residential areas if there is a shortage of food. Sapporo City’s mountainous surroundings make it a popular destination when times are tough, and bears have been sighted as far in as the city library in the Chuo Ward.

The problem this year came down to acorns. Wild water oak acorns are a major source of food for bears in the Autumn months, and this year the harvest has been low. Despite the flurry of concern, a low acorn crop is not an uncommon phenomenon. The natural cycle of growth produces good years with a high yield of nuts and bad years with a lower count in rough alternation. However, a bad yield year does increase the risk of bear appearances in the city.

With food being the driving desire, bears entering residential streets typically raid trees and garden vegetables. The quality of your little allotment notwithstanding, bears will return to the mountains once the food supply improves, meaning their presence in the city is a short-lived, intermittent issue.

That is, unless the bears get into the garbage.

Garbage is high in nutrients and calories, and bears that begin to eat it will not return to their normal feeding habits. This is when more serious problems between humans and bears start to develop. Since yellow-bagged waste products do not strongly resemble the bears’ normal food, they will not initially know that garbage is worth eating. It is therefore essential, Toshio says, to ensure your garbage is properly stored before collection.

In Japan, brown bears are found exclusively in Hokkaido, with Honshu hosting a black bear population. The brown bear is the larger of the two species, with females weighing in at around 150 kg and males between 200 – 300 kg. The exact population number in Hokkaido is unknown, but estimates put it at around 3000 – 3500 bears on the island.

On the far eastern shore of Hokkaido is the Shiretoko World Heritage Site. Free from hunting, bears living in this area have little to fear from humans, making it the ideal location to study bear behaviour and habitat. It is here, where researchers can approach within 10 – 20 m of the bears, that Toshio collects data for his research into bear ecology.

The acorn feast in the autumn is the final meal the bears will eat before hibernation. Hokkaido’s long winters are passed in dens, with bears beginning their deep sleep near the end of November and not emerging until April or May. In warmer climates, this hibernation time is greatly reduced, with black bears on Honshu sleeping for a shorter spell and bears in southern Asia avoiding hibernation altogether.

For bears that do hibernate, this sleepy winter is also the time when expectant mothers give birth. Despite this rather strenuous sounding activity, the mother bear’s metabolic rate remains low during this period, even while she suckles her cubs. This is in stark contrast to the first few months after a human baby’s arrival where the mother typically gets very little sleep at all.

Another feature human expectant mothers may envy is the ability of the bear to put her pregnancy on hold for several months. Bears mate in the summer whereas the cubs are born in the middle of winter, meaning a bear pregnancy officially clocks in at 6 – 7 months. However, the foetus only actually takes about two months to develop; roughly the same time scale as for a cat or dog. This pregnancy pausing is known as ‘delayed implantation’ and stops the pregnancy at a very early stage, waiting until the mother has gained weight and begun her hibernation before allowing the cubs to develop.

Brown bears typically have two cubs per litter and the babies stay with their mother during the first 1.5 – 2.5 years of their life. With an infant mortality of roughly 40 – 50%, this means that drops in the bear population are difficult to replenish. Toshio’s work in understanding brown bear ecology is therefore particularly important to preserving the species in Hokkaido.

As hunting drops in popularity, Hokkaido’s bear population has improved, yet this could be seriously damaged if humans and bears come into conflict.

“A bear does not want to attack a person,” Toshio explains. “He just wants to find food. But sometimes bears may encounter people and this leads to a dangerous situation.”

To date, bears venturing into Sapporo have not resulted in serious accidents, although people have been killed by bears in the mountains during the spring. These attacks occur when both people and bears seek out the wild vegetables, producing a conflict over a food source.

Toshio believes that education is of key importance to keeping both bear and human parties safe, and advocates that a wildlife program be introduced in schools.

“If there are no accidents, people love bears! They are happy with their life and population,” Toshio points out. “But if people see bears in residential areas they will become scared and want to reduce their number.”

If you do see a bear in the street, Toshio says that keeping quiet and not running are key. In most cases, the bear will ignore you, allowing you to walk slowly away and keep your distance.

“We talk about ‘one health’,” Toshio concludes. “Human health, animal health and ecological health. These are important and we need to preserve them all.”

Baby Galaxies



Hold an American dime (a coin slightly smaller than a 1 yen piece) at arm’s length and you can just make out the eye of Franklin D. Roosevelt, the 32nd President of the USA, whose profile is etched on the silver surface. Lift the eye on that coin up to the sky and you’re looking at a region containing over 10,000 galaxies.

In 1995, the Hubble Space Telescope was trained on a small patch of space in the constellation Ursa Major. So tiny was this region that astronomers could only make out a few stars belonging to our Milky Way. To all intents and purposes, the space telescope was pointing at nothing at all.

After 10 days, scientists examined the results. Instead of a pool of empty blackness, they saw the light of 3,000 galaxies packed into a minute one 24-millionth of the entire sky. In 2003, the observation was repeated for an equally tiny region of space in the constellation Fornax. The result was over 10,000 galaxies. These images are known as the ‘Hubble Deep Field’ and ‘Hubble Ultra Deep Field’ and became some of the most famous results in the space telescope’s legacy. 

For their second public Sci-Tech talk, the Faculty of Science’s Office of International Academic Support invited Professor John Wise from the Georgia Institute of Technology in the USA to talk about his search to understand the formation of the very first galaxies in the Universe. 

John explained that when you look at the Hubble Deep Field, you do not see the galaxies as they are now. Rather, you are viewing galaxies born throughout the Universe’s life, stretching back billions of years. The reason for this strange time alignment is that light travels at a fixed, finite speed. This means that we see an object not as it is now, but how it was at the moment when light left its surface. 

Since light travels 300,000 km every second, the delay between light leaving an object and reaching your eyes is infinitesimally small for daily encounters. For light to travel to you from a corner shop 100 meters away takes a tiny 0.0000003 seconds. However, step into the vastness of space and this wait becomes incredibly important.

Light travelling to us from the Moon takes 1.3 seconds. From the Sun, 8 minutes. For light to reach us from the Andromeda Galaxy, we have to wait 2.4 million years. The Andromeda we see is therefore the galaxy as it was 2.4 million years ago, before humans ever evolved on Earth.
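These figures are just distance divided by the speed of light; a quick sanity check (using approximate round-number distances):

```python
# Light-travel times for the distances quoted above (approximate values).
C_KM_PER_S = 299_792.458      # speed of light in km/s
KM_PER_LIGHT_YEAR = 9.461e12  # kilometres light covers in one year
SECONDS_PER_YEAR = 3.156e7

def light_travel_seconds(distance_km):
    """Time in seconds for light to cross the given distance."""
    return distance_km / C_KM_PER_S

print(light_travel_seconds(384_400))           # Moon: ~1.3 seconds
print(light_travel_seconds(149_600_000) / 60)  # Sun: ~8.3 minutes
andromeda_km = 2.4e6 * KM_PER_LIGHT_YEAR
print(light_travel_seconds(andromeda_km) / SECONDS_PER_YEAR)  # ~2.4 million years
```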

At a mere 24,000,000,000,000,000,000 km away, Andromeda is one of our nearest galactic neighbours. Look further afield, and we start to view galaxies that are living in a much younger universe; a universe that has only just begun to build galaxies.

“At the start of the Universe,” Wise explains. “There were no stars or galaxies, only a mass of electrons, protons and particles of light. But when it was only 380,000 years old, the Universe had cooled enough to allow the electrons and protons to join and make atoms.”

This point in the Universe’s 13.8 billion year history is known as ‘Recombination’; a descriptive name to mark electrons and protons combining to make the lightest elements: Hydrogen, Helium and a sprinkling of Lithium. 

Then, the Universe goes dark. 

For the next 100 million years, the Universe consists of neutral atoms that neither produce light nor scatter the current populations of light particles. Astrophysicists, who rely on these light scatterings to see, refer to this time as the ‘Dark Ages’, when they are blind to the Universe’s evolution.

Then, something big happens.

An event big enough to sweep across the Universe, separating electrons from their atoms’ nuclei in a process called ‘Reionisation’.

“It was like a phase transition,” Wise describes. “Similar to when you go from ice to water. Except the Universe went from neutral to ionised and we were now able to see it was full of young galaxies.”

But what caused this massive phase transition of the Universe? Wise suspected that the first galaxies were the cause, but with nothing visible during the Dark Ages, how could this be proved?

To tackle this problem, Wise turned to computer models to map the Universe’s evolution through its invisible period. Before he could model the formation of the first galaxy, however, Wise had to first ask when the first stars appeared.

“Before any galaxies formed, the first stars had to be created,” Wise elaborates. “These were individual stars, forming alone rather than in galaxies as they do today.”

The pool of electrons and protons that joined at ‘Recombination’ to create atoms already carried small differences in density, put in place during the first fraction of a second after the Big Bang. The regions of slightly higher density began to exert a stronger gravitational pull on the atoms, drawing the gas towards them. Eventually, these clouds of gas grew massive enough to begin to collapse, and a star was born at their centre.

Their isolated location wasn’t the only thing that made the first stars different from our own Sun. With only the light elements of hydrogen, helium and lithium from which to form, these stars lacked the heavier atoms astrophysicists refer to as ‘metals’. Due to their more complicated atomic structure, metals are excellent at cooling gas. Without them, the gas cloud was unable to collapse as effectively, producing very massive stars with short lifetimes.

“We don’t see any of these stars in our galaxy today because their lifetimes were so short,” Wise points out. “Our Sun will live for about 10 billion years, but a star 8 times heavier would only live for 20 million years; hundreds of times shorter. That’s a huge difference! The first star might be ten times as massive with an even shorter lifetime.”

In the heart of the newly formed star, a key process then began: that of creating the heavier elements. Compressed by gravity, hydrogen and helium fuse to form carbon, nitrogen and oxygen in a process known as ‘stellar nucleosynthesis’. As they reach the end of their short lives, the first stars explode in events called ‘supernovae’, spewing the heavier metals cooked within their centres out into the Universe.

The dense regions of space that have formed the first stars now begin to merge themselves. Multiple pools of gas come together to produce a larger reservoir from which the first galaxies begin to take shape. 

Consisting of dense, cold gas, filled with metals from the first stars, these first galaxies turn star formation into a business. More stars form, now in much closer vicinity, and the combined heat they produce radiates out through the galaxy. This new source of energy hits the neutral atoms and strips their electrons, ionising the gas. 

As the fields of ionised gas surrounding the first galaxies expand, they overlap and merge, resulting in the whole Universe becoming ionised. Wise’s simulations show this in action, with pockets of ionised gas and metals spreading to fill all of space. 

While this uncovers the source of the Universe’s ionisation, the impact of the first galaxies does not stop there. In the same way that the regions of first star formation had combined to form galaxies, further mergers pull the galactic siblings together. The result is the birth of a galaxy like our own Milky Way. 

These first galaxies, Wise concludes, therefore form a single but important piece of the puzzle that ultimately leads to our own existence here in the Milky Way. 

Can you catch your dog's illness?



The Research Center for Zoonosis Control is concerned with the probability that you will catch a disease from your dog. Or –to put it more generally– zoonosis looks at the transmission of diseases between humans and animals. 

While we do not normally consider an illness in an animal a risk to humans, you can almost certainly name a number of exceptions: Severe Acute Respiratory Syndrome (SARS), avian influenza (bird flu), rabies, Escherichia coli (E. coli) and Creutzfeldt-Jakob disease (CJD), to offer a few examples.

Diseases can be produced by different types of agents: viruses, bacteria, prions (a type of protein), fungi and parasites. Postdoctoral researcher, Jung-Ho Youn, is interested in the second of these types; diseases caused by bacteria. 

Bacteria, Jung-Ho explains, are everywhere and most are harmless or even essential to our health. For the bacteria which do cause disease, treatment usually involves antibiotics. 

In 1928, Scottish scientist Alexander Fleming discovered that a sample of the disease-producing bacterium, Staphylococcus aureus (S. aureus), had become contaminated by a mould that was inhibiting its growth. This fungus turned out to be producing a substance that would later be known as the antibiotic penicillin. So successful was penicillin at treating previously fatal bacterial diseases that it was hailed as the ‘miracle drug’.

Yet, 85 years later we are far from celebrating the demise of all bacterial illness. Indeed, a variant of the very bacteria Fleming was studying can be a serious problem in hospitals due to its strong resistance to many antibiotics. 

So what went wrong?

The problem, Jung-Ho explains, is that bacteria are continuously changing. With the ability to divide in minutes, bacterial numbers can rise exponentially, and these microorganisms are not always identical. Random mutations to the genetic structure of a bacterium usually have little effect, but a small number of cases can arise that produce a spontaneous resistance to the drug set to kill them. For example, the bacterium’s surface may change to prevent the antibiotic binding, or it could reduce the build-up of the drug by creating a pump to expel it from its system. These changes follow the same principle as evolution in mammals, but on a much faster time scale due to the bacteria’s rapid reproduction. This means that when a sick person takes a course of antibiotics such as penicillin, the drugs kill most, but not all, of the bacteria.
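The scale of this exponential growth is easy to underestimate. A toy calculation, assuming a 20-minute doubling time (a common textbook figure, not a measurement for any particular species):

```python
# Toy model: unchecked binary fission with an assumed 20-minute doubling time.
def population_after(hours, doubling_minutes=20):
    """Cells descended from a single bacterium after `hours` of doubling."""
    return 2 ** (hours * 60 // doubling_minutes)

print(population_after(8))  # -> 16777216 cells (2**24) in one working day
```

With millions of cells arising in hours, even a one-in-a-million resistance mutation becomes a near certainty somewhere in the population.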

“It’s like people wearing bullet-proof vests,” Jung-Ho describes. “The antibiotics are the guns. They kill most of the bacteria but not all, because a few are wearing these bullet-proof vests. This is why bacteria have survived until now.”

But the problems with bacteria do not stop at the ability to develop resistance once exposed to a drug. Not only can a resistant bacterium multiply to produce more bacteria with the same protection, it can share this information with different types of bacteria. The bacteria that gain this protection may never have been exposed to the antibiotic and –a key issue for zoonosis– they may not even cause disease in the same species.

Jung-Ho’s research is with Staphylococcus pseudintermedius (S. pseudintermedius), a bacteria commonly found in dogs. While playing with your pet, this bacteria can transfer onto your skin where it can meet the human variant, S. aureus; the same bacteria Fleming was studying during his discovery of penicillin. 

“Normally there are no problems,” Jung-Ho assures animal lovers. “You don’t need to get rid of your pet. It’s just in some cases…”

In some cases, the type of S. pseudintermedius bacteria on your dog may be resistant to an antibiotic and pass this resistance onto the S. aureus. The result is a disease-spreading human bacteria that is resistant to an antibiotic it has never been exposed to. Likewise, the reverse may occur where a resistant human bacteria passes on its resistance to an animal  disease.

How can this spread of resistance between animals and humans be prevented? The first step, Jung-Ho explains, is to find out where these antibiotic resistant bacteria are and how they are spreading. 

Jung-Ho describes an experiment conducted in a veterinary waiting room. Swabs are taken from the patient pets, veterinary surgeons and the environment, such as the waiting room chairs and computers. The bacteria are then analysed to see if the same microorganism variant (or ‘strain’) is found in more than one place. For example, if the same bacterial strain is found on the vet’s skin and the dog, this suggests transmission is occurring between the two parties.

The direction of this transmission (vet to dog or dog to vet) is not possible to determine, but in the case of S. pseudintermedius, Jung-Ho knows it is predominantly found on dogs, so it must be moving from animal to human. If a large quantity of this bacteria is also found on the furnishings of the clinic, it is likely that significant extra transmission is occurring through people touching their keyboard or chair, even if they have never interacted with an animal. This kind of spread can then be prevented by sterilising the area more thoroughly: a finding that can reduce disease and the spread of resistant bacteria.

This approach sounds straightforward until the sheer number of bacteria are considered. Even when narrowed down to a specific strain, it is not immediately obvious whether the bacteria found have the same source, or if they have originated from entirely different locations. Jung-Ho described two main methods for checking to see if the samples are the result of a single transmitted population:

(1) The identification of ‘housekeeping genes’. These genes are particular to a bacteria type and essential to its survival, meaning they are present in every cell of the microorganism. For Staphylococcus bacteria, seven housekeeping genes are compared, and the exact sequence of each gene may differ between bacterial populations. By identifying these genes in bacteria found in different locations, scientists can establish if they were transmitted between the two places. This technique is known as ‘Multilocus sequence typing’ or MLST.

(2) The use of another molecule that cuts the bacteria’s DNA into pieces. Called ‘Pulsed-field gel electrophoresis’ (PFGE), this technique is also used in genetic fingerprinting to identify an individual from their DNA. In this case, the bacteria’s DNA is sliced using a gene-cutting enzyme that only produces the same set of pieces if the bacterial populations are identical.
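The comparison logic behind method (1) can be sketched briefly. The locus names and allele numbers below are entirely hypothetical (the real S. pseudintermedius MLST scheme defines its own seven loci): two isolates are assigned the same sequence type only if their allele profiles match at every locus.

```python
# Hypothetical MLST-style comparison: isolates sharing the same allele
# number at all seven housekeeping loci belong to the same sequence type.
LOCI = [f"locus{i}" for i in range(1, 8)]  # placeholder names, not real loci

def same_sequence_type(profile_a, profile_b):
    """True if two allele profiles agree at every housekeeping locus."""
    return all(profile_a[locus] == profile_b[locus] for locus in LOCI)

vet_skin = dict(zip(LOCI, [3, 1, 1, 4, 2, 2, 5]))
dog_swab = dict(zip(LOCI, [3, 1, 1, 4, 2, 2, 5]))
chair = dict(zip(LOCI, [3, 1, 7, 4, 2, 2, 5]))

print(same_sequence_type(vet_skin, dog_swab))  # True  -> consistent with transmission
print(same_sequence_type(vet_skin, chair))     # False -> a different strain
```

Matching profiles do not prove transmission on their own, but they narrow thousands of candidate bacteria down to the handful worth investigating.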

Jung-Ho points out that such methods –while effective– are cumbersome compared with the prospect of being able to sequence the whole DNA of a bacteria. He hopes that with the new generation of machines capable of quickly finding complete gene sequences, it will become easier to track the antibiotic resistant bacteria. This will not only allow scientists to minimise infection but also warn doctors that a disease may be resistant to certain drugs. Such an early detection of the bacteria type can save vital time when treating a patient. 

One of the difficulties with this research is that it sits between human and animal medicine. Jung-Ho’s own background is in veterinary science, yet to understand the bacterial spread across species he must now gain expertise on the medical side. 

“It is a big question in zoonosis research,” he explains. “Who should do it? Doctors or vets? Someone has to, I think.”


Were dinosaurs adaptable enough to survive the meteor?

Spotlight on Research is the research blog I author for Hokkaido University, highlighting different topics being studied at the University each month. These posts are published on the Hokkaido University website.


Based on the third floor of the Hokkaido University museum, Assistant Professor Yoshitsugu Kobayashi is a dinosaur hunter. His searches for the fossilised remains of Earth’s previous rulers have taken him on a pan-Pacific sweep of countries that has most recently landed him in Alaska.

With its freezing winters and long nights, Alaska is not the first place one might suppose to find the remains of beasts resembling giant lizards. Nor has the landmass moved from more comfortable tropics over the last 66 million years since the dinosaurs became extinct. To understand how these creatures could have thrived in such a hostile environment, Yoshitsugu suggests that our view of dinosaurs needs to change.

Dinosaurs have long been pictured as the enormous, cold-blooded relatives of modern reptiles. Unable to control their own body temperature, such creatures would need to live in warm climates to maintain a reasonable metabolism. Finding fossils in Alaska, therefore, presents scientists with some problems.

“To survive winters in Alaska, the dinosaurs would need to be warm-blooded,” Yoshitsugu explains. “This allows them to adapt more easily to different environments.”

Which brings us to Yoshitsugu’s next big question: to what extent were the dinosaurs able to flourish in different living conditions?

To investigate this, Yoshitsugu and his team examined the fossils found in Alaska and compared them with known dinosaur groups elsewhere in the world. There were three main possibilities for their findings:

The Alaskan population of dinosaurs could match that of North American dinosaurs at lower latitudes, implying that the same group had migrated north.


Rather than North American, the dinosaurs could be kin to those found in Asia. This would require a migration from Russia to Alaska, and its likelihood is determined by the condition of the Bering Strait: an 82 km stretch between Russia’s Cape Dezhnev and Cape Prince of Wales in Alaska. Today this gap is a sea passage, but in the past a natural land bridge existed and is thought to be responsible for the first human migration into America. Could the dinosaurs have taken the same path millions of years before?

The third option is that the Alaskan dinosaurs resemble neither their North American nor their Russian cousins, and a different kind of dinosaur inhabited these frozen grounds.

Examining their finds from this region, Yoshitsugu’s team concluded that what they had were North American dinosaurs. This meant that dinosaurs, like humans, were capable of living across an incredibly wide range of environments, with different climates, food sources and dangers.

“People often picture the dinosaurs as having small brains,” Yoshitsugu amends. “But the same dinosaurs lived in very different places. They were intelligent and adaptable.”

This adaptability has one very important consequence: the meteor that hit the Earth 66 million years ago is unlikely to have been solely responsible for the dinosaurs’ demise.


As it crashed into the Earth, the meteor raised a huge cloud of dust that blotted out the sun, creating a cold and dark new world. Had the dinosaurs been cold-blooded, such a massively extended winter would have proved fatal, but Yoshitsugu’s research supports suggestions that not only were the dinosaurs warm-blooded, they were also equipped to deal with this new harsh environment.

This doesn’t mean the meteor failed to have an impact on the dinosaur population. With a drop in sunlight, the vegetation would have decreased, reducing the food source for herbivores and subsequently, the carnivores who fed on them. Yet, it does appear that the dinosaurs could have survived if this were the only disaster.  

So what else could have happened to destroy this species? 


Yoshitsugu pointed to two other possible causes. The first is the major volcanic activity of the Deccan Traps, the vast volcanic province that covers large parts of India. In a series of prolonged eruptions that persisted for thousands of years, lava poured over hundreds of miles of land, and the ejected carbon dioxide and sulphur dioxide combined to acidify the oceans. The impact on life was catastrophic and longer lasting than that from the meteor.

The second possible cause of dinosaur extinction is linked to the drop in sea levels that occurred around the 66 million year mark. As the water receded, land bridges formed between the continents, allowing dinosaurs to travel freely across the globe. The result was a drop in genetic diversity among the dinosaur species. The same effect is seen today when a foreign breed of animal is introduced into a new ecosystem: in the UK, the introduction of the eastern grey squirrel has driven the native red squirrel to the point of extinction, since the grey squirrel can digest local food sources more effectively and also transmits a disease fatal to red squirrels. This uniformity in dinosaur genetics made them highly susceptible to eradication, since a physiological or environmental impact would equally affect the entire, near-identical population.

It was most likely a combination of these three catastrophes that caused the death of the dinosaurs, Yoshitsugu concludes. 


However, dinosaurs were not the only giants walking the Earth 230 million years ago. Living alongside these warm-blooded creatures were genuine cold-blooded reptiles: the giant crocodiles. With limbs that came out from the body sideways, not straight underneath, these huge beasts were a group distinct from the dinosaurs. Exactly how these monsters dwelt together is currently being explored in the new exhibit at the Hokkaido University Museum, ‘Giant Crocodiles vs Dinosaurs’, which opened last Friday. The exhibit includes full-sized casts of both adult and juvenile crocodiles and dinosaurs, describing where they lived and where they overlapped. There is also a movie sequence Yoshitsugu was involved in creating that shows a simulated fight between a giant crocodile and a T. rex over prey.

This unique view of the world of the dinosaurs is open until October 27th. We hope to see you there!


[Photo captions from top to bottom: (1) Yoshitsugu Kobayashi standing in front of the skeleton of an Afrovenator dinosaur. (2) The skull of a Triceratops. (3) The carnivorous Afrovenator dinosaur. (4) The herbivorous Camarasaurus. (5) The skull of a giant crocodile that lived alongside the dinosaurs.]

The trouble with bicycles

Spotlight on Research is the research blog I author for Hokkaido University, highlighting different topics being studied at the University each month. These posts are published on the Hokkaido University website.


Walking through the Hokkaido University campus is a risky business. In winter, snow and ice lie ready to send the even slightly unbalanced into a horizontal slide. In the summer, you are likely to be crushed by a bicycle.

An astounding 19% of daily trips in Japan are made by bike, compared with less than 2% of trips in the UK. We live in a nation that loves the pedal-powered friend.

While such a mode of transport is great for the environment, it is rather less friendly for those on foot. Unlike in other countries where the bike is considered a road vehicle, here in Japan cyclists are commonly treated the same as pedestrians, with both groups generally sharing the sidewalks. Not only is this problematic for pedestrians, it is also unsustainable, Assistant Professor Katia Andrade from the Department of Engineering explains.

“People use bikes because they feel secure,” Katia tells us. “They’re on the sidewalk which seems safe, but the accident rate is actually booming.”

In the last 20 years, the number of accidents involving bicycles and pedestrians has more than quadrupled. Yet this huge increase has been largely overlooked by the police because the incidents are typically minor, with few (although not zero) fatalities. However, as bicycle usage continues to increase, there will soon be no safe route for cyclists or pedestrians unless action is taken.

Action is not easy. Katia points out that a change to the way bicycles are handled in Sapporo requires expenditure of public money; always a limited asset. Additionally, implementing a system such as bicycle lanes is unlikely to be immediately effective, since people must feel confident in the available infrastructure before they will use it.

“If there is less infrastructure, less people will use it,” Katia elaborates. “But, even if you increase the infrastructure, people won’t immediately use it because they need time to feel secure.”

This means the task of safe cycling cannot be resolved by simply laying down bike lanes throughout the city; it’s too expensive and the uptake is too small. So how can a system be introduced that would have a high impact on safety for a reasonable investment? Katia’s research suggests that a key consideration might be land use.

Katia has been exploring the connection between the way land is physically used (for example, by commercial buildings, school areas, residential housing blocks or industry) and the different modes of transport and resulting accidents.

To explain this connection, Katia offers a simple example of a large shopping mall being built in a suburban area. Prior to its construction, only a single bus route might have been needed to connect the surrounding houses and the centre of town. However, with more people now travelling to the mall, extra bus routes or a subway line might be required. The change in the land use has resulted in a change in the transportation.

Carry the same example another five years down the line: smaller shops have sprung up near the mall and the housing has expanded. As a result, the number of people walking or cycling to the mall increases, bringing them into conflict with each other and with the stream of motorised vehicles. The balance between the transportation modes has now changed and, with it, the types of accidents occurring.

Such a correlation is often ignored in present-day research. When an accident occurs, human factors such as drink driving or age are considered, along with vehicle issues such as an old or faulty car. Yet such accidents could possibly be avoided through better transport planning based on the given land use.

For instance, areas of mixed land use, where the distances between amenities such as shops, houses and workplaces are small, are likely to be popular with cyclists. By anticipating this, city planners could keep this group safe by reducing the speed limit or providing cycle routes in the area. Investing in appropriate measures where there is high demand will encourage cyclists to come off the crowded sidewalks and onto safe, designated routes.

Reducing accidents by consideration of the commercial, residential and industrial composition is a key area in Katia’s current research. She explains though, that the results from her research will not be a worldwide ‘one size fits all’ due to the social nature of the problem.

Coming from Brazil, Katia understands that bicycle usage differs greatly between countries. Here in Japan, it is common to see men cycling to work in their business suits and women in high heels off for a night’s dancing. In Brazil, both those journeys would be made by car, because the car is associated with a higher social status. This social dimension makes the topic both vexing and fascinating. Nevertheless, at the end of the day, the goal is the same:

“We need to reduce the conflict between different modes of transport,” Katia concludes. “The less conflict, the less accidents.”

Locking-out Parasitic Worms in an International Lab

Spotlight on Research is the research blog I author for Hokkaido University, highlighting different topics being studied at the University each month. These posts are published on the Hokkaido University website.

Yu Hasegawa stands by the laboratory fridge in which samples of infected plants are stored.


When it came time to select a laboratory for her final undergraduate studies, Yu Hasegawa did not make an easy choice. 

“I’ve wanted to study abroad for a long time,” she explains. “So when I had to choose a laboratory to join, I wanted to work with Derek.”

Professor Derek Goto moved to Japan from Australia, and his laboratory, with its group of researchers hailing from Australia, Japan and Malaysia, is one of the most international in Hokkaido University.

Goto’s group is part of the School of Agriculture, and its focus is a crop-destroying worm known as the root-knot nematode. The name is accurately descriptive: this minute parasite burrows into the roots of plants and sets up camp, creating a distinctive knotty, bulbous growth around its new home. Such an infestation may kill younger seedlings and also decimates adult plants, causing huge reductions in yield. Able to infect around 2,000 different plant species, root-knot nematodes are single-handedly responsible for about 5% of global crop loss. In short, this is a serious worldwide problem.

With plans in mind to study outside Japan after graduation, Yu had practised English on her own in addition to her agricultural studies. While it wasn’t always easy to understand what everyone was saying at first, Yu found she could generally follow the discussions of her new group. However, language was not the only change she had to deal with.

“In Japan,” Yu describes, “you have to remember facts from the textbook, or you are shown how to do a task and then tested on it. Derek didn’t work like that; he would give me hints, but I had to think for myself.”

This change in style was also what Yu was keen to experience when she joined Derek’s lab, but she admits it wasn’t easy.

“When I first came here, I was so confused. I was used to the Japanese style and I felt sad and angry. My friends in other laboratories were making progress on their projects but mine wasn’t going anywhere.”

Even with the help of the other members of the laboratory (who Yu describes as ‘awesome’) it took Yu four to five months to feel comfortable with her work. 

While everyone in Derek’s group studies the problem of the root-knot nematode worms, each member investigates a different mechanism for stopping their destructive life cycle. Yu is examining what happens when the worm has infiltrated the root and starts to build its signature nest. In order to create the bulbous knots, the worm produces a secretion of compounds. This mix transforms the plant cells from their normal shape and function into enlarged giant cells with multiple nuclei in which their DNA is packaged. Not only are these inflated cells produced, but the cells surrounding them start to multiply, swiftly producing the knotted structure that signifies the infestation. Once transformed, the giant cells begin to suck nutrients away from the rest of the plant for the worm to gorge upon.

What Yu wants to know is which specific ingredient in the cocktail of compounds the worm secretes causes the plant cells to transform. If the process could be pinned down to a single biological key, then perhaps a plant variety could be bred that is resilient to that one compound, effectively creating a locked door against the worm.

To investigate this possibility, Yu analyses the worm’s secretion and separates out of the mix a possible candidate for this cell-deforming key. She then exposes a fresh batch of plant cells to just this single compound and watches how they develop.

Yu talks confidently about the complex procedures involved in her work and says that now, six months into her time in the laboratory, she does not feel her research is behind that of her peers. 

“I think we are almost at the same level, and hopefully I can think more by myself, so perhaps my ability to conduct experiments and convey my ideas efficiently in English and Japanese is better,” she adds, hesitantly.

After this year, Yu plans to pursue her dream of going abroad. She is currently applying to the Japanese Government for a scholarship to attend graduate school outside Japan. There is no denying that, both in research and science communication, Yu has worked impressively hard to achieve this goal, and we are excited to see what comes next for her.

The $1000 genome

The Postdoc Perspective was a blog for the Physics and Astronomy Department at McMaster University in Canada that I kept while I was a postdoctoral researcher. Many of the topics were talks presented at the McMaster Origins Institute seminar series.  

My Dad has high blood pressure and my Mum was treated for breast cancer, but what does this say about my future health? It is possible I have a predisposition to both these conditions, but I may also never develop either. The difference comes down to which combination of genes I have inherited, and for me to know for sure, my genome would have to be mapped.

The U.S. Department of Energy Human Genome Project Information website estimates it would take "about 9.5 years to read out loud (without stopping) the more than three billion pairs of bases in one person's genome sequence"[*]. It is therefore unlikely to surprise you that the mapping of your personal genome does not come cheaply. Currently, you're looking at around $50,000 - $100,000, which only seems affordable in light of the fact that the first genome to be mapped, completed in 2003, cost $3,000,000,000.

Now, however, a new technique for gene mapping is being developed that could bring the cost down to under $1000. This would allow personal genomics to become available for predictive medicine. As our Origins' colloquium speaker, Professor David Deamer from the University of California Santa Cruz, suggested, you could imagine having your own genome stored on a thumb drive to take with you when you visited your doctor.

Professor Deamer first conceived the idea for the $1000 genome over twenty years ago. He postulated that if it were possible to create a hole in a biological cell that was sufficiently narrow that only a single strand of DNA could pass through it, then the DNA components ("nucleotides") could be analysed and recorded as they were dragged through. Combined, this pattern of DNA components makes up your genes[**]. The question was what could be used to create such a tiny channel?

The answer to this did not emerge until ten years later and turned out to be a toxin called alpha-hemolysin. As its description suggests, hemolysin is not normally remotely desirable and is released during staph infections where it burrows into red blood cells and makes them explode (not good). In this case, however, its burrowing ability is exactly what Professor Deamer's team were looking for.

Alpha-hemolysin adheres to a cell's surface and makes a hole through the cell's membrane known as a 'nanopore'. When a small voltage is applied, charged particles pass through the pore to create a tiny, but measurable, electric current. When a DNA strand attempts to pass through the hole, it can only just fit. This means it temporarily blocks the channel while it is squeezing through, causing the electric current to drop. The amount the current falls by turns out to be determined by which nucleotide is currently in the way. By measuring the changes in current, the genome can be mapped.
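As a toy illustration of this read-out step, suppose each base produced a characteristic fractional drop in the current (the levels below are invented for illustration, not real calibration data). Decoding then amounts to matching each measured drop to the nearest reference level:

```python
# A toy sketch of nanopore read-out: each nucleotide blocking the pore
# produces a characteristic drop in current, so a series of measured drops
# can be decoded by matching each one to the nearest reference level.
# These levels are invented placeholders, not real calibration data.
REFERENCE_LEVELS = {"A": 0.30, "C": 0.45, "G": 0.60, "T": 0.75}  # fractional current drop

def decode(drops):
    """Map each measured current drop to the base with the closest level."""
    bases = []
    for d in drops:
        base = min(REFERENCE_LEVELS, key=lambda b: abs(REFERENCE_LEVELS[b] - d))
        bases.append(base)
    return "".join(bases)

print(decode([0.31, 0.74, 0.59, 0.46]))  # -> "ATGC"
```

Real devices face noisy signals and several bases influencing the current at once, which is why the exact pore geometry matters so much, but the principle is this simple lookup.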

The familiar picture of DNA is not of a single strand, but of the double helix. Tied up in this manner, the DNA cannot fit through the nanopore. Instead, it enters the broader top part of the channel and gets stuck. From this position, it is unzipped until it can finally pass through the hole and out of the cell. The very exact size of the hole is important, since to record the genome accurately, only one nucleotide at a time must exit the cell.

Genome mapping using this technique is not yet available, but Oxford Nanopore Technologies have plans to produce a commercial device using this process. That being the case, there is only really one question left:

Are you ready to know what you really are?

[*] In case anyone is really curious, this figure is calculated by assuming a reading rate of 10 bases per second, equalling 600 bases/minute, 36,000 bases/hour, 864,000 bases/day, and 315,360,000 bases/year. So there.
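For anyone who wants to double-check the footnote's sum, the same arithmetic in a few lines of Python:

```python
# Checking the footnote's arithmetic: reading 3 billion bases aloud,
# non-stop, at 10 bases per second.
bases = 3_000_000_000
per_second = 10
per_year = per_second * 60 * 60 * 24 * 365  # 315,360,000 bases/year
years = bases / per_year
print(f"{years:.1f} years")  # prints "9.5 years", matching the quoted estimate
```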

[**] Nucleotides make up DNA strands and stretches of DNA strands make up genes (in case anyone else was confused about the order of the extremely small).

Networks in the brain

The Postdoc Perspective was a blog for the Physics and Astronomy Department at McMaster University in Canada that I kept while I was a postdoctoral researcher. Many of the topics were talks presented at the McMaster Origins Institute seminar series.  

So your friend Ben is married to Margaret who is friends with Rachel who shares an office with Rory who worked on a planetarium show with Rob who once received a detention at school for mooning Prince Harry [*]. 

According to the theory of the six degrees of separation, you are no more than half a dozen people away from receiving that front row invite to the Royal Wedding. The idea is that you are connected to every other person on Earth through an average of six people. It is a concept huge social network sites such as Facebook have been testing, but surprisingly it is an arrangement that is reflected in the structure of your brain.

All this I learned at the Royal Canadian Institute (RCI) 2011 Gala. The RCI was formed in 1849 by Sir Sandford Fleming. One of its original roles was to publish a scientific research journal in Canada, but now its emphasis is on a weekly public lecture series covering a wide range of scientific topics. In addition, the RCI helps with grants for students wishing to study science at university, and it hosts an annual Gala dinner. The Gala is an opportunity to have a discussion over a great meal with a scientist. One of the twenty-five tables at this year's event was hosted by my adviser, Professor Ralph Pudritz, but I shunned his table in favour of one led by a scientist working on the structure of the brain: a topic I knew nothing about. (When I told Ralph I'd rejected his table in favour of another, he assured me he 'expected nothing less'. I don't think he meant this to be a reflection of my attention in our research group meetings.)

Our table was led by Professor Mark Daley, who works on models of the brain at the University of Western Ontario. Mark explained that when he first arrived at his institute, he had known very few people.

"But, I did know Mike." He gestured towards one of the other diners seated with us. "And Mike knew everybody. So if I needed to contact somebody elsewhere in the University, I could go to Mike and the chances were he knew them. This meant although I only knew a few people, I was connected to almost everyone else via only one person."

This, Mark explained, was the premise behind the six degrees of separation. There are a few people who know a huge number of others and these individuals act like hubs. People preferentially attach themselves to hubs (since the hub is likely to meet them through their enormous list of contacts) resulting in them being connected to a great many others through a very small number of steps.
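This 'rich get richer' growth can be sketched with a simple preferential-attachment model (in the spirit of Barabási and Albert): each new node links to existing nodes with probability proportional to their degree, and a handful of heavily connected hubs emerge. The network size and parameters below are arbitrary choices for illustration.

```python
import random

def preferential_attachment(n, m=2, seed=42):
    """Grow a graph where each new node attaches to m existing nodes,
    chosen in proportion to their current degree."""
    rng = random.Random(seed)
    # Start with a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'targets' holds one entry per edge endpoint, so choosing uniformly
    # from it samples nodes in proportion to their degree.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

edges = preferential_attachment(200)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

avg = sum(degree.values()) / len(degree)
hub = max(degree.values())
print(f"average degree: {avg:.1f}, busiest hub: {hub}")
```

With this rule, the busiest node ends up far better connected than the average one, which is exactly the hub structure Mark described.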

What Mark said about Mike turned out to be entirely true. When chatting to him before dinner he had declared, "Oh, you're at McMaster! Do you know Hugh Couchman and James Wadsley?" I had to confess I did.

Mark continued by explaining that the brain organises its neurons along similar principles. There are hub areas in the brain which have a huge number of neurons connected to them, and these link up regions which have only sparse connections.

This structure can be explored with two main methods. The first is to take thin slices of the brain's grey matter, and the second (more desirable for live volunteers) is to track the flow of water via an MRI scan.

The consequences of this neural structure have important ramifications both for the effect of brain damage and for understanding mental illness. Damage to one of these hub regions, for instance, can mean a head injury is fatal, because the brain simply cannot rewire to compensate for such a large loss of connections. At other times, the damage can be severe but limited to one specific area. Mark cited an example of a woman with damage to one hub who was left unable to see.

In most people, the number of hub regions is small and they are found in quite specific areas. One exception is in people suffering from schizophrenia, where many smaller hub nodes are seen, in farther-flung areas of the brain than in a healthy person.

A question I asked was whether this was the underlying concept behind electric shock treatment for depression. Was the idea to try to forcibly rewire the neurons by disrupting their electrical signals, thereby forcing the brain to choose another (hopefully better) structure? Mark said that while this was the correct premise, such treatments were now strongly out of favour. He compared it with chemotherapy, saying you effectively killed a lot of neurons in the hope that you destroyed the bad pathways before you took out all the good. He did describe less invasive treatments, which included asking the patient to think of something pleasurable directly after thinking of a traumatic event. Over time, the association can force the brain to rewire and help with post-traumatic stress disorder.

So what is it that governs our thoughts? Is the brain, as Penrose claims in 'The Emperor's New Mind', a system governed by random probabilities via quantum mechanics? Or are we, as Mark assumes in his work, simple Turing machines whose thoughts and actions can be completely predicted from our experiences? Neither sounded particularly appealing.

"I want another option," I told Mark. He nodded and promised me one after he'd finished his dinner. The problem with being the guest speaker at a meal was that the actual food was hard to fit in amidst the barrage of questions.

The third option, he explained as the plates were cleared, was that our mind is like a Bayesian machine, which uses a mixture of probabilities and input from its surroundings to make decisions. So when faced with the delectable crumble for dessert, there was a very high chance that I would take the logical choice and eat it. Then there was the small probability I'd lob it across the table. I love feeling I have choice.

The crumble was rhubarb, in case anyone was wondering.

At the end of the dinner, each table was allowed to pose a question to another group to allow diners the chance to hear about the different areas being discussed that evening. The most important question was posed first and was directed at Professor Jeffrey Rosenthal from the department of statistics at the University of Toronto:

"What is the probability that Kate Middleton will wear a slinky wedding dress?"

"Slinky?" Jeffrey rose to answer the question. "This isn't as close to my area of expertise as you were led to believe!"

[*] Editor's note: any resemblance to real people, in the Physics and Astronomy Department or otherwise, is purely coincidental and Rob has never yet admitted to knowing Prince Harry. Or mooning.