How the Rosetta Mission Works

Comets light up our night sky and inspire wonder in children and adults alike. The glow we admire, however, does not come from anything burning in our atmosphere: as a comet approaches the Sun, solar heating causes its ices to sublimate, releasing the gas and dust that form its bright coma and tail. Some scientists postulate that organic molecules may have reached Earth in its formative period by hitching a ride on comets. Little is known about comet formation; however, scientists believe that most of today's comets formed around the time that the gas giants of our solar system, Jupiter and Saturn, were condensing from the disk of gas that surrounded our Sun. Since we know so little about comets and their age, a team at the European Space Agency (ESA), in collaboration with the National Aeronautics and Space Administration (NASA), designed a mission that would land a spacecraft on a comet to study its composition and test for organic matter. This mission was named "Rosetta," and it recently succeeded.

Goals and Areas of Investigation

Rosetta has just completed its ten-year journey to catch up with the comet 67P/Churyumov-Gerasimenko (C-G). It is the first spacecraft to orbit a comet and observe it from such a short range, and its Philae lander became the first probe to touch down on a comet's surface. Rosetta will closely study how the Sun's heat transforms the comet, changing the block of rock and ice. Aside from observing changes in the comet, another primary goal of the Rosetta mission is to document the comet's typical makeup and investigate the possibility of organic compounds beneath its surface.

One of the main objectives of the Rosetta mission is to develop a better understanding of the "nucleus" of a comet, its dense inner core. To realize this aim, Rosetta carries radar and microwave instruments that will attempt to "see" deep into the comet without drilling through it. Additional thermal and spectroscopic imaging will investigate the levels of noble gases inside the comet.

The possibility of finding organic molecules on C-G excites proponents of the theory that life on Earth originated from building blocks delivered from outer space by comets. Rosetta's Philae lander will test for the potential presence of nucleotides, similar to those that make up DNA and RNA, and of amino acids, the molecules that make up proteins. Additionally, the lander will carry out an experiment to determine the "handedness" of molecules, identifying whether left-handed or right-handed isomers are more common on the comet (left-handed and right-handed simply refer to the way atoms arrange themselves around an asymmetric carbon). Life on Earth, due to what appears to be a strange fluke, uses only left-handed amino acids. Many theories circulate about why this is so; the best supported holds that the bias might be the result of light shining on these molecules in space. Light waves can be circularly polarized, twisting like a corkscrew in either of two directions. Light circularly polarized one way can preferentially destroy molecules of one handedness, while light polarized the other way can suppress the other handedness.

If the comet also contains a majority of left-handed organic molecules, the discovery would lend credence to those scientists who believe that life originated from organic matter brought to Earth by comets.

Behind the Name

The Rosetta mission is named after the famous Rosetta Stone, which historians and archaeologists used to decipher Egyptian hieroglyphics. Scientists hope that Rosetta, like its namesake, will illuminate the language of the universe and improve our understanding of the origins of Earth as well as those of comets.

Early Results

Since reaching its target comet, Rosetta has begun taking readings of the levels of various compounds on and inside it. Most notably, Rosetta started by measuring H2O on C-G. Why H2O? The answer lies in Earth's oceans, whose origin has yet to be determined. The sheer amount of water on Earth suggests that the planet was bombarded by comets and asteroids that delivered water when they collided with its surface and early atmosphere. To determine where that water came from, Rosetta analyzes the proportion of deuterium, a heavy hydrogen isotope, relative to normal hydrogen (the D/H ratio). Preliminary results show that C-G's D/H ratio is two to three times greater than that of Earth's oceans.

“This surprising finding could indicate a diverse origin for the Jupiter-family comets – perhaps they formed over a wider range of distances in the young Solar System than we previously thought,” says Dr. Kathrin Altwegg, principal investigator for Rosetta's ROSINA instrument. (C-G belongs to the Jupiter family of comets.) Moreover, previously studied comets from this family have shown varying D/H levels, with only one ever matching the value of Earth's oceans. These findings suggest that asteroids, not comets, were the primary contributors to our planet's massive oceans.
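To make the comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The numerical values are approximate figures assumed for illustration, not authoritative mission data.

```python
# Rough sketch: comparing a comet's deuterium-to-hydrogen (D/H) ratio
# with that of Earth's oceans. The numbers below are approximate,
# illustrative values, not official Rosetta results.

EARTH_OCEAN_DH = 1.56e-4   # approximate D/H of ocean water (VSMOW standard)
COMET_DH = 5.3e-4          # approximate value reported for 67P (assumed here)

ratio = COMET_DH / EARTH_OCEAN_DH
print(f"Comet D/H is about {ratio:.1f}x the terrestrial value")
# A ratio well above 1 argues against comets like 67P being the
# dominant source of Earth's ocean water.
```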

Conclusion

As Rosetta follows C-G into the inner solar system and observes how the comet changes as it approaches the Sun, it will continue to deliver valuable data back to mission scientists in Germany. Even though its journey is far from over, it has already had a massive impact. The mission was one of the most ambitious the ESA has ever undertaken, requiring years to plan and build. Already, that effort has paid off, yielding results that help us better understand the origins of Earth's water and of the organic building blocks of life. Furthermore, the landing of Philae on C-G points toward the feasibility of comet mining by showing that a probe can be set down on a fast-moving comet.

How Place Cells Will Change Neuroscience As We Know It

Neuroscience is one of the fastest-developing fields in medicine. It seems as though every day there is a new discovery about how or why our brains work the way they do. On Monday, October 6, the Nobel Prize in Physiology or Medicine was announced. Researchers John O´Keefe, May-Britt Moser and Edvard I. Moser were awarded the prize jointly for their combined efforts in identifying how the brain understands and processes information about location. Their discoveries open many doors for future Alzheimer's research and for researchers eager to build new drugs and therapies on the brain's spatial mapping system. However, to understand how these fields will utilize the research, we first must understand the original research.

In 1971, Dr. John O’Keefe published a paper titled “The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat.” The title may seem nondescript, but the study used novel methods to record from the brains of moving animals. Rats were anesthetized and fitted with a microdrive assembly on the head; electrodes passing through the skull and into the brain allowed the assembly to measure electrical impulses as the rat later moved around freely. O’Keefe followed this work with another study in 1976. He had identified cells in the rat’s brain that he called “place units,” defined as units for which the rat’s position on the maze was a necessary condition for maximal firing. Some of these place units fired maximally when the animal sniffed in a certain area, either because it found something new there or failed to find something that was usually there. A second class, “displace units,” increased their firing rates during behaviors associated with theta activity, strong oscillations in the hippocampal slow waves; in general these were behaviors that changed the rat’s position relative to the environment. O’Keefe interpreted the results as strong support for the cognitive map theory of hippocampal function, which holds that the brain builds its own internal map of the environment and uses it to locate the body.
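To make the notion of a place field concrete, the following minimal sketch (in Python, and not part of O'Keefe's original analysis) uses a common idealization in which a cell's firing rate falls off as a Gaussian function of distance from the cell's preferred location; the parameter values are arbitrary.

```python
import numpy as np

def place_cell_rate(position, field_center, max_rate=20.0, field_width=0.15):
    """Idealized place-cell firing rate (spikes/s) as a Gaussian bump
    around the cell's preferred location. Units are arbitrary (e.g. metres)."""
    position = np.asarray(position, dtype=float)
    field_center = np.asarray(field_center, dtype=float)
    d2 = np.sum((position - field_center) ** 2)
    return max_rate * np.exp(-d2 / (2.0 * field_width ** 2))

# Firing is near-maximal inside the field and falls off quickly outside it.
print(place_cell_rate([0.50, 0.50], [0.50, 0.50]))  # at the field centre
print(place_cell_rate([0.90, 0.50], [0.50, 0.50]))  # well outside the field
```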

After these early discoveries of “place units” in a mammalian brain, May-Britt Moser and Edvard I. Moser followed up by researching how information is represented at the interface between the hippocampus and the neocortex, part of the cerebral cortex. This area is known as the entorhinal cortex, and it is densely packed with neurons. After several trials, they determined that near the postrhinal-entorhinal border, entorhinal neurons had discrete place fields and predicted the rat's location as accurately as place cells in the hippocampus do. This discovery showed that the mammalian brain creates a directionally oriented, topographically organized neural map of the spatial environment in the dorsocaudal medial entorhinal cortex (dMEC); in other words, the brain tracks direction and position simultaneously in the same cells. This neural map uses a type of cell the research duo dubbed “grid cells.” These cells fire whenever the animal's position in space coincides with the vertices of a regular grid of triangles spanning the surface of the environment, so each cell's firing fields form a hexagonal pattern. This research, however, raised the question of whether and how information about location, direction and distance is integrated into this neural map. The researchers next focused their efforts on the medial entorhinal cortex (MEC), recording from each principal cell layer of the MEC in rats that explored two-dimensional environments, typically mazes or open floors. They found grid cells that were also modulated by head direction and running speed, meaning that direction and speed feed directly into the brain's neural map. The combination of positional, directional, and translational information in a single MEC cell type allows grid coordinates to be updated during navigation, much like a GPS.
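A widely used idealization of a grid cell's firing map, sketched below purely for illustration (it is not the Mosers' analysis code), sums three cosine gratings whose orientations differ by 60 degrees; the peaks of the resulting function sit on a triangular lattice, producing the hexagonal pattern described above. The spacing and rate parameters are arbitrary.

```python
import numpy as np

def grid_cell_rate(x, y, spacing=0.5, phase=(0.0, 0.0), max_rate=15.0):
    """Idealized grid-cell firing rate: three plane waves 60 degrees apart
    produce a triangular (hexagonal) lattice of firing fields."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wave number for the chosen field spacing
    angles = np.deg2rad([0, 60, 120])        # three grating orientations
    total = 0.0
    for theta in angles:
        total += np.cos(k * ((x - phase[0]) * np.cos(theta) + (y - phase[1]) * np.sin(theta)))
    # Rescale the sum (range -1.5 .. 3) to a non-negative firing rate.
    return max_rate * (total + 1.5) / 4.5

# Peaks of this function sit on the vertices of a triangular grid, so a rat
# crossing the environment passes through a hexagonal array of firing fields.
print(round(grid_cell_rate(0.0, 0.0), 2))   # a grid vertex: maximal firing
print(round(grid_cell_rate(0.25, 0.0), 2))  # between vertices: reduced firing
```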

The discovery of neural maps that update in real time in mammalian brains opens up many possibilities for the future. One of the most promising lines of research concerns Alzheimer's disease, a neurodegenerative disease that causes progressive memory loss and, eventually, death. Problems with spatial memory and navigation are known to be early indicators of the disease. Researchers comparing patients with Alzheimer's to those without any neurological impairment found that the Alzheimer's patients were significantly more likely to get lost. Such findings suggest that misfiring place cells could serve as early and reliable indicators of Alzheimer's disease, and that place cells may be useful not only for early diagnosis but also as a target for next-generation drugs and therapies.

Another exciting possibility for place-cell research is lessening the effects of Post-Traumatic Stress Disorder (PTSD). A group of researchers recently succeeded in switching the memories rodents associated with certain locations from positive to negative. Although these experiments were done in animals, the results point toward possible future treatments for people who suffer from place-related PTSD.

Additionally, place cells may hold importance for the study of aging. The function of place cells has been observed to change with age: older rats are less likely to remember paths they have learned recently, and less likely to learn them in the first place. Younger rats also show a "plasticity" in their place fields that aged rats do not. When running along a path, younger rats strengthen the links between place cells, allowing faster firing when the route is traversed again. Work has been done to try to restore some of this place-field plasticity in aged rats, including drugs that target neurogenesis, the creation of new neurons. These drugs have had mixed results, however, sometimes becoming detrimental when too many new neurons are produced.

Scientists across the world are slowly solving the mystery of place cells. The recent Nobel Prize will add publicity to an already exciting field of research. From Alzheimer's drugs to PTSD treatment to reversing some effects of aging, place cells hold promise as a way to make some of the most debilitating and seemingly inevitable illnesses a thing of the past.

UK General Elections: Why The Models Were Wrong

The United Kingdom went to the polls on May 7th in a general election that would decide who occupied 10 Downing Street: the incumbent Conservative, David Cameron, or Labour's Ed Miliband. There was much excitement in the news media before the vote, as all of the major polls had the two parties and their prospective coalitions within one percentage point of each other, right up to election night. However, all of that polling, and almost every statistical forecast built on it, called the election wrong. The final results had the Conservatives winning not the roughly 270 seats predicted for them but 330, securing a narrow majority without needing a coalition with the Liberal Democrats. As for the Liberal Democrats, they are now similar to the Northern White Rhinoceros: not technically extinct, but in need of many decades of breeding in captivity before they are seen in the wild again. Their performance was so bad that the party leader, Nick Clegg, resigned; it was worse than even the most pessimistic pollster's prediction. Labour's Ed Miliband and UKIP's divisive leader Nigel Farage also stepped down, all three leaders confronted with disappointing vote totals. Why were so many polls wrong? The answer lies in a failure to adjust the raw polling properly and a failure to give due weight to the likelihood of large error.

The most accurate forecast for the UK general election was by Steve Fisher at ElectionsEtc. FiveThirtyEight (538), one of the most prominent forecasters of the election, adjusted polling numbers to account for polls overstating changes from the last election (it believed the polls would overstate Labour's recovery from its previous defeat). With this adjustment, 538 predicted the Tories (Conservatives) would win by 1.6 percentage points. What Fisher did differently from 538 and other major forecasters was to adjust not only for polls overstating change but also to make additional, party-specific adjustments. These were based on historical data on last-minute swings, such as the infamous 1992 election, in which pollsters reported a dead heat even though the Tories ended up a whopping 7.5 percentage points ahead. This party-specific adjustment shifted Fisher's 95% confidence interval above those of other forecasters.
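As a toy illustration of this kind of adjustment (a sketch only, not 538's or Fisher's actual model), the snippet below shrinks the polled swing from the previous election back toward that election's result; the vote shares used are approximate and purely illustrative.

```python
# Toy sketch of a "swing shrinkage" adjustment: assume polls overstate the
# change from the last election, so pull the polled swing partway back
# toward the previous result. Numbers and shrink factor are illustrative.

def adjust_share(polled_share, previous_share, shrink=0.3):
    """Shrink the polled change from the previous election by `shrink`."""
    swing = polled_share - previous_share
    return previous_share + (1 - shrink) * swing

# Illustrative vote shares, in percent (previous election values approximate).
previous = {"Conservative": 36.1, "Labour": 29.0}
polled   = {"Conservative": 34.0, "Labour": 34.0}   # a polled dead heat

adjusted = {party: adjust_share(polled[party], previous[party]) for party in polled}
print(adjusted)   # the adjustment nudges the Conservatives back ahead of Labour
```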

On the topic of error, 538 was quick to zero in on its flaws, as was Fisher at ElectionsEtc. National polling error was immediately identified as a prime suspect, since the pre-election polls did not reflect the final exit poll, which showed a swing toward the Conservatives. The massive forecast miss of the 1992 general election was factored into adjustments for polling error, but apparently not enough to introduce sufficient variance into the forecasts. Aside from national polling, constituency-level polling (a constituency is akin to a U.S. congressional district) also seems not to have been handled properly. This year's constituency-level polling was funded by Lord Ashcroft, a Conservative billionaire who paid for polls in small constituencies that would otherwise never come under statistical scrutiny. Ashcroft himself has acknowledged that publicized constituency-level polls might induce voters in Britain's multiparty system to vote tactically rather than true to their beliefs, for example voting for the Conservative candidate upon hearing that Labour might win their constituency, instead of for their original choice, UKIP.

Ashcroft asked two voting-intention questions in all his constituency polls. The first was the “generic” question that is widely used: “If there was a general election tomorrow, which party would you vote for?” This was followed by a more “specific” question: “Thinking specifically about your own parliamentary constituency at the next general election and the candidates who are likely to stand for election to Westminster there, which party’s candidate do you think you will vote for in your own constituency?” The Liberal Democrats did far better on the latter question, particularly where they were incumbents. Many forecasters treated the latter question as the better indicator of parliamentary success, to their detriment, overestimating the Liberal Democrats' vote share.

The final problem in every forecaster's model was an inadequate way of accounting for the possibility of substantial error. Simply put, no model captured the fickleness of the general public and the tendency toward last-minute swings. No one allowed for the possibility that the error in the national polling data would approach the level recorded in 1992. As a result, cumulative seat totals were calculated on the incorrect assumption that, nationwide, Labour and the Conservatives had roughly equal levels of support, when in fact the Conservatives held a clear lead.
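The simplified sketch below illustrates the point: a model that pairs a polled dead heat with a modest error term almost never produces a 1992-sized miss, whereas one that allows for much larger error does. The error magnitudes are assumptions chosen for illustration, not values taken from any published model.

```python
import random

# Toy Monte Carlo: simulate the national Conservative lead over Labour under
# two assumptions about polling error. All values are illustrative only.

def simulate_leads(polled_lead=0.0, error_sd=2.0, n=100_000, seed=1):
    random.seed(seed)
    return [polled_lead + random.gauss(0, error_sd) for _ in range(n)]

def share_at_least(leads, threshold):
    return sum(lead >= threshold for lead in leads) / len(leads)

narrow = simulate_leads(error_sd=2.0)   # modest error, as most 2015 models assumed
wide   = simulate_leads(error_sd=4.0)   # error large enough to admit a 1992-style miss

# Probability of a Conservative lead of 6+ points (roughly the actual 2015 margin).
print(f"narrow error model: {share_at_least(narrow, 6.0):.3%}")
print(f"wide error model:   {share_at_least(wide, 6.0):.3%}")
```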

Interestingly, Labour did in fact enjoy roughly a one-point swing in its favor over the Conservatives in key constituencies. Why, then, did this not convert into MPs as the models predicted? One key reason is that Labour was fighting first-term incumbent Conservative MPs who had won their seats from Labour in 2010. Statisticians know this phenomenon as the "sophomore surge": new incumbents build up a substantial personal vote and thus buck attempts to predict voting behavior from national swing alone. The Conservatives benefited from this effect to maintain a grip, albeit a tenuous one, on key constituencies, particularly in Wales.

Overall, the 2015 general election was a statistical disaster. Most forecasters called the headline outcome correctly, predicting that David Cameron would keep his residence at 10 Downing Street; however, they over-predicted both Liberal Democrat and Labour performance. That happened because they failed to account for historical error in national polling data, weighted certain polling questions over others, or ignored established statistical phenomena such as the sophomore surge. We can only look forward to a more exciting 2016 election as Hillary Clinton faces off against the Republican nominee. With this experience under their belt, we can expect better and far more accurate predictions from the forecasting community.