# Particle Physics Planet

## November 24, 2014

### ZapperZ - Physics and Physicists

Fermilab Physics Slam 2014
A very entertaining video to watch if you were not at this year's Physics Slam.

Zz.

## November 23, 2014

### Christian P. Robert - xi'an's og

an ABC experiment

In a cross-validated forum exchange, I used the code below to illustrate the working of an ABC algorithm:

```r
#normal data with 100 observations
n=100
x=rnorm(n)
#observed summaries
sumx=c(median(x),mad(x))

#normal x gamma prior
priori=function(N){
  return(cbind(rnorm(N,sd=10),
    1/sqrt(rgamma(N,shape=2,scale=5))))
}

ABC=function(N,alpha=.05){

  prior=priori(N) #reference table

  #pseudo-data summaries
  summ=matrix(0,N,2)
  for (i in 1:N){
    xi=rnorm(n)*prior[i,2]+prior[i,1]
    summ[i,]=c(median(xi),mad(xi))
  }

  #normalisation factor for the distance
  mads=c(mad(summ[,1]),mad(summ[,2]))
  #distance
  dist=abs(sumx[1]-summ[,1])/mads[1]+
    abs(sumx[2]-summ[,2])/mads[2]
  #selection
  posterior=prior[dist<quantile(dist,alpha),]
  return(posterior)
}
```


Hence I used the median and the mad as my summary statistics. And the outcome is rather surprising, for two reasons: the first one is that the posterior on the mean μ is much wider than when using the mean and the variance as summary statistics. This is not completely surprising in that the latter are sufficient, while the former are not. Still, the (-10,10) range on the mean is way larger… The second reason for surprise is that the true posterior distribution cannot be derived since the joint density of med and mad is unavailable.

After thinking about this for a while, I went back to my workbench to check the difference with using mean and variance. To my greater surprise, I found hardly any difference! Using the almost exact ABC with 10⁶ simulations and a 5% subsampling rate returns exactly the same outcome. (The first row above is for the sufficient statistics (mean,standard deviation) while the second row is for the (median,mad) pair.) Playing with the distance does not help. The genuine posterior output is quite different, as exposed on the last row of the above, using a basic Gibbs sampler since the posterior is not truly conjugate.
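For readers who want to reproduce the comparison quickly, here is a minimal Python sketch of the same rejection-ABC experiment (a hypothetical re-implementation; the function names and the mad-normalised distance are my own choices, not from the post):

```python
# Minimal ABC rejection sampler comparing (median, mad) vs (mean, sd) summaries.
import numpy as np

rng = np.random.default_rng(0)

n = 100
x = rng.normal(size=n)  # observed normal data

def prior_draws(N):
    """mu ~ N(0, 10^2), sigma = 1/sqrt(Gamma(2, scale=5)): the normal x gamma prior."""
    mu = rng.normal(scale=10, size=N)
    sigma = 1 / np.sqrt(rng.gamma(shape=2, scale=5, size=N))
    return np.column_stack([mu, sigma])

def abc(summary, N=10_000, alpha=0.05):
    """Keep the alpha fraction of prior draws whose pseudo-data summaries
    fall closest (in a mad-normalised L1 distance) to the observed summaries."""
    obs = summary(x)
    prior = prior_draws(N)
    pseudo = rng.normal(size=(N, n)) * prior[:, 1:2] + prior[:, 0:1]
    summ = np.apply_along_axis(summary, 1, pseudo)
    scale = np.median(np.abs(summ - np.median(summ, axis=0)), axis=0)
    dist = np.abs(summ - obs).dot(1 / scale)
    return prior[dist < np.quantile(dist, alpha)]

med_mad = lambda s: np.array([np.median(s), np.median(np.abs(s - np.median(s)))])
mean_sd = lambda s: np.array([np.mean(s), np.std(s)])

post1 = abc(med_mad)
post2 = abc(mean_sd)
print(post1[:, 0].std(), post2[:, 0].std())  # compare the posterior spread of mu
```

With a fixed seed, the spread of the accepted values of μ can then be compared directly between the two summary pairs.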

Filed under: Books, pictures, R, Statistics, University life Tagged: ABC, Gibbs sampling, MCMC, mean, median, median absolute deviation, Monte Carlo Statistical Methods, normal model, summary statistics

### Lubos Motl - string vacua and pheno

Anton Kapustin: Quantum geometry, a reunion of math and physics
I think that this 79-page presentation by Caltech's Anton Kapustin is both insightful and entertaining.

If you are looking for the "previous slide" button, you may achieve this action simply by clicking 78 times. Click once for the "next slide".

If you have any problems with the embedded Flash version of the talk [click for full screen] above, download Anton's PowerPoint file which you may display using a Microsoft Office viewer or an OpenOffice or a LibreOffice or a Chrome extension or Google Docs or in many other ways.

Spoilers are below.

Anton describes the relationship between mathematics and physics, mathematicians and physicists, and so on. He focuses on the noncommutative character of algebras of observables in quantum mechanics. No mathematician really believed Feynman's path integral, and no physicist was interested in the mathematics of people like Grothendieck.

However, some smart opportunists in the middle – for example, Maxim Kontsevich – were able to derive interesting results (from mathematicians' viewpoint) using the path integral methods applied to the Poisson manifolds. And it wasn't just some lame undergraduate Feynman path integral that was needed. It was the stringy path integral that may be formulated using an associative product.

Hat tip: John Preskill, Twitter

### ZapperZ - Physics and Physicists

Research Gate
Anyone else here on Research Gate?

First of all, let me declare that I'm not on Facebook, don't have a Twitter account, etc. This blog is my only form of "social media" involvement in physics, if you discount online physics forums. So I'm not that into these social media activities. Still, I've been on Research Gate for several years, after being invited to it by a colleague.

If you're not familiar with it, Research Gate is a social media platform for ... you guessed it ... researchers. You reveal as much about yourself as you wish in your profile, and you can list all your papers and upload them. The software also "trawls" the journals and the web to find publications that you may have authored and periodically asks you to verify that they are yours. Most of mine that are currently listed were found by the software, so it is pretty good.

Of course, the other aspect of such a social media platform is that you can "follow" others. The software, like any good social media AI, will suggest people that you might know, such as your coauthors, people from the same institution as yours, or anyone else whose name appears alongside yours in the same document or context. It also keeps tabs on what the people who follow you, or whom you follow, are doing, such as new publications, job changes, etc. It also tells you how many people viewed your profile, how many read your publications, and how many times your publications have been downloaded from the Research Gate site.

Another part of Research Gate is that you can submit a question in a particular field, and if that is a field that you've designated as your area of expertise, it will alert you to it so that you have the option of responding. I think this is the most useful feature of this community because this is what makes it "science specific", rather than just any generic social media program.

I am still unsure of the overall usefulness and value of this thing. So far it has been "nice", but I have yet to see it as a necessity. Although, I must say, I'm pleasantly surprised to see some prominent names in my field of study on it as well, which is why I continue to be on it.

So, if you are also on it, what do you think of it? Do you think this will eventually evolve into something that almost all researchers will someday need?

Zz.

### arXiv blog

Linguistic Mapping Reveals How Word Meanings Sometimes Change Overnight

Data mining the way we use words is revealing the linguistic earthquakes that constantly change our language.

## November 22, 2014

### Christian P. Robert - xi'an's og

Challis Lectures

I had a great time during this short visit to the Department of Statistics, University of Florida, Gainesville. First, it was a major honour to be the 2014 recipient of the George H. Challis Award, and I considerably enjoyed delivering my lectures on mixtures and on ABC with random forests, and chatting with members of the audience about the contents afterwards. Here is the physical award I brought back to my office:

More as a piece of trivia, here is the amount of information about the George H. Challis Award I found on the UF website:

This fund was established in 2000 by Jack M. and Linda Challis Gill and the Gill Foundation of Texas, in memory of Linda’s father, to support faculty and student conference travel awards and the George Challis Biostatistics Lecture Series. George H. Challis was born on December 8, 1911 and was raised in Italy and Indiana. He was the first cousin of Indiana composer Cole Porter. George earned a degree in 1933 from the School of Business at Indiana University in Bloomington. George passed away on May 6, 2000. His wife, Madeline, passed away on December 14, 2009.

Cole Porter, indeed!

On top of this lecturing activity, I had a full academic agenda, discussing with most faculty members and PhD students of the Department, on our respective research themes over the two days I was there and it felt like there was not enough time! And then, during the few remaining hours where I did not try to stay on French time (!), I had a great time with my friends Jim and Maria in Gainesville, tasting a fantastic local IPA beer from Cigar City Brewery and several fantastic (non-local) wines… Adding to that a pile of new books, a smooth trip both ways, and a chance encounter with Alicia in Atlanta airport, it was a brilliant extended weekend!

Filed under: Books, pictures, Statistics, Travel, University life, Wines Tagged: ABC, Cigar City Brewery, Cole Porter, finite mixtures, Florida, Gainesville, George H. Challis Award, random forests

### Georg von Hippel - Life on the lattice

Scientific Program "Fundamental Parameters of the Standard Model from Lattice QCD"
Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

We are therefore happy to announce the scientific program "Fundamental Parameters of the Standard Model from Lattice QCD" to be held from August 31 to September 11, 2015 at the Mainz Institute for Theoretical Physics (MITP) at Johannes Gutenberg University Mainz, Germany.

This scientific programme is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.

We would like to invite you to consider attending this programme and to apply through our website. After the deadline (March 31, 2015), an admissions committee will evaluate all the applications.

Among other benefits, MITP offers all its participants office space and access to computing facilities during their stay. In addition, MITP will cover local housing expenses for accepted participants; the MITP team will arrange and book the accommodation individually for them.

Please do not hesitate to contact us at coordinator@mitp.uni-mainz.de if you have any questions.

We hope you will be able to join us in Mainz in 2015!

With best regards,

the organizers:
Gilberto Colangelo, Georg von Hippel, Heiko Lacker, Hartmut Wittig

### Clifford V. Johnson - Asymptotia

Luncheon Reflections
You know, I never got around to mentioning here that I am now Director (co-directing with Louise Steinman who runs the ALOUD series) of the Los Angeles Institute for the Humanities (LAIH), a wonderful organisation that I have mentioned here before. It is full of really fascinating people from a range of disciplines: writers, artists, historians, architects, musicians, critics, filmmakers, poets, curators, museum directors, journalists, playwrights, scientists, actors, and much more. These LAIH Fellows are drawn from all over the city, and equally from academic and non-academic sources. The thing is, you'll find us throughout the city involved in all sorts of aspects of its cultural and intellectual life, and LAIH is the one organisation in the city that tries to fully bring together this diverse range of individuals (all high-achievers in their respective fields) into a coherent force. One of the main things we do is simply sit together regularly and talk about whatever's on our minds, stimulating and shaping ideas, getting updates on works in progress, making suggestions, connections, and so forth. Finding time in one's schedule to just sit together and exchange ideas with no particular agenda is an important thing to do and we take it very seriously. We do this at [...] Click to continue reading this post

### Emily Lakdawalla - The Planetary Society Blog

Quick update about our website
The last two weeks have been extraordinary for The Planetary Society. As amazing as this increased traffic is, it has brought to light some issues with our website including latency and missing content that we are still working on fixing.

## November 21, 2014

### Christian P. Robert - xi'an's og

a pile of new books

I took the opportunity of my weekend trip to Gainesville to order a pile of books on amazon, thanks to my amazon associate account (and hence thanks to all Og’s readers doubling as amazon customers!). The picture above is missing two Rivers of London volumes by Ben Aaronovitch that I have already read and left at the office; they will be reviewed in incoming posts. Among those,

(Obviously, all “locals” sharing my taste in books are welcome to borrow those in a very near future!)

Filed under: Books, Travel, University life Tagged: amazon associates, book reviews, Booker Prize, Florida, Gainesville, Hugo Awards, John Scalzi, Robin Hobb, The Name of the Wind, Walter Miller

### Emily Lakdawalla - The Planetary Society Blog

Don't Miss This Great New Video About Europa
JPL released a slick new video highlighting the significance of Europa, the moon of Jupiter with more liquid water than the Earth.

### The Great Beyond - Nature blog

Gates Foundation announces world’s strongest policy on open access research

The Bill & Melinda Gates Foundation has announced the world’s strongest policy in support of open research and open data. If strictly enforced, it would prevent Gates-funded researchers from publishing in well-known journals such as Nature and Science.

On 20 November, the charity, based in Seattle, Washington, announced that from January 2015, researchers it funds must make their resulting papers and underlying data-sets open immediately upon publication — and must make that research available for commercial re-use. “We believe that published research resulting from our funding should be promptly and broadly disseminated,” the foundation states. It says it will pay the necessary publication fees (which often amount to thousands of dollars per article).

The Foundation is allowing two years’ grace: until 2017, researchers may apply a 12-month delay before their articles and data are made free. At first glance, this suggests that authors may still — for now — publish in journals that do not offer immediate open-access (OA) publishing, such as Science and Nature. These journals permit researchers to archive their peer-reviewed manuscripts elsewhere online, usually after a delay of 6-12 months following publication.

Allowing a year’s delay makes the charity’s open-access policy similar to those of other medical funders, such as the Wellcome Trust or the US National Institutes of Health (NIH). But the charity’s intention to close off this option by 2017 might put pressure on paywalled journals to create an open-access publishing route.

However, the Gates Foundation’s policy has a second, more onerous twist which appears to put it directly in conflict with many non-OA journals now, rather than in 2017. Once made open, papers must be published under a license that legally allows unrestricted re-use — including for commercial purposes. This might include ‘mining’ the text with computer software to draw conclusions and mix it with other work, distributing translations of the text, or selling republished versions.  In the parlance of Creative Commons, a non-profit organization based in Mountain View, California, this is the CC-BY licence (where BY indicates that credit must be given to the author of the original work).

This demand goes further than any other funding agency has dared. The UK’s Wellcome Trust, for example, demands a CC-BY license when it is paying for a paper’s publication — but does not require it for the archived version of a manuscript published in a paywalled journal. Indeed, many researchers actively dislike the thought of allowing such liberal re-use of their work, surveys have suggested. But Gates Foundation spokeswoman Amy Enright says that “author-archived articles (even those made available after a 12-month delay) will need to be available after the 12 month period on terms and conditions equivalent to those in a CC-BY license.”

Most non-OA publishers do not permit authors to apply a CC-BY license to their archived, open manuscripts. Nature, for example, states that openly archived manuscripts may not be re-used for commercial purposes. So do the American Association for the Advancement of Science, Elsevier, Wiley, and many other publishers (in relation to their non-OA journals).

“It’s a major change. It would be major if publishers that didn’t previously use CC-BY start to use it, even for the subset of authors funded by the Gates Foundation. It would be major if publishers that didn’t previously allow immediate or unembargoed OA start to allow it, again even for that subset of authors. And of course it would be major if some publishers refused to publish Gates-funded authors,” says Peter Suber, director of the Office for Scholarly Communication at Harvard University in Cambridge, Massachusetts.

“You could say that Gates-funded authors can’t publish in journals that refuse to use CC-BY. Or you could say that those journals can’t publish Gates-funded authors. It may look like a stand-off but I think it’s the start of a negotiation,” Suber adds — noting that when the NIH’s policy was announced in 2008, many publishers did not want to accommodate all its terms, but now all do.

That said, the Gates Foundation does not leave as large a footprint in the research literature as the NIH. It only funded 2,802 research articles in 2012 and 2013, Enright notes; 30% of these were published in open access journals. (Much of the charity’s funding goes to development projects, rather than to research which will be published in journals).

The Gates Foundation also is not clear on how it will enforce its mandate; many researchers are still resistant to the idea of open data, for instance. (And most open-access mandates are not in fact strictly enforced; only recently have the NIH and the Wellcome Trust begun to crack down.) But Enright says the charity will be tracking what happens and will write to non-compliant researchers if need be. “We believe that the foundation’s Open Access Policy is in alignment with current practice and trends in research funded in the public interest. Hence, we expect that the policy will be readily understood, adopted and complied with by the researchers we fund,” she says.

### Sean Carroll - Preposterous Universe

Guest Post by Alessandra Buonanno: Nobel Laureates Call for Release of Iranian Student Omid Kokabee

Usually I start guest posts by remarking on what a pleasure it is to host an article on the topic being discussed. Unfortunately this is a sadder occasion: protesting the unfair detention of Omid Kokabee, a physics graduate student at the University of Texas, who is being imprisoned by the government of Iran. Alessandra Buonanno, who wrote the post, is a distinguished gravitational theorist at the Max Planck Institute for Gravitational Physics and the University of Maryland, as well as a member of the Committee on International Freedom of Scientists of the American Physical Society. This case should be important to everyone, but it’s especially important for physicists to work to protect the rights of students who travel from abroad to study our subject.

Omid Kokabee was arrested at the airport of Teheran in January 2011, just before taking a flight back to the University of Texas at Austin, after spending the winter break with his family. He was accused of communicating with a hostile government and after a trial, in which he was denied contact with a lawyer, he was sentenced to 10 years in Teheran’s Evin prison.

According to a letter written by Omid Kokabee, he was asked to work on classified research, and his arrest and detention was a consequence of his refusal. Since his detention, Kokabee has continued to assert his innocence, claiming that several human rights violations affected his interrogation and trial.

Since 2011, we, the Committee on International Freedom of Scientists (CIFS) of the American Physical Society, have protested the imprisonment of Omid Kokabee. Although this case has received continuous support from several scientific and international human rights organizations, the government of Iran has refused to release Kokabee.

Omid Kokabee has received two prestigious awards:

• The American Physical Society awarded him the Andrei Sakharov Prize “For his courage in refusing to use his physics knowledge to work on projects that he deemed harmful to humanity, in the face of extreme physical and psychological pressure.”
• The American Association for the Advancement of Science awarded Kokabee the Scientific Freedom and Responsibility Prize.

Amnesty International (AI) considers Kokabee a prisoner of conscience and has requested his immediate release.

Recently, the Committee of Concerned Scientists (CCS), AI and CIFS, have prepared a letter addressed to the Iranian Supreme Leader Ali Khamenei asking that Omid Kokabee be released immediately. The letter was signed by 31 Nobel-prize laureates. (An additional 13 Nobel Laureates have signed this letter since the Nature blog post. See also this update from APS.)

Unfortunately, early last month, Kokabee’s health deteriorated and he was denied proper medical care. In response, the President of APS, Malcolm Beasley, has written a letter to Iranian President Rouhani calling for a medical furlough for Omid Kokabee so that he can receive proper medical treatment. AI has also taken further steps and requested urgent medical care for Kokabee.

Very recently, Iran’s supreme court nullified the original conviction of Omid Kokabee and agreed to reconsider the case. Although this is positive news, it is not clear when the new trial will start. Considering Kokabee’s health, it is very important that he be granted a medical furlough as soon as possible.

More public engagement and awareness are needed to resolve this unacceptable violation of human rights and of the freedom of scientific research. You can help by tweeting/blogging about it and responding to this Urgent Action that AI has issued. Please note that the date on the Urgent Action is there to create an avalanche effect; it is not a deadline, nor is it the end of the action.

Alessandra Buonanno for the American Physical Society’s Committee on International Freedom of Scientists (CIFS).

### Lubos Motl - string vacua and pheno

An evaporating landscape? Possible issues with the KKLT scenario
By Dr Thomas Van Riet, K.U. Leuven, Belgium

What is this blog post about?

In 2003, in a seminal paper by Kachru, Kallosh, Linde and Trivedi (KKLT) (2000+ cites!), a scenario for constructing a landscape of de Sitter vacua in string theory with small cosmological constant was found. This paper was (and is) conceived as the first evidence that the string theory landscape contains a tremendous amount of de Sitter vacua (not just anti-de Sitter vacua) which could account for the observed dark energy.

The importance of this discovery should not be underestimated since it profoundly changed the way we think about how a fundamental, UV-complete theory of all interactions addresses apparent fine-tuning and naturalness problems we are faced with in high energy physics and cosmology. It changed the way we think string theory makes predictions about the low-energy world that we observe.

It is fair to say that, since the KKLT paper, the multiverse scenario and all the emotions related to it have been discussed at full intensity, even taken up by the media, and it has sparked some (unsuccessful) attempts to classify string theory as non-scientific.

In this post I briefly outline the KKLT scenario and highlight certain aspects that are not often described in reviews but are crucial to the construction. Secondly, I describe research done since 2009 that sheds doubt on the consistency of the KKLT scenario. I have tried to be as unbiased as possible, but near the end of this post I have taken the liberty to give a personal view on the matter.

The KKLT construction

The main problem of string phenomenology at the time of the KKLT paper was the so-called moduli-stabilisation problem. The string theory vacua that were constructed before the flux-revolution were vacua that, at the classical level, contained hundreds of massless scalars. Massless scalars are a problem for many reasons that I will not go into. Let us stick to the observed fact that they are not there. Obviously quantum corrections will induce a mass, but the expected masses would still be too low to be consistent with observations and various issues in cosmology. Hence we needed to get rid of the massless scalars. This is where fluxes come into the story since they provide a classical mass to many (but typically not all) moduli.

The above argument that masses due to quantum corrections are too low is not entirely solid. What is really the problem is that vacua supported solely by quantum corrections are not calculable. This is called the Dine-Seiberg problem and it roughly goes as follows: if quantum corrections are strong enough to create a meta-stable vacuum we necessarily are in the strong coupling regime and hence out of computational control. Fluxes evade the argument because they induce a classical piece of energy that can stabilize the coupling at a small value. Fluxes are used mainly as a tool for computational control, to stay within the supergravity approximation.
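Schematically (my notation, not from the post), the Dine-Seiberg argument says that if the potential at weak string coupling \(g_s\) is an expansion whose first two terms must compete to produce a critical point, then

```latex
V(g_s) \;=\; a\,g_s^{\,n} \;-\; b\,g_s^{\,n+1} \;+\; \mathcal{O}\!\left(g_s^{\,n+2}\right),
\qquad
V'(g_s^{*}) = 0 \;\Longrightarrow\; g_s^{*} \;=\; \frac{n\,a}{(n+1)\,b},
```

so unless the coefficients \(a\) and \(b\) are hierarchically different, the critical point sits at \(g_s^* = O(1)\), i.e. at strong coupling, where the truncated expansion cannot be trusted. A classical flux term entering at a different order changes this balance and can stabilize the coupling at a small value.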

Step 1: fluxes and orientifolds

Step 1 in the KKLT scenario is to start from the classical IIB solution often referred to as GKP (1400+ cites) (see also this paper). What Giddings, Kachru and Polchinski did was to construct compactifications of IIB string theory (in the supergravity limit) down to 4-dimensional Minkowski space using fluxes and orientifolds. Orientifolds are specific boundary conditions for strings that are different from Dirichlet boundary conditions (which would be D-branes). The only thing required for understanding this post is to know that orientifolds are like D-branes but with negative tension and negative charge (anti-D-brane charge). GKP understood that Minkowski solutions (SUSY and non-SUSY) can be built by balancing the negative energy of the orientifolds $T_{{\rm O}p}$ against the positive energy of the 3-form fluxes $F_3$ and $H_3$:

$$V = H_3^2 + F_3^2 + T_{{\rm O}p} = 0$$

This scalar potential $V$ does not depend on the sizes of the compact dimensions. Those sizes are then perceived as massless scalar fields in four dimensions. Many other moduli directions have gained a mass due to the fluxes, and all those masses are positive, such that the Minkowski space is classically stable.

The 3-form fluxes $H_3$ and $F_3$ carry D3-brane charge, as can be verified from the Bianchi identity for the five-form field strength $F_5$:

$$\mathrm{d}F_5 = H_3 \wedge F_3 + Q_3\,\delta$$

The delta-function on the right represents the D3/O3 branes, which are really localised charge densities (points) in the internal dimensions, whereas the fluxes correspond to a smooth, spread-out charge distribution. Gauss' law tells us that a compact space cannot carry any net charge, and consequently the charges in the fluxes have opposite sign to the charges in the localised sources.
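Integrating the Bianchi identity over the compact space \(X\) makes Gauss' law explicit (schematic normalisation, my notation):

```latex
0 \;=\; \int_X \mathrm{d}F_5 \;=\; \int_X H_3 \wedge F_3 \;+\; Q_3^{\rm loc}
\quad\Longrightarrow\quad
\int_X H_3 \wedge F_3 \;=\; -\,Q_3^{\rm loc},
```

i.e. the total D3 charge stored in the fluxes must cancel the total localised charge, which is why sources of negative charge such as orientifolds are needed in the first place.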

I want to stress the physics in the Bianchi identity. To a large extent one can think of the 3-form fluxes as a smeared configuration of actual D3 branes. Not only do they induce D3 charge, they also back-react on the metric because of their positive energy-momentum. We will see below that this is more than an analogy: the fluxes can even materialize into actual D3 branes.

This flux configuration is "BPS", in the sense that the various ingredients exert no force on each other: the orientifolds have negative tension, such that the gravitational repulsion between fluxes and orientifolds exactly cancels the Coulomb attraction. This will become an issue once we insert SUSY-breaking anti-branes (see below).

Step 2: Quantum corrections

One of the major breakthroughs of the KKLT paper (which I am not criticizing here) is a rather explicit realization of how the aforementioned quantum corrections stabilize all scalar fields in a stable Anti-de Sitter minimum that is furthermore SUSY. As expected, quantum corrections do give a mass to those scalar fields that were left massless at the classical level in the GKP solution. From that point of view it was not a surprise. The surprise was the simplicity, the level of explicitness, and, most importantly, the fact that the quantum stabilization can be done in a regime where you can argue that other quantum corrections will not mess up the vacuum. Much of the original classical supergravity background is preserved by the quantum corrections since the stabilization occurs at weak coupling and large volume. Both the coupling and the volume are dynamical fields that need to be stabilized at self-consistent values, meaning small coupling and large (in string units) volume of the internal space. If this were not the case, then one would be too far outside the classical regime for this quantum perturbation to be leading order.

So what KKLT showed is exactly how the Dine-Seiberg problem can be circumvented using fluxes. But, in my opinion, something even more important was done at this step in the KKLT paper. Prior to KKLT one could not have claimed on solid grounds that string theory allows solutions that are perceived by an observer as four-dimensional. Probably the most crude phenomenological demand on a string theory vacuum remained questionable. Of course flux compactifications were known, for example the celebrated Freund-Rubin vacua like $AdS_5\times S^5$, which were crucial for developing holography. But such vacua are not lower-dimensional in any phenomenological way. If we were to throw you inside $AdS_5\times S^5$, you would not see a five-dimensional space; you would observe all ten dimensions.

KKLT had thus found the first vacua with all moduli fixed that have a Kaluza-Klein scale that is hierarchically smaller than the length-scale of the AdS vacuum. In other words, the cosmological constant in KKLT is really tiny.

But the cosmological constant was negative and the vacuum of KKLT was SUSY. This is where KKLT came with the second, and most vulnerable, insight of their paper: the anti-brane uplifting.

Step 3: Uplifting with anti-D3 branes

Let us go back to the Bianchi identity equation and the physics it entails. If one adds D3 branes to the KKLT background the cosmological constant does not change and SUSY remains unbroken. The reason is that D3 branes are BPS with respect to both the fluxes and the orientifold planes. Intuitively this is again clear from the no-force condition: D3 branes repel orientifolds gravitationally exactly as strongly as they attract them "electromagnetically", and vice versa for the fluxes (recall that the fluxes can be seen as a smooth D3 distribution). This also implies that D3 branes can be put at any position in the manifold without changing the vacuum energy: the energy in the tension of the branes is cancelled by the decrease in flux energy required to satisfy the tadpole condition (Gauss' law).

Anti-D3 branes instead break SUSY. Heuristically that is straightforward since the no-force condition is violated. The anti-D3 branes can be drawn towards the non-dynamical O-planes without harm since they cannot annihilate with each other. The fluxes, however, are another story that I will get to shortly. The energy added by the anti-branes is twice the anti-brane tension $T_{\overline{D3}}$: the gain in energy due to the addition of fluxes, required to cancel off the extra anti-D3 charges, equals the tension of the anti-brane. Hence we get

$$V_{\rm NEW} = V_{\rm SUSY} + 2 T_{\overline{D3}}$$

At first it seems that this new potential can never have a de Sitter critical point, since $T_{\overline{D3}}$ is of the order of the string scale (a huge amount of energy) whereas $V_{\rm SUSY}$ was supposed to be a very tiny cosmological constant. One can verify that the potential has a runaway structure towards infinite volume. What comes to the rescue is space-time warping. Mathematically, warping means that the space-time metric has the following form

$$\mathrm{d}s_{10}^2 = e^{2A} \mathrm{d}s_4^2 + \mathrm{d}s_6^2$$

where $\mathrm{d}s_4^2$ is the metric of four-dimensional space, $\mathrm{d}s_6^2$ the metric on the compact dimensions (conformal Calabi-Yau, in case you care) and $\exp(2A)$ is the warp factor, a function that depends on the internal coordinates. A generic compactification contains warped throats, regions of space where the function $\exp(A)$ can become exponentially small. This is often depicted using phallus-like pictures of warped Calabi-Yau spaces, such as the one below (taken from the KPV paper; I will come to KPV in a minute):

Consider a localized object with non-zero energy: that energy is significantly red-shifted in regions of high warping. For anti-branes the tension gets the following redshift factor$\exp(4A) T_{\overline{D3}}.$ This can bring a string-scale energy all the way down to the lowest energy scales in nature. The beauty of this idea is that the redshift occurs dynamically; an anti-brane literally feels a force towards that region, since that is where its energy is minimized. So this redshift effect seems completely natural; one just needs a warped throat.
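To get a feel for the numbers, here is a schematic version of the standard GKP result (order-one factors suppressed; $$K$$ and $$M$$ are the flux quanta supporting a Klebanov-Strassler throat):

```latex
e^{A_{\min}} \;\sim\; \exp\!\left(-\frac{2\pi K}{3\, g_s M}\right),
\qquad\text{so}\qquad
e^{4A_{\min}}\, T_{\overline{D3}} \;\sim\; e^{-8\pi K/(3 g_s M)}\, T_{\overline{D3}}.
```

Modest integer fluxes and weak string coupling $$g_s$$ thus redshift the string-scale tension down by an exponentially large factor, which is what allows the uplift to compete with a tiny $$V_{\rm SUSY}$$.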

The KKLT scenario then continues by observing that with a tunable warping, a new critical point in the potential arises that is a meta-stable de Sitter vacuum as shown in the picture below.

This was verified explicitly by KKLT using a Calabi-Yau with a single Kähler modulus.

The reason for the name "uplifting" then becomes obvious: near the critical point of the potential it indeed looks as if the potential has been lifted by a constant to a de Sitter value. The lift is not exactly constant, but the dependence of the uplift term on the Kähler modulus is practically flat compared with the sharply varying SUSY part of the potential.
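Schematically (a sketch in KKLT-style conventions, with $$\sigma$$ the Kähler/volume modulus and numerical factors suppressed), the uplifted potential reads

```latex
V_{\rm NEW}(\sigma) \;=\; V_{\rm SUSY}(\sigma) \;+\; \frac{D}{\sigma^{2}},
\qquad
D \;\sim\; e^{4A_{0}}\, T_{\overline{D3}},
```

where the $$\sigma^{-2}$$ fall-off is the warped-throat behavior (an unwarped anti-brane would instead contribute $$\sigma^{-3}$$). Since $$V_{\rm SUSY}$$ varies steeply in $$\sigma$$ near its minimum while $$D/\sigma^{2}$$ is comparatively flat there, the uplift looks locally like adding a constant.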

I am glossing over many issues, such as the stability of the other directions, but all of this seems under control (the arguments are based on a parametric separation between the complex structure moduli masses and the masses of the Kähler moduli).

The KKLT scrutiny

The issues with the KKLT scenario that have been discussed in the last five years have to do with back-reaction. As mentioned earlier, the no-force condition becomes violated once we insert the anti-D3 branes. Given the physical interpretation of the 3-form fluxes as a cloud of D3 branes, you can guess what the qualitative behavior of the back-reaction is: the fluxes are drawn gravitationally and electromagnetically towards the anti-branes, leading to a local increase of the 3-form flux density near the anti-brane.

Although the above interpretation was not given, this effect was first found in 2009 independently by Bena, Grana and Halmagyi in Saclay (France) and by McGuirk, Shiu and Sumitomo in Madison (Wisconsin, USA). These authors constructed the supergravity solution that describes a back-reacting anti-brane. Clearly this is an impossible job, were it not for three simplifying assumptions:
• They put the anti-brane inside the non-compact warped Klebanov-Strassler throat, since that is the canonical example of a throat in which computations are doable. This geometry consists of a radial coordinate measuring the distance from the tip and five angles that span the manifold, which is topologically $$S^2\times S^3$$. The non-compactness implies that we can circumvent the use of the quantum corrections of KKLT to have a space-time solution in the first place. Non-compact geometries work differently from compact ones. For example, the energy of the space-time (ADM mass) does not need to affect the cosmological constant of the 4D part of the metric. Roughly, this is because there is no volume modulus that needs to be stabilized. In the end one should ‟glue″ the KS throat, at large distance from the tip, to a compact Calabi-Yau orientifold.

• The second simplification was to smear the anti-D3 branes over the tip of the throat. This means that the solution describes anti-D3's homogeneously distributed over the tip. In practice this implies that the supergravity equations of motion become a (large) set of coupled ODE's.

• These two papers solved the ODE's approximately: They treated the anti-brane SUSY breaking as small and expanded the solution in terms of a SUSY-breaking parameter, keeping the first terms in the expansion.
Even with these assumptions it was an impressive task to solve the ODE's. In this task the Saclay paper was the more careful one in connecting the solution at small radius to the solution at large radius. In any case these two papers found the same result, which was unexpected at the time: the 3-form flux density diverges at the tip of the throat. More precisely, the following scalar quantity blows up at the tip:$H_3^2 \to \infty.$ (I am ignoring the string coupling in all equations.) Diverging fluxes near brane sources are rather mundane (a classical electron has a diverging electric field near its position). But the real reason for worry is that this singularity is not in the field sourced by the brane: the brane sources the $$F_5$$ field strength, which indeed blows up as well, but that is the expected behavior near a D3-type source.

In light of the physical picture I outlined above, this divergence is not that strange. The D3 charges in the fluxes are pulled towards the anti-D3 branes, where they pile up. The sign of the divergence in the 3-form fluxes is indeed that of a D3 charge density and not an anti-D3 charge density.

Whenever a supergravity solution has a singularity one has to accept that one is outside of the supergravity approximation and full-blown string theory might be necessary to understand it. And I agree with that. But still singularities can — and should — be interpreted and the interpretation might be sufficient to know or expect that stringy corrections will resolve it.

So what was the attitude of the community when these papers came out? As I recall it, most string cosmologists are not easily woken up, and the majority of experts who took the time to form an opinion believed that the three assumptions above (especially the last two) were the reason for the singularity. To cut a long story short (and painfully skip over my own work on showing this was wrong), it is now proven that the same singularity is still there when the assumptions are undone. The full proof was presented in a paper that gets too little love.

So what was the reaction of the few experts who still cared to follow this? They turned to an earlier suggestion by Dymarsky and Maldacena that the real KKLT solution is not described by anti-D3 branes at the tip of the throat but by spherical 5-branes that carry anti-D3 charges (a.k.a. the Myers effect). This, they argued (hoped?), would resolve the singularity. In fact, a careful physicist could have predicted some singularity based on the analogy with other string theory models of 3-branes and 3-form fluxes. Such solutions often come with singularities that are only resolved when the 3-branes are polarised. But such singularities could have taken any form. The fact that this one corresponds so nicely to a diverging D3 charge density should not be ignored — and it too often is.

So, again, I agree that the KKLT solution should really contain 5-branes instead of 3-branes, and I will discuss this below. But before I do, let me mention a very solid argument for why this, too, seems not to help.

If indeed the anti-D3 branes ‟puff″ into fuzzy spherical 5-branes leading to a smooth supergravity solution, then one should be able to ‟heat up″ the solution. Putting gravity solutions at finite temperature means adding an extra warp factor in front of the time component of the metric, which creates an event horizon at a finite distance. In a well-known paper by Gubser it was argued that this provides us with a classification of acceptable singularities in supergravity: if a singularity can be cloaked by a horizon by adding sufficient temperature, it has a chance of being resolved by string theory. The logic behind this is simple but really smart: if there is some stringy physics that resolves a sugra singularity, one can still heat up the branes that live at the singularity. One can then add so much temperature that the horizon literally becomes parsecs in length, such that the region at and outside the horizon becomes amenable to classical sugra and should be smooth. Here is the surprise: that does not work. In a recent paper, the techniques of arXiv:1301.5647 were extended to include finite temperature, and what happened is that the diverging flux density simply tracks the horizon; it does not want to fall inside. The metric Ansatz that was used to derive this no-go theorem is compatible with spherical 5-branes inside the horizon. So it seems difficult to evade this no-go theorem.

The reaction so far on this from the community, apart from a confused referee report, is silence.

But still, let us go back to zero temperature, since there is some beautiful physics taking place. I said earlier that the true KKLT solution should include 5-branes instead of anti-D3 branes. This was described prior to KKLT in a beautiful paper by Kachru, Pearson and Verlinde, called KPV (again the same letter ‛K′). The KPV paper is both the seed and the backbone of the KKLT paper and its follow-ups, like KKLMMT, but for some obscure reason is less cited. KPV investigated the ‟open-string″ stability of probe anti-D3 branes placed at the tip of the KS throat. They realised that the 3-form fluxes can materialize into actual D3 branes that annihilate the anti-D3 branes, which implies a decay to the SUSY vacuum. But they found that this materialization of the fluxes occurs only non-perturbatively if the anti-brane charge $$p$$ is small enough:$\frac{p}{M} \ll 1.$ In the above equation $$M$$ denotes a 3-form flux quantum that sets the size of the tip of the KS throat. The beauty of this paper resides in the fact that they understood how the brane-flux annihilation takes place, but I necessarily have to gloss over this, so you cannot really follow it unless you already know the story. In any case, here it comes: the anti-D3 brane polarizes into a spherical NS5 brane wrapping a finite contractible 2-sphere inside the 3-sphere at the tip of the KS throat, as in the picture below:

One can show that this NS5 brane carries $$p$$ anti-D3 charges at the South pole and $$M-p$$ D3 charges at the North pole. So if it is able to move over the equator from the South to the North pole, the SUSY-breaking state decays into the SUSY vacuum: recall that the fluxes have materialized into $$M$$ D3 branes that annihilate with the $$p$$ anti-D3 branes, leaving $$M-p$$ D3 branes behind in the SUSY vacuum. But what pushes the NS5 to the other side? Exactly the 3-form flux $$H_3$$. This part is easy to understand: an NS5 brane is magnetically charged under the $$H_3$$ field strength. In the probe limit KPV found that for small enough $$p$$ this force is weak enough that a classical barrier remains. So we get a meta-stable state, nice and very beautiful. But what would they have thought if they could have looked into the future and seen that the same 3-form flux that pushes the NS5 brane diverges in the back-reacted solution? I am not sure, but I cannot resist quoting a sentence from their paper:
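The charge bookkeeping behind this can be made explicit (a sketch following the KPV probe analysis; $$\psi\in[0,\pi]$$ denotes the polar angle of the wrapped 2-sphere inside the 3-sphere): the D3 charge carried by the NS5 brane interpolates between the two poles,

```latex
Q_{\rm D3}(\psi) \;=\; -\,p \;+\; \frac{M}{\pi}\left(\psi - \tfrac{1}{2}\sin 2\psi\right),
\qquad
Q_{\rm D3}(0) = -p,
\quad
Q_{\rm D3}(\pi) = M - p,
```

and the probe analysis finds a classical barrier between the two poles only for small enough $$p/M$$ (roughly $$p/M \lesssim 0.08$$ in KPV).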
One forseeable quantitative difference, for example, is that the inclusion of the back-reaction of the NS5 brane might trigger the classical instabilities for smaller values of $$p/M$$ than found above.
It should be clear that this brane-flux mechanism suggests a trivial way to resolve the singularity. The anti-brane is thrown into the throat and starts to attract the flux, which keeps piling up until it becomes too strong, causing the flux to annihilate with the anti-brane. Then the flux pile-up stops, since there is no anti-brane anymore. At no point does this time-dependent process lead to a singular flux density. The singularity was just an artifact of forcing an intrinsically time-dependent process into a static Ansatz. This idea is explained in two papers: arXiv:1202.1132 and arXiv:1410.8476.

I am often asked whether a probe computation can ever fail, apart from being slightly corrected. I am not sure, but what I do know is that KPV do not really have a trustworthy probe regime: for reasons explained in the KPV paper, they have to work in the strongly coupled regime, and they furthermore have a spherical NS5 brane wrapping a cycle of stringy length scale, which is also worrisome.

Still one can argue that the NS5 brane back-reaction will be slightly different from the anti-D3 back-reaction exactly such as to resolve the divergence. I am sympathetic to this (if one ignores the trouble with the finite temperature, which one cannot ignore). However, again computations suggest this does not work. Here I will go even faster since this guest blog is getting lengthy.

This issue has been investigated in papers such as arXiv:1212.4828, where it was shown, under certain assumptions, that the polarisation does not occur in a way that resolves the divergence. Note that, like the finite-temperature computation, this calculation could have worked out in favor of the KKLT model, but it did not! At the moment I am working on brane models which have exactly the same 3-form singularity but are conceptually different, since the 4D space is AdS and SUSY is not broken. In that circumstance the same singularity does get resolved by polarisation. My point is that the intuition for how the singularity should get resolved does work in certain cases, but so far it does not work for the models relevant to KKLT.

What is the reaction of the community? Well, they are cornered into saying that it is the simplifications made in the derivation of the ‛no polarisation′ result that are causing trouble.

But wait a minute... could it perhaps be that at this point in time the burden of proof has shifted? Apparently not, and that, in my opinion, starts becoming very awkward.

It is true that there is still freedom for the singularity to be resolved through brane polarisation. There is just one issue with that: to be able to compute this in a supergravity regime requires tuning parameters away from the small-$$p$$ limit. Bena et al. have pushed this idea recently in arXiv:1410.7776 and were so kind as to assume the singularity gets resolved, but they found that the vacuum is then necessarily tachyonic. It can be argued that this is obvious, since they necessarily had to move away from the limit KPV need for stability (remember $$p\ll M$$). But then again, the tachyon they find has nothing to do with a perturbative brane-flux annihilation. Once again a situation in which an honest-to-God computation could have turned out in favor of KKLT; it did not.

Here comes the bias of this post: were it not for the clear physical picture behind the singularity, I might be less surprised that there is a camp that is not too worried about the consistency of KKLT. But there is a clear picture, with the trivial intuition I already alluded to: the singularity, when left unresolved, indicates that the anti-brane is perturbatively unstable, and once you realise that, the singularity is resolved by allowing the brane to decay. At least I hope the intuition behind this interpretation is clear. It simply says that a higher charge density in the fluxes (near the anti-D3) increases the probability for the fluxes to materialize into actual D3 branes that eat up the anti-branes. KPV told us exactly how this process occurs: the spherical NS5 brane should not feel too strong a force pulling it towards the other side of the sphere. But that force is proportional to the density of the 3-form fluxes... and it diverges. End of story.

What now?

I guess that at some point these ‟anti-KKLT″ papers will stop being produced, as their authors will run out of ideas for computations that probe the stability of the would-be KKLT vacuum. If evidence in favor of KKLT is ever found in that endeavor, I can assure you that it will be published as such. It just has not happened thus far.

We are facing the following problem: to fully settle the discussion, computations outside the sugra regime have to be done (although I believe the finite-temperature argument suggests that this will not help). Were fluxes not invented to circumvent exactly this? It seems that the anti-brane back-reaction brings us back to the Dine-Seiberg problem.

So we are left with a bunch of arguments against what is/was a beautiful idea for constructing dS vacua. The arguments against are an order of rigor higher than the original models. I guess we need yet another level of rigor on top from those who want to keep using the original KKLT model.

What about alternative de Sitter embeddings in string theory? Lots of hard work has been done there. Let me do it injustice by summarizing it as follows: none of these models is convincing, to me at least. They are borderline within the supergravity regime, or we do not know whether supergravity can be trusted at all (as with non-geometric fluxes). Very popular are F-term quantum corrections to the GKP vacuum, which are used to stabilize the moduli in a dS vacuum. But none of this is done from the full 10D point of view; instead it sits somewhere between 4D effective field theory and 10D. KKLT at least had a full 10-dimensional picture of uplifting, and that is why it can be scrutinized.

It seems as if string theory is allergic to de Sitter vacua. Consider the following: any grad student can find an anti-de Sitter solution in string theory. Why not de Sitter? All claimed de Sitter solutions are rather phenomenological, in the sense that the cosmological constant is small compared with the KK scale. I guess we had better first try to find unphysical dS vacua, say a six-dimensional de Sitter solution with a large cosmological constant. But we cannot, or at least nobody ever has. Strange, right? Many say: "you just have to work harder". That ‛harder′ always implies ‛less explicit′, and then suddenly a landscape of de Sitter vacua opens up. I seriously doubt that; maybe it just means we are sweeping problems under the carpet of effective field theory?

I hope I have been able to convince you that the search for de Sitter vacua is tough if you want to do this truly top-down. The most popular construction method, the KKLT anti-brane uplift, has a surprise in store: a singularity in the form of a diverging flux density. So far it has persistently survived all attempts to resolve it. The divergence is resolved, however, once you accept that the de Sitter vacuum is not meta-stable but instead a solution with decaying vacuum energy. Does string theory want to tell us something deep about quantum gravity?

### Emily Lakdawalla - The Planetary Society Blog

Lunar Polar Volatile Puzzle
Deepak Dhingra gives an exciting update from the recent Lunar Exploration and Analysis Group (LEAG) meeting at Johns Hopkins University Applied Physics Lab (JHU-APL) in Baltimore.

### Christian P. Robert - xi'an's og

some LaTeX tricks

Here are a few LaTeX tricks I learned or rediscovered when working on several papers the past week:

1. I am always forgetting how to make aligned equations with a single equation number, so I found this solution on the TeX forum of stackexchange: namely, use the equation environment with an aligned environment inside. Or the split environment. But it does not always work…
2. Another frustrating black hole is how to deal with integral signs that do not adapt to the integrand. Too bad we cannot use \left\int, really! Another stackexchange question led me to the bigints package. Not perfect though.
3. Pierre Pudlo also showed me the commands \graphicspath{{dir1}{dir2}} and \DeclareGraphicsExtensions{.pdf,.png,.jpg} to avoid coding the entire path to each image and to put an order on the extension type, respectively. The second one is fairly handy when working on drafts. The first one does not seem to work with symbolic links, though…
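For the record, a minimal sketch putting the three tricks together (the directories and file names are placeholders):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{bigints}   % bigger integral signs
\usepackage{graphicx}

% Trick 3: search these directories for images, trying extensions in order
\graphicspath{{figures/}{plots/}}
\DeclareGraphicsExtensions{.pdf,.png,.jpg}

\begin{document}

% Trick 1: several aligned lines sharing a single equation number
\begin{equation}
\begin{aligned}
\pi(\theta\mid x) &\propto \pi(\theta)\, f(x\mid\theta)\\
                  &= \pi(\theta) \prod_{i=1}^n f(x_i\mid\theta)
\end{aligned}
\end{equation}

% Trick 2: a larger integral sign from the bigints package
\[
\bigintsss_0^1 \frac{\exp(-\theta^2/2)}{\sqrt{2\pi}}\,\mathrm{d}\theta
\]

% Trick 3 in use: \includegraphics{trace} will find figures/trace.pdf, etc.

\end{document}
```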

Filed under: Books, Kids, Statistics, University life Tagged: bigint, graphical extension, LaTeX, mathematical equations, StackExchange

### CERN Bulletin

CERN Bulletin Issue No. 47-48/2014
Link to e-Bulletin Issue No. 47-48/2014. Link to all articles in this issue.

### Peter Coles - In the Dark

Warning! Offensive Image…

No reasonable person could possibly take offence at that tweet from Emily Thornberry, yet she has had to resign from the Shadow Cabinet because of it. It is beyond belief how pathetic British politics and the British media have become.

### astrobites - astro-ph reader's digest

A New Way with Old Stars: Fluctuation Spectroscopy

Astronomers use models to derive properties of individual stars that we cannot directly observe, such as mass, age, and radius. The same goes for groups of stars (a galaxy or a star cluster). How do we test how accurate these models are? Well, we compare model predictions against observations. One problem with current stellar population models is that they remain untested for old populations of stars (because such stars are rare). These old stars are important because they produce most of the light from massive elliptical galaxies. So a wrong answer from the model means a wrong answer for various properties of massive elliptical galaxies, such as their age and metallicity. (Houston, we have a problem.)

Fear not — this paper introduces fluctuation spectroscopy as a new way to test stellar population models for elliptical galaxies. It focuses on a group of stars known as red giants, stars nearing the end of their lives. The spectra of red giants have features (TiO and water molecular bands) that can be used to obtain the chemical abundances, age, and initial mass function (IMF) of a galaxy. Red giants are very luminous. For instance, once our beloved Sun grows into old age as a red giant, it will be thousands of times more luminous than today. As such, red giants dominate the light of early-type galaxy (another name for elliptical galaxy). By looking at an image of an early-type galaxy, we can infer that bright pixels contain more red giants than faint pixels. Figure 1 illustrates this effect. Intensity variations from pixel-to-pixel are due to fluctuations in the number of red giants. By comparing the spectra of pixels with different brightness, one can isolate the spectral features of red giants. Astronomers can then analyse these spectral features to derive galaxy properties to be checked against model predictions.

FIG. 1 – Top left figure shows a model elliptical galaxy based on observation of NGC 4472. The right figure zooms in on a tiny part of the galaxy, and shows the pixel-to-pixel brightness variations within that tiny region. Figures on the bottom panel further zoom in on a bright (white) and a faint (black) pixel. The bright pixel (bottom left) contains many more bright red giant stars, represented as red dots, compared to the faint pixel (bottom right). The inset figures are color versus magnitude diagrams of the stars in these pixels, where there are more luminous giant stars (open circles) in the bright pixel.

The authors applied fluctuation spectroscopy to NGC 4472, the brightest galaxy in the Virgo cluster. They obtained images of the galaxy at six different wavelengths using narrow-band filters (filters that allow only a few wavelengths of light, or emission lines, to pass through; see this or this) in the Advanced Camera for Surveys aboard the Hubble Space Telescope. In addition, they acquired deep broad-band images (images obtained using broad-band filters that allow a large portion of light to go through) of the galaxy. These broad-band images, because of their high signal-to-noise compared to the narrow-band images (broad-band images receive more light and so have higher signals), are used to measure the flux in each pixel and thus how the brightness fluctuates. Next, the authors divided images taken in two adjacent narrow-band filters. Recall that since narrow-band filters allow only certain emission lines through, the ratio of fluxes in two narrow-band filters – an “index image” – is a proxy for the distribution of stellar types in each pixel, because different stars produce different emission lines. The money plot of this paper, Figure 2, shows the relation between the averaged indices of the index image and the surface brightness fluctuation; it demonstrates that pixels with more red giants (larger SBF) produce a different spectrum (different indices) than pixels with fewer giants (lower SBF).
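As a toy illustration of the pipeline described above (synthetic arrays standing in for the calibrated HST frames; none of the numbers refer to the real NGC 4472 data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for calibrated frames: a smooth galaxy light model,
# a deep broad-band image, and two adjacent narrow-band images.
shape = (64, 64)
smooth_model = np.full(shape, 100.0)
broad = smooth_model + rng.normal(0.0, 5.0, shape)      # giant-count fluctuations
narrow_a = 0.30 * broad + rng.normal(0.0, 0.5, shape)
narrow_b = 0.28 * broad + rng.normal(0.0, 0.5, shape)

# Surface-brightness-fluctuation proxy: broad-band flux relative to the
# smooth model (SBF = 1 is the mean; > 1 means extra red giants in the pixel).
sbf = broad / smooth_model

# "Index image": ratio of the two adjacent narrow-band frames,
# a proxy for the mix of stellar types in each pixel.
index = narrow_a / narrow_b

# Average the index in bins of SBF, as in the paper's Figure 2.
bins = np.linspace(sbf.min(), sbf.max(), 6)
which = np.digitize(sbf.ravel(), bins)
mean_index = np.array([index.ravel()[which == k].mean()
                       for k in range(1, len(bins) + 1)
                       if np.any(which == k)])
print(mean_index)
```

In the real analysis the trend of `mean_index` against SBF is what gets compared with stellar population models; here it merely shows the mechanics of binning an index image by pixel brightness.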

By fitting observed index variations with models, we can obtain a predicted spectrum. The authors compared observed index variations of NGC 4472 with modeled index variations derived from Conroy & van Dokkum (2012) stellar population synthesis models, shown in Figure 3, which performs well in characterizing the galaxy.

The last thing the authors analysed is the effect of changing model parameters on the indices of the index images, in particular by varying age, metallicity, and the IMF. They found that the indices are sensitive to age and metallicity, enabling them to exclude models that produce ages and metallicities incompatible with observations. One interesting result is that the indices are also sensitive to the presence of late M giant stars, which allows one to constrain their contribution to the total light from a galaxy. This is useful because standard stellar population synthesis models for early-type galaxies do not include these cool giants.

In conclusion, the authors introduced fluctuation spectroscopy as a probe of stellar type distributions in old populations. They applied this method to NGC 4472 and found that results of observation agree very well with model predictions. Various perturbations are introduced into the model with the most important result being that one can quantify the contribution of late M giants to the integrated light of early-type galaxies. Before ending, the authors propose directions for future work, which include obtaining actual spectra rather than narrow-band images and studying larger ranges of surface brightness fluctuations.

FIG. 2 – The vertical axis is the flux ratio in a narrow-band filter and the adjacent band; it is a measure of the mix of different stars present. The horizontal axis is the surface brightness fluctuation, SBF. SBF = 1 is the mean, while SBF < 1 represents a downward fluctuation and SBF > 1 an upward fluctuation. There is a trend between index and SBF because red giants produce larger-than-average brightness and a different spectrum, which changes the indices of the index images.

FIG. 3 – The top panel compares observed indices (dots) of NGC 4472 with model indices (lines). The vertical and horizontal axes are the same as Figure 2. The bottom panel shows the differences between observed and predicted indices. These figures suggest that model predictions agree amazingly well with observations.

### Emily Lakdawalla - The Planetary Society Blog

A Mission to Europa Just Got a Whole Lot More Likely
Rep. John Culberson, an outspoken supporter of Europa exploration, will assume leadership of an influential congressional committee that funds NASA.

## November 20, 2014

### Christian P. Robert - xi'an's og

not converging to London for an [extra]ordinary Read Paper

On December 10, I will alas not travel to London to attend the Read Paper on sequential quasi-Monte Carlo presented by Mathieu Gerber and Nicolas Chopin to The Society, as I fly instead to Montréal for the NIPS workshops… I am quite sorry to miss this event, as this is a major paper bringing quasi-Monte Carlo methods into mainstream statistics. I will most certainly write a discussion, and remind Og’s readers that contributed discussions (800 words) are welcome from everyone, the deadline for submission being January 02.

Filed under: Books, Kids, pictures, Statistics, Travel, University life Tagged: discussion paper, London, MCQMC, Nicolas Chopin, NIPS 2014, Read paper, Royal Statistical Society, sequential Monte Carlo

### astrobites - astro-ph reader's digest

Real-Time Stellar Evolution

Images of four planetary nebulae taken by the Hubble Space Telescope using a narrow Hα filter. All of these feature hydrogen-rich central stars.

To get an idea of how stars live and die, we can’t just pick one and watch its life unfold in real time. Most stars live for billions of years! So instead, we do a population census of sorts. Much like you can study how humans age by taking a “snapshot” of individuals ranging from newborn to elderly, so too can we study the lives of stars.

But like all good things in life (and stars), there are exceptions. Sometimes, stellar evolution happens on more human timescales—tens to hundreds of years rather than millions or billions. One such exception is the topic of today’s paper: planetary nebulae, and the rapidly dying stellar corpses responsible for all that glowing gas.

All stars similar to our Sun, or up to about eight times as massive, will end their lives embedded in planetary nebulae like these. The name is a holdover from their discovery and general appearance—we have long known that planetary nebulae have nothing to do with planets. Instead, they are the former outer layers of a star: an envelope of material hastily ejected when gravity can no longer hold a star together. In its final death throes, what’s left of the star rapidly heats up and begins to ionize gas in the nebula surrounding it.

A Deathly Glow

Ionized gas is the telltale sign that the central star in a planetary nebula isn’t quite done yet. When high-energy light from a dying star rams into gas in its planetary nebula, some atoms of gas are so energized that electrons are torn from their nuclei. Hotter central stars emit more light, making the ionized gas glow brighter. This final stage of stellar evolution is what the authors of today’s paper observe in real time for a handful of planetary nebulae.

Most planetary nebulae show increasing oxygen emission with time as the central star heats up and ionizes gas in the nebula. The stars are classified into one of three categories based on their spectra. Points indicate the average change in oxygen emission per year, and dashed lines show simple stellar evolution models for stars with final masses between 0.6 and 0.7 times that of the Sun.

The figure above shows how oxygen emission in many planetary nebulae has changed brightness over time. Each point represents data spanning at least ten years and brings together new observations with previously published values in the literature. Distinct symbols assign each star to one of three categories: stars with lots of hydrogen in their spectra (H rich), Wolf-Rayet ([WR]) stars with many emission lines in their spectra (indicating lots of hot gas very close to the star), and weak emission line stars (wels). The fact that most stars show an increase in planetary nebula emission—the stars are heating up—agrees with our expectations.

Oxygen emission flux as a function of time for three planetary nebulae over 30+ years. The top two systems, M 1-11 and M 1-12, have hydrogen-rich stars that cause increasing emission as expected. The bottom pane, SwSt 1, shows a Wolf-Rayet star with a surprising decreasing trend.

The earliest observation in this study is from 1978. Spectrographs and imaging techniques have improved markedly since then! While some changes in flux are from different observing techniques, the authors conclude that at least part of each flux increase is real. What’s more, hydrogen-rich stars seem to agree with relatively simple evolution models, shown as dashed lines on the figure above. (Stars move toward the right along the lines as they evolve.) More evolved stars cause oxygen in the nebula to glow ever brighter, but the rate of increase in oxygen emission slows as the star ages and loses fuel.

There’s Always an Oddball

However, the authors find that some planetary nebulae don’t behave quite as consistently. None of the more evolved Wolf-Rayet systems show increasing emission with time. In fact, one of them, in the bottom pane of the figure to the right, shows a steady decline in oxygen emission! This suggests the hot gas closest to the star may be weakening even as the star is getting hotter, but it is not fully understood.

This unique glimpse into real-time stellar evolution is possible because so many changes happen to a star as it nears the end of its life. Eventually, these hot stellar remnants will become white dwarfs and slowly cool for eternity. Until then, not-dead-yet stars and their planetary nebulae have lots to teach us.

### Symmetrybreaking - Fermilab/SLAC

CERN frees LHC data

Anyone can access collision data from the Large Hadron Collider through the new CERN Open Data Portal.

Today CERN launched its Open Data Portal, which makes data from real collision events produced by LHC experiments available to the public for the first time.

“Data from the LHC program are among the most precious assets of the LHC experiments, that today we start sharing openly with the world,” says CERN Director General Rolf Heuer. “We hope these open data will support and inspire the global research community, including students and citizen scientists.”

The LHC collaborations will continue to release collision data over the coming years.

The first high-level and analyzable collision data openly released come from the CMS experiment and were originally collected in 2010 during the first LHC run. Open source software to read and analyze the data is also available, together with the corresponding documentation. The CMS collaboration is committed to releasing its data three years after collection, after they have been thoroughly studied by the collaboration.

“This is all new and we are curious to see how the data will be re-used,” says CMS data preservation coordinator Kati Lassila-Perini. “We’ve prepared tools and examples of different levels of complexity from simplified analysis to ready-to-use online applications. We hope these examples will stimulate the creativity of external users.”

In parallel, the CERN Open Data Portal gives access to additional event data sets from the ALICE, ATLAS, CMS and LHCb collaborations that have been prepared for educational purposes. These resources are accompanied by visualization tools.

All data on OpenData.cern.ch are shared under a Creative Commons CC0 public domain dedication. Data and software are assigned unique DOI identifiers to make them citable in scientific articles. And software is released under open source licenses. The CERN Open Data Portal is built on the open-source Invenio Digital Library software, which powers other CERN Open Science tools and initiatives.

CERN published a version of this article as a press release.

Like what you see? Sign up for a free subscription to symmetry!

### arXiv blog

Twitter "Exhaust" Reveals Patterns of Unemployment

Twitter data mining reveals surprising detail about socioeconomic indicators but at a fraction of the cost of traditional data-gathering methods, say computational sociologists.

Human behaviour is closely linked to social and economic status. For example, the way an individual travels round a city is influenced by their job, their income and their lifestyle.

### Peter Coles - In the Dark

Hubble Images With Music By Herschel

Too busy for a full post today, so here’s a little stocking filler. The pictures, perhaps familiar, are taken by the Hubble Space Telescope, but the music is by noted astronomer (geddit?) Sir William Herschel – the Second Movement of his Chamber Symphony In F Major, marked Adagio e Cantabile. Although best known as an astronomer, Herschel was a capable musician and composer, with a style very obviously influenced by his near contemporary George Frideric Handel. Although music of this era puts me on a High Harpsichord Alert, I thought I’d share this example for those of you unfamiliar with his work…

### Jester - Resonaances

Update on the bananas
One of the most interesting physics stories of this year was the discovery of an unidentified 3.5 keV x-ray emission line from galaxy clusters. This so-called bulbulon can be interpreted as the signal of a sterile neutrino dark matter particle decaying into an active neutrino and a photon. Some time ago I wrote about the banana paper that questioned the dark matter origin of the signal. Much has happened since, and I owe you an update. The current experimental situation is summarized in this plot:

To be more specific, here's what's happening.

•  Several groups searching for the 3.5 keV emission have reported negative results. One of those searched for the signal in dwarf galaxies, which offer a much cleaner environment, allowing for a more reliable detection. No signal was found, although the limits do not conclusively exclude the original bulbulon claim. Another study looked for the signal in multiple galaxies. Again, no signal was found, but this time the reported limits are in severe tension with the sterile neutrino interpretation of the bulbulon. Yet another study failed to find the 3.5 keV line in the Coma, Virgo and Ophiuchus clusters, although it does detect it in the Perseus cluster. Finally, the banana group analyzed the morphology of the 3.5 keV emission from the Galactic center and Perseus and found it incompatible with dark matter decay.
• The discussion about the existence of the 3.5 keV emission from the Andromeda galaxy is ongoing. The conclusions seem to depend on the strategy used to determine the continuum x-ray emission. Using data from the XMM satellite, the banana group fits the background in the 3-4 keV range and does not find the line, whereas this paper argues it is more kosher to fit in the 2-8 keV range, in which case the line can be detected in exactly the same dataset. It is not obvious who is right, although the fact that the significance of the signal depends so strongly on the background fitting procedure is not encouraging.
• The main battle rages on around K-XVIII (X-n stands for the atom X stripped of n-1 electrons; thus, K-XVIII is the potassium ion with 2 electrons). This little bastard has emission lines at 3.47 keV and 3.51 keV which could account for the bulbulon signal. In the original paper, the bulbuline group invokes a model of plasma emission that allows them to constrain the flux due to the K-XVIII emission from the measured ratios of the strong S-XVI/S-XV and Ca-XX/Ca-XIX lines. The banana paper argued that the bulbuline model is unrealistic, as it gives inconsistent predictions for some plasma line ratios. The bulbuline group pointed out that the banana group used wrong numbers to estimate the line emission strengths. The banana group maintains that their conclusions still hold when the error is corrected. It all boils down to the question of whether the allowed range for the K-XVIII emission strength assumed by the bulbuline group is conservative enough. Explaining the 3.5 keV feature solely by K-XVIII requires assuming element abundance ratios very different from the solar ones, which may or may not be realistic.
•  On the other hand, both groups have converged on the subject of chlorine. In the banana paper it was pointed out that the 3.5 keV line may be due to the Cl-XVII (hydrogen-like chlorine ion) Lyman-β transition, which happens to be at 3.51 keV. However, the bulbuline group subsequently derived limits on the corresponding Lyman-α line at 2.96 keV. From these limits, one can deduce in a fairly model-independent way that the contribution of the Cl-XVII Lyman-β transition is negligible.

To clarify the situation we need more replies to comments on replies, and maybe also better data from future x-ray satellite missions. The significance of the detection depends, more than we’d wish, on the dirty astrophysics involved in modeling the standard x-ray emission from galactic plasma. It seems unlikely that the sterile neutrino model with the originally reported parameters will stand, as it is in tension with several other analyses. The probability of the 3.5 keV signal being of dark matter origin is certainly much lower than it was a few months ago. But the jury is still out, and it’s not impossible that more data and more analyses will tip the scales the other way.

Further reading: how to protect yourself from someone attacking you with a banana.

### Tommaso Dorigo - Scientificblogging

Extraordinary Claims: Review My Paper For $10
Bringing the concept of peer review to another dimension, I am offering you the chance to read a review article I just wrote. You are invited to contribute to its review by suggesting improvements, corrections, changes or amendments to the text. I sort of need some scrutiny of this paper since it is not a report of CMS results - and thus I have not been forced to submit it for internal review by my collaboration.

### Emily Lakdawalla - The Planetary Society Blog

How NASA Plans to Land Humans on Mars
On the surface, NASA's humans to Mars plans seem vague and disjointed. But that's because the agency is playing the long game. Right now, it may be the only game they can play.

## November 19, 2014

### astrobites - astro-ph reader's digest

Could we detect signs of life on a massive super-Earth?

Super-Earths are the Starbucks of the modern world: you can find them everywhere, and while they’re not exactly what you want, they’re just good enough to satisfy your desire for something better. Super-Earths are not technically Earth-like, since they are up to 10 Earth masses and have thick hydrogen (H2) atmospheres. However, they are rocky like Earth, they have an atmosphere like Earth, and if they are in the habitable zone, there is a good chance they could have liquid water like Earth. Case in point: they are just good enough.

Unfortunately, in the next 15 years, the only way we will be able to characterize a super-Earth is if it’s orbiting an M-type star. Since M-type stars are smaller and dimmer than the Sun, the planets orbiting them need to be closer in so that they get enough warmth to sustain liquid water. As a result, habitable-zone planets around M-type stars can be observed in transit once every ~20 days, rather than once a year for an Earth twin. This bodes well for future missions that will try to characterize exoplanets, such as the James Webb Space Telescope (JWST).

So, if super-Earths orbiting M-type stars are our best bet at characterization, it pays to think about what signs of life, or biosignatures, could hypothetically be detected in one of their atmospheres. Seager et al. investigate several biosignatures and aim to identify which are likely to build up to detectable levels in an H2-dominated super-Earth orbiting an M-type star.

Biosignatures and Photochemistry

To test the “build up” of any molecule, let’s say ABX, in an atmosphere, you need to know what molecular species are creating ABX and what molecular species or processes are destroying ABX. In the world of photochemistry, we refer to these as sources and sinks. The photochemical model that Seager et al. use includes 111 species, involved in 824 chemical reactions and 71 photochemical reactions. Dwell on that parameter space… A photochemical reaction occurs when a molecule absorbs a photon of light and is broken down into smaller components. We call this process photolysis and it can be a major sink for biosignatures, depending on how much UV flux the star is giving off. Let’s take Earth as an example.

Since oxygen, O2, is abundantly produced by life on Earth, it is one of Earth’s dominant biosignature gases. O2 is destroyed by photolysis when it interacts with, you guessed it, UV light. On Earth, though, UV radiation from our Sun isn’t that high, so O2 is free to build up in the atmosphere. If we were to increase the UV radiation Earth received, it is likely that O2 would all be destroyed and would cease to be one of Earth’s dominant biosignature gases.

Because M stars might have a much higher UV flux than our Sun, it is uncertain how much UV flux a super-Earth orbiting an M star will receive. Therefore, in order to assess which biosignature gases will build up in an exoplanetary atmosphere around an M star, we need to assess each biosignature gas’s removal rate, the rate at which the molecule is destroyed by photolysis or any other reaction.
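The source-and-sink bookkeeping above amounts to a steady-state balance: a gas accumulates until its production rate equals its removal rate. Here is a minimal toy sketch of that balance in Python; the numbers are illustrative placeholders, not values from the authors’ 111-species photochemical model, and the linear UV scaling is an assumption for illustration only.

```python
# Toy steady-state balance for a biosignature gas:
# production (sources) = removal_coeff * concentration (sinks).
# All numbers are illustrative placeholders, not from Seager et al.
def steady_state(production, removal_coeff):
    """Concentration at which sources exactly balance sinks."""
    return production / removal_coeff

base_removal = 1e-3  # arbitrary units
# If the removal coefficient scales with the star's UV flux (a rough
# assumption), the standing concentration of the gas drops as UV rises:
for uv_scale in (1, 10, 100):
    print(uv_scale, steady_state(1.0, base_removal * uv_scale))
```

This is why the same biosphere can leave a detectable signature around a UV-quiet star and an undetectable one around a UV-loud star.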

The rate at which H, O, and OH destroy CH3Cl as a function of UV flux received from the parent star. The dashed lines represent the case of a 10% N2, 90% H2 atmosphere. The diamond and the circle show cases for an N2-dominated atmosphere and a present-day atmosphere, respectively. Main point: removal rate increases with UV flux. Image credit: Seager et al. (2013) ApJ

In order to illustrate this effect, Seager et al. took a biosignature gas, CH3Cl, and calculated the removal rate by reactions with H, O and OH as a function of UV flux. As we’d expect, the figure above shows that the removal rate increases with UV flux. This means that if we encounter a super-Earth around an M-type star that has a high UV flux, the rate of removal of a biosignature gas will depend largely on the concentration of the gas and how quickly it is being destroyed by H, O and OH.

The Most Likely Biosignature Gas

After considering the removal rate of several biosignature gases, Seager et al. find that ammonia (NH3) is likely to build up in the atmosphere of a super-Earth orbiting an M star. NH3 is created when a microbe harvests energy from a chemical energy gradient. On Earth, ammonia is not produced in large quantities so there isn’t a lot of it in our atmosphere. However, if an alien world produced as much ammonia as humans produced oxygen, it may actually be detectable in their atmosphere.

In a world where NH3 is a viable biosignature, life would be vastly different from what we see on Earth. It would need to be able to break the H2 and N2 bonds in the reaction 3H2 + N2 → 2NH3. Since this reaction is exothermic (it releases heat), it could be used to harvest energy. Is this possible, though? Seager et al. say that although there is no chemical reaction on Earth that can break both the H2 and N2 bonds, there is no physical reason that it can’t happen.
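The exothermicity is easy to verify with standard enthalpies of formation (the −45.9 kJ/mol figure for NH3 is the standard tabulated value at 298 K; H2 and N2, being elements in their standard states, are zero). A quick back-of-envelope check:

```python
# Reaction: 3 H2 + N2 -> 2 NH3
# Standard enthalpies of formation at 298 K, in kJ/mol.
# Elements in their standard states are zero by convention.
dHf = {"H2": 0.0, "N2": 0.0, "NH3": -45.9}

# Reaction enthalpy = sum over products minus sum over reactants:
dH = 2 * dHf["NH3"] - (3 * dHf["H2"] + dHf["N2"])
print(dH)  # about -92 kJ per mole of N2 consumed: negative, so exothermic
```

A negative reaction enthalpy means energy is released, which is exactly the gradient a hypothetical NH3-producing microbe would harvest.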

Thermal emission spectra for a 90% H2, 10% N2 super-Earth (10 Earth masses, 1.75 Earth radii). Each color spectrum represents a different concentration of ammonia. Higher ammonia concentrations create stronger emission features. Main point: If life was producing lots of NH3, we would be able to see it in the spectrum of a super-Earth orbiting an M star. Image credit: Seager et al. ApJ, 2013

The plot above shows what the spectrum of a planet would look like if it were producing lots of ammonia. This spectrum is taken in “thermal emission”, which means that we are looking at the planet just before it disappears behind its parent star. There are strong NH3 emission features (labeled) from 2-100 microns. JWST will be able to make observations in the 1-30 micron range and will likely observe at least a handful of super-Earths orbiting M-type stars. So, should we expect to find one of these NH3-producing life forms? This is where I leave the Seager et al. paper and let your imagination take over.

### Clifford V. Johnson - Asymptotia

Chalkboards Everywhere!
I love chalkboards (or blackboards if you prefer). I love showing up to give a talk somewhere and just picking up the chalk and going for it. No heavily over-packed slides full of too many fast moving things, as happens too much these days. If there is coloured chalk available, that's fantastic - special effects. It is getting harder to find these boards however. Designers of teaching rooms and other spaces seem embarrassed by them, and so they either get smaller or disappear, often in favour of the less than magical whiteboard. So in my continued reinvention of the way I produce slides for projection (I do this every so often), I've gone another step forward in returning to the look (and [...] Click to continue reading this post

### Symmetrybreaking - Fermilab/SLAC

LHCb experiment finds new particles

A new LHCb result adds two new composite particles to the quark model.

Today the LHCb experiment at CERN’s Large Hadron Collider announced the discovery of two new particles, each consisting of three quarks.

The particles, known as the Xi_b'- and Xi_b*-, were predicted to exist by the quark model but had never been observed. The LHCb collaboration submitted a paper reporting the finding to the journal Physical Review Letters.

Similar to the protons that the LHC accelerates and collides, these two new particles are baryons and made from three quarks bound together by the strong force.

But unlike protons—which are made of two up quarks and one down quark—the new Xi_b particles both contain one beauty quark, one strange quark and one down quark. Because the b quarks are so heavy, these particles are more than six times as massive as the proton.

“We had good reason to believe that we would be able to see at least one of these two predicted particles,” says Steven Blusk, an LHCb researcher and associate professor of physics at Syracuse University. “We were lucky enough to see both. It’s always very exciting to discover something new.”

Even though these two new particles contain the same combination of quarks, they have a different configuration of spin—which is a quantum mechanical property that describes a particle’s angular momentum. This difference in spin makes Xi_b*- a little heavier than Xi_b'-.

“Nature was kind and gave us two particles for the price of one," says Matthew Charles of the CNRS's LPNHE laboratory at Paris VI University. "The Xi_b'- is very close in mass to the sum of its decay products. If it had been just a little lighter, we wouldn't have seen it at all.”

In addition to the masses of these particles, the research team studied their relative production rates, their widths—which is a measurement of how unstable they are—and other details of their decays. The results match up with predictions based on the theory of Quantum Chromodynamics (QCD).

“QCD is a powerful framework that describes the interactions of quarks, but it is not that precise,” Blusk says. “If we do see something new, we need to be able to say that is not the result of uncertainties in QCD, but that it is in fact something new and unexpected. That is why we need precision data and precision measurements like these—to refine our models.”

The LHCb detector is one of the four main Large Hadron Collider experiments. It is specially designed to study hadrons and search for new particles.

“As you go up in mass, it becomes harder to discover new particles and requires unique detector capabilities,” Blusk says. “These new measurements really exploit the strengths of the LHCb detector, which has the unique ability to clearly identify hadrons.”

The measurements were made with the data taken at the LHC during 2011-2012. The LHC is currently being prepared—after its first long shutdown—to operate at higher energies and with more intense beams. It is scheduled to restart by spring 2015.

“I’m a firm believer that whenever you look for something, there is always the possibility that you will instead find something completely unexpected,” Blusk says. “Doing these generic searches opens the door for discovering new physics. We are just starting to explore the b-baryon sector, and more data from the next run of the LHC will allow us to discover more particles not seen before.”

Like what you see? Sign up for a free subscription to symmetry!

### Peter Coles - In the Dark

Marginal Notes – Are You For Or Against?

At the weekend I was listening to a programme on Radio 3, part of which was about the rise of the foreign language phrasebook over the last three or four centuries. It was a fascinating discussion, not least because it reminded me of an old Victorian English-Hindi phrasebook I found in a bookshop in Pune (India). The book was intended for the use of well-to-do British ladies, and the phrases were presumably chosen to reflect their likely needs as they travelled about India. I opened the book at random and found a translation of “Doctor, please help me. I am suffering from severe constipation”. In my experience as a Westerner travelling in India, constipation was the least of my worries…

Anyway, the real point of posting about this is that some of the old phrasebooks used to illustrate the programme had been heavily annotated by their owners. That reminded me of a discussion I’ve had with a number of people about whether they like to scribble in the margins of their books, or whether they believe this practice to be a form of sacrilege.

I’ll put my cards on the table straightaway. I like to annotate my books – especially the technical ones – and some of them have extensive commentaries written in them. I also like to mark up poems that I read; that helps me greatly to understand the structure. I don’t have a problem with scribbling in margins because I think that’s what margins are for. Why else would they be there?

This is a famous example – a page from Newton’s Principia, annotated by Leibniz:

Some of my fellow academics, however, regard such actions as scandalous and seem to think books should be venerated in their pristine state. Others probably find little use for printed books, given the plethora of digital resources now available online or via Kindles etc., so for them this is not an issue.

I’m interested to see how opinion divides with regard to the practice of writing in books, so here’s a poll for you to express your view:

<noscript><a href="http://polldaddy.com/poll/8460987">Take Our Poll</a></noscript>

### The n-Category Cafe

Integral Octonions (Part 8)

This time I’d like to summarize some work I did in the comments last time, egged on by a mysterious entity who goes by the name of ‘Metatron’.

As you probably know, there’s an archangel named Metatron who appears in apocryphal Old Testament texts such as the Second Book of Enoch. These texts rank Metatron second only to YHWH himself. I don’t think the Metatron posting comments here is the same guy. However, it’s a good name for someone interested in lattices and geometry, since there’s a variant of the Cabbalistic Tree of Life called Metatron’s Cube, which looks like this:

This design includes within it the $\mathrm{G}_2$ root system, a 2d projection of a stellated octahedron, and a perspective drawing of a hypercube.

Anyway, there are lattices in 26 and 27 dimensions that play rather tantalizing and mysterious roles in bosonic string theory. Metatron challenged me to find octonionic descriptions of them. I did.

Given a lattice $L$ in $n$-dimensional Euclidean space, there’s a way to build a lattice $L^{++}$ in $(n+2)$-dimensional Minkowski spacetime. This is called the ‘over-extended’ version of $L$.

If we start with the lattice $\mathrm{E}_8$ in 8 dimensions, this process gives a lattice called $\mathrm{E}_{10}$, which plays an interesting but mysterious role in superstring theory. This shouldn’t come as a complete shock, since superstring theory lives in 10 dimensions, and it can be nicely formulated using octonions, as can the lattice $\mathrm{E}_8$.

If we start with the lattice called $\mathrm{D}_{24}$, this over-extension process gives a lattice $\mathrm{D}_{24}^{++}$. This describes the ‘cosmological billiards’ for the 3d compactification of the theory of gravity arising from bosonic string theory. Again, this shouldn’t come as a complete shock, since bosonic string theory lives in 26 dimensions.

Last time I gave a nice description of $\mathrm{E}_{10}$: it consists of $2 \times 2$ self-adjoint matrices with integral octonions as entries.

It would be nice to get a similar description of $\mathrm{D}_{24}^{++}$. Indeed, one exists! But to find it, it’s actually easier to go up to 27 dimensions, because the space of $3 \times 3$ self-adjoint matrices with octonion entries is 27-dimensional. And indeed, there’s a 27-dimensional lattice waiting to be described with octonions.

You see, for any lattice $L$ in $n$-dimensional Euclidean space, there’s also a way to build a lattice $L^{+++}$ in $(n+3)$-dimensional Minkowski spacetime, called the ‘very extended’ version of $L$.

If we do this to $L = \mathrm{E}_8$ we get an 11-dimensional lattice called $\mathrm{E}_{11}$, which has mysterious connections to M-theory. But if we do it to $\mathrm{D}_{24}$ we get a 27-dimensional lattice sometimes called $\mathrm{K}_{27}$. You can read about both these lattices here:

I’ll prove that both $\mathrm{E}_{11}$ and $\mathrm{K}_{27}$ have nice descriptions in terms of integral octonions. To do this, I’ll use the explanation of over-extended and very extended lattices given here:

These constructions use a 2-dimensional lattice called $\mathrm{H}$. Let’s get to know this lattice. It’s very simple.

### A 2-dimensional Lorentzian lattice

Up to isometry, there’s a unique even unimodular lattice in Minkowski spacetime whenever its dimension is 2 more than a multiple of 8. The simplest of these is $\mathrm{H}$: it’s the unique even unimodular lattice in 2-dimensional Minkowski spacetime.

There are various ways to coordinatize $\mathrm{H}$. The easiest, I think, is to start with $\mathbb{R}^2$ and give it the metric $g$ with

$g(x,x) = -2uv$

when $x = (u,v)$. Then, sitting in $\mathbb{R}^2$, the lattice $\mathbb{Z}^2$ is even and unimodular. So, it’s a copy of $\mathrm{H}$.

Let’s get to know it a bit. The coordinates $u$ and $v$ are called lightcone coordinates, since the $u$ and $v$ axes form the lightcone in 2d Minkowski spacetime. In other words, the vectors

$\ell = (1,0), \quad \ell' = (0,1)$

are lightlike, meaning

$g(\ell,\ell) = 0, \quad g(\ell',\ell') = 0$

Their sum is a timelike vector

$\tau =\ell +\ell \prime =\left(1,1\right) \tau = \ell + \ell\text{'} = \left(1,1\right)$

since the inner product of $\tau \tau$ with itself is negative; in fact

$g\left(\tau ,\tau \right)=-2 g\left(\tau,\tau\right) = -2 $

Their difference is a spacelike vector

$\sigma =\ell -\ell \prime =\left(1,-1\right) \sigma = \ell - \ell\text{'} = \left(1,-1\right) $

since the inner product of $\sigma \sigma$ with itself is positive; in fact

$g\left(\sigma ,\sigma \right)=2 g\left(\sigma,\sigma\right) = 2 $

Since the vectors $\tau \tau$ and $\sigma \sigma$ are orthogonal and have length $\sqrt{2}\sqrt\left\{2\right\}$ in the metric $gg$, we get a square of area $22$ with corners

$0,\tau ,\sigma ,\tau +\sigma 0, \tau, \sigma, \tau + \sigma $

that is,

$\left(0,0\right),\phantom{\rule{thickmathspace}{0ex}}\left(1,1\right),\phantom{\rule{thickmathspace}{0ex}}\left(1,-1\right),\phantom{\rule{thickmathspace}{0ex}}\left(2,0\right) \left(0,0\right),\; \left(1,1\right),\; \left(1,-1\right), \;\left(2,0\right) $

If you draw a picture, you can see by dissection that this square has twice the area of the unit cell

$\left(0,0\right),\phantom{\rule{thickmathspace}{0ex}}\left(1,0\right),\phantom{\rule{thickmathspace}{0ex}}\left(0,1\right),\phantom{\rule{thickmathspace}{0ex}}\left(1,1\right) \left(0,0\right),\; \left(1,0\right), \; \left(0,1\right) , \; \left(1,1\right) $

So, the unit cell has area 1, and the lattice is unimodular as claimed. Furthermore, every vector in the lattice has even inner product with itself, so this lattice is even.
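Both properties are easy to sanity-check numerically. Here is a small sketch of my own (not from the post): in the basis $\{(1,0),(0,1)\}$, the polarization identity gives the Gram matrix of $\mathrm{H}$, whose determinant is $\pm 1$ exactly when the lattice is unimodular.

```python
import numpy as np

# Gram matrix of H in the basis {(1,0), (0,1)}: from g((u,v),(u,v)) = -2uv
# and the polarization identity, g(e1,e1) = g(e2,e2) = 0 and g(e1,e2) = -1.
G_H = np.array([[0, -1],
                [-1, 0]])

# Unimodular: the Gram determinant is -1 (signature (1,1)).
print(round(np.linalg.det(G_H)))

# Even: g(x,x) = -2uv is an even integer for every integer vector x = (u,v).
x = np.array([3, -2])
print(x @ G_H @ x)  # 12, matching -2 * 3 * (-2)
```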

### Over-extended lattices

Given a lattice $L$ in Euclidean $\mathbb{R}^n$,

$L^{++} = L \oplus \mathrm{H}$

is a lattice in $(n+2)$-dimensional Minkowski spacetime, also known as $\mathbb{R}^{n+1,1}$. This lattice $L^{++}$ is called the over-extension of $L$.

A direct sum of even lattices is even. A direct sum of unimodular lattices is unimodular. Thus if $L$ is even and unimodular, so is $L^{++}$.

All this is obvious. But here are some deeper facts about even unimodular lattices. First, they only exist in $\mathbb{R}^n$ when $n$ is a multiple of 8. Second, they only exist in $\mathbb{R}^{n+1,1}$ when $n$ is a multiple of 8.
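Since everything here is encoded in Gram matrices, over-extension can be checked concretely. A sketch of my own (assuming, as is standard, that the $\mathrm{E}_8$ Cartan matrix serves as a Gram matrix for the $\mathrm{E}_8$ root lattice): glue $\mathrm{H}$ onto $\mathrm{E}_8$ block-diagonally and verify the result is even and unimodular.

```python
import numpy as np

# E8 Cartan matrix = Gram matrix of E8 in a basis of simple roots
# (chain 1-2-3-4-5-6-7 with node 8 attached to node 5); det = 1.
G_E8 = np.array([
    [ 2, -1,  0,  0,  0,  0,  0,  0],
    [-1,  2, -1,  0,  0,  0,  0,  0],
    [ 0, -1,  2, -1,  0,  0,  0,  0],
    [ 0,  0, -1,  2, -1,  0,  0,  0],
    [ 0,  0,  0, -1,  2, -1,  0, -1],
    [ 0,  0,  0,  0, -1,  2, -1,  0],
    [ 0,  0,  0,  0,  0, -1,  2,  0],
    [ 0,  0,  0,  0, -1,  0,  0,  2],
])
G_H = np.array([[0, -1],
                [-1, 0]])

# The over-extension E8 ⊕ H = E10: block-diagonal Gram matrix.
G_E10 = np.zeros((10, 10), dtype=int)
G_E10[:8, :8] = G_E8
G_E10[8:, 8:] = G_H

print(round(np.linalg.det(G_E8)))   # 1: E8 is unimodular
print(round(np.linalg.det(G_E10)))  # -1: E10 is unimodular, signature (9,1)
# Every diagonal entry is even, so the lattice is even.
print(all(d % 2 == 0 for d in np.diag(G_E10)))
```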

But here’s the really amazing thing. In the Euclidean case there can be lots of different even unimodular lattices in a given dimension. In 8 dimensions there’s just one, up to isometry, called $\mathrm{E}_8$. In 16 dimensions there are two. In 24 dimensions there are 24. In 32 dimensions there are at least 1,160,000,000, and the number continues to explode after that. On the other hand, in the Lorentzian case there’s just one even unimodular lattice in a given dimension, if there are any at all.

More precisely: any two even unimodular lattices in $\mathbb{R}^{n+1,1}$ are isomorphic via an isometry, that is, a linear transformation preserving the metric. We then call them isometric.

Let’s look at some examples. Up to isometry, $\mathrm{E}_8$ is the only even unimodular lattice in 8-dimensional Euclidean space. We can identify it with the lattice of integral octonions, $\mathbf{O} \subseteq \mathbb{O}$, with the inner product

$g(X,X) = 2XX^*$

$\mathrm{E}_8^{++}$ is usually called $\mathrm{E}_{10}$. Up to isometry, this is the unique even unimodular lattice in 10-dimensional Minkowski spacetime. There are lots of ways to describe it, but last time we saw that it’s the lattice of $2 \times 2$ self-adjoint matrices with integral octonions as entries:

$\mathfrak{h}_2(\mathbf{O}) = \left\{ \begin{pmatrix} a & X \\ X^* & b \end{pmatrix} : a,b \in \mathbb{Z}, \; X \in \mathbf{O} \right\}$

where the metric comes from $-2$ times the determinant:

$x = \begin{pmatrix} a & X \\ X^* & b \end{pmatrix} \;\Rightarrow\; g(x,x) = -2\det(x) = 2XX^* - 2ab$

We’ll see a fancier formula like this later on.

There are 24 even unimodular lattices in 24-dimensional Euclidean space. One of them is

$\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8$

Another is $\mathrm{D}_{24}$. This is the lattice of vectors in $\mathbb{R}^{24}$ whose components are integers with an even sum. It’s also the root lattice of the Lie group $\mathrm{Spin}(48)$.

If we take the over-extension of any of these lattices, we get an even unimodular lattice in 26-dimensional Minkowski spacetime… and all these are isometric! The over-extension process ‘washes out the difference’ between them. In particular,

$\mathrm{D}_{24}^{++} \cong (\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{++}$

This is nice because up to a scale factor, $\mathrm{E}_8$ is the lattice of integral octonions. So, there’s a description of $\mathrm{D}_{24}^{++}$ using three integral octonions! But the story is prettier if we go up an extra dimension.

### Very extended lattices

After the over-extended version $L^{++}$ of a lattice $L$ in Euclidean space comes the ‘very extended’ version, called $L^{+++}$. If you ponder the paper by Gaberdiel et al, you can see this is the direct sum of the over-extension $L^{++}$ and a 1-dimensional lattice called $\mathrm{A}_1$. Here $\mathrm{A}_1$ is just $\mathbb{Z}$ with the metric

$g(x,x) = 2x^2$

It’s even but not unimodular.

In short, the very extended version of $L$ is

$L^{+++} = L^{++} \oplus \mathrm{A}_1 = L \oplus \mathrm{H} \oplus \mathrm{A}_1$

If $L$ is even, so is $L^{+++}$. But if $L$ is unimodular, this will not be true of $L^{+++}$.
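The failure of unimodularity is just a determinant computation. A small sketch of my own (bases as in the text: $\mathrm{H}$ in lightcone coordinates, $\mathrm{A}_1$ generated by a norm-squared-2 vector): determinants multiply over direct sums, so gluing on $\mathrm{H} \oplus \mathrm{A}_1$ multiplies the Gram determinant by $-2$.

```python
import numpy as np

# Gram matrices of the two summands glued onto L.
G_H  = np.array([[0, -1],
                 [-1, 0]])
G_A1 = np.array([[2]])

# det G(L+++) = det G(L) * det G(H) * det G(A1) = det G(L) * (-2)
factor = round(np.linalg.det(G_H)) * round(np.linalg.det(G_A1))
print(factor)  # -2: even if L is unimodular (det ±1), L+++ has determinant ∓2
```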

The very extended version of $\mathrm{E}_8$ is called $\mathrm{E}_{11}$. This is a fascinating thing, but I want to talk about the very extended version of $\mathrm{D}_{24}$, and how to describe it using octonions.

Let $\mathfrak{h}_3(\mathbb{O})$ be the space of $3 \times 3$ self-adjoint octonionic matrices. It’s 27-dimensional, since a typical element looks like

$x = \begin{pmatrix} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & c \end{pmatrix}$

where $a,b,c \in \mathbb{R}$ and $X,Y,Z \in \mathbb{O}$. It’s called the exceptional Jordan algebra. We don’t need to know about Jordan algebras now, but the name encapsulates the fact that if $x \in \mathfrak{h}_3(\mathbb{O})$, so is $x^2$.

There’s a 2-parameter family of metrics on the exceptional Jordan algebra that are invariant under all Jordan algebra automorphisms. They have

$g(x,x) = \alpha \, \mathrm{tr}(x^2) + \beta \, \mathrm{tr}(x)^2$

for $\alpha, \beta \in \mathbb{R}$ with $\alpha \ne 0$. Some are Euclidean and some are Lorentzian.

Sitting inside the exceptional Jordan algebra is the lattice of $3 \times 3$ self-adjoint matrices with integral octonions as entries:

$\mathfrak{h}_3(\mathbf{O}) = \left\{ \begin{pmatrix} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & c \end{pmatrix} : a,b,c \in \mathbb{Z}, \; X,Y,Z \in \mathbf{O} \right\}$

And here’s the cool part:

Theorem. There is a Lorentzian inner product $g$ on the exceptional Jordan algebra that is invariant under all automorphisms and makes the lattice $\mathfrak{h}_3(\mathbf{O})$ isometric to $\mathrm{K}_{27} \cong \mathrm{D}_{24}^{+++}$.

Proof. We will prove that the metric

$g(x,x) = \mathrm{tr}(x^2) - \mathrm{tr}(x)^2$

obeys all the conditions of this theorem. From what I’ve already said, it is invariant under all Jordan algebra automorphisms. The challenge is to show that it makes $\mathfrak{h}_3(\mathbf{O})$ isometric to $\mathrm{D}_{24}^{+++}$. But instead of $\mathrm{D}_{24}^{+++}$, we can work with $(\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{+++}$, since we have seen that

$\mathrm{D}_{24}^{+++} \cong (\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{+++}$

Let us examine the metric $g$ in more detail. Take any element $x \in \mathfrak{h}_3(\mathbf{O})$:

$x = \begin{pmatrix} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & c \end{pmatrix}$

where $a,b,c \in \mathbb{Z}$ and $X,Y,Z \in \mathbf{O}$. Then

$\mathrm{tr}(x^2) = a^2 + b^2 + c^2 + 2(XX^* + YY^* + ZZ^*)$

while

$\mathrm{tr}(x)^2 = (a+b+c)^2$

Thus

$g(x,x) = \mathrm{tr}(x^2) - \mathrm{tr}(x)^2 = 2(XX^* + YY^* + ZZ^*) - 2(ab + bc + ca)$
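The scalar identity behind this computation can be sanity-checked with real numbers standing in for the octonions, since for real $X$ we have $XX^* = X^2$. A quick check of my own, using an ordinary symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, X, Y, Z = rng.integers(-5, 6, size=6)

# Real stand-ins for the octonionic entries: x is then just symmetric.
x = np.array([[a, X, Y],
              [X, b, Z],
              [Y, Z, c]], dtype=float)

# tr(x^2) - tr(x)^2 should equal 2(XX* + YY* + ZZ*) - 2(ab + bc + ca).
lhs = np.trace(x @ x) - np.trace(x) ** 2
rhs = 2 * (X*X + Y*Y + Z*Z) - 2 * (a*b + b*c + c*a)
print(lhs == rhs)  # True
```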

It follows that with this metric, the diagonal matrices are orthogonal to the off-diagonal matrices. An off-diagonal matrix $x \in \mathfrak{h}_3(\mathbf{O})$ is a triple $(X,Y,Z) \in \mathbf{O}^3$, and has

$g(x,x) = 2(XX^* + YY^* + ZZ^*)$

Thanks to the factor of 2, this metric makes the lattice of these off-diagonal matrices isometric to $\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8$. Since

$(\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{+++} = \mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{H} \oplus \mathrm{A}_1$

it thus suffices to show that the 3-dimensional Lorentzian lattice of diagonal matrices in $\mathfrak{h}_3(\mathbf{O})$ is isometric to

$\mathrm{H} \oplus \mathrm{A}_1$

A diagonal matrix $x \in \mathfrak{h}_3(\mathbf{O})$ is a triple $(a,b,c) \in \mathbb{Z}^3$, and on these triples the inner product $g$ is given by

$g(x,x) = -2(ab + bc + ca)$

If we restrict attention to triples of the form $x = (a,b,0)$, we get a 2-dimensional Lorentzian lattice: a copy of $\mathbb{Z}^2$ with inner product

$g(x,x) = -2ab$

This is just $\mathrm{H}$.

We can use this to show that the lattice of all triples $(a,b,c) \in \mathbb{Z}^3$, with the inner product $g$, is isometric to $\mathrm{H} \oplus \mathrm{A}_1$.

Remember, $\mathrm{A}_1$ is a 1-dimensional lattice generated by a spacelike vector whose norm squared is 2. So, it suffices to show that the lattice $\mathbb{Z}^3$ is generated by vectors of the form $(a,b,0)$ together with a spacelike vector of norm squared 2 that is orthogonal to all those of the form $(a,b,0)$.

To do this, we need to describe the inner product $g$ on $\mathbb{Z}^3$ more explicitly. For this, we can use the polarization identity

$g(x,x') = \frac{1}{2}\left(g(x+x',x+x') - g(x,x) - g(x',x')\right)$

Remember, if $x = (a,b,c)$ we have

$g(x,x) = -2(ab + bc + ca)$

So, if we also have $x' = (a',b',c')$, the polarization identity gives

$g(x,x') = -(ab' + a'b) - (bc' + b'c) - (ca' + c'a)$

We are looking for a spacelike vector $x' = (a',b',c')$ that is orthogonal to all those of the form $x = (a,b,0)$. For this, it is necessary and sufficient to have

$0 = g((1,0,0),(a',b',c')) = -b' - c'$

and

$0 = g((0,1,0),(a',b',c')) = -a' - c'$

An example is $x' = (1,1,-1)$. This has

$g(x',x') = -2(1 - 1 - 1) = 2$

so it is spacelike, as desired. Even better, it has norm squared 2. And even better, this vector $x'$, along with those of the form $(a,b,0)$, generates the lattice $\mathbb{Z}^3$.

So we have shown what we needed: the lattice of all triples $(a,b,c) \in \mathbb{Z}^3$ is generated by those of the form $(a,b,0)$ together with a spacelike vector of norm squared 2 that is orthogonal to all those of the form $(a,b,0)$. $\blacksquare$
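The whole argument about the diagonal lattice can be verified mechanically. A sketch of my own: write the Gram matrix of $g$ on $\mathbb{Z}^3$, check that $(1,1,-1)$ is orthogonal to the $\mathrm{H}$-plane and has norm squared 2, and check that it completes $(1,0,0), (0,1,0)$ to a basis of $\mathbb{Z}^3$ (change-of-basis determinant $\pm 1$).

```python
import numpy as np

# Gram matrix of g((a,b,c),(a,b,c)) = -2(ab + bc + ca) on Z^3.
G = -np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

e1 = np.array([1, 0, 0])
e2 = np.array([0, 1, 0])
s  = np.array([1, 1, -1])

# s is orthogonal to the copy of H spanned by e1 and e2...
print(e1 @ G @ s, e2 @ G @ s)   # 0 0
# ...and spacelike with norm squared 2, so it generates a copy of A1.
print(s @ G @ s)                # 2
# e1, e2, s form a basis of Z^3: the change-of-basis matrix has det ±1.
B = np.column_stack([e1, e2, s])
print(round(np.linalg.det(B)))  # -1
```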

This theorem has three nice spinoffs:

Corollary. With the same Lorentzian inner product $g$ on the exceptional Jordan algebra, the lattice $\mathrm{D}_{24}^{++}$ is isometric to the sublattice of $\mathfrak{h}_3(\mathbf{O})$ where a fixed diagonal entry is set equal to zero, e.g.:

$\left\{ \begin{pmatrix} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & 0 \end{pmatrix} : a,b \in \mathbb{Z}, \; X,Y,Z \in \mathbf{O} \right\}$

Proof. Use the fact that with the metric $g$, the diagonal matrices

$\left\{ \begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & 0 \end{pmatrix} : a,b \in \mathbb{Z} \right\}$

form a copy of $\mathrm{H}$, so the matrices above form a copy of

$\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{H} \cong (\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{++} \cong \mathrm{D}_{24}^{++} \qquad \blacksquare$

Corollary. With the same Lorentzian inner product $g$ on the exceptional Jordan algebra, the lattice $\mathrm{E}_{11} = \mathrm{E}_8^{+++}$ is isometric to the sublattice of $\mathfrak{h}_3(\mathbf{O})$ where two fixed off-diagonal entries are set equal to zero, e.g.:

$\left\{ \begin{pmatrix} a & X & 0 \\ X^* & b & 0 \\ 0 & 0 & c \end{pmatrix} : a,b,c \in \mathbb{Z}, \; X \in \mathbf{O} \right\}$

Proof. Use the fact that with the metric $g$, the diagonal matrices

$\left\{ \begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix} : a,b,c \in \mathbb{Z} \right\}$

form a copy of $\mathrm{H} \oplus \mathrm{A}_1$, so the matrices above form a copy of

$\mathrm{E}_8 \oplus \mathrm{H} \oplus \mathrm{A}_1 \cong \mathrm{E}_8^{+++} \qquad \blacksquare$

Corollary. With the same Lorentzian inner product $g$ on the exceptional Jordan algebra, the lattice $\mathrm{E}_{10} = \mathrm{E}_8^{++}$ is isometric to the sublattice of $\mathfrak{h}_3(\mathbf{O})$ where two fixed off-diagonal entries and one diagonal entry are set equal to zero, e.g.:

$\left\{ \begin{pmatrix} a & X & 0 \\ X^* & b & 0 \\ 0 & 0 & 0 \end{pmatrix} : a,b \in \mathbb{Z}, \; X \in \mathbf{O} \right\}$

Proof. Use the previous corollary; this is the obvious copy of $\mathrm{E}_8^{++} \cong \mathrm{E}_8 \oplus \mathrm{H}$ inside $\mathrm{E}_8^{+++} \cong \mathrm{E}_8 \oplus \mathrm{H} \oplus \mathrm{A}_1$. $\blacksquare$

## November 18, 2014

### Clifford V. Johnson - Asymptotia

Three Cellos
These three fellows, perched on wooden boxes, just cried out for a quick sketch of them during the concert. It was the LA Phil playing Penderecki's Concerto Grosso for Three Cellos, preceded by the wonderful Rapsodie Espagnole by Ravel and followed by that sublime serving of England, Elgar's Enigma Variations, which brought tears to my eyes; I'd not heard it in so long. -cvj

### Symmetrybreaking - Fermilab/SLAC

Auger reveals subtlety in cosmic rays

Scientists home in on the make-up of cosmic rays, which are more nuanced than previously thought.

Unlike the twinkling little star of nursery rhyme, the cosmic ray is not the subject of any well-known song about an astronomical wonder. And yet while we know all about the make-up of stars, after decades of study scientists still wonder what cosmic rays are.

Thanks to an abundance of data collected over eight years, researchers in the Pierre Auger collaboration are closer to finding out what cosmic rays—in particular ultrahigh-energy cosmic rays—are made of. Their composition would tell us more about where they come from: perhaps a black hole, a cosmic explosion or colliding galaxies.

Auger’s latest research has knocked out two possibilities put forward by the prevailing wisdom: that UHECRs are dominated by either lightweight protons or heavier nuclei such as iron. According to Auger, one or more middleweight components, such as helium or nitrogen nuclei, must make up a significant part of the cosmic-ray mix.

“Ten years ago, people couldn’t posit that ultrahigh-energy cosmic rays would be made of something in between protons and iron,” says Fermilab scientist and Auger collaborator Eun-Joo Ahn, who led the analysis. “The idea would have garnered sidelong glances.”

Cosmic rays are particles that rip through outer space at incredibly high energies. UHECRs, upwards of 10¹⁸ electronvolts, are rarely observed, and no one knows exactly where they originate.

One way physicists reach back to a cosmic ray’s origins is by looking to the descendants of its collisions. The collision of one of these breakneck particles with the Earth’s upper atmosphere sets off a domino effect, generating more particles that in turn collide with air and produce still more. These ramifying descendants form an air shower, spreading out like the branches of a tree reaching toward the Earth. Twenty-seven telescopes at the Argentina-based Auger Observatory look for ultraviolet light resulting from the cosmic rays, and 1600 detectors, distributed over a swath of land the size of Rhode Island, record the showers’ signals.

Scientists measure how deep into the atmosphere—how close to Earth—the air shower is when it maxes out. The closer to the Earth, the more lightweight the original cosmic ray particle is likely to be. A proton, for example, would penetrate the atmosphere more deeply before setting off an air shower than would an iron nucleus.

Auger scientists compared their data with three different simulation models to narrow the possible compositions of cosmic rays.

Auger’s favoring a compositional middle ground between protons and iron nuclei is based on a granular take on their data, a first for cosmic-ray research. In earlier studies, scientists distilled measurements of shower depths to two values: the average and standard deviation of all shower depths in a given cosmic-ray energy range. Their latest study, however, made no such generalization. Instead, it used the full distribution of data on air shower depth. If researchers measured 1000 different air shower depths for a specific UHECR energy, all 1000 data points—not just the average—went into Auger’s simulation models.

The result was a more nuanced picture of cosmic ray composition. The analysis also gave researchers greater insight into their simulations. For one model, the data and predictions could not be matched no matter the composition of the cosmic ray, giving scientists a starting point for constraining the model further.

“Just getting the distribution itself was exciting,” Ahn says.

Auger will continue to study cosmic rays at even higher energies, gathering more statistics to answer the question: What exactly are cosmic rays made of?

Like what you see? Sign up for a free subscription to symmetry!

### Quantum Diaries

Stanley Wojcicki awarded 2015 Panofsky Prize

This article appeared in Fermilab Today on Nov. 18, 2014.

Stanley Wojcicki

In late October, the American Physical Society Division of Particles and Fields announced that Stanford University professor emeritus of physics and Fermilab collaborator Stanley Wojcicki has been selected as the 2015 recipient of the W.K.H. Panofsky Prize in experimental particle physics. Panofsky, who died in 2007, was SLAC National Accelerator Laboratory’s first director, holding that position from 1961 to 1984.

“I knew Pief Panofsky for about 40 years, and I think he was a great man not only as a scientist, but also as a statesman and as a human being,” said Wojcicki, referring to Panofsky by his nickname. “So it doubles my pleasure and satisfaction in receiving an award that bears his name.”

Wojcicki was given the prestigious award “for his leadership and innovative contributions to experiments probing the flavor structure of quarks and leptons, in particular for his seminal role in the success of the MINOS long-baseline neutrino experiment.”

Wojcicki is a founding member of MINOS. He served as spokesperson from 1999 to 2004 and as co-spokesperson from 2004 to 2010.

“I feel a little embarrassed being singled out because, in high-energy physics, there is always a large number of individuals who have contributed and are absolutely essential to the success of the experiment,” he said. “This is certainly true of MINOS, where we had and have a number of excellent people.”

Wojcicki recalls the leadership of Caltech physicist Doug Michael, former MINOS co-spokesperson, who died in 2005.

“I always regret that Doug did not have a chance to see the results of an experiment that he very much contributed to,” Wojcicki said.

In 2006, MINOS measured an important parameter related to the mass difference between two neutrino types.

Fermilab physicist Doug Glenzinski chaired the Panofsky Prize review committee and says that the committee was impressed by Wojcicki’s work on flavor physics, which focuses on how particles change from one type to another, and his numerous contributions over decades of research.

“He is largely credited with making MINOS happen, with thinking about ways to advance neutrino measurements and with playing an active role in all aspects of the experiment from start to finish,” Glenzinski said.

More than 30 years ago, Wojcicki collaborated on charm quark research at Fermilab, later joining Fermilab’s neutrino explorations. Early on Wojcicki served on the Fermilab Users Executive Committee from 1969-71 and on the Program Advisory Committee from 1972-74. He has since been on many important committees, including serving as chair of the High-Energy Physics Advisory Panel for six years and as member of the P5 committee from 2005-08. He now continues his involvement in neutrino physics, participating in the NOvA and MINOS+ experiments.

“I feel really fortunate to have been connected with Fermilab since its inception,” Wojcicki said. “I think Fermilab is a great lab, and I hope it will continue as such for many years to come.”

Rich Blaustein

### Peter Coles - In the Dark

German Tanks, Traffic Wardens, and the End of the World

The other day I was looking through some documents relating to the portfolio of courses and modules offered by the Department of Mathematics here at the University of Sussex when I came across a reference to the German Tank Problem. Not knowing what this was I did a google search and found a quite comprehensive wikipedia page on the subject which explains the background rather well.

It seems that during the latter stages of World War 2 the Western Allies made sustained efforts to determine the extent of German tank production, approaching the problem in two major ways: conventional intelligence gathering, and statistical estimation, with the latter approach often proving the more accurate and reliable, as was the case in estimating the production of Panther tanks just prior to D-Day. The Allied command structure had thought the heavy Panzer V (Panther) tanks, with their high-velocity, long-barrelled 75 mm L/70 guns, were uncommon, and would be encountered in northern France only in small numbers. The US Army was confident that the Sherman tank would perform well against the Panzer III and IV tanks it expected to meet, but would struggle against the Panzer V. Shortly before D-Day, rumours began to circulate that large numbers of Panzer V tanks had been deployed in Normandy.

To ascertain if this were true the Allies attempted to estimate the number of Panzer V  tanks being produced. To do this they used the serial numbers on captured or destroyed tanks. The principal numbers used were gearbox numbers, as these fell in two unbroken sequences; chassis, engine numbers and various other components were also used. The question to be asked is how accurately can one infer the total number of tanks based on a sample of a few serial numbers. So accurate did this analysis prove to be that, in the statistical theory of estimation, the general problem of estimating the maximum of a discrete uniform distribution from sampling without replacement is now known as the German tank problem. I’ll leave the details to the wikipedia discussion, which in my opinion is yet another demonstration of the advantages of a Bayesian approach to this kind of problem.
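To give a flavour of the frequentist side of the problem, here is a quick sketch (my own illustration, not taken from the wikipedia article) of the classical minimum-variance unbiased estimator for the maximum of a discrete uniform distribution, checked by simulation:

```python
import random

random.seed(1)

def tank_estimate(serials):
    """Minimum-variance unbiased estimator for the maximum N of a
    discrete uniform distribution sampled without replacement:
        N_hat = m + m/k - 1,
    where m is the largest serial observed and k the sample size."""
    m, k = max(serials), len(serials)
    return m + m / k - 1

# Check the estimator on simulated "production runs" of N = 1500 tanks,
# observing k = 5 captured serial numbers each time.
N, k, trials = 1500, 5, 20_000
estimates = []
for _ in range(trials):
    serials = random.sample(range(1, N + 1), k)
    estimates.append(tank_estimate(serials))
print(sum(estimates) / trials)  # averages out close to the true N = 1500
```

The intuition is the same as in the Bayesian treatment below: the largest serial seen, plus the typical gap between serials, is a good guess at the total.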

This problem is a more general version of a problem that I first came across about 30 years ago. I think it was devised in the following form by Steve Gull, but I can’t be sure of that.

Imagine you are a visitor in an unfamiliar, but very populous, city. For the sake of argument let’s assume that it is in China. You know that this city is patrolled by traffic wardens, each of whom carries a number on their uniform.  These numbers run consecutively from 1 (smallest) to T (largest) but you don’t know what T is, i.e. how many wardens there are in total. You step out of your hotel and discover traffic warden number 347 sticking a ticket on your car. What is your best estimate of T, the total number of wardens in the city? I hope the similarity to the German Tank Problem is obvious, except in this case it is much simplified by involving just one number rather than a sample.

I gave a short lunchtime talk about this many years ago when I was working at Queen Mary College, in the University of London. Every Friday, over beer and sandwiches, a member of staff or research student would give an informal presentation about their research, or something related to it. I decided to give a talk about bizarre applications of probability in cosmology, and this problem was intended to be my warm-up. I was amazed at the answers I got to this simple question. The majority of the audience denied that one could make any inference at all about T based on a single observation like this, other than that it  must be at least 347.

Actually, a single observation like this can lead to a useful inference about T, using Bayes’ theorem. Suppose we have really no idea at all about T before making our observation; we can then adopt a uniform prior probability. Of course there must be an upper limit on T. There can’t be more traffic wardens than there are people, for example. Although China has a large population, the prior probability of there being, say, a billion traffic wardens in a single city must surely be zero. But let us take the prior to be effectively constant. Suppose the actual number of the warden we observe is t. Now we have to assume that we have an equal chance of coming across any one of the T traffic wardens outside our hotel. Each value of t (from 1 to T) is therefore equally likely. I think this is the reason that my astronomers’ lunch audience thought there was no information to be gleaned from an observation of any particular value, i.e. t=347.

Let us simplify this argument further by allowing two alternative “models” for the frequency of Chinese traffic wardens. One has T=1000, and the other (just to be silly) has T=1,000,000. If I find number 347, which of these two alternatives do you think is more likely? Think about the kind of numbers that occupy the range from 1 to T. In the first case, most of the numbers have 3 digits. In the second, most of them have 6. If there were a million traffic wardens in the city, it is quite unlikely you would find a random individual with a number as small as 347. If there were only 1000, then 347 is just a typical number. There are strong grounds for favouring the first model over the second, simply based on the number actually observed. To put it another way, we would be surprised to encounter number 347 if T were actually a million. We would not be surprised if T were 1000.

One can extend this argument to the entire range of possible values of T, and ask a more general question: if I observe traffic warden number t what is the probability I assign to each value of T? The answer is found using Bayes’ theorem. The prior, as I assumed above, is uniform. The likelihood is the probability of the observation given the model. If I assume a value of T, the probability P(t|T) of each value of t (up to and including T) is just 1/T (since each of the wardens is equally likely to be encountered). Bayes’ theorem can then be used to construct a posterior probability of P(T|t). Without going through all the nuts and bolts, I hope you can see that this probability will tail off for large T. Our observation of a (relatively) small value for t should lead us to suspect that T is itself (relatively) small. Indeed it’s a reasonable “best guess” that T=2t. This makes intuitive sense because the observed value of t then lies right in the middle of its range of possibilities.

Before going on, it is worth mentioning one other point about this kind of inference: it is not at all powerful. Note that the likelihood just varies as 1/T. That of course means that small values are favoured over large ones. But note that this probability is uniform in logarithmic terms. So although T=1000 is more probable than T=1,000,000, the range between 1000 and 10,000 is roughly as likely as the range between 1,000,000 and 10,000,000, assuming there is no prior information. So although it tells us something, it doesn’t actually tell us very much. Just like any probabilistic inference, there’s a chance that it is wrong, perhaps very wrong.
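These two claims, the likelihood ratio between the two models and the log-uniform character of the 1/T posterior, are easy to check numerically. In the sketch below (my own illustration), T_max is an arbitrary cutoff standing in for the vague upper limit discussed earlier:

```python
import math

t = 347          # the observed warden number
T_max = 10**7    # arbitrary prior cutoff: there must be *some* upper limit

# With a uniform prior on T and likelihood P(t|T) = 1/T for T >= t, the
# (unnormalised) posterior mass in a range [a, b] is sum_{T=a}^{b} 1/T,
# which is very nearly ln(b/a) once a is large.
def mass(a, b):
    return sum(1.0 / T for T in range(a, b + 1))

Z = math.log(T_max / t)  # good approximation to the full normalisation

# The two-model comparison: T = 1000 is a thousand times more likely
# than T = 1,000,000 to have produced warden number 347.
ratio = (1 / 1000) / (1 / 10**6)

# The log-uniform character of the posterior: one decade of T carries
# about as much probability as any other.
p_small = mass(1000, 10_000) / Z   # P(1000 <= T <= 10^4 | t)
p_large = mass(10**6, T_max) / Z   # P(10^6 <= T <= 10^7 | t)
print(ratio, p_small, p_large)

# Note: the "best guess" T ~ 2t corresponds to the median one gets with
# a scale-invariant 1/T prior instead (posterior ~ 1/T^2): integrating
# T^-2 from t upwards, half the mass lies below T = 2t.
```

Both decade probabilities come out nearly equal, around 0.22 for this cutoff, which is exactly the weakness described above.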

Which brings me to an extrapolation of this argument to an argument about the end of the World. Now I don’t mind admitting that as I get older I get more and more pessimistic about the prospects for humankind’s survival into the distant future. Unless there are major changes in the way this planet is governed, our Earth may indeed become barren and uninhabitable through war or environmental catastrophe. But I do think the future is in our hands, and disaster is, at least in principle, avoidable. In this respect I have to distance myself from a very strange argument that has been circulating among philosophers and physicists for a number of years. It is called the Doomsday argument, and it even has a sizeable wikipedia entry, to which I refer you for more details and variations on the basic theme. As far as I am aware, it was first introduced by the mathematical physicist Brandon Carter and subsequently developed and expanded by the philosopher John Leslie (not to be confused with the TV presenter of the same name). It also re-appeared in a slightly different guise through a paper in the serious scientific journal Nature by the eminent physicist Richard Gott. Evidently, for some reason, some serious people take it very seriously indeed.

So what can Doomsday possibly have to do with Panzer tanks or traffic wardens? Instead of traffic wardens, we want to estimate N, the number of humans that will ever be born. Following the same logic as in the example above, I assume that I am a “randomly” chosen individual drawn from the sequence of all humans to be born, in past, present and future. For the sake of argument, assume I am number n in this sequence. The logic I explained above should lead me to conclude that the total number N is not much larger than my number, n. For the sake of argument, assume that I am the one-billionth human to be born, i.e. n = 1,000,000,000. There should not be many more than a few billion humans ever to be born. At the current rate of population growth, this means that not many more generations of humans remain to be born. Doomsday is nigh.

Richard Gott’s version of this argument is logically similar, but is based on timescales rather than numbers. If whatever thing we are considering begins at some time t_begin and ends at a time t_end, and if we observe it at a “random” time between these two limits, then our best estimate for its future duration is of order how long it has lasted up until now. Gott gives the example of Stonehenge, which was built about 4,000 years ago: we should expect it to last a few thousand years into the future. Actually, Stonehenge is a highly dubious example. It hasn’t really survived 4,000 years. It is a ruin, and nobody knows its original form or function. However, the argument goes that if we come across a building put up about twenty years ago, presumably we should think it will come down again (whether by accident or design) in about twenty years time. If I happen to walk past a building just as it is being finished, presumably I should hang around and watch its imminent collapse….

But I’m being facetious.

Following this chain of thought, we would argue that, since humanity has been around a few hundred thousand years, it is expected to last a few hundred thousand years more. Doomsday is not quite as imminent as previously, but in any case humankind is not expected to survive sufficiently long to, say, colonize the Galaxy.
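Gott’s rule of thumb can be made quantitative. The sketch below (my own illustration, not Gott’s code) turns the assumption of observing at a uniformly random fraction of a lifetime into a 95% confidence interval for the future duration:

```python
# If we observe something at a uniformly random fraction f of its total
# lifetime, then with 95% confidence f lies in [0.025, 0.975], so the
# ratio of future to past lifetime, (1 - f) / f, lies in [1/39, 39].
def gott_interval(t_past, confidence=0.95):
    tail = (1.0 - confidence) / 2.0
    return (t_past * tail / (1.0 - tail), t_past * (1.0 - tail) / tail)

low, high = gott_interval(4000.0)  # Stonehenge, roughly 4000 years old
print(low, high)  # roughly 103 to 156,000 years of future existence
```

The enormous width of that interval is another way of seeing how weak the inference is.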

You may reject this type of argument on the grounds that you do not accept my logic in the case of the traffic wardens. If so, I think you are wrong. I would say that if you accept all the assumptions entering into the Doomsday argument then it is an equally valid example of inductive inference. The real issue is whether it is reasonable to apply this argument at all in this particular case. There are a number of related examples that should lead one to suspect that something fishy is going on. Usually the problem can be traced back to the glib assumption that something is “random” when it is not, or when it is not clearly stated what “random” is supposed to mean.

There are around sixty million British people on this planet, of whom I am one. In contrast there are well over a billion Chinese. If I follow the same kind of logic as in the examples I gave above, I should be very perplexed by the fact that I am not Chinese. After all, the odds are more than 20 to 1 against me being British, aren’t they?

Of course, I am not at all surprised by the observation of my non-Chineseness. My upbringing gives me access to a great deal of information about my own ancestry, as well as the geographical and political structure of the planet. This data convinces me that I am not a “random” member of the human race. My self-knowledge is conditioning information and it leads to such a strong prior knowledge about my status that the weak inference I described above is irrelevant. Even if there were a million million Chinese and only a hundred British, I have no grounds to be surprised at my own nationality given what else I know about how I got to be here.

This kind of conditioning information can be applied to history, as well as geography. Each individual is generated by its parents. Its parents were generated by their parents, and so on. The genetic trail of these reproductive events connects us to our primitive ancestors in a continuous chain. A well-informed alien geneticist could look at my DNA and categorize me as an “early human”. I simply could not be born later in the story of humankind, even if it does turn out to continue for millennia. Everything about me – my genes, my physiognomy, my outlook, and even the fact that I am bothering to spend time discussing this so-called paradox – is contingent on my specific place in human history. Future generations will know so much more about the universe and the risks to their survival that they won’t even discuss this simple argument. Perhaps we just happen to be living at the only epoch in human history in which we know enough about the Universe for the Doomsday argument to make some kind of sense, but too little to resolve it.

To see this in a slightly different light, think again about Gott’s timescale argument. The other day I met an old friend from school days. It was a chance encounter, and I hadn’t seen the person for over 25 years. In that time he had married, and when I met him he was accompanied by a baby daughter called Mary. If we were to take Gott’s argument seriously, this was a random encounter with an entity (Mary) that had existed for less than a year. Should I infer that this entity should probably only endure another year or so? I think not. Again, bare numerological inference is rendered completely irrelevant by the conditioning information I have. I know something about babies. When I see one I realise that it is an individual at the start of its life, and I assume that it has a good chance of surviving into adulthood. Human civilization is a baby civilization. Like any youngster, it has dangers facing it. But it is not doomed by the mere fact that it is young.

John Leslie has developed many different variants of the basic Doomsday argument, and I don’t have the time to discuss them all here. There is one particularly bizarre version, however, that I think merits a final word or two because it raises an interesting red herring. It’s called the “Shooting Room”.

Consider the following model for human existence. Souls are called into existence in groups representing each generation. The first generation has ten souls. The next has a hundred, the next after that a thousand, and so on. Each generation is led into a room, at the front of which is a pair of dice. The dice are rolled. If the score is double-six then everyone in the room is shot and it’s the end of humanity. If any other score is shown, everyone survives and is led out of the Shooting Room to be replaced by the next generation, which is ten times larger. The dice are rolled again, with the same rules. You find yourself called into existence and are led into the room along with the rest of your generation. What should you think is going to happen?

Leslie’s argument is the following. Each generation not only has more members than the previous one, but also contains more souls than have ever existed to that point. For example, the third generation has 1000 souls; the previous two had 10 and 100 respectively, i.e. 110 altogether. Roughly 90% of all humanity lives in the last generation. Whenever the last generation happens, there are bound to be more people in that generation than in all generations up to that point. When you are called into existence you should therefore expect to be in the last generation. You should consequently expect that the dice will show double six and the celestial firing squad will take aim. On the other hand, if you think the dice are fair then each throw is independent of the previous one and a throw of double-six should have a probability of just one in thirty-six. On this basis, you should expect to survive. The odds are against the fatal score.
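Both halves of Leslie’s argument can be checked numerically. The sketch below (my own illustration) simulates the Shooting Room with fair dice and measures what fraction of all souls ever created end up in the final, shot generation:

```python
import random

random.seed(7)

def shooting_room_run():
    """Roll a pair of dice once per generation; return the index k of
    the generation that sees the double six (probability 1/36 per
    roll)."""
    k = 0
    while True:
        k += 1
        if random.randint(1, 6) == 6 and random.randint(1, 6) == 6:
            return k

trials = 20_000
last_gen_fraction = 0.0
for _ in range(trials):
    k = shooting_room_run()
    total_souls = (10 ** (k + 1) - 10) // 9   # 10 + 100 + ... + 10^k
    last_gen_fraction += 10 ** k / total_souls

# Around 90% of all souls ever created are in the generation that gets
# shot -- even though each individual roll spares the room with
# probability 35/36.
print(last_gen_fraction / trials)
```

Both statements are true at once: the typical soul is in the doomed generation, yet any given visit to the room ends in a double six only one time in thirty-six.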

This apparent paradox seems to suggest that it matters a great deal whether the future is predetermined (your presence in the last generation requires the double-six to fall) or “random” (in which case there is the usual probability of a double-six). Leslie argues that if everything is pre-determined then we’re doomed. If there’s some indeterminism then we might survive. This isn’t really a paradox at all, simply an illustration of the fact that assuming different models gives rise to different probability assignments.

While I am on the subject of the Shooting Room, it is worth drawing a parallel with another classic puzzle of probability theory, the St Petersburg Paradox. This is an old chestnut to do with a purported winning strategy for Roulette. It was first proposed by Nicolas Bernoulli but famously discussed at greatest length by Daniel Bernoulli in the pages of Transactions of the St Petersburg Academy, hence the name. It works just as well for a simple toss of a coin as for Roulette, since in the latter game it involves betting only on red or black rather than on individual numbers.

Imagine you decide to bet such that you win by throwing heads. Your original stake is £1. If you win, the bank pays you at even money (i.e. you get your stake back plus another £1). If you lose, i.e. get tails, your strategy is to play again but bet double. If you win this time you get £4 back but have bet £2+£1=£3 up to that point. If you lose again you bet £4, and if that loses too you bet £8. If you then win, you get £16 back but have paid in £8+£4+£2+£1=£15 to that point. Clearly, if you carry on the strategy of doubling your previous stake each time you lose, when you do eventually win you will be ahead by £1. It’s a guaranteed winner. Isn’t it?
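A quick check of the doubling strategy (a sketch of mine, not from the original discussion): with a bankroll that only covers K rounds of a fair coin toss, the rare ruinous loss exactly cancels the many £1 wins, and the exact expected profit is zero.

```python
from fractions import Fraction

# Exact expected profit of the doubling ("martingale") strategy on a
# fair coin when the bankroll only covers K rounds.  You win £1 with
# probability 1 - 2^-K, but lose £(2^K - 1) the one time in 2^K that
# K tails come up in a row.
def expected_profit(K):
    p_bust = Fraction(1, 2**K)
    loss = 2**K - 1          # £1 + £2 + ... + £2^(K-1), all forfeited
    return (1 - p_bust) * 1 - p_bust * loss

for K in (5, 10, 20):
    print(K, expected_profit(K))  # exactly 0 for every K
```

The “guarantee” only holds if you can double forever, which is precisely the danger of smuggling in an infinite quantity.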

The relationship of all this to the Shooting Room is that it shows it is dangerous to pre-suppose a finite value for a number which in principle could be infinite. If the number of souls that could be called into existence is allowed to be infinite, then any individual has no chance at all of being called into existence in any particular generation!

Amusing as they are, the thing that makes me most uncomfortable about these Doomsday arguments is that they attempt to determine a probability of an event without any reference to an underlying mechanism. For me, a valid argument about Doomsday would have to involve a particular physical cause for the extinction of humanity (e.g. asteroid impact, climate change, nuclear war, etc). Given this physical mechanism one should construct a model within which one can estimate probabilities for the model parameters (such as the rate of occurrence of catastrophic asteroid impacts). Only then can one make a valid inference based on relevant observations and their associated likelihoods. Such calculations may indeed lead to alarming or depressing results. I fear that the greatest risk to our future survival is not from asteroid impact or global warming, where the chances can be estimated with reasonable precision, but self-destructive violence carried out by humans themselves. Science has no way of predicting what atrocities people are capable of, so we can’t make any reliable estimate of the probability that we will self-destruct. But the absence of any specific mechanism in the versions of the Doomsday argument I have discussed robs them of any scientific credibility at all.

There are better grounds for worrying about the future than simple-minded numerology.

### The n-Category Cafe

The Kan Extension Seminar in the Notices

Emily has a two-page article in the latest issue of the Notices of the American Mathematical Society, describing her experience of setting up and running the Kan extension seminar. In my opinion, the seminar was an exciting innovation for both this blog and education at large. It also resulted in some excellent posts. Go read it!

### Lubos Motl - string vacua and pheno

CMS sees excess of same-sign dimuons "too"
An Xmas rumor deja vu

There are many LHC-related hep-ex papers on the arXiv today, and especially
Searches for the associated $$t\bar t H$$ production at CMS
by Liis Rebane of CMS. The paper notices a broad excess of like-sign dimuon events. See the last 2+1 lines of Table 1 for numbers.

Those readers who remember all 6,000+ blog posts on this blog know very well that back in December 2012, there was a "Christmas rumor" about an excess seen by the other major LHC collaboration, ATLAS.

ATLAS was claimed to have observed 14 events – which would mean a 5-sigma excess – of same-sign dimuon events with the invariant mass $m_{\rm inv}(\mu^\pm \mu^\pm) = 105\GeV$. Quite a bizarre Higgs-like particle with $$Q=\pm 2$$, if a straightforward explanation exists. Are ATLAS and CMS seeing the same deviation from the Standard Model?

## November 17, 2014

### Marco Frasca - The Gauge Connection

That’s a Higgs but how many?

The CMS and ATLAS collaborations are still hard at work producing results from the datasets obtained in the first phase of activity of the LHC. The restart is really around the corner and, maybe already next summer, things could change considerably. Anyway, what they get from the old data can be really promising and rather intriguing. This is the case for the recent paper by CMS (see here). The aim of this work is to see if a heavier state of the Higgs particle exists, and the kind of decay they study is $Zh\rightarrow l^+l^-bb$. That is, one has a signature with two leptons moving in opposite directions, arising from the decay of the $Z$, and two bottom quarks arising from the decay of the Higgs particle. The analysis of this decay aims to get hints of the existence of a heavier pseudoscalar Higgs state. This can be greatly important for SUSY extensions of the Standard Model that foresee more than one Higgs particle.

Often CMS presents its results with some intriguing open questions and also this is the case and so, it is worth this blog entry. Here is the main result

The evidence, as stated in the paper, is a 2.6-2.9 sigma excess at 560 GeV and a smaller one at around 300 GeV. The look-elsewhere effect reduces the former to 1.1 sigma and renders the latter practically negligible. Overall this is pretty marginal but, as always, with more data after the restart it could become something real or just fade away. It should be appreciated that a door is left open anyway and a possible effect is pointed out.

My personal interpretation is that such higher excitations do exist but their production rates are heavily suppressed with respect to the observed ground state at 126 GeV and so are negligible with the present datasets. I am also convinced that the current understanding of the breaking of SUSY, currently adopted in MSSM-like models to go beyond the Standard Model, is not the correct one, provoking the early death of such models. I have explained this in a couple of papers of mine (see here and here). It is my firm conviction that the restart will yield exciting results and we should be really happy to have such a powerful machine in our hands to grasp them.

Marco Frasca (2013). Scalar field theory in the strong self-interaction limit Eur. Phys. J. C (2014) 74:2929 arXiv: 1306.6530v5

Marco Frasca (2012). Classical solutions of a massless Wess-Zumino model J.Nonlin.Math.Phys. 20:4, 464-468 (2013) arXiv: 1212.1822v2

Filed under: Particle Physics, Physics Tagged: ATLAS, CERN, CMS, Higgs particle, Standard Model, Supersymmetry

### astrobites - astro-ph reader's digest

ASASSN-13co: A Type-Defying Supernova
Title: Discovery and Observations of the Unusually Bright Type-Defying II-P/II-L Supernova ASASSN-13co

Authors: T. W.-S. Holoien, et al.

First Author’s Institution: Department of Astronomy, The Ohio State University

Paper Status: Submitted to MNRAS

There are arguably a lot of things that defy categorization, but it’s not every day that we find something that suggests we do away with our categories altogether. The authors of today’s paper believe that the recently-discovered Type II supernova ASASSN-13co — read that as “assassin”, please — might just be such an object. Its unusual characteristics call into question the validity of the two classes (II-P and II-L, more on that later) into which we usually group Type II supernovae. As a result, they suggest that we treat Type II supernovae properties as a continuum, rather than the discrete designations we’ve become accustomed to assigning.

Death Throes of Massive Stars

Type II supernovae are identified by the hydrogen in their spectra (meaning that they still have a hydrogen envelope when they die). They are formed when a star with a mass of 8-50 times that of the Sun dies through core collapse.

All stars produce energy through nuclear fusion, but massive stars can fuse much heavier nuclei than stars the size of our sun – all the way to nickel and iron, which have the highest binding energy per nucleon of all elements. While the fusion of lighter elements is an exothermic process, fusing iron uses up energy instead, so fusing elements heavier than iron isn’t energetically favorable. As a result, a core of iron and nickel (which then decays into iron) builds up in the center of a massive star. The core is supported by electron degeneracy pressure. When the mass of the iron-nickel core exceeds the Chandrasekhar limit (about 1.4 solar masses), however, electron degeneracy pressure is not enough to stop the core from collapsing. As the core collapses, the protons and electrons in the core of the star merge to form neutrons and neutrinos. The neutrinos can escape and carry away energy. At the same time, the outer layers of the star fall inward until neutron degeneracy pressure kicks in, stopping the collapse and causing the outer layers to rebound. The combination of the pressure from the neutrinos and the rebound of the outer layers off of the core causes the star to be torn apart in a huge explosion – a core-collapse supernova.

Left: Archival SDSS data of the host galaxy PGC 067159. Right: LCOGT image that was taken during the supernova. The circles have radii of 2 arcseconds and are centered on the supernova’s position. We can see that there was previously no visible object at the location of the supernova. An image like this is called a finding chart.

These supernovae exhibit a wide range of properties, but have generally been grouped into Type II-P or Type II-L supernovae.  Type II-P supernovae – the P stands for “plateau” – get their names from the long flat stretch present in their optical light curves.  Type II-L supernovae, on the other hand, show a relatively steady “linear” decline in their intensity after reaching peak brightness. However, it has recently been suggested that Type II supernovae light curves may not fall neatly into the two groups, but actually display a continuum of these properties. The authors of today’s paper hope that by studying unusually bright or hard-to-classify events, they will be able to better understand the variations in Type II supernovae and improve upon the current classification scheme.

Profiling an Unusual Supernova

The focus of the paper, ASASSN-13co, is a supernova that the authors state is both unusually bright and hard to classify. It was detected with the All-Sky Automated Survey for Supernovae (ASAS-SN) on August 29, 2013 in the V-band — an optical bandpass with a mean wavelength of 540 nm. The supernova had an apparent magnitude of 16.9 +/- 0.1 and coordinates RA = 21:40:38.72, Dec = +06:30:36.98. Using the Sloan Digital Sky Survey (SDSS), they located the host galaxy as the spiral galaxy PGC 067159, which was offset by 3 arcseconds from the source of the supernova.

The bolometric (total flux over all wavelengths) light curve of ASASSN-13co in red plotted against the light curves of the supernovae used in making the PP14 model, in grey. The thickness of the red indicates the 1-sigma uncertainty in the light curve. We can see that ASASSN-13co is one of the most luminous supernovae of the bunch and that unlike the other Type II-P SN shown, it does not have a long plateau phase.

After finding that ASASSN-13co had an unusually bright V-band absolute magnitude of -18.1 at the time of detection, they decided to launch an extensive follow-up campaign to fully characterize the event. They obtained photometric observations from space using the Swift X-ray Telescope and UVOT target-of-opportunity observations and from the ground using the Las Cumbres Observatory Global Telescope Network (LCOGTN). Since they do not have prior X-ray data from the host galaxy (and are therefore unable to determine if the X-ray flux comes from the supernova or the galaxy) they ultimately don’t include their X-ray data in the analysis. In addition, they have spectroscopic data from spectrographs located on the LCO du Pont 2.5-m telescope, the MDM Observatory Hiltner 2.4-m telescope, and the Apache Point Observatory 3.5-m telescope.

Finally, the authors also use a new model from Pejcha & Prieto 2014, which they designate as PP14, to calculate the light curve of the SN in the V-band, since they do not have follow-up data in the V-band. The model takes in measurements of the supernova’s flux and expansion velocities to calculate other information about the supernova, such as its light curve in other filters, its luminosity over all wavelengths, and the mass of nickel-56 that it produces.

Type-Defying

The V-band light curve (in absolute magnitudes) for ASASSN-13co, plotted in red again against a sample of various Type II SN from Anderson et al. 2014. ASASSN-13co has one of the brightest light curves, and it also seems to decline more slowly than the other bright SN light curves.

The spectroscopic data that the authors obtain allow them to determine that ASASSN-13co’s spectrum looks typical for a Type II-P supernova. However, the V-band light curves calculated using PP14 show that the duration of the plateau seems to fall between the values for typical Type II-P and Type II-L supernovae. Unlike a Type II-P, which holds a roughly constant luminosity through its long plateau phase before a rapid fall, ASASSN-13co displays a steady decline in luminosity; yet that decline is considerably slower than the decline of an average Type II-L supernova, so it defies easy categorization. On top of that, ASASSN-13co is just unusually bright for a Type II supernova.

ASASSN-13co’s unusual characteristics lead the authors to conclude that the supernova is not easily classified as a Type II-P or a Type II-L. Instead, they offer this as another piece of evidence that the II-P and II-L designations for Type II SN are oversimplifications of the wide range of Type II supernovae characteristics. Lastly, they note that the PP14 model, which was able to provide a good fit to even the unusual ASASSN-13co, can be a useful tool for future studies of variations in Type II supernovae characteristics.

### arXiv blog

Machine-Learning Algorithm Ranks the World's Most Notable Authors

Deciding which books to digitise when they enter the public domain is tricky, unless you have an independent ranking of the most notable authors.

Public Domain Day, January 1, is the day on which previously copyrighted works become freely available to print, digitize, modify, or reuse in more or less any way. In most countries, this happens 50 or 70 years after the death of the author.
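As a rough sketch of the rule stated above (actual copyright terms vary by country and by type of work, so treat this as illustrative only):

```python
def public_domain_year(death_year: int, term_years: int = 70) -> int:
    """Year on whose January 1 an author's works enter the public domain,
    under a life + `term_years` copyright term (terms run to the end of
    the calendar year, hence the +1)."""
    return death_year + term_years + 1

# e.g. an author who died in 1950, in a life+70 country:
print(public_domain_year(1950))  # 2021
```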

### Matt Strassler - Of Particular Significance

At the Naturalness 2014 Conference

Greetings from the last day of the conference “Naturalness 2014“, where theorists and experimentalists involved with the Large Hadron Collider [LHC] are discussing one of the most widely-discussed questions in high-energy physics: are the laws of nature in our universe “natural” (= “generic”), and if not, why not? It’s so widely discussed that one of my concerns coming in to the conference was whether anyone would have anything new to say that hadn’t already been said many times.

What makes the Standard Model’s equations (which are the equations governing the known particles, including the simplest possible Higgs particle) so “unnatural” (i.e. “non-generic”) is that when one combines the Standard Model with, say, Einstein’s gravity equations, or indeed with any other equations involving additional particles and fields, one finds that the parameters in the equations (such as the strength of the electromagnetic force or the interaction of the electron with the Higgs field) must be chosen so that certain effects almost perfectly cancel, to one part in a gazillion* (something like 10³²). If this cancellation fails, the universe described by these equations looks nothing like the one we know. I’ve discussed this non-genericity in some detail here.

*A gazillion, as defined on this website, is a number so big that it even makes particle physicists and cosmologists flinch. [From Old English, gajillion.]
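To get a feel for a cancellation at one part in 10³²: two quantities agreeing to that precision cannot even be subtracted meaningfully in 64-bit floating point, which carries only about 16 significant digits. A toy numerical illustration (the numbers are invented, not physical):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # enough digits to resolve a 1-in-10^32 cancellation

# Two invented contributions that cancel to one part in 10^32
a = Decimal("1.00000000000000000000000000000000e64")
b = Decimal("0.99999999999999999999999999999999e64")

residue = a - b
print(residue / a)  # 1E-32: the "observed" value is a tiny leftover

# In 64-bit floats the same subtraction loses the residue entirely:
print(float(a) - float(b) == 0.0)  # True
```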

Most theorists who have tried to address the naturalness problem have tried adding new principles, and consequently new particles, to the Standard Model’s equations, so that this extreme cancellation is no longer necessary, or so that the cancellation is automatic, or something to this effect. Their suggestions have included supersymmetry, warped extra dimensions, little Higgs, etc…. but importantly, these examples are only natural if the lightest of the new particles that they predict have masses that are around or below 1 TeV/c², and must therefore be directly observable at the LHC (with a few very interesting exceptions, which I’ll talk about some other time). The details are far too complex to go into here, but the constraints from what was not discovered at LHC in 2011-2012 imply that most of these examples don’t work perfectly. Some partial non-automatic cancellation, not at one part in a gazillion but at one part in 100, seems to be necessary for almost all of the suggestions made up to now.

So what are we to think of this?

• Maybe one of the few examples that is entirely natural and is still consistent with current data is correct, and will turn up at the LHC in 2015 or 2016 or so, when the LHC begins running at higher energy per collision than was available in 2011-2012.
• Maybe one of the examples that isn’t entirely natural is correct. After all, one part in 100 isn’t awful to contemplate, unlike one part in a gazillion. We do know of other weird things about the world that are improbable, such as the fact that the Sun and the Moon appear to be almost exactly the same size in the Earth’s sky. So maybe our universe is slightly non-generic, and therefore discoveries of new particles that we might have expected to see in 2011-2012 are going to be delayed until 2015 or beyond.
• Maybe naturalness is simply not a good guide to guessing our universe’s laws, perhaps because the universe’s history, or its structure, forced it to be extremely non-generic, or perhaps because the universe as a whole is generic but huge and variegated (this is often called a “multiverse”, but be careful, because that word is used in several very different ways — see here for discussion) and we can only live in an extremely non-generic part of it.
• Maybe naturalness is not a good guide because there’s something wrong with the naturalness argument, perhaps because quantum field theory itself, on which the argument rests, or some other essential assumption, is breaking down.

Some of the most important issues at this conference are: how can we determine experimentally which of these possibilities is correct (or whether another we haven’t thought of is correct)? In this regard, what measurements do we need to make at the LHC in 2015 and beyond? What theoretical directions concerning naturalness have been underexplored, and might any of them suggest new measurements at LHC (or elsewhere) that have not yet been attempted?

I am afraid my time is too limited to report on highlights. Most of the progress reported at this conference has been incremental rather than major steps; there weren’t any big new solutions to the naturalness problem proposed.  But it has been a good opportunity for an exchange of ideas among theorists and experimentalists, with a number of new approaches to LHC measurements being presented and discussed, and with some interesting conversation regarding the theoretical and conceptual issues surrounding naturalness, selection bias (sometimes called “anthropics”), and the behavior of quantum field theory.

Filed under: LHC News, Particle Physics Tagged: atlas, cms, Higgs, LHC, naturalness

### astrobites - astro-ph reader's digest

Exploring the Planetary Graveyard
Title:  The frequency of planetary debris around young white dwarfs

Authors: Detlev Koester, Boris Gaensicke, Jay Farihi

First Author’s Institution: Institut für Theoretische Physik und Astrophysik, Universität Kiel, 24098 Kiel, Germany

Figure 1: Example Hubble Space Telescope spectrum of a metal polluted white dwarf (a) with zoomed in sections showing the absorption lines from silicon (b, c ) and carbon (d, e). The red line shows the model atmosphere fit to the spectrum, used to calculate how much of each metal was present. Image Credit: Koester et al 2014

Over the past decade the study of planetary debris in orbit around white dwarfs has become an increasingly exciting area. Observations of this debris have allowed us to make unique discoveries about the chemical composition of extrasolar rocky planets, as well as revealing the endpoints of the evolution of planetary systems very similar to our own.

A key missing piece of information in these studies has been just how many, or more accurately what proportion of, white dwarfs have debris. Although many debris-polluted white dwarfs have been found, most of them were given away by other features such as orbiting dusty or gaseous debris discs. This leaves key questions unanswered.

For example, how many of the stars that formed the white dwarfs had planets? Does it depend on the kind of star? How do these evolved planetary systems change over time? In order to answer these questions, the authors have tried to gain an unbiased measurement of the frequency of planetary systems of white dwarfs.

The easiest way to spot the planetary debris in a white dwarf’s atmosphere is to look for light absorption by calcium, which creates a distinctive line in the blue end of a white dwarf’s spectrum. Unfortunately this calcium line tends to diminish at temperatures above around 15,000 K, severely limiting the range over which any results from the survey would be relevant. More importantly however, calcium only makes up a small fraction of the material in the planets of the Solar System, so might only show up in the spectra of more heavily polluted white dwarfs, not exactly an unbiased sample!

To get around this problem the authors decided to instead look for silicon, which makes up around a third of the Earth. If the composition of the planetary systems at white dwarfs is similar, the silicon should therefore be easy to spot even in mildly polluted white dwarfs. Unfortunately, all of the convenient silicon lines in the spectrum of a white dwarf are found in the ultraviolet. Earth’s atmosphere blocks out UV light, so this survey would need to use the Hubble Space Telescope.

The authors used a snapshot survey, providing a list of over a hundred white dwarfs that could be quickly observed in any order in the gaps between other, longer observations. These white dwarfs were chosen at random within a certain temperature range (17,000–27,000 K). Over three years, eighty-five white dwarfs from their list were successfully observed, enough to get a good grip on the statistics of debris pollution.

Figure 2: The key findings of the paper. The horizontal axis shows the temperature of each white dwarf (bottom), which is analogous to the time since the star turned into a white dwarf (top). Red symbols show the results from the paper, with other white dwarfs shown in black and grey. Image Credit: Koester et al. 2014

The results of the survey are summarized in Figure 2. The key observation is the middle panel, showing the fraction of polluted white dwarfs. Out of the 85 white dwarfs, the authors found pollution from planetary debris in an astounding 48 (56%). This means that at least half of white dwarfs are orbited by the remains of planetary systems. Put another way, that means that at least half of the stars that turned into the white dwarfs once had orbiting planets. This result agrees nicely with the latest estimates from direct studies of exoplanets.
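Purely as a sketch of the counting statistics behind the 48-out-of-85 result (the uncertainty below is my own normal-approximation estimate, not a number from the paper):

```python
import math

def fraction_with_error(k: int, n: int) -> tuple[float, float]:
    """Binomial fraction k/n with its normal-approximation standard error."""
    p = k / n
    return p, math.sqrt(p * (1 - p) / n)

p, err = fraction_with_error(48, 85)
print(f"polluted fraction = {p:.2f} +/- {err:.2f}")  # 0.56 +/- 0.05
```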

Out of those white dwarfs with debris pollution, analysis of their atmospheres shows that half of them must be currently accreting rocky objects, whilst the other half have been accreting recently. Far from finding a few scattered objects, this paper has shown that active evolved planetary systems are abundant, and offer an intriguing opportunity to study the death-throes of planetary systems, including, eventually, the Solar System itself.

### Peter Coles - In the Dark

The Physics Of Nonconformity: Why Difference Always Looks The Same

I came across an interesting paper while I was in an unblogging state last week so thought I’d share it here. Have you ever wondered why non-conformists always seem to look the same? I was struck by this last year when I saw a group of self-styled “anarchists” – of which there are many in Brighton – gathering ahead of a demonstration against something or other, or possibly nothing at all. Anyway they all stuck rigidly to a particular dress code, a fact which I found amusing given their professed preference for a state of disorder. The same seems to be the case in other contexts too. A striking current example is the fad for the “hipster” beard, but wherever you look you will find a group of people who express their desire to be different by looking exactly the same as each other. It seems people always want to conform in some way. Perhaps we should call this conformal invariance?

Anyway, the paper investigates this – in a slightly tongue-in-cheek manner – from the point of view of statistical physics using an approach similar to that used to study the phenomenon of the spin glass. Here is the abstract:

In such different domains as statistical physics and spin glasses, neurosciences, social science, economics and finance, large ensemble of interacting individuals taking their decisions either in accordance (mainstream) or against (hipsters) the majority are ubiquitous. Yet, trying hard to be different often ends up in hipsters consistently taking the same decisions, in other words all looking alike. We resolve this apparent paradox studying a canonical model of statistical physics, enriched by incorporating the delays necessary for information to be communicated. We show a generic phase transition in the system: when hipsters are too slow in detecting the trends, they will keep making the same choices and therefore remain correlated as time goes by, while their trend evolves in time as a periodic function. This is true as long as the majority of the population is made of hipsters. Otherwise, hipsters will be, again, largely aligned, towards a constant direction which is imposed by the mainstream choices. Beyond the choice of the best suit to wear this winter, this study may have important implications in understanding dynamics of inhibitory networks of the brain or investment strategies finance, or the understanding of emergent dynamics in social science, domains in which delays of communication and the geometry of the systems are prominent.
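A minimal toy version of the delayed anti-conformity dynamics the abstract describes (my own sketch, not the authors' actual model): each agent reacts to the mean opinion as it was τ steps ago; conformists align with it, hipsters anti-align. With hipsters in the majority, the mean field oscillates periodically, i.e. the hipsters keep flipping together and stay correlated:

```python
import random

def simulate(n=1000, hipster_frac=0.8, tau=5, steps=200, seed=1):
    """Toy delayed-alignment dynamics: spins are +/-1; at each step every
    agent reacts to the mean field as it was `tau` steps ago."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    hipster = [i < int(hipster_frac * n) for i in range(n)]
    history = [sum(spins) / n]
    for _ in range(steps):
        m_delayed = history[max(0, len(history) - 1 - tau)]
        ref = 1 if m_delayed >= 0 else -1      # the (delayed) majority trend
        spins = [-ref if h else ref for h in hipster]
        history.append(sum(spins) / n)
    return history

m = simulate()
# With a hipster majority, the mean field keeps changing sign (periodic trend):
flips = sum(1 for a, b in zip(m, m[1:]) if a * b < 0)
print(flips)
```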

## November 16, 2014

### The n-Category Cafe

Jaynes on Mathematical Courtesy

In the last years of his life, fierce Bayesian Edwin Jaynes was working on a large book published posthumously as Probability Theory: The Logic of Science (2003). Jaynes was a lively writer. In an appendix on “Mathematical formalities and style”, he really let rip, railing against modern mathematical style. Here’s a sample:

Nowadays, if you introduce a variable $x$ without repeating the incantation that it is in some set or ‘space’ $X$, you are accused of dealing with an undefined problem. If you differentiate a function $f(x)$ without first having stated that it is differentiable, you are accused of lack of rigor. If you note that your function $f(x)$ has some special property natural to the application, you are accused of lack of generality. In other words, every statement you make will receive the discourteous interpretation.

Discuss.

This is taken from the final section of this appendix, on “Mathematical courtesy”. Here’s most of the rest of it:

Obviously, mathematical results cannot be communicated without some decent standards of precision in our statements. But a fanatical insistence on one particular form of precision and generality can be carried so far that it defeats its own purpose; 20th century mathematics often degenerates into an idle adversary game instead of a communication process.

The fanatic is not trying to understand your substantive message at all, but only trying to find fault with your style of presentation. He will strive to read nonsense into what you are saying, if he can possibly find any way of doing so. In self-defense, writers are obliged to concentrate their attention on every tiny, irrelevant, nit-picking detail of how things are said rather than on what is said. The length grows; the content shrinks.

Mathematical communication would be much more efficient and pleasant if we adopted a different attitude. For one who makes the courteous interpretation of what others write, the fact that $x$ is introduced as a variable already implies that there is some set $X$ of possible values. Why should it be necessary to repeat that incantation every time a variable is introduced, thus using up two symbols where one would do? (Indeed, the range of values is usually indicated more clearly at the point where it matters, by adding conditions such as ($0 \lt x \lt 1$) after an equation.)

For a courteous reader, the fact that a writer differentiates $f(x)$ twice already implies that he considers it twice differentiable; why should he be required to say everything twice? If he proves proposition $A$ in enough generality to cover his application, why should he be obliged to use additional space for irrelevancies about the most general possible conditions under which $A$ would be true?

A scourge as annoying as the fanatic is his cousin, the compulsive mathematical nitpicker. We expect that an author will define his technical terms, and then use them in a way consistent with his definitions. But if any other author has ever used the term with a slightly different shade of meaning, the nitpicker will be right there accusing you of inconsistent terminology. The writer has been subjected to this many times; and colleagues report the same experience.

Nineteenth century mathematicians were not being nonrigorous by their style; they merely, as a matter of course, extended simple civilized courtesy to others, and expected to receive it in return. This will lead one to try to read sense into what others write, if it can possibly be done in view of the whole context; not to pervert our reading of every mathematical work into a witch-hunt for deviations from the Official Style.

Therefore […] we issue the following:

Emancipation Proclamation

Every variable $x$ that we introduce is understood to have some set $X$ of possible values. Every function $f(x)$ that we introduce is understood to be sufficiently well-behaved so that what we do with it makes sense. We undertake to make every proof general enough to cover the application we make of it. It is an assigned homework problem for the reader who is interested in the question to find the most general conditions under which the result would hold.

We could convert many 19th century mathematical works to 20th century standards by making a rubber stamp containing this Proclamation, with perhaps another sentence using the terms ‘sigma-algebra, Borel field, Radon-Nikodym derivative’, and stamping it on the first page.

Modern writers could shorten their works substantially, with improved readability and no decrease in content, by including such a Proclamation in the copyright message, and writing thereafter in the 19th century style. Perhaps some publishers, seeing these words, may demand that they do this for economic reasons; it would be a service to science.

### arXiv blog

Japanese Artists Solve The Problem of How To Sell Multiple Copies of Interactive Artworks

If you’re a modern art fan, you may have bought a copy of a Picasso or a Pollock. But chances are, you’ve never been able to buy a copy of an interactive art installation…until now.

Back in July, the Museum of Contemporary Art Tokyo in Japan displayed a giant canvas showing four snowmen in a wintry scene. People approaching the canvas found their faces superimposed onto the heads of the four snowmen, so that their facial expressions determined the mood of the scene.

### Michael Schmitt - Collider Blog

Quark contact interactions at the LHC

So far, no convincing sign of new physics has been uncovered by the CMS and ATLAS collaborations. Nonetheless, the scientists continue to look using a wide variety of approaches. For example, a monumental work on the coupling of the Higgs boson to vector particles has been posted by the CMS Collaboration (arXiv:1411.3441). The authors conducted a thorough and very sophisticated statistical analysis of the kinematic distributions of all relevant decay modes, with the conclusion that the data for the Higgs boson are fully consistent with the standard model expectation. The analysis and article are too long for a blog post, however, so please see the paper if you want to learn the details.

The ATLAS Collaboration posted a paper on generic searches for new physics signals based on events with three leptons (e, μ and τ). This paper (arXiv:1411.2921) is a longish one describing a broad-based search with several categories of events defined by lepton flavor and charge and other event properties. In all categories the observation confirms the predictions based on standard model processes: the smallest p-value is 0.05.

A completely different search for new physics based on a decades-old concept was posted by CMS (arXiv:1411.2646). We all know that the Fermi theory of weak interactions starts with a so-called contact interaction characterized by an interaction vertex with four legs. The Fermi constant serves to parametrize the interaction, and the participation of a vector boson is immaterial when the energy of the interaction is low compared to the boson mass. This framework is the starting point for other effective theories, and has been employed at hadron colliders when searching for deviations in quark-quark interactions, as might be observable if quarks were composite.

The experimental difficulty in studying high-energy quark-quark scattering is that the energies of the outgoing quarks are not so well measured as one might like. (First, the hadronic jets that materialize in the detector do not precisely reflect the quark energies, and second, jet energies cannot be measured better than a few percent.) It pays, therefore, to avoid using energy as an observable and to get the most out of angular variables, which are well measured. Following analyses done at the Tevatron, the authors use a variable χ = exp(|y1-y2|), which is a simple function of the quark scattering angle in the center-of-mass frame. The distribution of events in χ can be unambiguously predicted in the standard model and in any other hypothetical model, and confronted with the data. So we have a nice case for a goodness-of-fit test and pairwise hypothesis testing.
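The angular variable is straightforward to compute from the two leading jets' rapidities; a small sketch (the jet rapidities below are invented for illustration):

```python
import math

def dijet_chi(y1: float, y2: float) -> float:
    """Angular variable chi = exp(|y1 - y2|) used in dijet
    contact-interaction searches: roughly flat in chi for t-channel
    QCD scattering, peaked at low chi for new physics."""
    return math.exp(abs(y1 - y2))

# Two hypothetical jets with rapidities +0.8 and -0.4:
print(dijet_chi(0.8, -0.4))  # exp(1.2) ~ 3.32
```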

The traditional parametrization of the interaction Lagrangian is:
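The equation image did not survive here; the standard form of the quark contact-interaction Lagrangian used in such searches is, quoted from the usual convention rather than from the paper itself (check arXiv:1411.2646 for the exact expression):

```latex
\mathcal{L}_{qq} = \frac{2\pi}{\Lambda^2}\Big[
    \eta_{LL}\,(\bar{q}_L\gamma^{\mu}q_L)(\bar{q}_L\gamma_{\mu}q_L)
  + \eta_{RR}\,(\bar{q}_R\gamma^{\mu}q_R)(\bar{q}_R\gamma_{\mu}q_R)
  + \eta_{RL}\,(\bar{q}_L\gamma^{\mu}q_L)(\bar{q}_R\gamma_{\mu}q_R)
\Big]
```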

where the η parameters have values -1, 0, +1 and specify the chirality of the interaction; the key parameter is the mass scale Λ. An important detail is that this interaction Lagrangian can interfere with the standard model piece, and the interference can be either destructive or constructive, depending on the values of the η parameters.

The analysis proceeds exactly as one would expect: events must have at least two jets, and when there are more than two, the two highest-pT jets are used and the others ignored. Distributions of χ are formed for several ranges of di-jet invariant mass, MJJ, which extends as high as 5.2 TeV. The measured χ distributions are unfolded, i.e., the effects of detector resolution are removed from the distribution on a statistical basis. The main sources of systematic uncertainty come from the jet energy scale and resolution and are based on an extensive parametrization of jet uncertainties.

Since one is looking for deviations with respect to the standard model prediction, it is very important to have an accurate prediction. Higher-order terms must be taken into account; these are available at next-to-leading order (NLO). In fact, even electroweak corrections are important and amount to several percent as a strong function of χ — see the plot on the right. The scale uncertainties are a few percent (again showing that a very precise SM prediction is non-trivial even for pp→2J) and fortunately the PDF uncertainties are small, at the percent level. Theoretical uncertainties dominate for MJJ near 2 TeV, while statistical uncertainties dominate for MJJ above 4 TeV.

The money plot is this one:

Optically speaking, the plot is not exciting: the χ distributions are basically flat and deviations due to a mass scale Λ = 10 TeV would be mild. Such deviations are not observed. Notice, though, that the electroweak corrections do improve the agreement with the data in the lowest χ bins. Loosely speaking, this improvement corresponds to about one standard deviation and therefore would be significant if CMS actually had evidence for new physics in these distributions. As far as limits are concerned, the electroweak corrections are “worth” 0.5 TeV.

The statistical (in)significance of any deviation is quantified by a ratio of log-likelihoods: q = -2ln(LSM+NP/LSM), where SM stands for standard model and NP for new physics (i.e., one of the distinct possibilities given in the interaction Lagrangian above). Limits are derived on the mass scale Λ depending on assumed values for the η parameters; they are very nicely summarized in this graph:
The limits for contact interactions are roughly at the 10 TeV scale — well beyond the center-of-mass energy of 8 TeV. I like this way of presenting the limits: you see the expected value (black dashed line) and an envelope of expected statistical fluctuations from this expectation, with the observed value clearly marked as a red line. All limits are slightly more stringent than the expected ones (these are not independent of course).
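The test statistic q defined above is just a log-likelihood ratio; a toy sketch with Poisson-distributed χ-bin counts (the bin yields are invented, not taken from the analysis):

```python
import math

def poisson_loglike(observed, expected):
    """Sum of Poisson log-likelihoods over bins (constant log(n!) dropped,
    since it cancels in the ratio)."""
    return sum(n * math.log(mu) - mu for n, mu in zip(observed, expected))

def q_statistic(observed, exp_sm_np, exp_sm):
    """q = -2 ln(L_{SM+NP} / L_{SM}); positive q favors the SM."""
    return -2.0 * (poisson_loglike(observed, exp_sm_np)
                   - poisson_loglike(observed, exp_sm))

# Invented chi-bin counts: the data agree with the SM-only expectation
data  = [100, 102, 98, 101]
sm    = [100, 100, 100, 100]
sm_np = [130, 110, 100, 100]  # hypothetical contact-interaction excess at low chi
print(q_statistic(data, sm_np, sm))  # positive: SM preferred
```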

The authors also considered models of extra spatial dimensions and place limits on the scale of the extra dimensions at the 7 TeV level.

So, absolutely no sign of new physics here. The LHC will turn on in 2015 at a significantly higher center-of-mass energy (13 TeV), and given the ability of this analysis to probe mass scales well above the proton-proton collision energy, a study of the χ distribution will be interesting.

### Clifford V. Johnson - Asymptotia

Nerd-Off Results
So I'm supposed to be writing 20 slides for a colloquium so let me see if I get this right really fast:- First round, the Koch Brothers bested the Justice League and Ultron was beaten up by Inspector Gadget meanwhile Ice Cube trumped Mr. Rogers and Stephen Hawking battled Charles Darwin but the audience loved them so much that they were asked to team up for the next round (before which Jon Snow did standup in the break) and in which they lost to Inspector Gadget who [...] Click to continue reading this post

### Lubos Motl - string vacua and pheno

CMS: locally 2.6 or 2.9 sigma excess for another $$560\GeV$$ Higgs boson $$A$$
And there are theoretical reasons why this could be the right mass

Yesterday, the CMS Collaboration at the LHC published the results of a new search:
Search for a pseudoscalar boson $$A$$ decaying into a $$Z$$ and an $$h$$ boson in the $$\ell^+\ell^- \bar b b$$ final state
They look at collisions with the $$\ell\ell bb$$ final state and interpret the data using two-Higgs-doublet model scenarios.

There are no stunning excesses in the data.

But I think it's always a good idea to point out what is the most significant excess they see in the data, and the CMS folks do just that in this paper, too.

On page 10, one may see Figure 4 and Figure 5 that show the main results.

According to Figure 4, a new Higgs boson with $$\Gamma=0$$ has some cross section (multiplied by the branching ratio) that stays within the 2-sigma band but reveals a deficit "slightly exceeding 2 sigma" for $$m_A=240\GeV$$ and slight 2-sigma excesses for $$m_A = 260\GeV$$, $$315\GeV$$, and $$560\GeV$$. And let's not forget about a different CMS search that suggested $$m_H=137\GeV$$.

The excess for $$m_A=560\GeV$$ has the local significance of 2.6 sigma which reduces to just 1.1 sigma "globally", after the look-elsewhere-effect correction.

As Figure 5 (which is similar but fuzzier) shows, this excess for $$m_A=560\GeV$$ becomes even larger, 2.9 sigma (or 1.6 sigma globally) if we assume a larger decay width of this $$A$$ boson, namely $$\Gamma=30\GeV$$. The significance levels are mentioned in the paper, too.
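The relation between a quoted significance in sigmas and a tail p-value, and the trials factor implied by the local-to-global reduction, can be sketched as follows (the trials factor is my own rough back-calculation, not a number from the paper):

```python
import math

def sigma_to_pvalue(z: float) -> float:
    """One-sided tail p-value for a z-sigma Gaussian excess."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p_local = sigma_to_pvalue(2.6)   # ~0.0047
p_global = sigma_to_pvalue(1.1)  # ~0.14
print(f"local p ~ {p_local:.4f}, global p ~ {p_global:.3f}, "
      f"implied trials factor ~ {p_global / p_local:.0f}")
```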

That is somewhat intriguing. If there's another search for such bosons, don't forget to look for similar excesses at this mass. But it's nothing to lose your sleep over, of course.

Recall that the minimal supersymmetric standard model – a special, more motivated subclass of the two-Higgs-doublet model – predicts five Higgs particles because $$8-3=5$$ expresses the a priori real scalar degrees of freedom minus those eaten by the 3 broken symmetry generators.

These 5 bosons may be denoted $$h,H,A,H^\pm$$. The first three bosons are neutral, the last two are charged. $$A$$ is the only CP-odd CP-eigenstate.

If you want to get excited by a paper/talk that "predicted" this $$m_A=560\GeV$$ while $$m_h=125\GeV$$, open this June 2014 talk
The post-Higgs MSSM scenario
by Abdelhak Djouadi of CNRS Paris. On page 13, he deduces that a "best fit" in MSSM has $$\tan\beta=1$$, $$m_A = 560\GeV$$, $$m_h = 125\GeV$$, $$m_H = 580\GeV$$, $$m_{H^\pm} = 563\GeV$$, although the sentence right beneath that indicates that the author thinks that many other points are rather good fits, too. Good luck to that prediction, anyway. ;-)

The very same scenario with the same values of the masses is also defended in this May 2014 paper by Jérémie Quevillon who argues that these values of the new Higgses are almost inevitable consequences of supersymmetry given the superpartner masses' being above $$1\TeV$$.

It sounds cool despite the fact that the simplest, truly MSSM-based scenarios corresponding to their "best fit" involve superpartners around $$100\TeV$$. The discovery of the Higgses near $$560\GeV$$ in 2015 would be circumstantial evidence in favor of supersymmetry, nevertheless.

Update: Abdelhak Djouadi told me that their scenario only predicts some 0.5 fb cross section (with the factors added) but one needs about 5 fb to explain the excess above. So it's bad news.

### ZapperZ - Physics and Physicists

"Should I Go Into Physics Or Engineering?"
I get asked that question a lot, and I also see similar questions on Physics Forums. Kids who are either still in high school, or starting their undergraduate years, are asking which area of study they should pursue. In fact, I've seen cases where students ask whether they should do "theoretical physics" or "engineering", as if there is nothing in between those two extremes!

My response has always been consistent: I ask them why they can't have their cake and eat it too.

This question often arises out of ignorance of what physics really encompasses. Many people, especially high school students, still think of physics as being this esoteric subject matter, dealing with elementary particles, cosmology, wave-particle duality, etc., things that they don't see involving everyday stuff. On the other hand, engineering involves things that they use and deal with every day, where the products are often found around them. So obviously, with such an impression, those two areas of study are very different and very separate.

I try to tackle such a question by correcting their misleading understanding of what physics is and what a lot of physicists do. I tell them that physics isn't just the LHC or the Big Bang. It is also your iPhone, your medical x-ray, your MRI, your hard drive, your silicon chips, etc. In fact, the largest percentage of practicing physicists are in the field of condensed matter physics/material science, an area of physics that studies the basic properties of materials, the same ones that are used in modern electronics. I point them to the many Nobel Prizes in Physics that were awarded to condensed matter physicists or for the invention of practical items (graphene, lasers, etc.). So already, doing physics and doing something "practical and useful" may not be mutually exclusive.

Secondly, I point to different areas of physics in which physics and engineering smoothly intermingle. I've mentioned earlier about the field of accelerator physics, in which you see both physics and engineering come into play. In fact, in this field, you have both physicists and electrical engineers, and they often do the same thing. The same can be said about those in instrumentation/device physics. In fact, I have also seen many high energy physics graduate students who work on detectors for particle colliders who looked more like electronics engineers than physicists! So for those working in this field, the line between doing physics and doing engineering is sufficiently blurred. You can do exactly what you want, leaning as heavily towards the physics side or engineering side as much as you want, or straddle exactly in the middle. And you can approach these fields either from a physics major or an electrical engineering major. The point here is that there are areas of study in which you can do BOTH physics and engineering!

Finally, the reason why you don't have to choose to major in either physics or engineering is that there are many schools that offer a major in BOTH! My alma mater, the University of Wisconsin-Madison (Go Badgers!), has a major called AMEP - Applied Mathematics, Engineering, and Physics - where, with your advisor, you can tailor a major that straddles two or more of the areas of math, physics, and engineering. There are other schools that offer majors in Engineering Physics or something similar. In other words, you don't have to choose between physics and engineering. You can just do BOTH!

Zz.

### Tommaso Dorigo - Scientificblogging

A New Search For The A Boson With CMS
I am quite happy to report today that the CMS experiment at the CERN Large Hadron Collider has just published a new search which fills a gap in studies of extended Higgs boson sectors. It is a search for the decay of the A boson into Zh pairs, where the Z in turn decays to an electron-positron or a muon-antimuon pair, and the h is assumed to be the 125 GeV Higgs, sought in its decay to b-quark pairs.

If you are short of time, this is the bottom line: no A boson is found in Run 1 CMS data, and limits are set in the parameter space of the relevant theories. But if you have a bit more time to spend here, let's start at the beginning: what's the A boson, you might wonder.

## November 15, 2014

### Lubos Motl - string vacua and pheno

Is our galactic black hole a neutrino factory?
When I was giving a black hole talk two days ago, I would describe Sagittarius A*, a black hole in the center of the Milky Way, our galaxy, as our "most certain" example of an astrophysical black hole that is actually observed in the telescopes. Its mass is 4 million solar masses – the object is not a negligible dwarf.

Coincidentally, a term paper and presentation I did at Rutgers more than 15 years ago was about Sgr A*. Of course, I had no doubt it was a black hole at that time.

Today, science writers affiliated with all the usual suspects (e.g. RT) ran the story that Sgr A* is a high-energy neutrino factory.

Why now? Well, a relevant paper got published in Physical Review D. Again, it wasn't today, it was almost 2 months ago, but a rational justification for the explosion of hype in mid-November 2014 simply doesn't exist. Someone at NASA helped the media to explode – via this press release – and they did explode, copying from each other in the usual way.

The actual paper was published as the July 2014 preprint
Neutrino Lighthouse at Sagittarius A*
by Bai, Barger squared, Lu, Peterson, and Salvado. Their main argument in favor of the bizarrely sounding claim that "Sgr A* produces high-energy neutrinos" comes from something that looks like a timing coincidence.

The Chandra X-ray Observatory and its NuSTAR and Swift friends – all in space – detected some outbursts or flares between 2010 and 2013. And the timing and (limited data about) locations seemed remarkably close to some detections of high-energy neutrinos by IceCube at the South Pole.

IceCube saw an exceptional neutrino 2-3 hours before a remarkable X-ray flare seen in the space X-ray telescopes, and so on. The confidence level is just around 99%. Yes, the word "before" sounds like the stories about OPERA that would detect "faster than light" neutrinos.

To my taste, the confidence level supporting the arguments is lousy. But even if I accept the possibility that the neutrinos are coming from the direction of Sgr A*, they're almost certainly not due to the black hole itself. Or at least, I would be stunned if the event horizon – which is what allows us to call the object a black hole – were needed for the emission of these high-energy neutrinos.

In particular, I emphasize that the Hawking radiation of such macroscopic black holes should be completely negligible, emitting virtually no massive particles (and neutrinos are light from some viewpoints but very massive relative to the typical Hawking quanta).

It seems much more likely to me that the X-rays as well as (possibly) the neutrinos are due to some messy astrophysical effects in the vicinity of the black hole. What are these astrophysical effects?

They propose that the neutrinos are created by decays of charged pions – which seems like a very likely birth of neutrinos to me (at least if one assumes that beyond the Standard Model physics is not participating). But these charged pions are there independently of the event horizon, aren't they? If the neutrinos arise from decaying charged pions near the black hole, there should also be neutral pions and their decays should produce gamma rays (near a TeV) which should be visible to the CTA, HAWC, H.E.S.S. and VERITAS experiments, they say.

At this moment, the paper has 3 citations.

The first one, by Brian Vlček et al. (sorry, it is vastly easier to choose the Czech name and write this complicated disclaimer than to remember the non-Czech name), refers to IceCube's suggestion that the origin of the neutrinos could be LS 5039, a binary object, which is clearly distinct from Sgr A*, but I guess it's close enough. Correct me if I misunderstood something about the apparent identification of these two explanations.

Murase talks about the neutrino flux around the Fermi bubbles in the complicated galactic central environment. These thoughts have the greatest potential to be relevant for fundamental physics, I think. Esmaili et al. count the paper about the "neutrino lighthouse" among 15 or so "speculative" papers ignited by IceCube's surprising observation of high-energy neutrinos.

So I do think that this lighthouse neutrino paper was overhyped, much like most papers that attract the journalists' attention, but sometimes it's good if random papers are reported in the media as long as they are not completely pathetic, and this one arguably isn't "quite" pathetic.

### Clifford V. Johnson - Asymptotia

Nerd Judgement
I’ve judged poetry battles a number of times, essay competitions, art displays… but never Nerd-offs. Until tonight. Come to the Tournament of Nerds around midnight tonight at the Upright Citizens Brigade. I’ll be one of the guest judges. I’ve no idea what I’m supposed to do, and my core “nerd” and … Click to continue reading this post

### John Baez - Azimuth

A Second Law for Open Markov Processes

guest post by Blake Pollard

What comes to mind when you hear the term ‘random process’? Do you think of Brownian motion? Do you think of particles hopping around? Do you think of a drunkard staggering home?

Today I’m going to tell you about a version of the drunkard’s walk with a few modifications. Firstly, we don’t have just one drunkard: we can have any positive real number of drunkards. Secondly, our drunkards have no memory; where they go next doesn’t depend on where they’ve been. Thirdly, there are special places, such as entrances to bars, where drunkards magically appear and disappear.

The second condition says that our drunkards satisfy the Markov property, making their random walk into a Markov process. The third condition is really what I want to tell you about, because it makes our Markov process into a more general ‘open Markov process’.

There is a collection of places the drunkards can be, for example:

$V= \{ \text{bar},\text{sidewalk}, \text{street}, \text{taco truck}, \text{home} \}$

We call this set $V$ the set of states. There are certain probabilities per unit time associated with traveling between these places. We call these transition rates. For example, it is more likely for a drunkard to go from the bar to the taco truck than from the bar to home, so the transition rate between the bar and the taco truck should be greater than the transition rate from the bar to home. Sometimes you can’t get from one place to another without passing through intermediate places. In reality the drunkard can’t go directly from the bar to the taco truck: he or she has to go from the bar to the sidewalk to the taco truck.

This information can all be summarized by drawing a directed graph where the positive numbers labelling the edges are the transition rates:

For simplicity we draw only three states: home, bar, taco truck. Drunkards go from home to the bar and back, but they never go straight from home to the taco truck.

We can keep track of where all of our drunkards are using a vector with 3 entries:

$\displaystyle{ p(t) = \left( \begin{array}{c} p_h(t) \\ p_b(t) \\ p_{tt}(t) \end{array} \right) \in \mathbb{R}^3 }$

We call this our population distribution. The first entry $p_h$ is the number of drunkards that are at home, the second $p_b$ is how many are at the bar, and the third $p_{tt}$ is how many are at the taco truck.

There is a set of coupled, linear, first-order differential equations we can write down using the information in our graph that tells us how the number of drunkards in each place changes with time. This is called the master equation:

$\displaystyle{ \frac{d p}{d t} = H p }$

where $H$ is a 3×3 matrix which we call the Hamiltonian. The off-diagonal entries are nonnegative:

$H_{ij} \geq 0, i \neq j$

and the columns sum to zero:

$\sum_i H_{ij}=0$

We call a matrix satisfying these conditions infinitesimal stochastic. Stochastic matrices have nonnegative entries and columns that sum to one. If we take the exponential of an infinitesimal stochastic matrix we get a stochastic matrix, whose columns sum to one, hence the label ‘infinitesimal’.

The Hamiltonian for the graph above is

$H = \left( \begin{array}{ccc} -2 & 5 & 10 \\ 2 & -12 & 0 \\ 0 & 7 & -10 \end{array} \right)$

John has written a lot about Markov processes and infinitesimal stochastic Hamiltonians in previous posts.
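
To make this concrete, here is a small numpy sketch (mine, not from the post; the state ordering (home, bar, taco truck) just follows the population vector above, and forward Euler stands in for a proper integrator) that checks the infinitesimal stochastic conditions and integrates the master equation:

```python
import numpy as np

# Hamiltonian for the home/bar/taco-truck graph, with states ordered
# (home, bar, taco truck) to match the population vector p = (p_h, p_b, p_tt).
H = np.array([[-2.0,   5.0,  10.0],
              [ 2.0, -12.0,   0.0],
              [ 0.0,   7.0, -10.0]])

def is_infinitesimal_stochastic(H):
    """Off-diagonal entries are nonnegative and each column sums to zero."""
    n = H.shape[0]
    off_diag_ok = all(H[i, j] >= 0 for i in range(n) for j in range(n) if i != j)
    columns_ok = np.allclose(H.sum(axis=0), 0.0)
    return off_diag_ok and columns_ok

def evolve(p0, H, t, steps=10000):
    """Integrate the master equation dp/dt = H p with forward Euler steps."""
    p = np.array(p0, dtype=float)
    dt = t / steps
    for _ in range(steps):
        p = p + dt * (H @ p)
    return p

p0 = [10.0, 0.0, 0.0]          # ten drunkards start at home
p1 = evolve(p0, H, t=1.0)

print(is_infinitesimal_stochastic(H))   # True
print(round(p1.sum(), 6))               # 10.0: total population is conserved
```

Because the columns of $H$ sum to zero, each Euler step leaves the total number of drunkards unchanged; the population just redistributes among the states.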

Given two vectors $p,q \in \mathbb{R}^3$ describing the populations of drunkards which obey the same master equation, we can calculate the relative entropy of $p$ relative to $q$:

$\displaystyle{ S(p,q) = \sum_{ i \in V} p_i \ln \left( \frac{p_i}{q_i} \right) }$

This is an example of a ‘divergence’. In statistics, a divergence is a way of measuring the distance between probability distributions; it may not be symmetric and need not obey the triangle inequality.

The relative entropy is important because it decreases monotonically with time, making it a Lyapunov function for Markov processes. Indeed, it is a well known fact that

$\displaystyle{ \frac{dS(p(t),q(t) ) } {dt} \leq 0 }$

This is true for any two population distributions which evolve according to the same master equation, though you have to allow infinity as a possible value for the relative entropy and negative infinity for its time derivative.

Why is entropy decreasing? Doesn’t the Second Law of Thermodynamics say entropy increases?

Don’t worry: the reason is that I have not put a minus sign in my definition of relative entropy. Put one in if you like, and then it will increase. Sometimes without the minus sign it’s called the Kullback–Leibler divergence. This decreases with the passage of time, saying that any two population distributions $p(t)$ and $q(t)$ get ‘closer together’ as they get randomized with the passage of time.
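
We can watch this monotone decrease numerically. The following sketch (again mine; the initial populations are arbitrary, chosen only to have equal totals, and the Hamiltonian is the one from the graph above) evolves two population distributions under the same master equation and records their relative entropy:

```python
import numpy as np

H = np.array([[-2.0,   5.0,  10.0],
              [ 2.0, -12.0,   0.0],
              [ 0.0,   7.0, -10.0]])

def relative_entropy(p, q):
    """S(p, q) = sum_i p_i ln(p_i / q_i), with 0 ln 0 taken to be 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Two population distributions evolving under the same master equation.
p = np.array([8.0, 1.0, 1.0])
q = np.array([1.0, 8.0, 1.0])

dt, steps = 1e-4, 20000
entropies = []
for _ in range(steps):
    entropies.append(relative_entropy(p, q))
    p, q = p + dt * (H @ p), q + dt * (H @ q)

# S(p(t), q(t)) never increases along the flow:
print(all(s2 <= s1 + 1e-9 for s1, s2 in zip(entropies, entropies[1:])))  # True
print(entropies[-1] < entropies[0])                                      # True
```

A nice side effect of the Euler discretization: each step multiplies the populations by $I + dt\,H$, which is itself a stochastic matrix for small enough $dt$, so the decrease holds exactly step by step, not just in the continuum limit.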

That itself is a nice result, but I want to tell you what happens when you allow drunkards to appear and disappear at certain states. Drunkards appear at the bar once they’ve had enough to drink, and once they’ve been home for long enough they can disappear. The set $B$ of places where drunkards can appear or disappear is called the set of boundary states. So for the above process

$B = \{ \text{home},\text{bar} \}$

is the set of boundary states. This changes the way in which the population of drunkards changes with time!

The drunkards at the taco truck obey the master equation. For them,

$\displaystyle{ \frac{dp_{tt}}{dt} = 7p_b -10 p_{tt} }$

still holds. But because the populations can appear or disappear at the boundary states the master equation no longer holds at those states! Instead it is useful to define the flow of drunkards into the $i^{th}$ state by

$\displaystyle{ \frac{Dp_i}{Dt} = \frac{dp_i}{dt}-\sum_j H_{ij} p_j}$

This quantity describes by how much the rate of change of the populations at the boundary states differs from that given by the master equation.
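
A small sketch of what this means numerically (illustrative values only; here I clamp the home and bar populations over one time step to mimic drunkards appearing and disappearing at the boundary):

```python
import numpy as np

H = np.array([[-2.0,   5.0,  10.0],
              [ 2.0, -12.0,   0.0],
              [ 0.0,   7.0, -10.0]])

boundary = [0, 1]   # home, bar
internal = [2]      # taco truck

p = np.array([6.0, 3.0, 1.0])
dt = 1e-4

# One small time step: every state tries to follow the master equation,
# but the outside world holds the boundary populations fixed.
p_next = p + dt * (H @ p)
p_next[boundary] = p[boundary]

dp_dt = (p_next - p) / dt          # observed rate of change
flow = dp_dt - H @ p               # Dp_i/Dt = dp_i/dt - sum_j H_ij p_j

print(abs(flow[2]) < 1e-6)         # True: master equation holds internally
print(flow[0], flow[1])            # -13.0 24.0
```

The flow vanishes at the internal state, while at the boundary it records the net rate at which drunkards disappear at home (negative) and appear at the bar (positive).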

The reason why we are interested in open Markov processes is because you can take two open Markov processes and glue them together along some subset of their boundary states to get a new open Markov process! This allows us to build up or break down complicated Markov processes using open Markov processes as the building blocks.

For example we can draw the graph corresponding to the drunkards’ walk again, only now we will distinguish boundary states from internal states by coloring internal states blue and having boundary states be white:

Consider another open Markov process with states

$V=\{ \text{home},\text{work},\text{bar} \}$

where

$B=\{ \text{home}, \text{bar}\}$

are the boundary states, leaving

$I=\{\text{work}\}$

as an internal state:

Since the boundary states of this process overlap with the boundary states of the first process we can compose the two to form a new Markov process:

Notice the boundary states are now internal states. I hope any Markov process that could approximately model your behavior has more interesting nodes! There is a nice way to figure out the Hamiltonian of the composite from the Hamiltonians of the pieces, but we will leave that for another time.

We can ask ourselves, how does relative entropy change with time in open Markov processes? You can read my paper for the details, but here is the punchline:

$\displaystyle{ \frac{dS(p(t),q(t) ) }{dt} \leq \sum_{i \in B} \frac{Dp_i}{Dt}\frac{\partial S}{\partial p_i} + \frac{Dq_i}{Dt}\frac{\partial S}{\partial q_i} }$

This is a version of the Second Law of Thermodynamics for open Markov processes.

It is important to notice that the sum is only over the boundary states! This inequality tells us that relative entropy still decreases inside our process, but depending on the flow of populations through the boundary states, the relative entropy of the whole process can either increase or decrease! This inequality will be important when we study how relative entropy changes in different parts of a bigger, more complicated process.

That is all for now, but I leave it as an exercise for you to imagine a Markov process that describes your life. How many states does it have? What are the relative transition rates? Are there states you would like to spend more or less time in? Are there states somewhere you would like to visit?

Here is my paper, which proves the above inequality:

• Blake Pollard, A Second Law for open Markov processes.

If you have comments or corrections, let me know!

## November 14, 2014

### CERN Bulletin

CHIS - Information concerning the health insurance of frontalier workers who are family members of a CHIS main member

We recently informed you that the Organization was still in discussions with the Host State authorities to clarify the situation regarding the health insurance of frontalier workers who are family members (as defined in the Staff Rules and Regulations) of a CHIS main member, and that we were hoping to arrive at a solution soon.

After extensive exchanges, we finally obtained a response a few days ago from the Swiss authorities, with which we are fully satisfied and which we can summarise as follows:

1) Frontalier workers who are currently using the CHIS as their basic health insurance can continue to do so.

2) Family members who become frontalier workers, or those who have not yet exercised their “right to choose” (droit d’option) can opt to use the CHIS as their basic health insurance. To this end, they must complete the form regarding the health insurance of frontaliers, ticking the LAMal box and submitting their certificate of CHIS membership (available from UNIQA).

3) For family members who joined the LAMal system since June 2014, CERN is in contact with the Swiss authorities and the Geneva Health Insurance Service with a view to securing an exceptional arrangement allowing them to leave the LAMal system and use the CHIS as their basic health insurance.

4) People who exercised their “right to choose” and opted into the French Sécurité sociale or the Swiss LAMal system before June 2014 can no longer change, as the decision is irreversible. As family members, however, they remain beneficiaries of the CHIS, which then serves as their complementary insurance.

5) If a frontalier family member uses the CHIS as his or her basic health insurance and the main member concerned ceases to be a member of the CHIS or the relationship between the two ends (divorce or dissolution of a civil partnership), the frontalier must join LAMal.

We hope that this information satisfies your expectations and concerns. We would like to thank the Host State authorities for their help in clarifying these highly complex issues.

We remind you that staff members, fellows and beneficiaries of the CERN Pension Fund must declare the professional situation and health insurance cover of their spouse or partner, as well as any changes in this regard, pursuant to Article III 6.01 of the CHIS Rules. In addition, in cases where a spouse or partner wishes to use the CHIS as his or her basic insurance and receives income from a professional activity or a retirement pension, the main member must pay a supplementary contribution based on the income of the spouse or partner, in accordance with Article III 5.07 of the CHIS Rules. For more information, see www.cern.ch/chis/DCSF.asp.

The CHIS team is on hand to answer any questions you may have on this subject, which you can submit to Chis.Info@cern.ch. The above information, as well as the Note Verbale from the Permanent Mission of Switzerland, is available in the frontaliers section of the CHIS website: www.cern.ch/chis/frontaliers.asp

### CERN Bulletin

Micro club
Opération NEMO   To round off the special activities that the CMC has organised during 2014 to commemorate CERN's 60th anniversary and the Micro Club's 30th, this year's Opération NEMO will have a very special character. We will feature six leading manufacturers, each offering two or three products at exceptional prices. The operation starts on Monday, 17 November 2014 and will run until Saturday, 6 December inclusive. Delivery times will be two to three weeks, depending on the manufacturer, so orders placed in the last week, from 1 to 6 December, may not arrive until early January 2015. The manufacturers taking part in this final operation of the year are: Apple Computer, Lenovo, Toshiba, Brother, LaCie and Western Digital. For example, from Apple only the MacBook Pro 15” Retina, in all possible configurations and keyboard layouts, is part of this operation. For the other manufacturers mentioned, we will have details of their offers from Monday. For any enquiries or orders, please send an e-mail to: cmc.orders@cern.ch. Best regards, Your CMC Team.

### CERN Bulletin

France @ CERN | Come and meet 37 French companies at the 2014 “France @ CERN” Event | 1-3 December
The 13th “France @ CERN” event will take place from 1 December to 3 December 2014. Thanks to Ubifrance, the French agency for international business development, 37 French firms will have the opportunity to showcase their know-how at CERN.   These companies are looking forward to meeting you during the B2B sessions which will be held on Tuesday, 2 December (afternoon) and on Wednesday, 3 December (afternoon) in buildings 500 and 61 or at your convenience in your own office. The fair’s opening ceremony will take place on Tuesday, 2 December (morning) in the Council Chamber in the presence of Rolf Heuer, Director-General of CERN and Nicolas Niemtchinow, Ambassador, Permanent Representative of France to the United Nations in Geneva and to international organisations in Switzerland. For more information about the event and the 37 participating French firms, please visit: http://www.la-france-au-cern.com/

### CERN Bulletin

Upcoming renovations in Building 63
La Poste will close its doors in Building 63 on Friday, 28 November and will move to Building 510, where it will reopen on 1 December (see picture).   UNIQA will close its HelpDesk in Building 63 on Wednesday, 26 November and will reopen the next day in Building 510. La Poste and UNIQA are expected to return to their renovated office space between April and May 2015.

### The Great Beyond - Nature blog

Energy outlook sees continuing dominance of fossil fuels

Just as the United States and China agreed on a landmark deal to curb greenhouse-gas emissions, the world’s leading energy think tank says that demand for fossil fuels is likely to keep growing for at least another 20 years.

IEA

In its latest World Energy Outlook, released on 12 November, the Paris-based International Energy Agency (IEA) estimates that global consumption of primary energy — the energy contained in raw fossil fuels — will increase by 37% by 2040, driven mostly by growing demand in Asia, Africa, the Middle East and Latin America.

Crude-oil consumption is expected to rise from the current 90 million barrels a day to 104 million barrels a day, but demand for oil will plateau by 2040, according to IEA scenarios. Coal demand is expected to peak as early as the 2020s, thanks to efforts such as China’s to reduce air pollution and carbon emissions. But demand for natural gas, the only fossil fuel that is still growing after 2040 in the IEA’s scenarios, will rise by more than half, the report says.

The output from US shale projects, which has been booming — propelling the country to become the world’s largest producer of oil and gas — is expected to decline in the 2020s, the IEA says. Even so, there are sufficient untapped resources to meet the growth in consumption. And despite a recent slump in the prices of oil and gas, the IEA warns that rising tensions in parts of the Middle East and in Ukraine pose incalculable threats to global energy security.

“A well-supplied oil market in the short-term should not disguise the challenges that lie ahead, as the world is set to rely more heavily on a relatively small number of producing countries,” the IEA’s chief economist Fatih Birol said when the report was released in London. “The apparent breathing space provided by rising output in the Americas over the next decade provides little reassurance.”

Widespread safety concerns over the use of nuclear power mean that only a few countries — including China, India, Korea and Russia — are planning to increase their installed nuclear capacity. Nearly 200 of the 434 reactors that were operational at the end of 2013 are set to be retired in the period to 2040. Germany and other countries that decided after the Fukushima-Daiichi accident in 2011 to phase out nuclear power altogether face the challenge of addressing the resulting shortfall in electricity generation.

No country has as yet found a long-term solution to the problem of disposing of radioactive waste, the IEA notes.

The IEA reckons that renewable sources — mainly wind and solar — will provide nearly half of the global increase in power generation to 2040. By then, low-carbon sources, including nuclear, are expected to supply about a quarter of the global energy consumption.

However, the IEA also predicts that between now and 2040 the world will add 1 trillion tonnes of carbon dioxide to the atmosphere — using up the budget that climate scientists say gives the world a reasonable chance of limiting the rise in global average temperatures to 2˚C or less.

That calculation will sound cynical to the more than half a billion people in sub-Saharan Africa — the regional focus of the report — who live without access to modern energy. Africa’s poorest in fact suffer the most extreme form of energy insecurity in the world, says the IEA.

### ZapperZ - Physics and Physicists

The Physics of Thor's Hammer
Not that you should take any of these seriously, but sometimes, entertainment reading like this can be "fun".

Jim Kakalios, the author of The Physics of Superheroes, has written an article on the physics of Thor's hammer. I think what I am more interested in is his attempt to explain the apparent inconsistencies in what was seen (such as the hammer appearing to be too heavy for anyone to lift, yet not so heavy that it crushed the books and table it was resting on). I find that more fascinating because in many storylines, such inconsistencies are often either overlooked or simply brushed aside. To me, that is where the physics is, because someone who notices such inconsistencies is very aware of the physics, i.e. if such-and-such is true, then how come so-and-so doesn't also occur?

Zz.

### Tommaso Dorigo - Scientificblogging

PhD Positions For Chinese Students in Padova
I am using my blog to advertise the opening of PhD positions at Padova University, to work on several research projects and obtain a PhD in Physics. These are offered to Chinese students through the China Scholarship Council. More information is available at this link.
If you are a bright Chinese student who speaks at least some English and is willing to spend three years working on data analysis for Higgs physics in the CMS experiment, I will take you - so what are you waiting for? Applications close soon!

Below is a table with deadlines and information.

## November 13, 2014

### Quantum Diaries

Dark Matters: Creation from Annihilation

Hanging around a pool table might seem like an odd place to learn physics, but a couple of hours on our department’s slanted table could teach you a few things about asymmetry. The third time a pool ball flew off the table and hit the far wall I knew something was broken. The pool table’s refusal to obey the laws of physics gives aspiring physicists a healthy distrust of the simplified mechanics they learnt in undergrad. Whether in explaining why pool balls bounce sideways off lumpy cushions or why galaxies exist, asymmetries are vital to understanding the world around us. Looking at dark matter theories that interact asymmetrically with visible matter can give us new clues as to why matter exists.

Alternatives to the classic WIMP (weakly interacting massive particle) dark matter scenario are becoming increasingly important. Natural supersymmetry is looking less and less likely, and could be ruled out in 2015 by the Large Hadron Collider. Asymmetric dark matter theories provide new avenues to search for dark matter and help explain where the material in our universe comes from: baryogenesis.

Baryogenesis is in some ways a more important cosmological problem than dark matter. The Standard Model of particle physics describes all the matter that you are familiar with, from trees to stars, but fails to explain how this matter came to be. In fact, the Standard Model predicts a sparsely populated universe, where most of the matter and antimatter has long since annihilated. In particle colliders, whenever a particle of matter is created, an opposing particle of antimatter is also created. Antimatter is matter with all its charges reversed, like a photo negative. While it is often said that opposites attract, in the particle physics world opposites annihilate. But when we look at the universe around us, all we see is matter. There are no antistars and antiplanets, no antihumans living on some distant world. So if matter and antimatter are always created together, how did this happen? If there were equal amounts of matter and antimatter, each would annihilate the other in the first fractions of a second and our universe would be stillborn. The creation of this asymmetry between matter and antimatter is known as baryogenesis, and it is one of the strongest cosmological confirmations of physics beyond the Standard Model. The exact amount of asymmetry determines how much matter, and consequently how many stars and galaxies, exist now.

And what about the other 85% of matter in the universe? This dark matter has only shown itself through gravitational interactions, but it has shaped the evolution of the universe. Dark matter keeps galaxies from tearing themselves apart, and outnumbers visible matter five to one. Five to one is a curious ratio. If dark and visible matter were entirely different substances with a completely independent history, you would not expect almost the same amount of dark and normal matter. This is like counting the number of trees in the world and finding that it’s the same as the number of pebbles. While we know that dark and visible matter are not the same substance (the Standard Model does not include any dark matter candidates), this similarity cannot be ignored. The similarity in abundances between dark and visible matter implies that they were caused by the same mechanism, created in the same way. As the abundance of matter is determined by the asymmetry between antimatter and matter, this leads us to a relationship between baryogenesis and dark matter.

Asymmetric dark matter theories have attracted significant attention in the last few years, and are now studied by physicists across the world. This has given us a cornucopia of asymmetric dark matter theories. Despite this, there are several common threads and predictions that allow us to test many of them at once. In asymmetric dark matter theories, baryogenesis is caused by interactions between dark and normal matter. By having dark matter interact differently with matter and antimatter, we can get marginally more matter in the universe than antimatter. After the matter and antimatter annihilate each other, there is some minuscule amount of matter left standing. These leftovers go on to become the universe you know. Typically, a similar asymmetry between dark matter and its antiparticle is also made, so there is a similar amount of dark matter left over as well. This promotes dark matter from a necessary yet boring spectator in the cosmic tango to an active participant, saving our universe from desolation.

Asymmetric dark matter also provides new ways to search for dark matter, such as neutrinos generated from dark matter in the sun. Because asymmetric dark matter interacts with normal matter, large bodies like the sun and the earth can capture a reservoir of dark matter sitting at their core. This can generate ghostlike neutrinos, or provide an obstacle for dark matter in direct detection experiments. Asymmetric dark matter theories can also tell us where we do not expect to see dark matter. A large effort has been made to see tell-tale signs of dark matter annihilating with its antiparticle throughout the universe, but it has yet to meet with success. While experiments like the Fermi space telescope have found potential signals (such as a 130 GeV line in 2012), these signals are ambiguous or fail to survive the test of time. The majority of asymmetric dark matter theories predict that there is no such signal, as all the anti-dark matter has long since been destroyed.

As on the pool table, even little asymmetries can have a profound effect on what we see. While much progress is made from finding new symmetries, we can’t forget the importance of imperfections in science. Asymmetric dark matter can explain where the matter in our universe came from, and gives dark and normal matter a common origin. Dark matter is no longer a passive observer in the evolution of our universe; it plays a pivotal role in the world around us.

### ZapperZ - Physics and Physicists

Newton Lecture 2014
The 2014 Newton Lecture given by Deborah Jin, who, to me, already deserves a Nobel Prize in physics.

Zz.

### Symmetrybreaking - Fermilab/SLAC

Big lessons from a Tiny Titan

When it comes to explaining how a massive machine works, sometimes smaller is better.

Tiny Titan is a supercomputer. Kind of.

It should probably be clarified that the term supercomputer, in this sense, has more to do with how the machine works than with its size.

A miniature mock-up of Titan—the behemoth supercomputer spanning 4352 square feet at Oak Ridge National Laboratory—Tiny Titan barely fills two square feet. And while Titan uses tens of thousands of CPUs and GPUs to perform advanced mathematics at 27 quadrillion calculations per second, Tiny Titan uses nine single-core processors to draw colorful little waves on a monitor.

“Tiny Titan communicates in the same way that a normal supercomputer does,” says Robert French, a user support specialist at Oak Ridge who helped design the micro machine over the past year.

Using an interactive fluid simulation developed by Adam Simpson, also from Oak Ridge, Tiny Titan’s processors work in tandem to animate waves of colored particles. As individual particles flow across the monitor, they pass from one processor to another, changing color on screen to match an LED on one of Tiny Titan’s corresponding cores. Turning off one or more of the cores, as its program allows, puts a greater burden on the remaining cores and causes the program to lag.
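The hand-off between cores can be pictured with a toy model (my own illustration, not Simpson's actual simulation code): the screen is divided into nine equal regions, one per processor, and a particle that drifts past the edge of its region is reassigned to the neighbouring core.

```python
# Toy model of Tiny Titan's particle hand-off: the screen is divided
# into equal regions, one per "core". When a particle drifts past its
# region's edge, ownership passes to the neighbouring core, just as
# the on-screen colours track the LEDs on Tiny Titan's processors.

N_CORES = 9              # Tiny Titan's nine single-core processors
SCREEN_WIDTH = 90.0
REGION = SCREEN_WIDTH / N_CORES

def owner(x):
    # Which core's region a particle at position x belongs to.
    return min(int(x // REGION), N_CORES - 1)

def step(particles, dx=1.0):
    # Advance every particle to the right (wrapping at the screen edge)
    # and count how many cross into a different core's region.
    handoffs = 0
    moved = []
    for x in particles:
        new_x = (x + dx) % SCREEN_WIDTH
        if owner(new_x) != owner(x):
            handoffs += 1
        moved.append(new_x)
    return moved, handoffs
```

In this picture, switching off a core just means its share of particles has to be absorbed by the remaining processors, which is why the real demo visibly lags.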

The demo offers a dollhouse-scale illustration of the way supercomputers rely on hundreds or thousands of smaller, synchronized computers to run simulations of tremendously detailed events—such as supernovae, climate change, nuclear fusion or molecular interactions. Creatively coded software breaks up large problems into lots of little ones. The code delegates the smaller pieces to separate processors and then finally brings everything back together in the right order.
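This split-delegate-gather pattern can be sketched in a few lines of Python (a hypothetical stand-in for illustration; Tiny Titan itself is driven by message-passing code across its physical cores, not by this snippet):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker process handles one small piece of the large problem.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Break the large problem into one chunk per worker...
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        # ...delegate the pieces to separate processors...
        partials = pool.map(partial_sum, chunks)
    # ...and bring everything back together at the end.
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum_of_squares(data))  # 332833500, same as the serial sum
```

The same answer could be computed serially on one processor; the point of the pattern is that each chunk is worked on at the same time, so the wall-clock time shrinks as cores are added.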

These concepts are hard to teach, French says, if your primary example is a supercomputer like Titan, which “looks like 200 giant refrigerators sitting next to each other.”

The mission of the Tiny Titan project is to introduce parallel computing into more schools’ curricula. Basic computer science classes teach serial processing: how a single processor works one task at a time. But many of these classes shy away from parallel computing.

A functional model that students can watch and tinker with may be a good starting point for some classrooms, Simpson says.

Members of the Oak Ridge team aren’t the only ones to come to this conclusion. A group at the University of California, San Diego—future home of the planned Comet supercomputer—assembled a similar computer cluster in 2013 called Meteor. The machine teaches the basics of parallel computing through games.

For its part, Tiny Titan is flashy, sleekly bundled to fit on a desktop, and comes with an Xbox controller to boot.

For that, thank team member Anthony DiGirolamo. Tiny Titan’s first incarnation was an unflattering mess of wires on a plastic cart. “But as we refined it,” DiGirolamo says, “we were able to explain what’s going on inside Titan by developing a transparent display case.”

The Oak Ridge team has posted on their GitHub page an exact supply list and an uncomplicated tutorial on how to build a Tiny Titan. They hope it will be part of a revolution in how we teach young students about computers.

And that’s something Tiny Titan can feel big about.

### Tommaso Dorigo - Scientificblogging

A Picture More Awe-Inspiring Than The One Of The Surface Of Comet Gerasimenko
This one is definitely too juicy to ignore - I need to join the crowd of bystanders-in-awe.
As you may have heard, ESA's Rosetta spacecraft yesterday successfully delivered its Philae lander to the solid nucleus of comet 67P/Churyumov-Gerasimenko - a 2.5-mile-long conglomerate of rock and ice. I refrain from giving details of that enormous achievement for humankind, because I rather want to comment on this rather funny twist of the whole story. But still, let's first enjoy at least one nice picture of the surface of that distant solar system body...

## November 12, 2014

### Matt Strassler - Of Particular Significance

How Far We Have Come(t)

It wasn’t that long ago, especially by cometary standards, that humans viewed the unpredictable and spectacular arrival of a comet, its tail spread across the sky unlike any star or planet, as an obviously unnatural event. How could an object flying so dramatically and briefly through the heavens be anything other than a message from a divine force? Even a few hundred years ago…

Today a human-engineered spacecraft descended out of the starry blackness and touched one.

We have known for quite some time that our ancestors widely maligned these icy rocks, often thinking them messengers of death and destruction.  Yes, a comet is, at some level, not much more than an icy rock. Yet, heated by the sun, it can create one of our sky’s most bewitching spectacles. Actually two, because not only can a comet itself be a fabulous sight, the dust it leaves behind can give us meteor showers for many years afterward.

But it doesn’t stop there.  For comets, believed to be frozen relics of the ancient past, born in the early days of the Sun and its planets, may have in fact been messengers not of death but of life.   When they pummeled our poor planet in its early years, far more often than they do today, their blows may have delivered the water for the Earth’s oceans and the chemical building blocks for its biology.   They may also hold secrets to understanding the Earth’s history, and perhaps insights into the more general questions of what happens when stars and their planets form.  Indeed, as scientific exploration of these objects moves forward, they may teach us the answers to questions that we have not yet even thought to ask.

Will the Philae lander maintain its perch or lose its grip? Will it function as long as hoped? No matter what, today’s landing was as momentous as the first spacecraft touchdowns on the Moon, Venus, Mars, Titan (Saturn’s largest moon), and a small asteroid — and also, the first descent of a spacecraft into Jupiter’s atmosphere. Congratulations to those who worked so hard and so long to get this far! Now let’s all hope that they, and their spacecraft, can hang on a little longer.

Filed under: Astronomy Tagged: astronomy, comets, spacecraft