# Particle Physics Planet

## November 25, 2014

### Quantum Diaries

Graduating, part 1: The Defense

It’s been a crazy 3 weeks since I officially finished my PhD. I’m in the transition from being a grad student slowly approaching insanity to a postdoc who has everything figured out, and it’s a rocky transition.

The end of the PhD at Wisconsin has two steps. The first is the defense, which is a formal presentation of my research to the professors on my committee, our colleagues, and a very few friends and family. The second is actually turning in the completed dissertation to the grad school, with the accompanying “margin check” appointment with the grad school. In between, the professors can send me comments about the thesis. I’ve heard so many stories of different universities setting up the end of a degree differently that it’s pretty much not worth going into the details. If you or someone you know is going through this process, you don’t need a comparison of how it works at different schools; you just need a lot of support and coping mechanisms. All the coping mechanisms you can think of, you need them. It’s ok, it’s a limited time, don’t feel guilty, just get through it. There is an end, and you will reach it.

The days surrounding the defense were planned out fairly carefully, including a practice talk with my colleagues, again with my parents (who visited for the defense), and delivery burritos. I ordered coffee and doughnuts for the defense from the places where you get those, and I realized why such an important day has such a surprisingly small variety of foods: because deviating from the traditional food is so very far down my list of priorities when there’s the physics to think about, and the committee, and the writing. The doughnuts just aren’t worth messing with. Plus, the traditional place to get doughnuts is already really good.

We even upheld a tradition the night before the defense. It’s not really a tradition per se, but I’ve seen it once and performed it once, so that makes it a tradition. If you find it useful, you can call it an even stronger tradition! We played an entire soundtrack and sang along, with laptops open working on defense slides. When my friend was defending, we watched “Chicago” the musical, and I was a little hoarse the next day. When I was defending, we listened to Leonard Bernstein’s version of Voltaire’s “Candide,” which has some wonderful wordplay and beautiful writing for choruses. The closing message was the comforting thought that it’s not going to be perfect, but life will go on.

“We’re neither wise nor pure nor good, we’ll do the best we know. We’ll build our house, and chop our wood, and make our garden grow.”

Hearing that at the apex of thesis stress, I think it will always make me cry. By contrast, there’s also a scene in Candide depicting the absurd juxtaposition of a fun-filled fair centered around a religious inquisition and hanging. Every time someone said they were looking forward to seeing my defense, I thought of this hanging-festival scene. I wonder if Pangloss had to provide his own doughnuts.

The defense itself went about as I expected it would. The arguments I presented had been polished over the last year, the slides over the last couple weeks, and the wording over a few days. My outfit was chosen well in advance to be comfortable, professional, and otherwise unremarkable (and keep my hair out of my way). The seminar itself was scheduled for the time when we usually have lab group meetings, so the audience was the regular lab group albeit with a higher attendance-efficiency factor. The committee members were all present, even though one had to switch to a 6am flight into Madison to avoid impending flight cancellations. The questions from the committee mostly focused on understanding the implications of my results for other IceCube results, which I took to mean that my own work was presented well enough to not need further explanation.

It surprised me, in retrospect, how quickly the whole process went. The preparation took so long, but the defense itself went so quickly. From watching other people’s defenses, I knew to expect a few key moments: an introduction from my advisor, handshakes from many people at the end of the public session, the moment of walking out from the closed session to friends waiting in the hallway, and finally the first committee member coming out smiling to tell me they decided to pass me. I knew to look for these moments, and they went by so much faster in my own defense than I remember from my friends. Even though it went by so quickly, it still makes a difference having friends waiting in the hallway.

People asked me if it was a weight off my shoulders when I finally defended my thesis. It was, in a way, but even more it felt like cement shoes off my feet. Towards the end of the process, for the last year or so, a central part of myself felt professionally qualified, happy, and competent. I tried desperately to make that the main part. But until the PhD was finished, that part wasn’t the exterior truth. When I finished, I felt like the qualifications I had on paper matched how qualified I felt about myself. I’m still not an expert on many things, but I do know the dirty details of IceCube software and programming. I have my little corner of expertise, and no one can take that away. Degrees are different from job qualifications that way: if you stop working towards a PhD several years in, it doesn’t count as a fractional part of a degree; it’s just quitting. But if you work at almost any other job for a few years, you can more or less call it a few years of experience. A month before my defense, part of me knew I was so so so close to being done, but that didn’t mean I could take a break.

And now, I can take a break.

### Emily Lakdawalla - The Planetary Society Blog

Field Report from Mars: Sol 3848 — November 20, 2014
Larry Crumpler returns with an update on Opportunity's recent activities, and its road ahead.

### Symmetrybreaking - Fermilab/SLAC

Students join the hunt for exotic new physics

Students will help the MoEDAL experiment at CERN seek evidence of magnetic monopoles, microscopic black holes and other phenomena.

For the first time, a high school has joined a high-energy physics experiment as a full member. Students from the Simon Langton Grammar School in Canterbury, England, have become participants in the newest experiment at the Large Hadron Collider at CERN.

The students will help with the search for new exotic particles such as magnetic monopoles, massive supersymmetric particles, microscopic black hole remnants, Q-balls and strangelets through an experiment called MoEDAL (Monopole and Exotics Detector at the LHC).

The students, who take part in a school-based research lab, will remotely monitor radiation backgrounds at the experiment.

The Simon Langton school has worked with the experiment’s chips, called Timepix, in previous projects, including a cosmic-ray detector the students helped to design, which was launched aboard a dishwasher-sized UK technology-demonstration satellite by a commercial firm in July.

“I think it’s enormously exciting for these students to think about what they could find,” says Becky Parker, the physics teacher who oversees the Langton school’s involvement. “It’s empowering for them that they could be a part of these amazing discoveries… You can’t possibly teach them about particle physics unless you can teach them about discovery.”

The state-of-the-art array of Timepix chips that the Langton group will monitor is the only real-time component of the four detector systems that comprise the MoEDAL experiment, which is operated by a collaboration of 66 physicists from 23 institutes in 13 countries on 4 continents.

The MoEDAL detector acts as a giant camera, with 400 stacks of plastic detectors as its “film.” MoEDAL is also designed to capture the particle messengers of new physics for further study in a 1-ton pure-aluminum trapping system.

MoEDAL is sensitive to massive, long-lived particles predicted by a number of “beyond the Standard Model” theories that other LHC experiments may be incapable of detecting and measuring.

“It is very exciting to be on the forefront of groundbreaking physics, which includes such amazing insight into what the best physicists of the world are doing,” says 16-year-old Langton student Ellerey Ireland.

“MoEDAL has allowed me to see the passion and determination of physicists and opened my mind to where physics can lead,” says Langton student Saskia Jamieson Bibb, also 16. “I am planning to study physics at university.”

One of the hypothetical particles MoEDAL is designed to detect is the magnetic monopole—essentially a magnet with only one pole. Blas Cabrera, a physics professor at Stanford who is also part of the Particle Physics and Astrophysics faculty at SLAC National Accelerator Laboratory, measured a possible magnetic monopole event in 1982. A group from Imperial College London found a similar possible event in 1986.

More recently, analogues of magnetic monopoles have been created in laboratory experiments. But at MoEDAL, physicists will have a chance to catch the real thing, says University of Alberta physicist James Pinfold, the spokesperson for MoEDAL and a visiting professor at King’s College London.

The theoretical base supporting the existence of magnetic monopoles is strong, he says. “Of all new physics scenarios out there today, magnetic monopoles are the most certain to actually exist.”

Confirmation of the existence of magnetic monopoles could clue researchers in to the nature of the big bang itself, as these particles are theorized to have emerged at the onset of our universe.

“The discovery of the magnetic monopole or any other exotic physics by MoEDAL would have incredible ramifications that would revolutionize the way we see things,” Pinfold says. “Such a discovery would be as important as that of the electron.”

Like what you see? Sign up for a free subscription to symmetry!

### Tommaso Dorigo - Scientificblogging

Volunteer-Based Peer Review: A Success
A week ago I invited readers of this blog to review a paper I had just written, as its publication process did not include any form of screening (as opposed to what is customary for articles in particle physics, which go through multiple review stages). It's not the first time for me: I have done the same with other articles in the past, and usually received good feedback. So I knew this could work.

read more

### Peter Coles - In the Dark

Doomsday is Cancelled…

Last week I posted an item that included a discussion of the Doomsday Argument. A subsequent comment on that post mentioned a paper by Ken Olum, which I finally got around to reading over the weekend, so I thought I’d post a link here for those of you worrying that the world might come to an end before the Christmas holiday.

You can find Olum’s paper on the arXiv here. The abstract reads (my emphasis):

If the human race comes to an end relatively shortly, then we have been born at a fairly typical time in history of humanity. On the other hand, if humanity lasts for much longer and trillions of people eventually exist, then we have been born in the first surprisingly tiny fraction of all people. According to the Doomsday Argument of Carter, Leslie, Gott, and Nielsen, this means that the chance of a disaster which would obliterate humanity is much larger than usually thought. Here I argue that treating possible observers in the same way as those who actually exist avoids this conclusion. Under this treatment, it is more likely to exist at all in a race which is long-lived, as originally discussed by Dieks, and this cancels the Doomsday Argument, so that the chance of a disaster is only what one would ordinarily estimate. Treating possible and actual observers alike also allows sensible anthropic predictions from quantum cosmology, which would otherwise depend on one’s interpretation of quantum mechanics.

I think Olum does identify a logical flaw in the argument, but it’s by no means the only one. I wouldn’t find it at all surprising to be among the first “tiny fraction of all people”, as my genetic characteristics are such that I could not be otherwise. But even if you’re not all that interested in the Doomsday Argument I recommend you read this paper as it says some quite interesting things about the application of probabilistic reasoning elsewhere in cosmology, an area in which quite a lot is written that makes no sense to me whatsoever!
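The cancellation Olum describes is easy to see in a toy Bayesian calculation. Here is a sketch (my own illustration, not taken from the paper; the population totals and the 50/50 prior are invented round numbers): Doomsday-style reasoning weights each hypothesis by the 1/N chance of holding one's particular birth rank, while treating possible observers like actual ones adds a compensating factor of N.

```python
# Toy model: two hypotheses for the total number of humans who will ever live.
prior = {"short": 0.5, "long": 0.5}
N = {"short": 2e11, "long": 2e14}  # invented round numbers

# Doomsday-style self-sampling: P(our birth rank | N) = 1/N, favouring "short".
post_doom = {h: prior[h] * (1 / N[h]) for h in prior}
Z = sum(post_doom.values())
post_doom = {h: v / Z for h, v in post_doom.items()}

# Olum/Dieks: a hypothesis with N observers is N times more likely to contain
# you at all, so weight by N before applying 1/N -- the factors cancel.
post_olum = {h: prior[h] * N[h] * (1 / N[h]) for h in prior}
Z = sum(post_olum.values())
post_olum = {h: v / Z for h, v in post_olum.items()}

print(post_doom["short"])  # ~0.999: doom seems almost certain
print(post_olum["short"])  # ~0.5: just the ordinary prior again
```

The punchline is exactly the abstract's: once possible observers are counted, "the chance of a disaster is only what one would ordinarily estimate."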

### Emily Lakdawalla - The Planetary Society Blog

Calling Serious Asteroid Hunters
I am happy to announce a new call for proposals for The Planetary Society’s Gene Shoemaker Near Earth Object (NEO) grant program. Proposals are due Feb. 2, 2015.

## November 24, 2014

### arXiv blog

Yahoo Labs' Algorithm Identifies Creativity in 6-Second Vine Videos

Nobody knew how to automatically identify creativity until researchers at Yahoo Labs began studying the Vine livestream.

### Christian P. Robert - xi'an's og

prayers and chi-square

One study I spotted in Richard Dawkins’ The God Delusion this summer by the lake concerns the (im)possible impact of prayer on patients’ recovery. As a coincidence, my daughter got this problem in her statistics class last week (my translation):

1802 patients in 6 US hospitals were divided into three groups. Patients in group A were told that unspecified religious communities would pray for them by name, while patients in groups B and C did not know whether anyone was praying for them. Those in group B had communities praying for them while those in group C did not. After 14 days of prayer, the conditions of the patients were as follows:

• out of 604 patients in group A, the condition of 249 had significantly worsened;
• out of 601 patients in group B, the condition of 289 had significantly worsened;
• out of 597 patients in group C, the condition of 293 had significantly worsened.

Use a chi-square procedure to test the homogeneity between the three groups, a significant impact of prayers, and a placebo effect of prayer.

This may sound a wee bit weird for a school test, but she is in medical school after all, so it is a good way to encourage rational thinking while learning about the chi-square test! (Answers: [even though the data is too sparse to clearly support a decision, esp. when using the chi-square test!] homogeneity and placebo effect are acceptable assumptions at level 5%, while the prayer effect is not [if barely].)
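For readers (or students) who want to check the arithmetic, the chi-square test is a one-liner with scipy. This is a sketch of the overall homogeneity test plus the prayer comparison between the two blinded groups (how the sub-tests pair up the groups is my reading of the exercise's wording):

```python
from scipy.stats import chi2_contingency

# Rows: groups A, B, C; columns: worsened vs not worsened after 14 days.
table = [
    [249, 604 - 249],  # A: told they would be prayed for
    [289, 601 - 289],  # B: prayed for, but unaware of it
    [293, 597 - 293],  # C: not prayed for, and unaware
]

# Homogeneity across all three groups (2 degrees of freedom).
chi2, p, dof, expected = chi2_contingency(table)
print(f"all three groups: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# "Prayer effect": B versus C, the two groups blind to their status.
chi2_bc, p_bc, _, _ = chi2_contingency(table[1:])
print(f"B vs C: chi2 = {chi2_bc:.2f}, p = {p_bc:.3f}")
```

Note that scipy applies the Yates continuity correction by default on the 2×2 sub-table, which a by-hand classroom computation may omit.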

Filed under: Books, Kids, Statistics, University life Tagged: binomial distribution, chi-square test, exercises, medical school, prayer, Richard Dawkins, The God Delusion

### astrobites - astro-ph reader's digest

Gas to Black Holes: Direct formation of a supermassive black hole in galaxy mergers
Title: Direct Formation of Supermassive Black Holes in Metal-Enriched Gas at the Heart of High-Redshift Galaxy Mergers

Authors: L. Mayer, D. Fiacconi, S. Bonoli, T. Quinn, R. Roskar, S. Shen, J. Wadsley

First Author’s Institution: Center for Theoretical Astrophysics and Cosmology, Inst. for Comp. Sci., & Physik Institut, University of Zurich, Zurich, Switzerland

Paper Status: Submitted to The Astrophysical Journal

Massive galaxies like our Milky Way all contain a supermassive black hole (SMBH) at their center, with masses ranging from 10^6 to 10^9 solar masses. The SMBH suspected to sit in the center of our galaxy, known as Sgr A*, is estimated to be around four million solar masses. Although we know they exist, how they form is still an unanswered question in astronomy. The challenging question is how so much mass can collapse into such a small volume (about 100 AU for our SMBH) fast enough that we observe them in the early universe as the power source of quasars, less than a billion years after the Big Bang (z ~ 7).

There are three likely possibilities, all of which involve forming “seed” black holes that grow over time to SMBH size: 1) low mass seeds from the deaths of the first stars, 2) the direct collapse of massive regions of gas into a black hole, forming massive seeds, and 3) mergers of stars in dense star clusters, forming a very massive star, and, in its death, a very massive black hole. The authors use hydrodynamic simulations to examine the direct collapse to a SMBH of a region of gas formed from the merger of two Milky Way mass galaxies.

## Merging Galaxies: A Recipe for a SMBH

The authors use a simulation code called GASOLINE2, which, at its core, models the flow of gas as individual particles in what is called smoothed particle hydrodynamics. The biggest challenge in creating direct-collapse SMBH seeds is keeping the gas cloud coherent throughout the process. These massive clouds can often break apart, or fragment, during collapse, forming stars or less massive black holes. The authors use a more efficient, lower-resolution setup to simulate the merger of two galaxies of masses around 10^12 solar masses each, then “zoom in” with higher resolution in the final merger stages to observe the gas collapse at the core of the newly formed galaxy, exploring whether the cloud collapses directly or fragments over time. Fig. 1 gives a projection of the surface density of gas of their two galaxies, and a zoom into the core of one, roughly three thousand years before the galaxies merge.
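For readers unfamiliar with the method, smoothed particle hydrodynamics (SPH) estimates fluid quantities by smearing each particle over a kernel. A generic sketch of the basic density estimate (the standard cubic-spline kernel, not GASOLINE2's actual internals):

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 3D cubic-spline SPH kernel with smoothing length h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)  # 3D normalisation constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Basic SPH density estimate: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_W(r, h)).sum(axis=1)

# Sanity check: the kernel should integrate to 1 over all space.
dr = 1e-4
r = np.arange(dr / 2, 2.0, dr)  # kernel support is r < 2h, here h = 1
norm = (cubic_spline_W(r, 1.0) * 4 * np.pi * r**2 * dr).sum()
print(norm)  # ~1.0
```

Production codes like GASOLINE2 build on this with adaptive smoothing lengths, gravity, and the cooling and heating physics discussed below.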

Fig. 1: The gas surface density of the merging galaxy pair around three thousand years before the final merger occurs. Each panel shows a successive zoom in the simulation, with the final showing the central few parsecs of one of the two galaxies. (Source: Fig. 1 from Mayer et al. 2014)

## The Direct Gas Collapse

The new work the authors put into modeling the direct gas collapse process is to include the effects of radiative cooling and a model that accounts for changes in opacity due to dust, dust heating and cooling, atomic and molecular heating and cooling, and cosmic-ray heating. These processes together may stabilize the cloud against fragmentation, more easily forming a SMBH seed, or they may cause dramatic fragmentation of the cloud (bad news for the SMBH). Fig. 2 shows the central galaxy region, where the massive cloud ultimately forms, at five thousand years after the two galaxies merged. The panels show four different simulations, each testing the effects of including or removing different physical processes. In each case, the central region is a single, massive (roughly 10^9 solar masses) disk-like structure. The gas clumps around the core are examples of gas fragmentation that could ultimately form stars.

Fig. 2: Five thousand years after the merger of the two galaxies shown in Fig. 1, this image gives the gas surface density for the new galaxy and its core. The four panels each give the results of four different simulations the authors used to test the importance of different physics. (Source: Fig. 3 of Mayer et al. 2014)

As the core evolves, it remains intact thanks to heating from shocks and turbulence and to its high opacity to radiation; these all prevent the cooling that can spawn fragmentation. Unfortunately, the authors can only follow the evolution of the central core for around 50 thousand years before they hit computational limits. By this, I mean that continuing to evolve the simulation would require higher resolution than is computationally feasible. In addition, as the core collapses and shrinks in size, the minimum time step drops dramatically and the simulation slows to a crawl.

Fig. 3: Gas surface density for the core shown in Run 4 of Fig. 2, 30 thousand years after the galaxies merge. By this point, the core has fragmented into two massive gas clouds that may ultimately form two SMBHs. (Source: Fig. 10 of Mayer et al. 2014)

Fig. 3 shows the final evolved state of the core 30 thousand years after the merger for Run 4 shown in Fig. 2. As shown here, the core actually fragments into two massive gas clumps, one at the center, and one slightly off-center. These clumps are about 10^9 and 10^8 solar masses respectively, and may ultimately form two SMBHs that could eventually merge into a single SMBH as the galaxy evolves.

## The Cloud’s Final Fate

Using analytic calculations and results from previous work, the authors make some simple arguments for how the final gas clouds in Fig. 3 can form black holes via direct collapse. They argue it is possible that these can form SMBHs in a ten-thousand-year process through a collapse generated by general-relativistic instabilities. This work provides new insight into how SMBHs may form in the early universe from the direct collapse of gas clouds. The authors conclude by suggesting that future simulations including general relativity, along with observations by the James Webb Space Telescope and the Atacama Large Millimeter Array, will be invaluable for better understanding how SMBHs can form from the direct collapse of gas clouds.
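To get a feel for that ten-thousand-year figure, one can compare it to the free-fall time of such a clump. A back-of-the-envelope sketch (the mass and radius here are my own illustrative choices at roughly the quoted clump scale, not numbers from the paper):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
pc = 3.086e16      # parsec, m
yr = 3.156e7       # year, s

M = 1e9 * M_sun    # clump mass (illustrative)
R = 5 * pc         # clump radius (illustrative)

# Spherical free-fall time: t_ff = sqrt(3*pi / (32*G*rho)).
rho = M / (4.0 / 3.0 * math.pi * R**3)
t_ff = math.sqrt(3.0 * math.pi / (32.0 * G * rho))
print(f"t_ff ~ {t_ff / yr:.0f} yr")
```

Since t_ff scales as 1/sqrt(G*rho), a 10^9 solar-mass clump spread over a few parsecs collapses on thousand-year scales once nothing holds it up, the same order as the collapse time the authors quote.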

### Emily Lakdawalla - The Planetary Society Blog

Close to the end for Venus Express
Venus Express is nearly out of fuel. Any day could be the last of its long mission to Venus.

### Emily Lakdawalla - The Planetary Society Blog

In Pictures: Expedition 42 Crew Launches to Station
Three more humans are in space today following the launch of Soyuz TMA-15M from the chilly steppes of Kazakhstan.

### Quantum Diaries

Neutrinos, claymation and ‘Doctor Who’ at this year’s physics slam

This article appeared in Fermilab Today on Nov. 24, 2014.

Wes Ketchum of the MicroBooNE collaboration is the Physics Slam III champion. Ketchum’s slam was on the detection of particles using liquid argon. Photo: Cindy Arnold

On Nov. 21, for the third year in a row, the Fermilab Lecture Series invited five scientists to battle it out in an event called a physics slam. And for the third year in a row, the slam proved wildly popular, selling out Ramsey Auditorium more than a month in advance.

More than 800 people braved the cold to watch this year’s contest, in which the participants took on large and intricate concepts such as dark energy, exploding supernovae, neutrino detection and the overwhelming tide of big data. Each scientist was given 10 minutes to discuss a chosen topic in the most engaging and entertaining way possible, with the winner decided by audience applause.

Michael Hildreth of the University of Notre Dame kicked things off by humorously illustrating the importance of preserving data — not just the results of experiments, but the processes used to obtain those results. Marcelle Soares-Santos of the Fermilab Center for Particle Astrophysics took the stage dressed as the Doctor from “Doctor Who,” complete with a sonic screwdriver and a model TARDIS, to explore the effects of dark energy through time.

Joseph Zennamo of the University of Chicago brought the audience along on a high-energy journey through the “Weird and Wonderful World of Neutrinos,” as his talk was called. And Vic Gehman of Los Alamos National Laboratory blew minds with a presentation about supernova bursts and the creation of everything and everyone in the universe.

The slammers at this year’s Fermilab Physics Slam were: Michael Hildreth, University of Notre Dame (far left); Marcelle Soares-Santos, Fermilab (second from left); Vic Gehman, Los Alamos National Laboratory (third from left); Wes Ketchum, Fermilab (second from right); and Joseph Zennamo, University of Chicago. Fermilab Director Nigel Lockyer (third from right) congratulated all the participants. Photo: Cindy Arnold

The winner was Fermilab’s Wes Ketchum, a member of the MicroBooNE collaboration. Ketchum’s work-intensive presentation used claymation to show how different particles interact inside a liquid-argon particle detector, depicting them as multicolored monsters bumping into one another and creating electrons for the detector’s sensors to pick up. Audience members won’t soon forget the sight of a large oxygen monster eating red-blob electrons.

After the slam, the five scientists took questions from the audience, including one about dark matter and neutrinos from an eight-year-old boy, sparking much discussion. Chris Miller, speech professor at the College of DuPage, made his third appearance as master of ceremonies for the Physics Slam, and thanked the audience — particularly the younger attendees — for making the trek to Fermilab on a Friday night to learn more about science.

Video of this year’s Physics Slam is available on Fermilab’s YouTube channel.

Andre Salles

### Peter Coles - In the Dark

Farewell to Blackberry…

I’m not really a great one for gadgets so I rarely post about technology. I just thought I’d do a quick post because the weekend saw the end of an era. I had been using a Blackberry smartphone for some time, the latest one being a Blackberry Curve, and even did a few posts on here using the WordPress App for Blackberry. I never found that particular bit of software very easy to use, however, so it was strictly for emergencies only (e.g. when stuck on a train). Other than that I got on pretty well with the old thing, except for the fact that there was no easy way to receive my work email from Sussex University on it. That has been a convenient excuse for me to ignore such communications while away from the internet, but recently it’s become clear that I need to be better connected to deal with pressing matters.

Anyway, a few weeks ago I got a text message from Vodafone telling me I was due a free upgrade on my contract, so I decided to bite the bullet, ditch the Blackberry and acquire an Android phone instead. I’m a bit allergic to those hideously overpriced Apple products, you see, which made an iPhone unthinkable. On Saturday morning I paid a quick visit to the Vodafone store in Cardiff and after a nice chat – mainly about Rugby (Wales were playing the All Blacks later that day) and the recent comet landing – I left with a new Sony Xperia Z2. I feel a bit sorry for turning my back on Blackberry; they really were innovators at one point, but they made some awful business decisions and have been left behind by the competition. Incidentally, the original company Research In Motion (RIM) was doing well enough 15 years ago to endow the PeRIMeter Institute for Theoretical Physics in Waterloo, Ontario, which was one of the reasons for my loyalty to date. The company is now called Blackberry Limited and has recently gone through major restructuring in its struggle for survival.

The Xperia Z2 is a nice phone, with a nice big display, generally very easy to find your way around, and with a lot more apps available than for Blackberry. I’ve got my Sussex email working and got Twitter, Facebook and WordPress installed; the latter is far better on Android than on Blackberry. The only thing I don’t like is the autocorrect/autocomplete, which is wretched, and which I haven’t yet figured out how to switch off. The other thing is that it’s completely waterproof, but I haven’t taken it into the shower yet.

I feel quite modern for a change – my old Blackberry did make me feel like an old fogey sometimes – but since I’ve now signed up for another two years of contract before my next upgrade, there’s plenty of time for technology to overtake me again.

### Andrew Jaffe - Leaves on the Line

Oscillators, Integrals, and Bugs

I am in my third year teaching a course in Quantum Mechanics, and we spend a lot of time working with a very simple system known as the harmonic oscillator — the physics of a pendulum, or a spring. In fact, the simple harmonic oscillator (SHO) is ubiquitous in almost all of physics, because we can often represent the behaviour of some system as approximately the motion of an SHO, with some corrections that we can calculate using a technique called perturbation theory.

It turns out that in order to describe the state of a quantum SHO, we need to work with the Gaussian function, essentially the combination exp(-y²/2), multiplied by another set of functions called Hermite polynomials. These latter functions are just, as the name says, polynomials, which means that they are just sums of terms like ayⁿ where a is some constant and n is 0, 1, 2, 3, … Now, one of the properties of the Gaussian function is that it dives to zero really fast as y gets far from zero, so fast that multiplying by any polynomial still goes to zero quickly. This, in turn, means that we can integrate polynomials, or the product of polynomials (which are just other, more complicated polynomials) multiplied by our Gaussian, and get nice (not infinite) answers.

Unfortunately, Wolfram Inc.’s Mathematica (the most recent version 10.0.1) disagrees:

The details depend on exactly which Hermite polynomials I pick — 7 and 16 fail, as shown, but some combinations give the correct answer, which is in fact zero unless the two numbers differ by just one. In fact, if you force Mathematica to split the calculation into separate integrals for each term, and add them up at the end, you get the right answer.

I’ve tried to report this to Wolfram, but haven’t heard back yet. Has anyone else experienced this?
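In the meantime, the integral is easy to cross-check outside Mathematica. A sketch using Python's sympy instead (I'm taking the integrand to be H_m(y) H_n(y) y exp(-y²), the form for which the answer vanishes unless the indices differ by exactly one; the indices 7 and 16 are the failing pair from the post):

```python
from sympy import exp, hermite, integrate, oo, symbols

y = symbols('y')

def matrix_element(m, n):
    """Integral of H_m(y) * H_n(y) * y * exp(-y**2) over the real line."""
    return integrate(hermite(m, y) * hermite(n, y) * y * exp(-y**2),
                     (y, -oo, oo))

print(matrix_element(7, 16))  # 0, since |7 - 16| != 1
print(matrix_element(2, 3))   # equals 24*sqrt(pi), a neighbouring pair
```

When a direct evaluation and a term-by-term evaluation disagree within one system, a second system computing the same exact integral is a useful tiebreaker.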

### Peter Coles - In the Dark

Autumn: A Dirge

I.

The warm sun is failing, the bleak wind is wailing,
The bare boughs are sighing, the pale flowers are dying,
And the Year
On the earth her death-bed, in a shroud of leaves dead,
Is lying.
Come, Months, come away,
From November to May,
In your saddest array;
Follow the bier
Of the dead cold Year,
And like dim shadows watch by her sepulchre.

II.

The chill rain is falling, the nipped worm is crawling,
The rivers are swelling, the thunder is knelling
For the Year;
The blithe swallows are flown, and the lizards each gone
To his dwelling;
Come, Months, come away;
Put on white, black, and gray;
Let your light sisters play –
Ye, follow the bier
Of the dead cold Year,
And make her grave green with tear on tear.

by Percy Bysshe Shelley (1792-1822)

### Symmetrybreaking - Fermilab/SLAC

Creating a spark

Science has a long history of creativity generated through collaboration between fields.

A principle of 18th century mechanics holds that if a physical system is symmetric in some way, then there is a conservation law associated with the symmetry. Mathematician Emmy Noether generalized this principle in a proof in 1918. Her theorem, in turn, has provided a very powerful tool in physics, helping to describe the conservation of energy and momentum.
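The simplest instance of the principle fits in a few lines. A sketch (my own illustration, using Python's sympy): a free particle's Lagrangian is unchanged by the shift x → x + a, and the Euler-Lagrange equation then says the momentum conjugate to x is conserved.

```python
from sympy import Function, Rational, diff, symbols

t, m = symbols('t m', positive=True)
x = Function('x')
xdot = diff(x(t), t)

# Free-particle Lagrangian, invariant under the translation x -> x + a.
L = Rational(1, 2) * m * xdot**2

dL_dx = diff(L, x(t))  # 0: the symmetry -- L has no explicit x dependence
p = diff(L, xdot)      # conjugate momentum, m * xdot

# Euler-Lagrange: d/dt(dL/dxdot) = dL/dx, so dp/dt = 0 and p is conserved.
print(dL_dx)  # 0
print(p)      # m*Derivative(x(t), t)
```

Noether's theorem generalizes this bookkeeping to any continuous symmetry of the action, which is what makes it so powerful.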

Science has a long history of creativity generated through this kind of collaboration between fields.

In the process of sharing ideas, researchers expose assumptions, discern how to clearly express concepts and discover new connections between them. These connections can be the sparks of creativity that generate entirely new ideas.

In 1895, physicist Wilhelm Roentgen discovered X-rays while studying the effects of sending an electric current through low-pressure gas. Within a year, doctors made the first attempts to use them to treat cancer, first stomach cancer in France and later breast cancer in America. Today, millions of cancer patients’ lives are saved each year with clinical X-ray machines.

A more recent example of collaboration between fields is the Web, originally developed as a way for high-energy physicists to share data. It was itself a product of scientific connection, between hypertext and Internet technologies.

In only 20 years, it has transformed information flow, commerce, entertainment and telecommunication infrastructure.

This connection transformed all of science. Before the Web, learning about progress in other fields meant visiting the library, making a telephone call or traveling to a conference. While such modest impediments never stopped interdisciplinary collaboration, they often served to limit opportunity.

With the Web have come online journals and powerful tools that allow people to search for and instantly share information with anyone, anywhere, anytime. In less than a generation, a remarkable amount of the recorded history of scientific progress of the last roughly 3600 years has become instantly available to anyone with an Internet connection.

Connections provide not only a source of creativity in science but also a way to accelerate science, both by opening up entirely new ways of formulating and testing theory and by providing direct applications of the fruits of basic R&D. The former opens new avenues for understanding our world. The latter provides applications of technologies outside their fields of origin. Both are vital.

High-energy physics is actively working with other fields to jointly solve new problems. One example of this is the Accelerator Stewardship Program, which studies ways that particle accelerators can be used in energy and the environment, medicine, industry, national security and discovery science. Making accelerators that meet the cost, size and operating requirements of other applications requires pushing the technology in new directions. In the process we learn new ways to solve our own problems and produce benefits that are widely recognized and sought after. Other initiatives aim to strengthen intellectual connections between particle physics itself and other sciences.

Working in concert with other fields, we will gain new ways of understanding the world around us.


### ZapperZ - Physics and Physicists

Fermilab Physics Slam 2014
A very entertaining video to watch if you were not at this year's Physics Slam.

Zz.

## November 23, 2014

### Christian P. Robert - xi'an's og

an ABC experiment

In a cross-validated forum exchange, I used the code below to illustrate the working of an ABC algorithm:

#normal data with 100 observations
n=100
x=rnorm(n)
#observed summaries
sumx=c(median(x),mad(x))

#normal x gamma prior
priori=function(N){
  return(cbind(rnorm(N,sd=10),
               1/sqrt(rgamma(N,shape=2,scale=5))))
}

ABC=function(N,alpha=.05){

  prior=priori(N) #reference table

  #pseudo-data
  summ=matrix(0,N,2)
  for (i in 1:N){
    xi=rnorm(n)*prior[i,2]+prior[i,1]
    summ[i,]=c(median(xi),mad(xi)) #summaries
  }

  #normalisation factor for the distance
  mads=c(mad(summ[,1]),mad(summ[,2]))
  #distance
  dist=(abs(sumx[1]-summ[,1])/mads[1])+
       (abs(sumx[2]-summ[,2])/mads[2])
  #selection
  posterior=prior[dist<quantile(dist,alpha),]
  return(posterior)
}


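For readers who prefer Python, the same rejection sampler can be sketched with NumPy (my own re-implementation of the R code above, not xi'an's; the constant 1.4826 reproduces R's default scaling of the mad):

```python
import numpy as np

def mad(v, axis=None):
    # median absolute deviation, with R's default consistency constant 1.4826
    med = np.median(v, axis=axis, keepdims=axis is not None)
    return 1.4826 * np.median(np.abs(v - med), axis=axis)

def abc_rejection(x, N=100000, alpha=0.05, seed=0):
    """Rejection ABC for a normal sample with (median, mad) summaries."""
    rng = np.random.default_rng(seed)
    n = len(x)
    sumx = np.array([np.median(x), mad(x)])   # observed summaries
    # prior: mu ~ N(0, 10^2), sigma = 1/sqrt(Gamma(shape=2, scale=5))
    mu = rng.normal(0.0, 10.0, N)
    sigma = 1.0 / np.sqrt(rng.gamma(2.0, 5.0, N))
    # pseudo-data and their summaries, one row per prior draw
    xi = rng.standard_normal((N, n)) * sigma[:, None] + mu[:, None]
    summ = np.column_stack([np.median(xi, axis=1), mad(xi, axis=1)])
    # MAD-normalised L1 distance to the observed summaries
    scale = np.array([mad(summ[:, 0]), mad(summ[:, 1])])
    dist = (np.abs(summ - sumx) / scale).sum(axis=1)
    # keep the alpha fraction of draws closest to the data
    keep = dist < np.quantile(dist, alpha)
    return mu[keep], sigma[keep]
```

Vectorising over the reference table replaces the R loop; otherwise the logic is identical.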
Hence I used the median and the mad as my summary statistics. And the outcome is rather surprising, for two reasons: the first one is that the posterior on the mean μ is much wider than when using the mean and the variance as summary statistics. This is not completely surprising in that the latter are sufficient, while the former are not. Still, the (-10,10) range on the mean is way larger… The second reason for surprise is that the true posterior distribution cannot be derived since the joint density of med and mad is unavailable.

After thinking about this for a while, I went back to my workbench to check the difference with using mean and variance. To my greater surprise, I found hardly any difference! Using the almost exact ABC with 10⁶ simulations and a 5% subsampling rate returns exactly the same outcome. (The first row above is for the sufficient statistics (mean,standard deviation) while the second row is for the (median,mad) pair.) Playing with the distance does not help. The genuine posterior output is quite different, as exposed on the last row of the above, using a basic Gibbs sampler since the posterior is not truly conjugate.

Filed under: Books, pictures, R, Statistics, University life Tagged: ABC, Gibbs sampling, MCMC, mean, median, median absolute deviation, Monte Carlo Statistical Methods, normal model, summary statistics

### Lubos Motl - string vacua and pheno

Anton Kapustin: Quantum geometry, a reunion of math and physics
I think that this 79-page presentation by Caltech's Anton Kapustin is both insightful and entertaining.

If you are looking for the "previous slide" button, you may achieve this action simply by clicking 78 times. Click once for the "next slide".

If you have any problems with the embedded Flash version of the talk [click for full screen] above, download Anton's PowerPoint file, which you may display using a Microsoft Office viewer, OpenOffice, LibreOffice, a Chrome extension, Google Docs, or in many other ways.

Spoilers are below.

Anton describes the relationship between mathematics and physics, mathematicians and physicists, and so on. He focuses on the noncommutative character of algebras of observables in quantum mechanics. No mathematician really believed in the Feynman path integral, and no physicist was interested in the mathematics of people like Grothendieck.

However, some smart opportunists in the middle – for example, Maxim Kontsevich – were able to derive interesting results (from mathematicians' viewpoint) using the path integral methods applied to the Poisson manifolds. And it wasn't just some lame undergraduate Feynman path integral that was needed. It was the stringy path integral that may be formulated using an associative product.

Hat tip: John Preskill, Twitter

### ZapperZ - Physics and Physicists

Research Gate
Anyone else here on Research Gate?

First of all, let me declare that I'm not on Facebook, don't have a Twitter account, etc. This blog is my only form of "social media" involvement in physics, if you discount online physics forums. So I'm not that into these social media activities. Still, I've been on Research Gate for several years, after being invited to it by a colleague.

If you're not familiar with it, Research Gate is a social media platform for ... you guessed it ... researchers. You reveal as much about yourself as you wish in your profile, and you can list all your papers and upload them. The software also "trawls" the journals and the web to find publications that you may have authored and periodically asks you to verify that they are yours. Most of mine currently listed were found by the software, so it is pretty good.

Of course, the other aspect of such social media is that you can "follow" others. The software, like any good social media AI, will suggest people you might know, such as your coauthors, people from the same institution as you, or anyone whose name appears alongside yours in the same document. It also keeps tabs on what the people who follow you, or whom you follow, are doing, such as new publications, job changes, etc. It also tells you how many people viewed your profile, how many read your publications, and how many times your publications have been downloaded from the Research Gate site.

Another part of Research Gate is that you can submit a question in a particular field, and if that is a field that you've designated as your area of expertise, it will alert you to it so that you have the option of responding. I think this is the most useful feature of this community because this is what makes it "science specific", rather than just any generic social media program.

I am still unsure of the overall usefulness and value of this thing. So far it has been "nice", but I have yet to see it as a necessity. Although, I must say, I'm pleasantly surprised to see some prominent names in my field of study on it, which is why I continue to be on it as well.

So, if you are also on it, what do you think of it? Do you think this will eventually evolve into something that almost all researchers will someday need?

Zz.

### arXiv blog

Linguistic Mapping Reveals How Word Meanings Sometimes Change Overnight

Data mining the way we use words is revealing the linguistic earthquakes that constantly change our language.

## November 22, 2014

### Christian P. Robert - xi'an's og

Challis Lectures

I had a great time during this short visit to the Department of Statistics, University of Florida, Gainesville. First, it was a major honour to be the 2014 recipient of the George H. Challis Award, and I considerably enjoyed delivering my lectures on mixtures and on ABC with random forests, and chatting with members of the audience about their contents afterwards. Here is the physical award I brought back to my office:

More as a piece of trivia, here is the amount of information about the George H. Challis Award I found on the UF website:

This fund was established in 2000 by Jack M. and Linda Challis Gill and the Gill Foundation of Texas, in memory of Linda’s father, to support faculty and student conference travel awards and the George Challis Biostatistics Lecture Series. George H. Challis was born on December 8, 1911 and was raised in Italy and Indiana. He was the first cousin of Indiana composer Cole Porter. George earned a degree in 1933 from the School of Business at Indiana University in Bloomington. George passed away on May 6, 2000. His wife, Madeline, passed away on December 14, 2009.

Cole Porter, indeed!

On top of this lecturing activity, I had a full academic agenda, discussing with most faculty members and PhD students of the Department, on our respective research themes over the two days I was there and it felt like there was not enough time! And then, during the few remaining hours where I did not try to stay on French time (!), I had a great time with my friends Jim and Maria in Gainesville, tasting a fantastic local IPA beer from Cigar City Brewery and several fantastic (non-local) wines… Adding to that a pile of new books, a smooth trip both ways, and a chance encounter with Alicia in Atlanta airport, it was a brilliant extended weekend!

Filed under: Books, pictures, Statistics, Travel, University life, Wines Tagged: ABC, Cigar City Brewery, Cole Porter, finite mixtures, Florida, Gainesville, George H. Challis Award, random forests

### Georg von Hippel - Life on the lattice

Scientific Program "Fundamental Parameters of the Standard Model from Lattice QCD"
Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

We are therefore happy to announce the scientific program "Fundamental Parameters of the Standard Model from Lattice QCD" to be held from August 31 to September 11, 2015 at the Mainz Institute for Theoretical Physics (MITP) at Johannes Gutenberg University Mainz, Germany.

This scientific programme is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.

We would like to invite you to consider attending this programme and to apply through our website. After the application deadline (March 31, 2015), an admissions committee will evaluate all the applications.

Among other benefits, MITP offers all its participants office space and access to computing facilities during their stay. In addition, MITP will cover local housing expenses for accepted participants, and the MITP team will individually arrange and book their accommodation.

Please do not hesitate to contact us at coordinator@mitp.uni-mainz.de if you have any questions.

We hope you will be able to join us in Mainz in 2015!

With best regards,

the organizers:
Gilberto Colangelo, Georg von Hippel, Heiko Lacker, Hartmut Wittig

### Clifford V. Johnson - Asymptotia

Luncheon Reflections
You know, I never got around to mentioning here that I am now Director (co-directing with Louise Steinman who runs the ALOUD series) of the Los Angeles Institute for the Humanities (LAIH), a wonderful organisation that I have mentioned here before. It is full of really fascinating people from a range of disciplines: writers, artists, historians, architects, musicians, critics, filmmakers, poets, curators, museum directors, journalists, playwrights, scientists, actors, and much more. These LAIH Fellows are drawn from all over the city, and equally from academic and non-academic sources. The thing is, you'll find us throughout the city involved in all sorts of aspects of its cultural and intellectual life, and LAIH is the one organisation in the city that tries to fully bring together this diverse range of individuals (all high-achievers in their respective fields) into a coherent force. One of the main things we do is simply sit together regularly and talk about whatever's on our minds, stimulating and shaping ideas, getting updates on works in progress, making suggestions, connections, and so forth. Finding time in one's schedule to just sit together and exchange ideas with no particular agenda is an important thing to do and we take it very seriously. We do this at [...] Click to continue reading this post

### Emily Lakdawalla - The Planetary Society Blog

Quick update about our website
The last two weeks have been extraordinary for The Planetary Society. As amazing as this increased traffic is, it has brought to light some issues with our website including latency and missing content that we are still working on fixing.

## November 21, 2014

### Christian P. Robert - xi'an's og

a pile of new books

I took the opportunity of my weekend trip to Gainesville to order a pile of books on amazon, thanks to my amazon associate account (and hence thanks to all Og’s readers doubling as amazon customers!). The picture above is missing two Rivers of London volumes by Ben Aaronovitch that I already read and left at the office; they will be reviewed in incoming posts. Among those,

(Obviously, all “locals” sharing my taste in books are welcome to borrow those in a very near future!)

Filed under: Books, Travel, University life Tagged: amazon associates, book reviews, Booker Prize, Florida, Gainesville, Hugo Awards, John Scalzi, Robin Hobb, The Name of the Wind, Walter Miller

### The Great Beyond - Nature blog

Gates Foundation announces world’s strongest policy on open access research

The Bill & Melinda Gates Foundation has announced the world’s strongest policy in support of open research and open data. If strictly enforced, it would prevent Gates-funded researchers from publishing in well-known journals such as Nature and Science.

On 20 November, the medical charity, of Seattle, Washington, announced that from January 2015, researchers it funds must make their resulting papers and underlying data sets openly available immediately upon publication — and must make that research available for commercial re-use. “We believe that published research resulting from our funding should be promptly and broadly disseminated,” the foundation states. It says it will pay the necessary publication fees (which often amount to thousands of dollars per article).

The Foundation is allowing two years’ grace: until 2017, researchers may apply a 12-month delay before their articles and data are made free. At first glance, this suggests that authors may still — for now — publish in journals that do not offer immediate open-access (OA) publishing, such as Science and Nature. These journals permit researchers to archive their peer-reviewed manuscripts elsewhere online, usually after a delay of 6-12 months following publication.

Allowing a year’s delay makes the charity’s open-access policy similar to those of other medical funders, such as the Wellcome Trust or the US National Institutes of Health (NIH). But the charity’s intention to close off this option by 2017 might put pressure on paywalled journals to create an open-access publishing route.

However, the Gates Foundation’s policy has a second, more onerous twist which appears to put it directly in conflict with many non-OA journals now, rather than in 2017. Once made open, papers must be published under a license that legally allows unrestricted re-use — including for commercial purposes. This might include ‘mining’ the text with computer software to draw conclusions and mix it with other work, distributing translations of the text, or selling republished versions.  In the parlance of Creative Commons, a non-profit organization based in Mountain View, California, this is the CC-BY licence (where BY indicates that credit must be given to the author of the original work).

This demand goes further than any other funding agency has dared. The UK’s Wellcome Trust, for example, demands a CC-BY license when it is paying for a paper’s publication — but does not require it for the archived version of a manuscript published in a paywalled journal. Indeed, many researchers actively dislike the thought of allowing such liberal re-use of their work, surveys have suggested. But Gates Foundation spokeswoman Amy Enright says that “author-archived articles (even those made available after a 12-month delay) will need to be available after the 12 month period on terms and conditions equivalent to those in a CC-BY license.”

Most non-OA publishers do not permit authors to apply a CC-BY license to their archived, open manuscripts. Nature, for example, states that openly archived manuscripts may not be re-used for commercial purposes. So do the American Association for the Advancement of Science, Elsevier, Wiley and many other publishers (in relation to their non-OA journals).

“It’s a major change. It would be major if publishers that didn’t previously use CC-BY start to use it, even for the subset of authors funded by the Gates Foundation. It would be major if publishers that didn’t previously allow immediate or unembargoed OA start to allow it, again even for that subset of authors. And of course it would be major if some publishers refused to publish Gates-funded authors,” says Peter Suber, director of the Office for Scholarly Communication at Harvard University in Cambridge, Massachusetts.

“You could say that Gates-funded authors can’t publish in journals that refuse to use CC-BY. Or you could say that those journals can’t publish Gates-funded authors. It may look like a stand-off but I think it’s the start of a negotiation,” Suber adds — noting that when the NIH’s policy was announced in 2008, many publishers did not want to accommodate all its terms, but now all do.

That said, the Gates Foundation does not leave as large a footprint in the research literature as the NIH. It only funded 2,802 research articles in 2012 and 2013, Enright notes; 30% of these were published in open access journals. (Much of the charity’s funding goes to development projects, rather than to research which will be published in journals).

The Gates Foundation also is not clear on how it will enforce its mandate; many researchers are still resistant to the idea of open data, for instance. (And most open-access mandates are not in fact strictly enforced; only recently have the NIH and the Wellcome Trust begun to crack down.) But Enright says the charity will be tracking what happens and will write to non-compliant researchers if need be. “We believe that the foundation’s Open Access Policy is in alignment with current practice and trends in research funded in the public interest. Hence, we expect that the policy will be readily understood, adopted and complied with by the researchers we fund,” she says.

### Sean Carroll - Preposterous Universe

Guest Post by Alessandra Buonanno: Nobel Laureates Call for Release of Iranian Student Omid Kokabee

Usually I start guest posts by remarking on what a pleasure it is to host an article on the topic being discussed. Unfortunately this is a sadder occasion: protesting the unfair detention of Omid Kokabee, a physics graduate student at the University of Texas, who is being imprisoned by the government of Iran. Alessandra Buonanno, who wrote the post, is a distinguished gravitational theorist at the Max Planck Institute for Gravitational Physics and the University of Maryland, as well as a member of the Committee on International Freedom of Scientists of the American Physical Society. This case should be important to everyone, but it’s especially important for physicists to work to protect the rights of students who travel from abroad to study our subject.

Omid Kokabee was arrested at the airport of Teheran in January 2011, just before taking a flight back to the University of Texas at Austin, after spending the winter break with his family. He was accused of communicating with a hostile government and after a trial, in which he was denied contact with a lawyer, he was sentenced to 10 years in Teheran’s Evin prison.

According to a letter written by Omid Kokabee, he was asked to work on classified research, and his arrest and detention was a consequence of his refusal. Since his detention, Kokabee has continued to assert his innocence, claiming that several human rights violations affected his interrogation and trial.

Since 2011, we, the Committee on International Freedom of Scientists (CIFS) of the American Physical Society, have protested the imprisonment of Omid Kokabee. Although this case has received continuous support from several scientific and international human rights organizations, the government of Iran has refused to release Kokabee.

Omid Kokabee has received two prestigious awards:

• The American Physical Society awarded him the Andrei Sakharov Prize “For his courage in refusing to use his physics knowledge to work on projects that he deemed harmful to humanity, in the face of extreme physical and psychological pressure.”
• The American Association for the Advancement of Science awarded Kokabee the Scientific Freedom and Responsibility Prize.

Amnesty International (AI) considers Kokabee a prisoner of conscience and has requested his immediate release.

Recently, the Committee of Concerned Scientists (CCS), AI and CIFS, have prepared a letter addressed to the Iranian Supreme Leader Ali Khamenei asking that Omid Kokabee be released immediately. The letter was signed by 31 Nobel-prize laureates. (An additional 13 Nobel Laureates have signed this letter since the Nature blog post. See also this update from APS.)

Unfortunately, last month Kokabee’s health deteriorated, and he has been denied proper medical care. In response, the President of APS, Malcolm Beasley, has written a letter to the Iranian President Rouhani calling for a medical furlough for Omid Kokabee so that he can receive proper medical treatment. AI has also taken further steps and has requested urgent medical care for Kokabee.

Very recently, Iran’s supreme court has nullified the original conviction of Omid Kokabee and has agreed to reconsider the case. Although this is positive news, it is not clear when the new trial will start. Considering Kokabee’s health, it is very important that he be granted a medical furlough as soon as possible.

More public engagement and awareness is needed to resolve this unacceptable violation of human rights and of the freedom of scientific research. You can help by tweeting/blogging about it and responding to this Urgent Action that AI has issued. Please note that the date on the Urgent Action is there to create an avalanche effect; it is not a deadline, nor is it the end of the action.

Alessandra Buonanno for the American Physical Society’s Committee on International Freedom of Scientists (CIFS).

### Lubos Motl - string vacua and pheno

An evaporating landscape? Possible issues with the KKLT scenario
By Dr Thomas Van Riet, K.U. Leuven, Belgium

What is this blog post about?

In 2003, a seminal paper by Kachru, Kallosh, Linde and Trivedi (KKLT) (2000+ cites!) presented a scenario for constructing a landscape of de Sitter vacua in string theory with small cosmological constant. This paper was (and is) conceived as the first evidence that the string theory landscape contains a tremendous amount of de Sitter vacua (not just anti-de Sitter vacua), which could account for the observed dark energy.

The importance of this discovery should not be underestimated since it profoundly changed the way we think about how a fundamental, UV-complete theory of all interactions addresses apparent fine-tuning and naturalness problems we are faced with in high energy physics and cosmology. It changed the way we think string theory makes predictions about the low-energy world that we observe.

It is fair to say that, since the KKLT paper, the multiverse scenario and all of its related emotions have been discussed at full intensity, even been taken up by the media and it has sparked some (unsuccessful) attempts to classify string theory as non-scientific.

In this post I briefly outline the KKLT scenario and highlight certain aspects that are not often described in reviews but are crucial to the construction. Secondly, I describe research done since 2009 that sheds doubt on the consistency of the KKLT scenario. I have tried to be as unbiased as possible, but near the end of this post I have taken the liberty of giving a personal view on the matter.

The KKLT construction

The main problem of string phenomenology at the time of the KKLT paper was the so-called moduli-stabilisation problem. The string theory vacua that were constructed before the flux-revolution were vacua that, at the classical level, contained hundreds of massless scalars. Massless scalars are a problem for many reasons that I will not go into. Let us stick to the observed fact that they are not there. Obviously quantum corrections will induce a mass, but the expected masses would still be too low to be consistent with observations and various issues in cosmology. Hence we needed to get rid of the massless scalars. This is where fluxes come into the story since they provide a classical mass to many (but typically not all) moduli.

The above argument that masses due to quantum corrections are too low is not entirely solid. What is really the problem is that vacua supported solely by quantum corrections are not calculable. This is called the Dine-Seiberg problem and it roughly goes as follows: if quantum corrections are strong enough to create a meta-stable vacuum we necessarily are in the strong coupling regime and hence out of computational control. Fluxes evade the argument because they induce a classical piece of energy that can stabilize the coupling at a small value. Fluxes are used mainly as a tool for computational control, to stay within the supergravity approximation.

Step 1: fluxes and orientifolds

Step 1 in the KKLT scenario is to start from the classical IIB solution often referred to as GKP (1400+ cites), (see also this paper). What Giddings, Kachru and Polchinski did was to construct compactifications of IIB string theory (in the supergravity limit) down to 4-dimensional Minkowski space using fluxes and orientifolds. Orientifolds are specific boundary conditions for strings that are different from Dirichlet boundary conditions (which would be D-branes). The only thing that is required for understanding this post is to know that orientifolds are like D-branes but with negative tension and negative charge (anti D-brane charge). GKP understood that Minkowski solutions (SUSY and non-SUSY) can be built by balancing the negative energy of the orientifolds $$T_{{\rm O}p}$$ against the positive energy of the 3-form fluxes $$F_3$$ and $$H_3$$:

$V = H_3^2 + F_3^2 + T_{{\rm O}p} = 0$

This scalar potential $$V$$ is such that it does not depend on the sizes of the compact dimensions. Those sizes are then perceived as massless scalar fields in four dimensions. Many other moduli directions have gained a mass due to the fluxes, and all those masses are positive, such that the Minkowski space is classically stable.

The 3-form fluxes $$H_3$$ and $$F_3$$ carry D3-brane charge, as can be verified from the Bianchi identity for the five-form field strength $$F_5$$:

$\dd F_5 = H_3 \wedge F_3 + Q_3\delta$

The delta-function on the right represents the D3/O3 branes, which are really localised charge densities (points) in the internal dimensions, whereas the fluxes correspond to a smooth, spread-out charge distribution. Gauss' law tells us that a compact space cannot carry any net charge, and consequently the charges in the fluxes have the opposite sign to the charges in the localised sources.

I want to stress the physics in the Bianchi identity. To a large extent one can think of the 3-form fluxes as a smeared configuration of actual D3 branes. Not only do they induce D3 charge, they also back-react on the metric because of their positive energy-momentum. We will see below that this is more than an analogy: the fluxes can even materialize into actual D3 branes.

This flux configuration is "BPS", in the sense that the various ingredients exert no force on each other: the orientifolds have negative tension such that the gravitational repulsion between fluxes and orientifolds exactly cancels the Coulomb attraction. This will become an issue once we insert SUSY-breaking anti-branes (see below).

Step 2: Quantum corrections

One of the major breakthroughs of the KKLT paper (which I am not criticizing here) is a rather explicit realization of how the aforementioned quantum corrections stabilize all scalar fields in a stable Anti-de Sitter minimum that is furthermore SUSY. As expected, quantum corrections do give a mass to those scalar fields that were left massless at the classical level in the GKP solution. From that point of view it was not a surprise. The surprise was the simplicity, the level of explicitness, and most importantly, the fact that the quantum stabilization can be done in a regime where you can argue that other quantum corrections will not mess up the vacuum. Much of the original classical supergravity background is preserved by the quantum corrections since the stabilization occurs at weak coupling and large volume. Both coupling and volume are dynamical fields that need to be stabilized at self-consistent values, meaning small coupling and large (in string units) volume of the internal space. If this were not the case, then one would be too far outside the classical regime for this quantum perturbation to be leading order.
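For orientation, the quantitative core of this step can be quoted from the standard KKLT analysis (these are the textbook formulas, not taken from this post; $$\rho$$ is the single Kähler modulus, real and equal to $$\sigma$$ at the minimum):

```latex
% KKLT step 2 in formulas: tree-level Kaehler potential plus a
% non-perturbative correction to the superpotential
K = -3 \ln(\rho + \bar\rho), \qquad W = W_0 + A e^{-a\rho} ,
% the SUSY condition D_\rho W = 0 at \rho = \sigma fixes
W_0 = -A e^{-a\sigma}\left(1 + \tfrac{2}{3} a\sigma\right),
% and the vacuum energy there is negative: a SUSY AdS_4 minimum,
V_{\rm AdS} = -3 e^{K} |W|^2 = -\frac{a^2 A^2 e^{-2 a \sigma}}{6 \sigma} .
```

A small $$W_0$$, tuned by the fluxes, puts the minimum at large volume and weak coupling, which is the self-consistency requirement described above.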

So what KKLT showed is exactly how the Dine-Seiberg problem can be circumvented using fluxes. But, in my opinion, something even more important was done at this step in the KKLT paper. Prior to KKLT one could not have claimed on solid grounds that string theory allows solutions that are perceived to an observer as four-dimensional. Probably the most crude phenomenological demand on a string theory vacuum remained questionable. Of course flux compactifications were known, for example the celebrated Freund-Rubin vacua like $$AdS_5\times S^5$$ which were crucial for developing holography. But such vacua are not lower-dimensional in any phenomenological way. If we were to throw you inside the $$AdS_5\times S^5$$ you would not see a five-dimensional space, but you would observe all ten dimensions.

KKLT had thus found the first vacua with all moduli fixed that have a Kaluza-Klein scale that is hierarchically smaller than the length-scale of the AdS vacuum. In other words, the cosmological constant in KKLT is really tiny.

But the cosmological constant was negative and the vacuum of KKLT was SUSY. This is where KKLT came with the second, and most vulnerable, insight of their paper: the anti-brane uplifting.

Step 3: Uplifting with anti-D3 branes

Let us go back to the Bianchi identity equation and the physics it entails. If one adds D3 branes to the KKLT background, the cosmological constant does not change and SUSY remains unbroken. The reason is that D3 branes are BPS with respect to both the fluxes and the orientifold planes. Intuitively this is again clear from the no-force condition. D3 branes repel orientifolds gravitationally as strongly as they attract them "electromagnetically", and vice versa for the fluxes (recall that the fluxes can be seen as a smooth D3 distribution). This also implies that D3 branes can be put at any position in the manifold without changing the vacuum energy: the energy in the tension of the branes gets cancelled by the decrease in fluxes required to satisfy the tadpole condition (Gauss' law).

Anti-D3 branes instead break SUSY. Heuristically that is straightforward, since the no-force condition is violated. The anti-D3 branes can be drawn towards the non-dynamical O-planes without harm, since they cannot annihilate with each other. The fluxes, however, are another story that I will get to shortly. The energy added by the anti-branes is twice the anti-brane tension $$T_{\overline{D3}}$$: the gain in energy due to the addition of fluxes, required to cancel off the extra anti-D3 charges, equals the tension of the anti-brane. Hence we get $V_{\rm NEW} = V_{\rm SUSY} + 2 T_{\overline{D3}}$ At first it seems that this new potential can never have a de Sitter critical point, since $$T_{\overline{D3}}$$ is of the order of the string scale (a huge amount of energy) whereas $$V_{\rm SUSY}$$ was supposed to be a very tiny cosmological constant. One can verify that the potential has a runaway structure towards infinite volume. What comes to the rescue is space-time warping. Mathematically, warping means that the space-time metric has the following form $\dd s_{10}^2 = e^{2A} \dd s_4^2 + \dd s_6^2$ where $$\dd s_4^2$$ is the metric of four-dimensional space, $$\dd s_6^2$$ the metric on the compact dimensions (a conformal Calabi-Yau, in case you care) and $$\exp(2A)$$ is the warp-factor, a function that depends on the internal coordinates. A generic compactification contains warped throats, regions of space where the function $$\exp(A)$$ can become exponentially small. This is often depicted using phallus-like pictures of warped Calabi-Yau spaces, such as the one below (taken from the KPV paper; I will come to KPV in a minute):

Consider a localized object with non-zero energy: that energy is significantly red-shifted in regions of high warping. For anti-branes the tension picks up the redshift factor $\exp(4A) T_{\overline{D3}}.$ This can bring a string-scale energy all the way down to the lowest energy scales in nature. The beauty of this idea is that the redshift occurs dynamically; an anti-brane literally feels a force towards that region, since that is where its energy is minimized. So this redshift effect seems completely natural; one just needs a warped throat.
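To put numbers to this, one can use the standard GKP/Klebanov-Strassler estimate for the warp factor at the tip of a throat in terms of the flux quanta $$K$$ and $$M$$ (quoted from the literature, not derived in this post):

```latex
% Warp factor at the tip of a Klebanov-Strassler throat (GKP estimate):
e^{A_{\rm min}} \sim \exp\!\Big(-\frac{2\pi K}{3 g_s M}\Big)
\qquad\Longrightarrow\qquad
e^{4A_{\rm min}}\, T_{\overline{D3}}
  \sim \exp\!\Big(-\frac{8\pi K}{3 g_s M}\Big)\, T_{\overline{D3}} .
```

For moderate flux quanta, say $$K/(g_s M)\approx 5$$, the redshift factor is $$e^{-40\pi/3}$$, of order $$10^{-18}$$: a string-scale tension is brought down many orders of magnitude without any fine-tuning of the inputs.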

The KKLT scenario then continues by observing that with a tunable warping, a new critical point in the potential arises that is a meta-stable de Sitter vacuum as shown in the picture below.

This was verified by KKLT explicitly using a Calabi-Yau with a single Kähler modulus.

The reason for the name uplifting then becomes obvious: near the critical point of the potential it indeed looks as if the potential has been lifted by a constant to a de Sitter value. The lift is not actually by a constant, but the dependence of the uplift term on the Kähler modulus is practically flat compared to the sharp SUSY part of the potential.
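To make this concrete, here is a small numerical sketch of the standard single-Kähler-modulus KKLT potential. The parameter values (W0 = -1e-4, A = 1, a = 0.1, D = 3e-9) are illustrative assumptions of the same order as the ones used in the original paper, and the uplift is modeled with the warped D/σ³ scaling; treat the whole thing as a cartoon, not a derivation.

```python
import numpy as np

def kklt_potential(sigma, W0=-1e-4, A=1.0, a=0.1, D=0.0):
    """Sketch of the single-modulus KKLT potential (4D Planck units).

    V_SUSY follows from K = -3 ln(2 sigma) and W = W0 + A exp(-a sigma);
    the anti-D3 uplift is modeled as D / sigma**3 (warped scaling).
    """
    e = np.exp(-a * sigma)
    V_susy = (a * A * e / (2 * sigma**2)) * (sigma * a * A * e / 3 + W0 + A * e)
    return V_susy + D / sigma**3

def interior_minimum(D):
    """Locate the first interior local minimum of V on a grid: (sigma, V)."""
    s = np.linspace(80.0, 300.0, 20000)
    V = kklt_potential(s, D=D)
    # interior local minima: V[i] below both neighbours
    idx = np.where((V[1:-1] < V[:-2]) & (V[1:-1] < V[2:]))[0] + 1
    i = idx[0]
    return s[i], V[i]

s_ads, V_ads = interior_minimum(D=0.0)    # SUSY AdS minimum, V < 0
s_ds,  V_ds  = interior_minimum(D=3e-9)   # uplifted minimum, V > 0
print(f"AdS: sigma ~ {s_ads:.0f}, V ~ {V_ads:.2e}")
print(f" dS: sigma ~ {s_ds:.0f}, V ~ {V_ds:.2e}")
```

With these numbers the AdS minimum sits near σ of order a hundred, and switching on the small uplift term shifts it slightly and raises it to a (meta-stable) positive value, while the runaway to infinite volume survives at large σ.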

I am glossing over many issues, such as the stability of the other directions, but all of this seems under control (the arguments are based on a parametric separation between the complex structure moduli masses and the masses of the Kähler moduli).

The KKLT scrutiny

The issues with the KKLT scenario that have been discussed in the last five years have to do with back-reaction. As mentioned earlier, the no-force condition becomes violated once we insert the anti-D3 branes. Given the physical interpretation of the 3-form fluxes as a cloud of D3 branes, you can guess what the qualitative behavior of the back-reaction is: the fluxes are drawn gravitationally and electromagnetically towards the anti-branes, leading to a local increase of the 3-form flux density near the anti-brane.

Although the above interpretation was not given, this effect was first found in 2009 independently by Bena, Grana and Halmagyi in Saclay (France) and by McGuirk, Shiu and Sumitomo in Madison (Wisconsin, USA). These authors constructed the supergravity solution that describes a back-reacting anti-brane. This would clearly be an impossible job, were it not for three simplifying assumptions:
• They put the anti-brane inside the non-compact warped Klebanov-Strassler throat, since that is the canonical example of a throat in which computations are doable. This geometry consists of a radial coordinate measuring the distance from the tip and five angles that span the manifold, which is topologically $$S^2\times S^3$$. The non-compactness implies that we can circumvent the use of the quantum corrections of KKLT to have a space-time solution in the first place. Non-compact geometries work differently from compact ones. For example, the energy of the space-time (ADM mass) does not need to affect the cosmological constant of the 4D part of the metric. Roughly, this is because there is no volume modulus that needs to be stabilized. In the end one should ‟glue″ the KS throat, at large distance from the tip, to a compact Calabi-Yau orientifold.

• The second simplification was to smear the anti-D3 branes over the tip of the throat. This means that the solution describes anti-D3's homogeneously distributed over the tip. In practice this implies that the supergravity equations of motion become a (large) set of coupled ODE's.

• These two papers solved the ODE's approximately: they treated the anti-brane SUSY breaking as small and expanded the solution in terms of a SUSY-breaking parameter, keeping the first terms in the expansion.
Even with these assumptions it was an impressive task to solve the ODE's. In this task the Saclay paper was the more careful one in connecting the solution at small radius to the solution at large radius. In any case these two papers found the same result, which was unexpected at the time: the 3-form flux density diverges at the tip of the throat. More precisely, the following scalar quantity blows up at the tip:$H_3^2 \to \infty.$ (I am ignoring the string coupling in all equations.) Diverging fluxes near brane sources are rather mundane (a classical electron has a diverging electric field near its position). But the real reason for worry is that this singularity is not in the field sourced by the brane: that would be the $$F_5$$ field strength (which indeed blows up as well), whereas the 3-form flux $$H_3$$ is not directly sourced by the anti-D3 branes at all.

In light of the physical picture I outlined above, this divergence is not that strange. The D3 charges in the fluxes are pulled towards the anti-D3 branes, where they pile up. The sign of the divergence in the 3-form fluxes is indeed that of a D3 charge density, not of an anti-D3 charge density.

Whenever a supergravity solution has a singularity one has to accept that one is outside of the supergravity approximation and full-blown string theory might be necessary to understand it. And I agree with that. But singularities can — and should — still be interpreted, and the interpretation might be sufficient to know or expect whether stringy corrections will resolve it.

So what was the attitude of the community when these papers came out? As I recall it, string cosmologists are not easily woken up, and the majority of experts who took the time to form an opinion believed that the three assumptions above (especially the last two) were the reason for the singularity. To cut a long story short (and to painfully pass over my own work showing this was wrong): it is now proven that the same singularity is still there when the assumptions are undone. The full proof was presented in a paper that gets too little love.

So what was the reaction of the few experts who still cared to follow this? They turned to an earlier suggestion by Dymarsky and Maldacena that the real KKLT solution is not described by anti-D3 branes at the tip of the throat but by spherical 5-branes that carry anti-D3 charges (a.k.a. the Myers effect). This, they argued (hoped?), would resolve the singularity. In fact, a careful physicist could have predicted some singularity based on the analogy with other string theory models of 3-branes and 3-form fluxes. Such solutions often come with singularities that are only resolved when the 3-branes polarise. But such singularities can be of any form. The fact that this one so nicely corresponds to a diverging D3 charge density should not be ignored — and it too often is.

So, again, I agree that the KKLT solution should really contain 5-branes instead of 3-branes, and I will discuss this below. But before I do, let me mention a very solid argument for why this, too, seems not to help.

If indeed the anti-D3 branes ‟puff″ into fuzzy spherical 5-branes leading to a smooth supergravity solution, then one should be able to ‟heat up″ the solution. Putting gravity solutions at finite temperature means adding an extra warp-factor in front of the time-component of the metric, which creates an event horizon at a finite distance. In a well-known paper by Gubser it was argued that this provides us with a classification of acceptable singularities in supergravity. If a singularity can be cloaked by a horizon by adding sufficient temperature, it has a chance of being resolved by string theory. The logic behind this is simple but really smart: if there is some stringy physics that resolves a sugra singularity, one can still heat up the branes that live at the singularity. One can then add so much temperature that the horizon literally becomes parsecs in length, such that the region at and outside the horizon becomes amenable to classical sugra and should be smooth. Here is the surprise: that doesn't work. In a recent paper, the techniques of arXiv:1301.5647 were extended to include finite temperature, and what happened is that the diverging flux density simply tracks the horizon; it does not want to fall inside. The metric Ansatz that was used to derive this no-go theorem is compatible with spherical 5-branes inside the horizon. So it seems difficult to evade this no-go theorem.

The reaction so far on this from the community, apart from a confused referee report, is silence.

But still let us go back to zero temperature, since there is some beautiful physics taking place. I said earlier that the true KKLT solution should include 5-branes instead of anti-D3 branes. This was described prior to KKLT in a beautiful paper by Kachru, Pearson and Verlinde, called KPV (again the same letter ‛K′). The KPV paper is both the seed and the backbone of the KKLT paper and its follow-ups, like KKLMMT, but for some obscure reason it is less cited. KPV investigated the ‟open-string″ stability of probe anti-D3 branes placed at the tip of the KS throat. They realised that the 3-form fluxes can materialize into actual D3 branes that annihilate the anti-D3 branes, which implies a decay to the SUSY vacuum. But they found that this materialization of the fluxes occurs only non-perturbatively if the anti-brane charge $$p$$ is small enough: $\frac{p}{M} \ll 1.$ In the above equation $$M$$ denotes a 3-form flux quantum that sets the size of the tip of the KS throat. The beauty of this paper resides in the fact that they understood how the brane-flux annihilation takes place; I necessarily have to gloss over the details, so you may not really follow unless you already know them. In any case, here it comes: the anti-D3 brane polarizes into a spherical NS5 brane wrapping a finite contractible 2-sphere inside the 3-sphere at the tip of the KS throat, as in the picture below:

One can show that this NS5 brane carries $$p$$ anti-D3 charges at the South pole and $$M-p$$ D3 charges at the North pole. So if it is able to move over the equator from the South to the North pole, the SUSY-breaking state decays into the SUSY vacuum: recall that the fluxes have materialized into $$M$$ D3 branes that annihilate with the $$p$$ anti-D3 branes, leaving $$M-p$$ D3 branes behind in the SUSY vacuum. But what pushes the NS5 to the other side? Exactly the 3-form flux $$H_3$$. This part is easy to understand: an NS5 brane is magnetically charged with respect to the $$H_3$$ field strength. In the probe limit KPV found that this force is small enough to create a classical barrier if $$p$$ is small enough. So we get a meta-stable state, nice and very beautiful. But what would they have thought if they could have looked into the future and seen that the same 3-form flux that pushes the NS5 brane diverges in the back-reacted solution? I am not sure, but I cannot resist quoting a sentence from their paper:
One foreseeable quantitative difference, for example, is that the inclusion of the back-reaction of the NS5 brane might trigger the classical instabilities for smaller values of $$p/M$$ than found above.
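For the curious, the qualitative KPV story can be checked with a few lines of numerics. The potential below is the commonly quoted schematic form of their NS5 probe potential for the brane position ψ on the tip 3-sphere; I am writing it from memory, so treat the normalization and the exact threshold as assumptions. The point is only the qualitative one: a metastable minimum exists for small p/M and disappears when p/M grows.

```python
import numpy as np

B0_4 = 0.93266**2  # (b0^2)^2 with b0^2 ~ 0.93266 at the KS tip

def kpv_potential(psi, p_over_M):
    """Schematic KPV probe potential for a spherical NS5 brane at angle psi.

    Normalized so that V(0) is the energy of the p anti-D3 state and
    V(pi) = 0 is the SUSY vacuum reached after brane-flux annihilation.
    """
    g = np.pi * p_over_M - psi + np.sin(psi) * np.cos(psi)
    return np.sqrt(B0_4 * np.sin(psi)**4 + g**2) + g

def is_metastable(p_over_M, n=20000):
    """True if V(psi) has an interior local minimum, i.e. a classical barrier."""
    psi = np.linspace(0.0, np.pi, n)
    V = kpv_potential(psi, p_over_M)
    interior = (V[1:-1] < V[:-2]) & (V[1:-1] < V[2:])
    return bool(interior.any())

print(is_metastable(0.03))  # small p/M: a metastable NS5 minimum exists
print(is_metastable(0.20))  # large p/M: the brane rolls straight to psi = pi
```

On a fine grid this reproduces the qualitative KPV statement that metastability requires small $$p/M$$ (the critical value they quote is roughly 0.08).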
It should be clear that this brane-flux mechanism suggests a trivial way to resolve the singularity. The anti-brane is thrown into the throat and starts to attract the flux, which keeps piling up until it becomes too strong, causing the flux to annihilate with the anti-brane. Then the flux pile-up stops, since there is no anti-brane anymore. At no point does this time-dependent process lead to a singular flux density. The singularity was just an artifact of forcing an intrinsically time-dependent process into a static Ansatz. This idea is explained in two papers: arXiv:1202.1132 and arXiv:1410.8476.

I am often asked whether a probe computation can ever fail outright, rather than just being slightly corrected. I am not sure, but what I do know is that KPV do not really have a trustworthy probe regime: for reasons explained in the KPV paper, they have to work in the strongly coupled regime, and they furthermore have a spherical NS5 brane wrapping a cycle of stringy length scale, which is also worrisome.

Still one can argue that the NS5 brane back-reaction will be slightly different from the anti-D3 back-reaction, in exactly such a way as to resolve the divergence. I am sympathetic to this (if one ignores the trouble with the finite temperature, which one cannot ignore). However, again, computations suggest this does not work. Here I will go even faster, since this guest blog is getting lengthy.

This issue has been investigated in papers such as arXiv:1212.4828, where it was shown, under certain assumptions, that the polarisation does not occur in a way that resolves the divergence. Note that, as in the finite temperature situation, the calculation could have worked out in favor of the KKLT model, but it did not! At the moment I am working on brane models which have exactly the same 3-form singularity but are conceptually different, since the 4D space is AdS and SUSY is not broken. In that circumstance the same singularity does get resolved in that way. My point is that the intuition for how the singularity should get resolved does work in certain cases, but so far it does not work for the models relevant to KKLT.

What is the reaction of the community? Well, they are cornered into saying that it is the simplifications made in the derivation of the ‛no polarisation′ result that are causing trouble.

But wait a minute... could it perhaps be that at this point the burden of proof has shifted? Apparently not, and that, in my opinion, is starting to become very awkward.

It is true that there is still freedom for the singularity to be resolved through brane polarisation. There is just one issue with that: to be able to compute this in a supergravity regime requires tuning parameters away from the small-$$p$$ limit. Bena et al. have pushed this idea recently in arXiv:1410.7776 and were so kind as to assume that the singularity gets resolved, but they found that the vacuum is then necessarily tachyonic. It can be argued that this is obvious, since they necessarily had to take a limit away from what KPV want for stability (remember $$p\ll M$$). But then again, the tachyon they find has nothing to do with a perturbative brane-flux annihilation. Once again a situation in which an honest-to-God computation could have turned out in favor of KKLT, and it did not.

Here comes the bias of this post: were it not for the clear physical picture behind the singularity, I might be less surprised that there is a camp that is not too worried about the consistency of KKLT. But there is a clear picture, with trivial intuition, that I already alluded to: the singularity, when left unresolved, indicates that the anti-brane is perturbatively unstable, and once you realise that, the singularity is resolved by allowing the brane to decay. At least I hope the intuition behind this interpretation is clear. It simply uses the fact that a higher charge density in the fluxes (near the anti-D3) increases the probability for the fluxes to materialize into actual D3 branes that eat up the anti-branes. KPV told us exactly how this process occurs: the spherical NS5 brane should not feel too strong a force pulling it towards the other side of the sphere. But that force is proportional to the density of the 3-form fluxes... and it diverges. End of story.

What now?

I guess that at some point these ‟anti-KKLT″ papers will stop being produced, as their producers run out of ideas for computations that probe the stability of the would-be KKLT vacuum. If the first evidence in favor of KKLT is found in that endeavor, I can assure you that it will be published as such. It just has not happened thus far.

We are facing the following problem: to fully settle the discussion, computations outside the sugra regime have to be done (although I believe that the finite temperature argument suggests that this will not help). Were fluxes not invented to circumvent this? It seems that the anti-brane back-reaction brings us back to the Dine-Seiberg problem.

So we are left with a bunch of arguments against what is/was a beautiful idea for constructing dS vacua. The arguments against carry an order of rigor higher than the original models. I guess we now need yet another level of rigor on top from those who want to keep using the original KKLT model.

What about alternative de Sitter embeddings in string theory? Lots of hard work has been done there. Let me do it injustice by summarizing it as follows: none of these models are convincing, to me at least. They are either borderline within the supergravity regime, or we do not know whether supergravity can be trusted at all (as with non-geometric fluxes). Very popular are F-term quantum corrections to the GKP vacuum which are used to stabilize the moduli in a dS vacuum. But none of this is done from the full 10D point of view; instead it sits somewhere between 4D effective field theory and 10D. KKLT at least had a full 10-dimensional picture of the uplifting, and that is why it can be scrutinized.

It seems as if string theory is allergic to de Sitter vacua. Consider the following: any grad student can find an anti-de Sitter solution in string theory. Why not de Sitter? All claimed de Sitter solutions are rather phenomenological, in the sense that the cosmological constant is small compared with the KK scale. I guess we had better first try to find unphysical dS vacua, say a six-dimensional de Sitter solution with a large cosmological constant. But we cannot, or at least nobody ever did. Strange, right? Many say: ‟you just have to work harder″. That ‛harder′ always implies ‛less explicit′, and then suddenly a landscape of de Sitter vacua opens up. I seriously doubt that; maybe it just means we are sweeping problems under the carpet of effective field theory?

I hope I have been able to convince you that the search for de Sitter vacua is tough if you want to do this truly top-down. The most popular construction method, the KKLT anti-brane uplifting, comes with a surprise: a singularity in the form of a diverging flux density. It has so far persistently survived all attempts to resolve it. This divergence is resolved, however, once you are willing to accept that the de Sitter vacuum is not meta-stable but instead a solution with decaying vacuum energy. Does string theory want to tell us something deep about quantum gravity?

### Christian P. Robert - xi'an's og

some LaTeX tricks

Here are a few LaTeX tricks I learned or rediscovered when working on several papers the past week:

1. I am always forgetting how to make aligned equations with a single equation number, so I found this solution on the TeX forum of stackexchange. Namely, use the equation environment and then an aligned environment inside it. Or the split environment. But it does not always work…
2. Another frustrating black hole is how to deal with integral signs that do not adapt to the integrand. Too bad we cannot use \left\int, really! Another stackexchange question led me to the bigints package. Not perfect though.
3. Pierre Pudlo also showed me the commands \graphicspath{{dir1}{dir2}} and \DeclareGraphicsExtensions{.pdf,.png,.jpg}, which avoid coding the entire path to each image and set a preference order on extension types, respectively. The second one is fairly handy when working on drafts. The first one does not seem to work with symbolic links, though…
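Since I keep forgetting it too, here is a minimal compilable sketch combining tricks 1 and 3 (the directory names are placeholders):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{graphicx}
% Trick 3: search these directories and extensions for images
\graphicspath{{figures/}{plots/}}          % placeholder directory names
\DeclareGraphicsExtensions{.pdf,.png,.jpg}
\begin{document}
% Trick 1: several aligned lines sharing a single equation number
\begin{equation}
  \begin{aligned}
    f(x) &= (x+1)^2 \\
         &= x^2 + 2x + 1
  \end{aligned}
  \label{eq:aligned-example}
\end{equation}
\end{document}
```

The split environment can replace aligned here with the same effect; both inherit the single number from the enclosing equation.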


### CERN Bulletin

CERN Bulletin Issue No. 47-48/2014
Link to e-Bulletin Issue No. 47-48/2014. Link to all articles in this issue.

### Peter Coles - In the Dark

Warning! Offensive Image…

No reasonable person could possibly take offence at that tweet from Emily Thornberry, yet she has had to resign from the Shadow Cabinet because of it. It is beyond belief how pathetic British politics and the British media have become.

### astrobites - astro-ph reader's digest

A New Way with Old Stars: Fluctuation Spectroscopy

Astronomers use models to derive properties of individual stars that we cannot directly observe, such as mass, age, and radius. The same holds for groups of stars (a galaxy or a star cluster). How do we test how accurate these models are? Well, we compare model predictions against observations. One problem with current stellar population models is that they remain untested for old populations of stars (because these are rare). These old stars are important because they produce most of the light from massive elliptical galaxies. So a wrong answer from the models means a wrong answer on various properties of massive elliptical galaxies, such as their age and metallicity. (Houston, we have a problem.)

Fear not — this paper introduces fluctuation spectroscopy as a new way to test stellar population models for elliptical galaxies. It focuses on a group of stars known as red giants, stars nearing the end of their lives. The spectra of red giants have features (TiO and water molecular bands) that can be used to obtain the chemical abundances, age, and initial mass function (IMF) of a galaxy. Red giants are very luminous. For instance, once our beloved Sun grows into old age as a red giant, it will be thousands of times more luminous than it is today. As such, red giants dominate the light of early-type galaxies (another name for elliptical galaxies). By looking at an image of an early-type galaxy, we can infer that bright pixels contain more red giants than faint pixels. Figure 1 illustrates this effect. Intensity variations from pixel to pixel are due to fluctuations in the number of red giants. By comparing the spectra of pixels with different brightness, one can isolate the spectral features of red giants. Astronomers can then analyse these spectral features to derive galaxy properties to be checked against model predictions.

FIG. 1 – Top left figure shows a model elliptical galaxy based on observation of NGC 4472. The right figure zooms in on a tiny part of the galaxy, and shows the pixel-to-pixel brightness variations within that tiny region. Figures on the bottom panel further zoom in on a bright (white) and a faint (black) pixel. The bright pixel (bottom left) contains many more bright red giant stars, represented as red dots, compared to the faint pixel (bottom right). The inset figures are color versus magnitude diagrams of the stars in these pixels, where there are more luminous giant stars (open circles) in the bright pixel.

The authors applied fluctuation spectroscopy to NGC 4472, the brightest galaxy in the Virgo cluster. They obtained images of the galaxy at six different wavelengths using narrow-band filters (filters that allow only a few wavelengths of light, or emission lines, to pass through; see this or this) in the Advanced Camera for Surveys aboard the Hubble Space Telescope. In addition, they acquired deep broad-band images (images obtained using broad-band filters that allow a large portion of light to go through) of the galaxy. These broad-band images, because of their high signal-to-noise compared to the narrow-band images (broad-band images receive more light and so have higher signals), are used to measure the flux in each pixel in order to measure how the brightness changes. Next, the authors divided narrow-band images taken in adjacent narrow-band filters by one another. Recall that since narrow-band filters allow only certain emission lines to get through, the ratio of fluxes in two narrow-band filters (an “index image”) is a proxy for the distribution of stellar types in each pixel, because different stars produce different emission lines. The money plot of this paper, Figure 2, shows the relation between the averaged indices of the index images and the surface brightness fluctuation; it illuminates the fact that pixels with more red giants (larger SBF) produce a different spectrum (different indices) than pixels with fewer giants (lower SBF).
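A toy version of the index-image construction may help make this concrete. This is purely illustrative and not the authors' pipeline: the images, filter responses, and per-pixel giant counts below are all invented, but the logic (ratio of adjacent narrow bands as a stellar-type proxy, correlated with surface brightness fluctuations) is the one described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy images: a smooth galaxy profile plus per-pixel fluctuations from
# a finite number of bright giants (all numbers are made up).
shape = (64, 64)
smooth = np.full(shape, 100.0)
giants = rng.poisson(5.0, shape)          # number of giants per pixel
broad = smooth + 20.0 * giants            # broad-band flux per pixel
narrow_a = 0.30 * smooth + 8.0 * giants   # narrow band inside a TiO feature
narrow_b = 0.30 * smooth + 5.0 * giants   # adjacent narrow band (continuum)

# "Index image": ratio of adjacent narrow-band images, a stellar-type proxy
index = narrow_a / narrow_b

# Surface-brightness fluctuation per pixel, normalized to the mean
sbf = broad / broad.mean()

# Pixels with more giants (high SBF) show a different mean index
hi = index[sbf > 1.0].mean()
lo = index[sbf <= 1.0].mean()
print(hi > lo)  # True in this toy model: giant-rich pixels are TiO-stronger
```

Because both the index and the broad-band flux increase monotonically with the number of giants in a pixel, the bright-pixel index exceeds the faint-pixel index, which is the trend the paper's Figure 2 exploits.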

By fitting observed index variations with models, we can obtain a predicted spectrum. The authors compared observed index variations of NGC 4472 with modeled index variations derived from the Conroy & van Dokkum (2012) stellar population synthesis models, shown in Figure 3; the models perform well in characterizing the galaxy.

The last thing the authors analysed is the effect of changing model parameters on the indices of the index images, in particular by varying age, metallicity, and the IMF. They found that the indices are sensitive to age and metallicity, enabling them to exclude models whose ages and metallicities are incompatible with observations. One interesting result is that the indices are also sensitive to the presence of late M giant stars, which allows one to constrain their contribution to the total light from a galaxy. This is useful because standard stellar population synthesis models for early-type galaxies do not include these cool giants.

In conclusion, the authors introduced fluctuation spectroscopy as a probe of stellar type distributions in old populations. They applied this method to NGC 4472 and found that the observations agree very well with model predictions. Various perturbations are introduced into the model, with the most important result being that one can quantify the contribution of late M giants to the integrated light of early-type galaxies. Before ending, the authors propose directions for future work, which include obtaining actual spectra rather than narrow-band images and studying larger ranges of surface brightness fluctuations.

FIG. 2 – Vertical axis is the flux ratio in a narrow-band filter and the adjacent band. It is a measure of the different number of different stars present. The horizontal axis is surface brightness fluctuation, SBF. SBF = 1 is the mean, while SBF < 1 represents little fluctuation and SBF > 1 represents high fluctuation. There is a trend between index and SBF because red giants produce a larger-than-average brightness and a different spectrum that changes the index of different index images.

FIG. 3 – The top panel compares observed indices (dots) of NGC 4472 with model indices (lines). The vertical and horizontal axes are the same as Figure 2. The bottom panel shows the differences between observed and predicted indices. These figures suggest that model predictions agree amazingly well with observations.

## November 20, 2014

### astrobites - astro-ph reader's digest

Real-Time Stellar Evolution

Images of four planetary nebulae taken by the Hubble Space Telescope using a narrow Hα filter. All of these feature hydrogen-rich central stars.

To get an idea of how stars live and die, we can’t just pick one and watch its life unfold in real time. Most stars live for billions of years! So instead, we do a population census of sorts. Much like you can study how humans age by taking a “snapshot” of individuals ranging from newborn to elderly, so too can we study the lives of stars.

But like all good things in life (and stars), there are exceptions. Sometimes, stellar evolution happens on more human timescales—tens to hundreds of years rather than millions or billions. One such exception is the topic of today’s paper: planetary nebulae, and the rapidly dying stellar corpses responsible for all that glowing gas.

All stars similar to our Sun, or up to about eight times as massive, will end their lives embedded in planetary nebulae like these. The name is a holdover from their discovery and general appearance—we have long known that planetary nebulae have nothing to do with planets. Instead, they are the former outer layers of a star: an envelope of material hastily ejected when gravity can no longer hold a star together. In its final death throes, what’s left of the star rapidly heats up and begins to ionize gas in the nebula surrounding it.

A Deathly Glow

Ionized gas is the telltale sign that the central star in a planetary nebula isn’t quite done yet. When high-energy light from a dying star rams into gas in its planetary nebula, some atoms of gas are so energized that electrons are torn from their nuclei. Hotter central stars emit more light, making the ionized gas glow brighter. This final stage of stellar evolution is what the authors of today’s paper observe in real time for a handful of planetary nebulae.

Most planetary nebulae show increasing oxygen emission with time as the central star heats up and ionizes gas in the nebula. The stars are classified into one of three categories based on their spectra. Points indicate the average change in oxygen emission per year, and dashed lines show simple stellar evolution models for stars with final masses between 0.6 and 0.7 times that of the Sun.

The figure above shows how oxygen emission in many planetary nebulae has changed brightness over time. Each point represents data spanning at least ten years and brings together new observations with previously published values in the literature. Distinct symbols assign each star to one of three categories: stars with lots of hydrogen in their spectra (H rich), Wolf-Rayet ([WR]) stars with many emission lines in their spectra (indicating lots of hot gas very close to the star), and weak emission line stars (wels). The fact that most stars show an increase in planetary nebula emission—the stars are heating up—agrees with our expectations.

Oxygen emission flux as a function of time for three planetary nebulae over 30+ years. The top two systems, M 1-11 and M 1-12, have hydrogen-rich stars that cause increasing emission as expected. The bottom pane, SwSt 1, shows a Wolf-Rayet star with a surprising decreasing trend.

The earliest observation in this study is from 1978. Spectrographs and imaging techniques have improved markedly since then! While some changes in flux are from different observing techniques, the authors conclude that at least part of each flux increase is real. What’s more, hydrogen-rich stars seem to agree with relatively simple evolution models, shown as dashed lines on the figure above. (Stars move toward the right along the lines as they evolve.) More evolved stars cause oxygen in the nebula to glow ever brighter, but the rate of increase in oxygen emission slows as the star ages and loses fuel.

There’s Always an Oddball

However, the authors find that some planetary nebulae don’t behave quite as consistently. None of the more evolved Wolf-Rayet systems show increasing emission with time. In fact, one of them, SwSt 1 (bottom panel of the figure above), shows a steady decline in oxygen emission! This suggests the hot gas closest to the star may be weakening even as the star is getting hotter, though the cause is not fully understood.

This unique glimpse into real-time stellar evolution is possible because so many changes happen to a star as it nears the end of its life. Eventually, these hot stellar remnants will become white dwarfs and slowly cool for eternity. Until then, not-dead-yet stars and their planetary nebulae have lots to teach us.

### Symmetrybreaking - Fermilab/SLAC

CERN frees LHC data

Anyone can access collision data from the Large Hadron Collider through the new CERN Open Data Portal.

Today CERN launched its Open Data Portal, which makes data from real collision events produced by LHC experiments available to the public for the first time.

“Data from the LHC program are among the most precious assets of the LHC experiments, that today we start sharing openly with the world,” says CERN Director General Rolf Heuer. “We hope these open data will support and inspire the global research community, including students and citizen scientists.”

The LHC collaborations will continue to release collision data over the coming years.

The first high-level and analyzable collision data openly released come from the CMS experiment and were originally collected in 2010 during the first LHC run. Open source software to read and analyze the data is also available, together with the corresponding documentation. The CMS collaboration is committed to releasing its data three years after collection, after they have been thoroughly studied by the collaboration.

“This is all new and we are curious to see how the data will be re-used,” says CMS data preservation coordinator Kati Lassila-Perini. “We’ve prepared tools and examples of different levels of complexity from simplified analysis to ready-to-use online applications. We hope these examples will stimulate the creativity of external users.”

In parallel, the CERN Open Data Portal gives access to additional event data sets from the ALICE, ATLAS, CMS and LHCb collaborations that have been prepared for educational purposes. These resources are accompanied by visualization tools.

All data on OpenData.cern.ch are shared under a Creative Commons CC0 public domain dedication. Data and software are assigned unique DOI identifiers to make them citable in scientific articles. And software is released under open source licenses. The CERN Open Data Portal is built on the open-source Invenio Digital Library software, which powers other CERN Open Science tools and initiatives.

CERN published a version of this article as a press release.

Like what you see? Sign up for a free subscription to symmetry!


### arXiv blog

Twitter "Exhaust" Reveals Patterns of Unemployment

Twitter data mining reveals surprising detail about socioeconomic indicators but at a fraction of the cost of traditional data-gathering methods, say computational sociologists.

Human behaviour is closely linked to social and economic status. For example, the way an individual travels round a city is influenced by their job, their income and their lifestyle.

### Peter Coles - In the Dark

Hubble Images With Music By Herschel

Too busy for a full post today, so here’s a little stocking filler. The perhaps familiar pictures are taken by the Hubble Space Telescope, but the music is by noted astronomer (geddit?) Sir William Herschel: the second movement of his Chamber Symphony in F Major, marked Adagio e Cantabile. Although best known as an astronomer, Herschel was a capable musician and composer with a style very obviously influenced by his near contemporary George Frideric Handel. Although music of this era puts me on a High Harpsichord Alert, I thought I’d share this example of music for those of you unfamiliar with his work…

### Jester - Resonaances

Update on the bananas
One of the most interesting physics stories of this year was the discovery of an unidentified 3.5 keV x-ray emission line from galactic clusters. This so-called bulbulon can be interpreted as a signal of a sterile neutrino dark matter particle decaying into an active neutrino and a photon. Some time ago I wrote about the banana paper that questioned the dark matter origin of the signal. Much has happened since, and I owe you an update. The current experimental situation is summarized in this plot:

To be more specific, here's what's happening.

• Several groups searching for the 3.5 keV emission have reported negative results. One of those searched for the signal in dwarf galaxies, which offer a much cleaner environment allowing for a more reliable detection. No signal was found, although the limits do not conclusively exclude the original bulbulon claim. Another study looked for the signal in multiple galaxies. Again, no signal was found, but this time the reported limits are in severe tension with the sterile neutrino interpretation of the bulbulon. Yet another study failed to find the 3.5 keV line in the Coma, Virgo and Ophiuchus clusters, although they detect it in the Perseus cluster. Finally, the banana group analyzed the morphology of the 3.5 keV emission from the Galactic center and Perseus and found it incompatible with dark matter decay.
• The discussion about the existence of the 3.5 keV emission from the Andromeda galaxy is ongoing. The conclusions seem to depend on the strategy used to determine the continuum x-ray emission. Using data from the XMM satellite, the banana group fits the background in the 3-4 keV range and does not find the line, whereas this paper argues it is more kosher to fit in the 2-8 keV range, in which case the line can be detected in exactly the same dataset. It is not obvious who is right, although the fact that the significance of the signal depends so strongly on the background fitting procedure is not encouraging.
• The main battle rages on around K-XVIII (X-n stands for the X atom stripped of n-1 electrons; thus, K-XVIII is the potassium ion with 2 electrons). This little bastard has emission lines at 3.47 keV and 3.51 keV which could account for the bulbulon signal. In the original paper, the bulbuline group invokes a model of plasma emission that allows them to constrain the flux due to the K-XVIII emission from the measured ratios of the strong S-XVI/S-XV and Ca-XX/Ca-XIX lines. The banana paper argued that the bulbuline model is unrealistic as it gives inconsistent predictions for some plasma line ratios. The bulbuline group pointed out that the banana group used wrong numbers to estimate the line emission strengths. The banana group maintains that their conclusions still hold when the error is corrected. It all boils down to the question of whether the allowed range for the K-XVIII emission strength assumed by the bulbuline group is conservative enough. Explaining the 3.5 keV feature solely by K-XVIII requires assuming element abundance ratios that are very different from the solar ones, which may or may not be realistic.
• On the other hand, both groups have converged on the subject of chlorine. In the banana paper it was pointed out that the 3.5 keV line may be due to the Cl-XVII (hydrogen-like chlorine ion) Lyman-β transition, which happens to be at 3.51 keV. However, the bulbuline group subsequently derived limits on the corresponding Lyman-α line at 2.96 keV. From these limits, one can deduce in a fairly model-independent way that the contribution of the Cl-XVII Lyman-β transition is negligible.
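For readers more used to wavelengths than energies, the line positions quoted above translate via the standard X-ray conversion E × λ = hc ≈ 12.398 keV·Å. A quick sketch (standard physics, not taken from any of the papers discussed):

```python
# Convert X-ray line energies (keV) to wavelengths (angstroms) via
# E * lambda = h*c.
HC_KEV_ANGSTROM = 12.398  # h*c in keV*angstrom

def wavelength_angstrom(energy_kev):
    return HC_KEV_ANGSTROM / energy_kev

# The K-XVIII lines at 3.47 and 3.51 keV bracket the 3.5 keV feature;
# in wavelength they are separated by only ~0.04 angstrom.
for e_kev in (3.47, 3.50, 3.51):
    print(e_kev, round(wavelength_angstrom(e_kev), 3))
```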

To clarify the situation we need more replies to comments on replies, and maybe also better data from future x-ray satellite missions. The significance of the detection depends, more than we'd wish, on dirty astrophysics involved in modeling the standard x-ray emission from galactic plasma. It seems unlikely that the sterile neutrino model with the originally reported parameters will stand, as it is in tension with several other analyses. The probability of the 3.5 keV signal being of dark matter origin is certainly much lower than a few months ago. But the jury is still out, and it's not impossible to imagine that more data and more analyses will tip the scales the other way.

Further reading: how to protect yourself from someone attacking you with a banana.

### Tommaso Dorigo - Scientificblogging

Extraordinary Claims: Review My Paper For $10
Bringing the concept of peer review to another dimension, I am inviting you to read a review article I just wrote. You are invited to contribute to its review by suggesting improvements, corrections, changes or amendments to the text. I sort of need some scrutiny of this paper, since it is not a report of CMS results, and thus I have not been forced to submit it for internal review to my collaboration.


## November 19, 2014

### astrobites - astro-ph reader's digest

Could we detect signs of life on a massive super-Earth?

Super-Earths are the Starbucks of the modern world: you can find them everywhere, it’s not exactly what you want, but it’s just good enough to satisfy your desire for something better. Super-Earths are not technically Earth-like, since they are up to 10 Earth masses and have thick hydrogen (H2) atmospheres. However, they are rocky like Earth, they have an atmosphere like Earth, and if they are in the habitable zone, there is a good chance they could have liquid water like Earth. Case in point: they are just good enough.

Unfortunately, in the next 15 years, the only way we will be able to characterize a super-Earth is if it’s orbiting an M-type star. Since M-type stars are smaller and dimmer than the Sun, the planets orbiting them need to be closer in so that they get enough warmth to sustain liquid water. Therefore, habitable-zone planets around M-type stars could be observed in transit once every ~20 days, rather than once every year for an Earth twin. This bodes well for future missions that will try to characterize exoplanets, such as the James Webb Space Telescope (JWST).
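The ~20-day figure follows from Kepler's third law if the habitable zone sits where the planet receives an Earth-like flux. A sketch with illustrative M-dwarf parameters (the 1% solar luminosity and 0.3 solar masses below are typical textbook values, not numbers from the paper):

```python
import math

def habitable_zone_period_days(luminosity_lsun, mass_msun):
    """Orbital period of a planet receiving Earth-like stellar flux.

    Assumes the habitable-zone distance scales as sqrt(L) (same flux as
    Earth receives) and Kepler's third law in solar units:
    P[yr]^2 = a[AU]^3 / M[Msun].
    """
    a_au = math.sqrt(luminosity_lsun)            # distance with Earth-like flux
    period_yr = math.sqrt(a_au**3 / mass_msun)   # Kepler's third law
    return period_yr * 365.25

# Illustrative M dwarf: L ~ 1% of the Sun, M ~ 0.3 solar masses.
print(round(habitable_zone_period_days(0.01, 0.3)))  # roughly 21 days
# Sun-like star for comparison: one transit per year.
print(round(habitable_zone_period_days(1.0, 1.0)))
```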

So, if super-Earths orbiting M-type stars are our best bet at characterization, it pays to think about what signs of life, or biosignatures, could hypothetically be detected in one of their atmospheres. Seager et al. investigate several biosignatures and aim to identify which are likely to build up to detectable levels in an H2-dominated super-Earth orbiting an M-type star.

Biosignatures and Photochemistry

To test the “build up” of any molecule, let’s say ABX, in an atmosphere, you need to know what molecular species are creating ABX and what molecular species or processes are destroying ABX. In the world of photochemistry, we refer to these as sources and sinks. The photochemical model that Seager et al. use includes 111 species, involved in 824 chemical reactions and 71 photochemical reactions. Dwell on that parameter space… A photochemical reaction occurs when a molecule absorbs a photon of light and is broken down into smaller components. We call this process photolysis and it can be a major sink for biosignatures, depending on how much UV flux the star is giving off. Let’s take Earth as an example.

Since oxygen (O2) is abundantly produced by life on Earth, it is one of Earth’s dominant biosignature gases. O2 is destroyed by photolysis when it interacts with, you guessed it, UV light. On Earth, though, UV radiation from our Sun isn’t that high, so O2 is free to build up in the atmosphere. If we were to increase the UV radiation Earth received, it is likely that O2 would all be destroyed and would cease being one of Earth’s dominant biosignature gases.

Because M stars might have a much higher UV flux than our Sun, it is uncertain how much UV flux a super-Earth orbiting an M star will receive. Therefore, in order to assess which biosignature gases will build up in the atmosphere of an exoplanet orbiting an M star, we need to assess each biosignature gas’s removal rate, or the rate at which a molecule is destroyed by photolysis or any other reaction.

The rate at which H, O, and OH destroy CH3Cl as a function of UV flux received from the parent star. The dashed lines represent the case of a 10% N2, 90% H2 atmosphere. The diamond and the circle show cases for an N2-dominated atmosphere and a present-day atmosphere, respectively. Main point: removal rate increases with UV flux. Image credit: Seager et al. (2013) ApJ

In order to illustrate this effect, Seager et al. took a biosignature gas, CH3Cl, and calculated the removal rate by reactions with H, O and OH as a function of UV flux. As we’d expect, the figure above shows that the removal rate increases with UV flux. This means that if we encounter a super-Earth around an M-type star that has a high UV flux, the rate of removal of a biosignature gas will depend largely on the concentration of the gas and how quickly it is being destroyed by H, O and OH.
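The interplay of sources and sinks can be caricatured with a steady-state balance. A minimal sketch (the production flux and loss rate below are arbitrary illustrative numbers, not values from the paper's 111-species photochemical model):

```python
# Minimal source/sink caricature: a gas produced at a constant rate P and
# destroyed at a pseudo-first-order rate k settles at abundance n* = P / k.
# If photolysis dominates the sink and k scales with the stellar UV flux,
# ten times more UV means ten times less gas builds up.
def steady_state_abundance(production_rate, loss_rate):
    return production_rate / loss_rate

P = 1.0    # arbitrary production flux
k0 = 1e-7  # arbitrary loss rate (per second) at 1x UV flux
for uv_scale in (1, 10, 100):
    print(uv_scale, steady_state_abundance(P, k0 * uv_scale))
```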

The Most Likely Biosignature Gas

After considering the removal rates of several biosignature gases, Seager et al. find that ammonia (NH3) is likely to build up in the atmosphere of a super-Earth orbiting an M star. NH3 is created when a microbe harvests energy from a chemical energy gradient. On Earth, ammonia is not produced in large quantities, so there isn’t a lot of it in our atmosphere. However, if an alien world produced as much ammonia as Earth’s biosphere produces oxygen, it may actually be detectable in that world’s atmosphere.

In a world where NH3 is a viable biosignature, life would be vastly different from what we see on Earth. It would need to be able to break the H2 and N2 bonds in the reaction 3H2 + N2 → 2NH3. Since this reaction is exothermic (it releases heat), it could be used to harvest energy. Is this possible, though? Seager et al. say that although there is no chemical reaction on Earth that can break both bonds of H2 and N2, there is no physical reason that it can’t happen.
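The exothermicity is easy to verify from tabulated standard enthalpies of formation (the -45.9 kJ/mol value for NH3 below is the standard 298 K figure, not a number from the paper):

```python
# Rough check that N2 + 3 H2 -> 2 NH3 releases heat, using standard
# enthalpies of formation at 298 K (kJ/mol). Elements in their standard
# states have deltaH_f = 0; gaseous NH3 is about -45.9 kJ/mol.
DELTA_HF = {"H2": 0.0, "N2": 0.0, "NH3": -45.9}

def reaction_enthalpy(products, reactants):
    """Sum over products minus sum over reactants; each side is a dict
    mapping species name to stoichiometric coefficient."""
    total = lambda side: sum(n * DELTA_HF[s] for s, n in side.items())
    return total(products) - total(reactants)

dH = reaction_enthalpy({"NH3": 2}, {"N2": 1, "H2": 3})
print(dH)  # about -92 kJ per mole of N2: heat released, energy to harvest
```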

Thermal emission spectra for a 90% H2, 10% N2 super-Earth (10 Earth masses, 1.75 Earth radii). Each color spectrum represents a different concentration of ammonia. Higher ammonia concentrations create stronger emission features. Main point: If life was producing lots of NH3, we would be able to see it in the spectrum of a super-Earth orbiting an M star. Image credit: Seager et al. ApJ, 2013

The plot above shows what the spectrum of a planet would look like if it were producing lots of ammonia. This spectrum is taken in “thermal emission,” which means that we are looking at the planet when it is just about to disappear behind its parent star. There are strong NH3 emission features (labeled) from 2-100 microns. JWST will be able to make observations in the 1-30 micron range and will likely observe at least a handful of super-Earths orbiting M-type stars. So, should we expect to find one of these NH3-producing life forms? This is where I leave the Seager et al. paper and let your imagination take over.

### Clifford V. Johnson - Asymptotia

Chalkboards Everywhere!
I love chalkboards (or blackboards if you prefer). I love showing up to give a talk somewhere and just picking up the chalk and going for it. No heavily over-packed slides full of too many fast moving things, as happens too much these days. If there is coloured chalk available, that's fantastic - special effects. It is getting harder to find these boards however. Designers of teaching rooms and other spaces seem embarrassed by them, and so they either get smaller or disappear, often in favour of the less than magical whiteboard. So in my continued reinvention of the way I produce slides for projection (I do this every so often), I've gone another step forward in returning to the look (and [...] Click to continue reading this post

### Symmetrybreaking - Fermilab/SLAC

LHCb experiment finds new particles

A new LHCb result adds two new composite particles to the quark model.

Today the LHCb experiment at CERN’s Large Hadron Collider announced the discovery of two new particles, each consisting of three quarks.

The particles, known as the Xi_b'- and Xi_b*-, were predicted to exist by the quark model but had never been observed. The LHCb collaboration submitted a paper reporting the finding to the journal Physical Review Letters.

Similar to the protons that the LHC accelerates and collides, these two new particles are baryons, made from three quarks bound together by the strong force.

But unlike protons—which are made of two up quarks and one down quark—the new Xi_b particles both contain one beauty quark, one strange quark and one down quark. Because the b quarks are so heavy, these particles are more than six times as massive as the proton.
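The "more than six times" figure is easy to check. In the sketch below, the Xi_b masses (~5935 and ~5955 MeV) are approximate values from the LHCb measurement, rounded here; the proton mass is the standard PDG value.

```python
# Sanity check of the "more than six times the proton mass" statement.
# Xi_b masses are approximate values from the LHCb measurement, rounded;
# the proton mass is the PDG value in MeV.
PROTON_MEV = 938.272

masses_mev = {"Xi_b'-": 5935.0, "Xi_b*-": 5955.0}
for name, m in masses_mev.items():
    # Both ratios come out a little above 6.3.
    print(name, round(m / PROTON_MEV, 2))
```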

“We had good reason to believe that we would be able to see at least one of these two predicted particles,” says Steven Blusk, an LHCb researcher and associate professor of physics at Syracuse University. “We were lucky enough to see both. It’s always very exciting to discover something new.”

Even though these two new particles contain the same combination of quarks, they have a different configuration of spin—which is a quantum mechanical property that describes a particle’s angular momentum. This difference in spin makes Xi_b*- a little heavier than Xi_b'-.

“Nature was kind and gave us two particles for the price of one," says Matthew Charles of the CNRS's LPNHE laboratory at Paris VI University. "The Xi_b'- is very close in mass to the sum of its decay products’ masses. If it had been just a little lighter, we wouldn't have seen it at all.”

In addition to the masses of these particles, the research team studied their relative production rates, their widths—which is a measurement of how unstable they are—and other details of their decays. The results match up with predictions based on the theory of Quantum Chromodynamics (QCD).

“QCD is a powerful framework that describes the interactions of quarks, but it is difficult to compute properties of particles with high precision,” Blusk says. “If we do see something new, we need to be able to say that is not the result of uncertainties in QCD, but that it is in fact something new and unexpected. That is why we need precision data and precision measurements like these—to refine our models.”

The LHCb detector is one of the four main Large Hadron Collider experiments. It is specially designed to search for new forces of nature by studying the decays of particles containing beauty and charm quarks.

“As you go up in mass, it becomes harder to discover new particles and requires unique detector capabilities,” Blusk says. “These new measurements really exploit the strengths of the LHCb detector, which has the unique ability to clearly identify hadrons.”

The measurements were made with the data taken at the LHC during 2011-2012. The LHC is currently being prepared—after its first long shutdown—to operate at higher energies and with more intense beams. It is scheduled to restart by spring 2015.

“I’m a firm believer that whenever you look for something, there is always the possibility that you will instead find something completely unexpected,” Blusk says. “Doing these generic searches opens the door for discovering new physics. We are just starting to explore the b-baryon sector, and more data from the next run of the LHC will allow us to discover more particles not seen before.”



### The n-Category Cafe

Integral Octonions (Part 8)

This time I’d like to summarize some work I did in the comments last time, egged on by a mysterious entity who goes by the name of ‘Metatron’.

As you probably know, there’s an archangel named Metatron who appears in apocryphal Old Testament texts such as the Second Book of Enoch. These texts rank Metatron second only to YHWH himself. I don’t think the Metatron posting comments here is the same guy. However, it’s a good name for someone interested in lattices and geometry, since there’s a variant of the Cabbalistic Tree of Life called Metatron’s Cube, which looks like this:

This design includes within it the $\mathrm{G}_2$ root system, a 2d projection of a stellated octahedron, and a perspective drawing of a hypercube.

Anyway, there are lattices in 26 and 27 dimensions that play rather tantalizing and mysterious roles in bosonic string theory. Metatron challenged me to find octonionic descriptions of them. I did.

Given a lattice $L$ in $n$-dimensional Euclidean space, there’s a way to build a lattice $L^{++}$ in $(n+2)$-dimensional Minkowski spacetime. This is called the ‘over-extended’ version of $L$.

If we start with the lattice $\mathrm{E}_8$ in 8 dimensions, this process gives a lattice called $\mathrm{E}_{10}$, which plays an interesting but mysterious role in superstring theory. This shouldn’t come as a complete shock, since superstring theory lives in 10 dimensions, and it can be nicely formulated using octonions, as can the lattice $\mathrm{E}_8$.

If we start with the lattice called $\mathrm{D}_{24}$, this over-extension process gives a lattice $\mathrm{D}_{24}^{++}$. This describes the ‘cosmological billiards’ for the 3d compactification of the theory of gravity arising from bosonic string theory. Again, this shouldn’t come as a complete shock, since bosonic string theory lives in 26 dimensions.

Last time I gave a nice description of $\mathrm{E}_{10}$: it consists of $2 \times 2$ self-adjoint matrices with integral octonions as entries.

It would be nice to get a similar description of $\mathrm{D}_{24}^{++}$. Indeed, one exists! But to find it, it’s actually easier to go up to 27 dimensions, because the space of $3 \times 3$ self-adjoint matrices with octonion entries is 27-dimensional. And indeed, there’s a 27-dimensional lattice waiting to be described with octonions.

You see, for any lattice $L$ in $n$-dimensional Euclidean space, there’s also a way to build a lattice $L^{+++}$ in $(n+3)$-dimensional Minkowski spacetime, called the ‘very extended’ version of $L$.

If we do this to $L = \mathrm{E}_8$ we get an 11-dimensional lattice called $\mathrm{E}_{11}$, which has mysterious connections to M-theory. But if we do it to $\mathrm{D}_{24}$ we get a 27-dimensional lattice sometimes called $\mathrm{K}_{27}$. You can read about both these lattices here:

I’ll prove that both $\mathrm{E}_{11}$ and $\mathrm{K}_{27}$ have nice descriptions in terms of integral octonions. To do this, I’ll use the explanation of over-extended and very extended lattices given here:

These constructions use a 2-dimensional lattice called $\mathrm{H}$. Let’s get to know this lattice. It’s very simple.

### A 2-dimensional Lorentzian lattice

Up to isometry, there’s a unique even unimodular lattice in Minkowski spacetime whenever its dimension is 2 more than a multiple of 8. The simplest of these is $\mathrm{H}$: it’s the unique even unimodular lattice in 2-dimensional Minkowski spacetime.

There are various ways to coordinatize $\mathrm{H}$. The easiest, I think, is to start with $\mathbb{R}^2$ and give it the metric $g$ with

$g(x,x) = -2uv$

when $x = (u,v)$. Then, sitting in $\mathbb{R}^2$, the lattice $\mathbb{Z}^2$ is even and unimodular. So, it’s a copy of $\mathrm{H}$.

Let’s get to know it a bit. The coordinates $u$ and $v$ are called lightcone coordinates, since the $u$ and $v$ axes form the lightcone in 2d Minkowski spacetime. In other words, the vectors

$\ell = (1,0), \quad \ell' = (0,1)$

are lightlike, meaning

$g(\ell,\ell) = 0, \quad g(\ell',\ell') = 0$

Their sum is a timelike vector

$\tau = \ell + \ell' = (1,1)$

since the inner product of $\tau$ with itself is negative; in fact

$g(\tau,\tau) = -2$

Their difference is a spacelike vector

$\sigma = \ell - \ell' = (1,-1)$

since the inner product of $\sigma$ with itself is positive; in fact

$g(\sigma,\sigma) = 2$

Since the vectors $\tau$ and $\sigma$ are orthogonal and have length $\sqrt{2}$ in the metric $g$, we get a square of area $2$ with corners

$0,\ \tau,\ \sigma,\ \tau + \sigma$

that is,

$(0,0),\ (1,1),\ (1,-1),\ (2,0)$

If you draw a picture, you can see by dissection that this square has twice the area of the unit cell

$$(0,0), \; (1,0), \; (0,1), \; (1,1)$$

So, the unit cell has area 1, and the lattice is unimodular as claimed. Furthermore, every vector in the lattice has even inner product with itself, so this lattice is even.
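As a quick computational sanity check (a sketch, not part of the original argument): polarizing the quadratic form $q(u,v) = -2uv$ gives the Gram matrix of $\mathrm{H}$ in the standard basis of $\mathbb{Z}^2$, and both claims reduce to facts about that matrix.

```python
# Gram matrix of H in lightcone coordinates: polarizing q(u,v) = -2uv
# gives g(e1,e1) = 0, g(e2,e2) = 0, g(e1,e2) = -1.
G = [[0, -1],
     [-1, 0]]

# Unimodular: the Gram determinant is +-1, so the unit cell has area 1.
det = G[0][0]*G[1][1] - G[0][1]*G[1][0]
assert abs(det) == 1

# Even: g(x,x) = -2uv is twice an integer for every lattice vector;
# equivalently, every diagonal Gram entry is even.
assert all(G[i][i] % 2 == 0 for i in range(2))
for u in range(-3, 4):
    for v in range(-3, 4):
        assert (-2*u*v) % 2 == 0
```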

### Over-extended lattices

Given a lattice $L$ in Euclidean $\mathbb{R}^n$,

$$L^{++} = L \oplus \mathrm{H}$$

is a lattice in $(n+2)$-dimensional Minkowski spacetime, also known as $\mathbb{R}^{n+1,1}$. This lattice $L^{++}$ is called the over-extension of $L$.

A direct sum of even lattices is even. A direct sum of unimodular lattices is unimodular. Thus if $L$ is even and unimodular, so is $L^{++}$.
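A small computational illustration, assuming (as is standard) that the Cartan matrix of $\mathrm{E}_8$ serves as a Gram matrix for the $\mathrm{E}_8$ lattice in a basis of simple roots. A direct sum of lattices has block-diagonal Gram matrix, so evenness and unimodularity of $\mathrm{E}_8 \oplus \mathrm{H}$ can be checked directly:

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def direct_sum(A, B):
    """Block-diagonal Gram matrix of the direct sum of two lattices."""
    n, m = len(A), len(B)
    return ([row + [0]*m for row in A] +
            [[0]*n + row for row in B])

# Cartan matrix of E8: a chain of nodes 0-1-2-3-4-5-6,
# with the eighth node (index 7) attached to node 2.
E8 = [[2 if i == j else 0 for j in range(8)] for i in range(8)]
for i, j in [(0,1), (1,2), (2,3), (3,4), (4,5), (5,6), (2,7)]:
    E8[i][j] = E8[j][i] = -1

H = [[0, -1], [-1, 0]]      # hyperbolic plane, lightcone basis
E10 = direct_sum(E8, H)     # Gram matrix of the over-extension

assert det(E8) == 1                                 # E8 is unimodular
assert abs(det(E10)) == 1                           # so is E8 (+) H
assert all(E10[i][i] % 2 == 0 for i in range(10))   # and it is even
```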

All this is obvious. But here are some deeper facts about even unimodular lattices. First, they only exist in $\mathbb{R}^n$ when $n$ is a multiple of 8. Second, they only exist in $\mathbb{R}^{n+1,1}$ when $n$ is a multiple of 8.

But here’s the really amazing thing. In the Euclidean case there can be lots of different even unimodular lattices in a given dimension. In 8 dimensions there’s just one, up to isometry, called $\mathrm{E}_8$. In 16 dimensions there are two. In 24 dimensions there are 24. In 32 dimensions there are at least 1,160,000,000, and the number continues to explode after that. On the other hand, in the Lorentzian case there’s just one even unimodular lattice in a given dimension, if there are any at all.

More precisely: given two even unimodular lattices in $\mathbb{R}^{n+1,1}$, they are always isomorphic to each other via an isometry: a linear transformation that preserves the metric. We then call them isometric.

Let’s look at some examples. Up to isometry, $\mathrm{E}_8$ is the only even unimodular lattice in 8-dimensional Euclidean space. We can identify it with the lattice of integral octonions, $\mathbf{O} \subseteq \mathbb{O}$, with the inner product

$$g(X,X) = 2 X X^*$$

$L^{++}$ is usually called $\mathrm{E}_{10}$. Up to isometry, this is the unique even unimodular lattice in 10-dimensional Minkowski spacetime. There are lots of ways to describe it, but last time we saw that it’s the lattice of $2 \times 2$ self-adjoint matrices with integral octonions as entries:

$$\mathfrak{h}_2(\mathbf{O}) = \left\{ \begin{pmatrix} a & X \\ X^* & b \end{pmatrix} : a,b \in \mathbb{Z}, \; X \in \mathbf{O} \right\}$$

where the metric comes from $-2$ times the determinant:

$$x = \begin{pmatrix} a & X \\ X^* & b \end{pmatrix} \;\; \implies \;\; g(x,x) = -2\det(x) = 2XX^* - 2ab$$

We’ll see a fancier formula like this later on.

There are 24 even unimodular lattices in 24-dimensional Euclidean space. One of them is

$$\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8$$

Another is $\mathrm{D}_{24}$. This is the lattice of vectors in $\mathbb{R}^{24}$ whose components are integers with even sum. It’s also the root lattice of the Lie group $\mathrm{Spin}(48)$.

If we take the over-extension of any of these lattices, we get an even unimodular lattice in 26-dimensional Minkowski spacetime… and all these are isometric! The over-extension process ‘washes out the difference’ between them. In particular,

$$\mathrm{D}_{24}^{++} \cong (\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{++}$$

This is nice because up to a scale factor, $\mathrm{E}_8$ is the lattice of integral octonions. So, there’s a description of $\mathrm{D}_{24}^{++}$ using three integral octonions! But the story is prettier if we go up an extra dimension.

### Very extended lattices

After the over-extended version $L^{++}$ of a lattice $L$ in Euclidean space comes the ‘very extended’ version, called $L^{+++}$. If you ponder the paper by Gaberdiel et al, you can see this is the direct sum of the over-extension $L^{++}$ and a 1-dimensional lattice called $\mathrm{A}_1$. $\mathrm{A}_1$ is just $\mathbb{Z}$ with the metric

$$g(x,x) = 2x^2$$

It’s even but not unimodular.

In short, the very extended version of $LL$ is

$$L^{+++} = L^{++} \oplus \mathrm{A}_1 = L \oplus \mathrm{H} \oplus \mathrm{A}_1$$

If $L$ is even, so is $L^{+++}$. But if $L$ is unimodular, this will not be true of $L^{+++}$.

The very extended version of $\mathrm{E}_8$ is called $\mathrm{E}_{11}$. This is a fascinating thing, but I want to talk about the very extended version of $\mathrm{D}_{24}$, and how to describe it using octonions.

Let $\mathfrak{h}_3(\mathbb{O})$ be the space of $3 \times 3$ self-adjoint octonionic matrices. It’s 27-dimensional, since a typical element looks like

$$x = \begin{pmatrix} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & c \end{pmatrix}$$

where $a,b,c \in \mathbb{R}$ and $X,Y,Z \in \mathbb{O}$. It’s called the exceptional Jordan algebra. We don’t need to know about Jordan algebras now, but this concept encapsulates the fact that if $x \in \mathfrak{h}_3(\mathbb{O})$, so is $x^2$.

There’s a 2-parameter family of metrics on the exceptional Jordan algebra that are invariant under all Jordan algebra automorphisms. They have

$$g(x,x) = \alpha \, \mathrm{tr}(x^2) + \beta \, \mathrm{tr}(x)^2$$

for $\alpha, \beta \in \mathbb{R}$ with $\alpha \ne 0$. Some are Euclidean and some are Lorentzian.

Sitting inside the exceptional Jordan algebra is the lattice of $3 \times 3$ self-adjoint matrices with integral octonions as entries:

$$\mathfrak{h}_3(\mathbf{O}) = \left\{ \begin{pmatrix} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & c \end{pmatrix} : a,b,c \in \mathbb{Z}, \; X,Y,Z \in \mathbf{O} \right\}$$

And here’s the cool part:

Theorem. There is a Lorentzian inner product $g$ on the exceptional Jordan algebra that is invariant under all automorphisms and makes the lattice $\mathfrak{h}_3(\mathbf{O})$ isometric to $\mathrm{K}_{27} \cong \mathrm{D}_{24}^{+++}$.

Proof. We will prove that the metric

$$g(x,x) = \mathrm{tr}(x^2) - \mathrm{tr}(x)^2$$

obeys all the conditions of this theorem. From what I’ve already said, it is invariant under all Jordan algebra automorphisms. The challenge is to show that it makes $\mathfrak{h}_3(\mathbf{O})$ isometric to $\mathrm{D}_{24}^{+++}$. But instead of $\mathrm{D}_{24}^{+++}$, we can work with $(\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{+++}$, since we have seen that

$$\mathrm{D}_{24}^{+++} \cong (\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{+++}$$

Let us examine the metric $g$ in more detail. Take any element $x \in \mathfrak{h}_3(\mathbf{O})$:

$$x = \begin{pmatrix} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & c \end{pmatrix}$$

where $a,b,c \in \mathbb{Z}$ and $X,Y,Z \in \mathbf{O}$. Then

$$\mathrm{tr}(x^2) = a^2 + b^2 + c^2 + 2(XX^* + YY^* + ZZ^*)$$

while

$$\mathrm{tr}(x)^2 = (a + b + c)^2$$

Thus

$$\begin{aligned} g(x,x) &= \mathrm{tr}(x^2) - \mathrm{tr}(x)^2 \\ &= 2(XX^* + YY^* + ZZ^*) - 2(ab + bc + ca) \end{aligned}$$
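Here is a quick sanity check of this trace computation, sketched in Python for the real special case: the real numbers form a subalgebra of the octonions, where $XX^*$ reduces to $X^2$. (A full octonionic check would need an octonion multiplication table; this only exercises the algebra above.)

```python
import random

random.seed(0)
for _ in range(100):
    a, b, c, X, Y, Z = [random.randint(-9, 9) for _ in range(6)]
    # A real symmetric 3x3 matrix, the real analogue of an element
    # of the exceptional Jordan algebra.
    x = [[a, X, Y],
         [X, b, Z],
         [Y, Z, c]]
    # Matrix square and traces.
    x2 = [[sum(x[i][k]*x[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    tr_x2 = sum(x2[i][i] for i in range(3))
    # tr(x^2) = a^2 + b^2 + c^2 + 2(X^2 + Y^2 + Z^2) ...
    assert tr_x2 == a*a + b*b + c*c + 2*(X*X + Y*Y + Z*Z)
    # ... so tr(x^2) - tr(x)^2 = 2(X^2+Y^2+Z^2) - 2(ab + bc + ca).
    assert tr_x2 - (a+b+c)**2 == 2*(X*X + Y*Y + Z*Z) - 2*(a*b + b*c + c*a)
```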

It follows that with this metric, the diagonal matrices are orthogonal to the off-diagonal matrices. An off-diagonal matrix $x \in \mathfrak{h}_3(\mathbf{O})$ is a triple $(X,Y,Z) \in \mathbf{O}^3$, and has

$$g(x,x) = 2(XX^* + YY^* + ZZ^*)$$

Thanks to the factor of 2, this metric makes the lattice of these off-diagonal matrices isometric to $\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8$. Since

$$(\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{+++} = \mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{H} \oplus \mathrm{A}_1$$

it thus suffices to show that the 3-dimensional Lorentzian lattice of diagonal matrices in $\mathfrak{h}_3(\mathbf{O})$ is isometric to

$$\mathrm{H} \oplus \mathrm{A}_1$$

A diagonal matrix $x \in \mathfrak{h}_3(\mathbf{O})$ is a triple $(a,b,c) \in \mathbb{Z}^3$, and on these triples the inner product $g$ is given by

$$g(x,x) = -2(ab + bc + ca)$$

If we restrict attention to triples of the form $x = (a,b,0)$, we get a 2-dimensional Lorentzian lattice: a copy of $\mathbb{Z}^2$ with inner product

$$g(x,x) = -2ab$$

This is just $\mathrm{H}$.

We can use this to show that the lattice of all triples $(a,b,c) \in \mathbb{Z}^3$, with the inner product $g$, is isometric to $\mathrm{H} \oplus \mathrm{A}_1$.

Remember, $\mathrm{A}_1$ is a 1-dimensional lattice generated by a spacelike vector whose norm squared is 2. So, it suffices to show that the lattice $\mathbb{Z}^3$ is generated by vectors of the form $(a,b,0)$ together with a spacelike vector of norm squared 2 that is orthogonal to all those of the form $(a,b,0)$.

To do this, we need to describe the inner product $g$ on $\mathbb{Z}^3$ more explicitly. For this, we can use the polarization identity

$$g(x,x') = \tfrac{1}{2}\bigl(g(x+x',x+x') - g(x,x) - g(x',x')\bigr)$$

Remember, if $x = (a,b,c)$ we have

$$g(x,x) = -2(ab + bc + ca)$$

So, if we also have $x' = (a',b',c')$, the polarization identity gives

$$g(x,x') = -(ab' + a'b) - (bc' + b'c) - (ca' + c'a)$$

We are looking for a spacelike vector $x' = (a',b',c')$ that is orthogonal to all those of the form $x = (a,b,0)$. For this, it is necessary and sufficient to have

$$0 = g((1,0,0),(a',b',c')) = -b' - c'$$

and

$$0 = g((0,1,0),(a',b',c')) = -a' - c'$$

An example is $x' = (1,1,-1)$. This has

$$g(x',x') = -2(1 - 1 - 1) = 2$$

so it is spacelike, as desired. Even better, it has norm squared 2. And even better, this vector $x'$, along with those of the form $(a,b,0)$, generates the lattice $\mathbb{Z}^3$.

So we have shown what we needed: the lattice of all triples $(a,b,c) \in \mathbb{Z}^3$ is generated by those of the form $(a,b,0)$ together with a spacelike vector of norm squared 2 that is orthogonal to all those of the form $(a,b,0)$. $\blacksquare$
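The arithmetic in this proof is easy to verify mechanically. Here is a sketch in Python, using the Gram matrix obtained by polarizing $q(a,b,c) = -2(ab+bc+ca)$:

```python
# Gram matrix of the diagonal lattice Z^3, from polarizing
# q(a,b,c) = -2(ab + bc + ca).
G = [[0, -1, -1],
     [-1, 0, -1],
     [-1, -1, 0]]

def g(x, y):
    return sum(x[i]*G[i][j]*y[j] for i in range(3) for j in range(3))

e1, e2, v = (1, 0, 0), (0, 1, 0), (1, 1, -1)

# e1, e2 span a copy of H ...
assert g(e1, e1) == 0 and g(e2, e2) == 0 and g(e1, e2) == -1
# ... v is orthogonal to them, with norm squared 2 (a copy of A_1) ...
assert g(e1, v) == 0 and g(e2, v) == 0 and g(v, v) == 2
# ... and (e1, e2, v) is a basis of Z^3, since the change-of-basis
# matrix has determinant +-1.
M = [e1, e2, v]
det = (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
     - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
     + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))
assert det in (1, -1)
```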

This theorem has three nice spinoffs:

Corollary. With the same Lorentzian inner product $g$ on the exceptional Jordan algebra, the lattice $\mathrm{D}_{24}^{++}$ is isometric to the sublattice of $\mathfrak{h}_3(\mathbf{O})$ where a fixed diagonal entry is set equal to zero, e.g.:

$$\left\{ \begin{pmatrix} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & 0 \end{pmatrix} : a,b \in \mathbb{Z}, \; X,Y,Z \in \mathbf{O} \right\}$$

Proof. Use the fact that with the metric $g$, the diagonal matrices

$$\left\{ \begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & 0 \end{pmatrix} : a,b \in \mathbb{Z} \right\}$$

form a copy of $\mathrm{H}$, so the matrices above form a copy of

$$\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{H} \cong (\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{++} \cong \mathrm{D}_{24}^{++} \qquad \blacksquare$$

Corollary. With the same Lorentzian inner product $g$ on the exceptional Jordan algebra, the lattice $\mathrm{E}_{11} = \mathrm{E}_8^{+++}$ is isometric to the sublattice of $\mathfrak{h}_3(\mathbf{O})$ where two fixed off-diagonal entries are set equal to zero, e.g.:

$$\left\{ \begin{pmatrix} a & X & 0 \\ X^* & b & 0 \\ 0 & 0 & c \end{pmatrix} : a,b,c \in \mathbb{Z}, \; X \in \mathbf{O} \right\}$$

Proof. Use the fact that with the metric $g$, the diagonal matrices

$$\left\{ \begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix} : a,b,c \in \mathbb{Z} \right\}$$

form a copy of $\mathrm{H} \oplus \mathrm{A}_1$, so the matrices above form a copy of

$$\mathrm{E}_8 \oplus \mathrm{H} \oplus \mathrm{A}_1 \cong \mathrm{E}_8^{+++} \qquad \blacksquare$$

Corollary. With the same Lorentzian inner product $g$ on the exceptional Jordan algebra, the lattice $\mathrm{E}_{10} = \mathrm{E}_8^{++}$ is isometric to the sublattice of $\mathfrak{h}_3(\mathbf{O})$ where two fixed off-diagonal entries and one diagonal entry are set equal to zero, e.g.:

$$\left\{ \begin{pmatrix} a & X & 0 \\ X^* & b & 0 \\ 0 & 0 & 0 \end{pmatrix} : a,b \in \mathbb{Z}, \; X \in \mathbf{O} \right\}$$

Proof. Use the previous corollary; this is the obvious copy of $\mathrm{E}_8^{++} \cong \mathrm{E}_8 \oplus \mathrm{H}$ inside $\mathrm{E}_8^{+++} \cong \mathrm{E}_8 \oplus \mathrm{H} \oplus \mathrm{A}_1$. $\blacksquare$

## November 18, 2014

### Clifford V. Johnson - Asymptotia

Three Cellos
These three fellows, perched on wooden boxes, just cried out for a quick sketch of them during the concert. It was the LA Phil playing Penderecki's Concerto Grosso for Three Cellos, preceded by the wonderful Rapsodie Espagnole by Ravel and followed by that sublime (brought tears to my eyes - I'd not heard it in so long) serving of England, Elgar's Enigma Variations. -cvj

### Symmetrybreaking - Fermilab/SLAC

Auger reveals subtlety in cosmic rays

Scientists home in on the make-up of cosmic rays, which are more nuanced than previously thought.

Unlike the twinkling little star of nursery rhyme, the cosmic ray is not the subject of any well-known song about an astronomical wonder. And yet while we know all about the make-up of stars, after decades of study scientists still wonder what cosmic rays are.

Thanks to an abundance of data collected over eight years, researchers in the Pierre Auger collaboration are closer to finding out what cosmic rays—in particular ultrahigh-energy cosmic rays—are made of. Their composition would tell us more about where they come from: perhaps a black hole, a cosmic explosion or colliding galaxies.

Auger’s latest research has knocked out two possibilities put forward by the prevailing wisdom: that UHECRs are dominated by either lightweight protons or heavier nuclei such as iron. According to Auger, one or more middleweight components, such as helium or nitrogen nuclei, must make up a significant part of the cosmic-ray mix.

“Ten years ago, people couldn’t posit that ultrahigh-energy cosmic rays would be made of something in between protons and iron,” says Fermilab scientist and Auger collaborator Eun-Joo Ahn, who led the analysis. “The idea would have garnered sidelong glances.”

Cosmic rays are particles that rip through outer space at incredibly high energies. UHECRs, upwards of 10¹⁸ electronvolts, are rarely observed, and no one knows exactly where they originate.

One way physicists reach back to a cosmic ray’s origins is by looking to the descendants of its collisions. The collision of one of these breakneck particles with the Earth’s upper atmosphere sets off a domino effect, generating more particles that in turn collide with air and produce still more. These ramifying descendants form an air shower, spreading out like the branches of a tree reaching toward the Earth. Twenty-seven telescopes at the Argentina-based Auger Observatory look for ultraviolet light resulting from the cosmic rays, and 1600 detectors, distributed over a swath of land the size of Rhode Island, record the showers’ signals.

Scientists measure how deep into the atmosphere—how close to Earth—the air shower is when it maxes out. The closer to the Earth, the more lightweight the original cosmic ray particle is likely to be. A proton, for example, would penetrate the atmosphere more deeply before setting off an air shower than would an iron nucleus.

Auger scientists compared their data with three different simulation models to narrow the possible compositions of cosmic rays.

Auger’s favoring a compositional middle ground between protons and iron nuclei is based on a granular take on their data, a first for cosmic-ray research. In earlier studies, scientists distilled measurements of shower depths to two values: the average and standard deviation of all shower depths in a given cosmic-ray energy range. Their latest study, however, made no such generalization. Instead, it used the full distribution of data on air shower depth. If researchers measured 1000 different air shower depths for a specific UHECR energy, all 1000 data points—not just the average—went into Auger’s simulation models.

The result was a more nuanced picture of cosmic ray composition. The analysis also gave researchers greater insight into their simulations. For one model, the data and predictions could not be matched no matter the composition of the cosmic ray, giving scientists a starting point for constraining the model further.

“Just getting the distribution itself was exciting,” Ahn says.

Auger will continue to study cosmic rays at even higher energies, gathering more statistics to answer the question: What exactly are cosmic rays made of?

Like what you see? Sign up for a free subscription to symmetry!

### Quantum Diaries

Stanley Wojcicki awarded 2015 Panofsky Prize

This article appeared in Fermilab Today on Nov. 18, 2014.

Stanley Wojcicki

In late October, the American Physical Society Division of Particles and Fields announced that Stanford University professor emeritus of physics and Fermilab collaborator Stanley Wojcicki has been selected as the 2015 recipient of the W.K.H. Panofsky Prize in experimental particle physics. Panofsky, who died in 2007, was SLAC National Accelerator Laboratory’s first director, holding that position from 1961 to 1984.

“I knew Pief Panofsky for about 40 years, and I think he was a great man not only as a scientist, but also as a statesman and as a human being,” said Wojcicki, referring to Panofsky by his nickname. “So it doubles my pleasure and satisfaction in receiving an award that bears his name.”

Wojcicki was given the prestigious award “for his leadership and innovative contributions to experiments probing the flavor structure of quarks and leptons, in particular for his seminal role in the success of the MINOS long-baseline neutrino experiment.”

Wojcicki is a founding member of MINOS. He served as spokesperson from 1999 to 2004 and as co-spokesperson from 2004 to 2010.

“I feel a little embarrassed being singled out because, in high-energy physics, there is always a large number of individuals who have contributed and are absolutely essential to the success of the experiment,” he said. “This is certainly true of MINOS, where we had and have a number of excellent people.”

Wojcicki recalls the leadership of Caltech physicist Doug Michael, former MINOS co-spokesperson, who died in 2005.

“I always regret that Doug did not have a chance to see the results of an experiment that he very much contributed to,” Wojcicki said.

In 2006, MINOS measured an important parameter related to the mass difference between two neutrino types.

Fermilab physicist Doug Glenzinski chaired the Panofsky Prize review committee and says that the committee was impressed by Wojcicki’s work on flavor physics, which focuses on how particles change from one type to another, and his numerous contributions over decades of research.

“He is largely credited with making MINOS happen, with thinking about ways to advance neutrino measurements and with playing an active role in all aspects of the experiment from start to finish,” Glenzinski said.

More than 30 years ago, Wojcicki collaborated on charm quark research at Fermilab, later joining Fermilab’s neutrino explorations. Early on, Wojcicki served on the Fermilab Users Executive Committee from 1969 to 1971 and on the Program Advisory Committee from 1972 to 1974. He has since been on many important committees, including serving as chair of the High-Energy Physics Advisory Panel for six years and as a member of the P5 committee from 2005 to 2008. He now continues his involvement in neutrino physics, participating in the NOvA and MINOS+ experiments.

“I feel really fortunate to have been connected with Fermilab since its inception,” Wojcicki said. “I think Fermilab is a great lab, and I hope it will continue as such for many years to come.”

Rich Blaustein

### The n-Category Cafe

The Kan Extension Seminar in the Notices

Emily has a two-page article in the latest issue of the Notices of the American Mathematical Society, describing her experience of setting up and running the Kan extension seminar. In my opinion, the seminar was an exciting innovation for both this blog and education at large. It also resulted in some excellent posts. Go read it!

### Lubos Motl - string vacua and pheno

CMS sees excess of same-sign dimuons "too"
An Xmas rumor deja vu

There are many LHC-related hep-ex papers on the arXiv today, most notably
Searches for the associated $$t\bar t H$$ production at CMS
by Liis Rebane of CMS. The paper notices a broad excess of like-sign dimuon events. See the last 2+1 lines of Table 1 for numbers.

Those readers who remember all 6,000+ blog posts on this blog know very well that back in December 2012, there was a "Christmas rumor" about an excess seen by the other major LHC collaboration, ATLAS.

ATLAS was claimed to have observed 14 events – which would mean a 5-sigma excess – of same-sign dimuon events with the invariant mass $m_{\rm inv}(\mu^\pm \mu^\pm) = 105\GeV.$ Quite a bizarre Higgs-like particle with $$Q=\pm 2$$, if a straightforward explanation exists. Are ATLAS and CMS seeing the same deviation from the Standard Model?
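For readers who want to check such numbers themselves, the invariant mass of a muon pair follows directly from the summed four-momenta. A minimal sketch (the example momenta below are invented, chosen in the massless approximation to give a 105 GeV pair):

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass sqrt(E^2 - |p|^2) of a two-particle system.

    Each argument is an (E, px, py, pz) four-vector in GeV; muon masses
    are implicitly included if the input energies are on-shell.
    """
    E  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two back-to-back 52.5 GeV muons (massless approximation, invented kinematics):
m = invariant_mass((52.5, 52.5, 0.0, 0.0), (52.5, -52.5, 0.0, 0.0))
# m comes out at 105.0 GeV
```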

## November 17, 2014

### Marco Frasca - The Gauge Connection

That’s a Higgs but how many?

The CMS and ATLAS collaborations are still hard at work producing results from the datasets obtained in the first phase of activity of the LHC. The restart is just around the corner and, maybe as early as next summer, things could change considerably. Meanwhile, what they can extract from the old data is promising and rather intriguing. This is the case for the recent paper by CMS (see here). The aim of this work is to see whether a heavier state of the Higgs particle exists, and the kind of decay they study is $Zh\rightarrow l^+l^-bb$. That is, one has a signature with two leptons moving in opposite directions, arising from the decay of the $Z$, and two bottom quarks arising from the decay of the Higgs particle. The analysis of this decay aims to find hints of the existence of a heavier pseudoscalar Higgs state. This would be greatly important for SUSY extensions of the Standard Model that foresee more than one Higgs particle.

CMS often presents its results with some intriguing open questions, and this is one of those cases, so it is worth a blog entry. Here is the main result:

The evidence, as stated in the paper, is a 2.6-2.9 sigma excess at 560 GeV and a smaller one at around 300 GeV. The look-elsewhere effect reduces the former to 1.1 sigma, and the latter becomes practically negligible. Overall this is rather weak but, as always, with more data after the restart it could become something real or just fade away. It should be appreciated that a door is left open anyway and a possible effect is pointed out.

My personal interpretation is that such higher excitations do exist, but their production rates are heavily suppressed with respect to the observed ground state at 126 GeV and so are negligible with the present datasets. I am also convinced that the current understanding of SUSY breaking, as adopted in MSSM-like extensions of the Standard Model, is not the correct one, and that this is what has provoked the early death of such models. I have explained this in a couple of papers of mine (see here and here). It is my firm conviction that the restart will yield exciting results, and we should be really happy to have such a powerful machine in our hands to grasp them.

Marco Frasca (2013). Scalar field theory in the strong self-interaction limit Eur. Phys. J. C (2014) 74:2929 arXiv: 1306.6530v5

Marco Frasca (2012). Classical solutions of a massless Wess-Zumino model J.Nonlin.Math.Phys. 20:4, 464-468 (2013) arXiv: 1212.1822v2

Filed under: Particle Physics, Physics Tagged: ATLAS, CERN, CMS, Higgs particle, Standard Model, Supersymmetry

### astrobites - astro-ph reader's digest

ASASSN-13co: A Type-Defying Supernova
Title: Discovery and Observations of the Unusually Bright Type-Defying II-P/II-L Supernova ASASSN-13co

Authors: T. W.-S. Holoien, et al.

First Author’s Institution: Department of Astronomy, The Ohio State University

Paper Status: Submitted to MNRAS

There are arguably a lot of things that defy categorization, but it’s not every day that we find something that suggests we do away with our categories altogether. The authors of today’s paper believe that the recently-discovered Type II supernova ASASSN-13co — read that as “assassin”, please — might be just such an object. Its unusual characteristics call into question the validity of the two classes (II-P and II-L, more on that later) into which we usually group Type II supernovae. As a result, they suggest that we treat Type II supernovae properties as a continuum, rather than the discrete designations we’ve become accustomed to assigning.

Death Throes of Massive Stars

Type II supernovae are identified by the hydrogen in their spectra (meaning that they still have a hydrogen envelope when they die). They are formed when a star with a mass of 8-50 times that of the sun dies through core collapse.

All stars produce energy through nuclear fusion, but massive stars can fuse much heavier nuclei than stars the size of our sun – all the way to nickel and iron, which have the highest binding energy per nucleon of all elements. While the fusion of the lighter elements is an exothermic process, fusing iron uses up energy instead, so fusing elements heavier than iron isn’t energetically favorable. As a result, a core of iron and nickel (which then decays into iron) builds up in the center of a massive star. The core is supported by electron degeneracy pressure. When the mass of the iron-nickel core exceeds the Chandrasekhar limit (about 1.4 solar masses), however, electron degeneracy pressure is not enough to stop the core from collapsing. As the core collapses, the protons and electrons in the core of the star merge to form neutrons and neutrinos. The neutrinos can escape and carry away energy. At the same time, the outer layers of the star fall inward until neutron degeneracy pressure kicks in, stopping the collapse and causing the outer layers to rebound. The combination of the pressure from the neutrinos and the rebound of the outer layers off of the core causes the star to be torn apart in a huge explosion – a core-collapse supernova.

Left: Archival SDSS data of the host galaxy PGC 067159. Right: LCOGT image that was taken during the supernova. The circles have radii of 2 arcseconds and are centered on the supernova’s position. We can see that there was previously no visible object at the location of the supernova. An image like this is called a finding chart.

These supernovae exhibit a wide range of properties, but have generally been grouped into Type II-P or Type II-L supernovae.  Type II-P supernovae – the P stands for “plateau” – get their names from the long flat stretch present in their optical light curves.  Type II-L supernovae, on the other hand, show a relatively steady “linear” decline in their intensity after reaching peak brightness. However, it has recently been suggested that Type II supernovae light curves may not fall neatly into the two groups, but actually display a continuum of these properties. The authors of today’s paper hope that by studying unusually bright or hard-to-classify events, they will be able to better understand the variations in Type II supernovae and improve upon the current classification scheme.
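As a toy illustration of why a hard P/L split is awkward (this is not the authors’ method, and the cutoff of 0.5 magnitudes per 100 days is an invented round number), one can fit a straight line to a post-peak light curve and classify on its slope:

```python
def decline_rate(times, mags):
    """Least-squares slope of magnitude vs. time, in mag/day (positive = fading)."""
    n = len(times)
    tbar = sum(times) / n
    mbar = sum(mags) / n
    num = sum((t - tbar) * (m - mbar) for t, m in zip(times, mags))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

def classify_type_ii(times, mags, cutoff=0.5 / 100.0):
    """Crude II-P vs. II-L label from the decline rate. A hard cutoff like this
    is exactly what a continuum of light-curve shapes makes problematic."""
    return "II-L-like" if decline_rate(times, mags) > cutoff else "II-P-like"

# Invented light curves: a flat plateau and a steady linear decline (days, mag).
plateau   = classify_type_ii([0, 25, 50, 75, 100], [16.0, 16.0, 16.0, 16.0, 16.0])
declining = classify_type_ii([0, 25, 50, 75, 100], [16.0, 16.4, 16.8, 17.2, 17.6])
```

A borderline object whose slope sits right at the cutoff would flip labels with tiny changes in the data, which is the kind of ambiguity the paper highlights.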

Profiling an Unusual Supernova

The focus of the paper, ASASSN-13co, is a supernova that the authors state is both unusually bright and hard to classify. It was detected with the All-Sky Automated Survey for Supernovae (ASAS-SN) on August 29, 2013 in the V-band — an optical bandpass with a mean wavelength of 540 nm. The supernova had an apparent magnitude of 16.9 +/- 0.1 and coordinates RA = 21:40:38.72, Dec = +06:30:36.98. Using the Sloan Digital Sky Survey (SDSS), they located the host galaxy as the spiral galaxy PGC 067159, which was offset by 3 arcseconds from the source of the supernova.

The bolometric (total flux over all wavelengths) light curve of ASASSN-13co, in red, plotted against the light curves of the supernovae used in making the PP14 model, in grey. The thickness of the red curve indicates the 1-sigma uncertainty in the light curve. We can see that ASASSN-13co is one of the most luminous supernovae of the bunch and that, unlike the other Type II-P SN shown, it does not have a long plateau phase.

After finding that ASASSN-13co had an unusually bright V-band absolute magnitude of -18.1 at the time of detection, they decided to launch an extensive follow-up campaign to fully characterize the event. They obtained photometric observations from space using the Swift X-ray Telescope and UVOT target-of-opportunity observations and from the ground using the Las Cumbres Observatory Global Telescope Network (LCOGTN). Since they do not have prior X-ray data from the host galaxy (and are therefore unable to determine if the X-ray flux comes from the supernova or the galaxy) they ultimately don’t include their X-ray data in the analysis. In addition, they have spectroscopic data from spectrographs located on the LCO du Pont 2.5-m telescope, the MDM Observatory Hiltner 2.4-m telescope, and the Apache Point Observatory 3.5-m telescope.

Finally, the authors also use a new model from Pejcha & Prieto 2014, which they designate as PP14, to calculate the light curve of the SN in the V-band, since they do not have follow-up data in the V-band. The model takes in measurements of the supernova’s flux and expansion velocities to calculate other information about the supernova, such as its light curve in other filters, its luminosity over all wavelengths, and the mass of nickel-56 that it produces.

Type-Defying

The V-band light curve (in absolute magnitudes) for ASASSN-13co, plotted in red again against a sample of various Type II SN from Anderson et al. 2014. ASASSN-13co has one of the brightest light curves, and it also seems to decline more slowly than the other bright SN light curves.

The spectroscopic data that the authors obtain allow them to determine that ASASSN-13co’s spectrum looks typical for a Type II-P supernova. However, the V-band light curves calculated using PP14 show that the duration of the plateau falls between the values typical of Type II-P and Type II-L supernovae. Unlike a Type II-P, which has a rapid fall and then a long plateau phase, ASASSN-13co displays a steady decline in its luminosity; yet that decline is considerably slower than for an average Type II-L supernova, so it defies easy categorization. On top of that, ASASSN-13co is just unusually bright for a Type II supernova.

ASASSN-13co’s unusual characteristics lead the authors to conclude that the supernova is not easily classified as Type II-P or a Type II-L. Instead, they offer this as another piece of evidence that the II-P and II-L designations for Type II SN are oversimplifications of the wide range of Type II supernovae characteristics. Lastly, they note that the PP14 model, which was able to provide a good fit to even the unusual ASASSN-13co, can be a useful tool for future studies of variations in Type II supernovae characteristics.

### arXiv blog

Machine-Learning Algorithm Ranks the World's Most Notable Authors

Deciding which books to digitise when they enter the public domain is tricky, unless you have an independent ranking of the most notable authors.

Public Domain Day, January 1, is the day on which previously copyrighted works become freely available to print, digitize, modify, or reuse in more or less any way. In most countries, this happens 50 or 70 years after the death of the author.

### Matt Strassler - Of Particular Significance

At the Naturalness 2014 Conference

Greetings from the last day of the conference “Naturalness 2014”, where theorists and experimentalists involved with the Large Hadron Collider [LHC] are discussing one of the most widely-discussed questions in high-energy physics: are the laws of nature in our universe “natural” (= “generic”), and if not, why not? It’s so widely discussed that one of my concerns coming in to the conference was whether anyone would have anything new to say that hadn’t already been said many times.

What makes the Standard Model’s equations (which are the equations governing the known particles, including the simplest possible Higgs particle) so “unnatural” (i.e. “non-generic”) is that when one combines the Standard Model with, say, Einstein’s gravity equations, or indeed with any other equations involving additional particles and fields, one finds that the parameters in the equations (such as the strength of the electromagnetic force or the interaction of the electron with the Higgs field) must be chosen so that certain effects almost perfectly cancel, to one part in a gazillion* (something like 10³²). If this cancellation fails, the universe described by these equations looks nothing like the one we know. I’ve discussed this non-genericity in some detail here.

*A gazillion, as defined on this website, is a number so big that it even makes particle physicists and cosmologists flinch. [From Old English, gajillion.]
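To get a feel for what such a cancellation means operationally, here is a toy numerical picture in exact integer arithmetic (all the numbers are invented for illustration; ordinary floating point could not even represent the required precision):

```python
# Two hypothetical contributions of order 10^32 that nearly cancel (arbitrary units):
bare_term       = 10**32 + 125**2   # "bare" parameter, specified to ~30 digits
correction_term = -(10**32)         # quantum correction of the opposite sign

observed = bare_term + correction_term   # the tiny leftover: 15625

# Fractional precision to which the two huge terms had to be matched:
tuning = observed / bare_term            # about 1.6e-28
```

Change the last few of the 33 digits of `bare_term` and `observed` jumps by orders of magnitude; that extreme sensitivity of the output to the input is the naturalness problem in miniature.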

Most theorists who have tried to address the naturalness problem have tried adding new principles, and consequently new particles, to the Standard Model’s equations, so that this extreme cancellation is no longer necessary, or so that the cancellation is automatic, or something to this effect. Their suggestions have included supersymmetry, warped extra dimensions, little Higgs, etc…. but importantly, these examples are only natural if the lightest of the new particles that they predict have masses that are around or below 1 TeV/c², and must therefore be directly observable at the LHC (with a few very interesting exceptions, which I’ll talk about some other time). The details are far too complex to go into here, but the constraints from what was not discovered at the LHC in 2011-2012 imply that most of these examples don’t work perfectly. Some partial non-automatic cancellation, not at one part in a gazillion but at one part in 100, seems to be necessary for almost all of the suggestions made up to now.

So what are we to think of this?

• Maybe one of the few examples that is entirely natural and is still consistent with current data is correct, and will turn up at the LHC in 2015 or 2016 or so, when the LHC begins running at higher energy per collision than was available in 2011-2012.
• Maybe one of the examples that isn’t entirely natural is correct. After all, one part in 100 isn’t awful to contemplate, unlike one part in a gazillion. We do know of other weird things about the world that are improbable, such as the fact that the Sun and the Moon appear to be almost exactly the same size in the Earth’s sky. So maybe our universe is slightly non-generic, and therefore discoveries of new particles that we might have expected to see in 2011-2012 are going to be delayed until 2015 or beyond.
• Maybe naturalness is simply not a good guide to guessing our universe’s laws, perhaps because the universe’s history, or its structure, forced it to be extremely non-generic, or perhaps because the universe as a whole is generic but huge and variegated (this is often called a “multiverse”, but be careful, because that word is used in several very different ways — see here for discussion) and we can only live in an extremely non-generic part of it.
• Maybe naturalness is not a good guide because there’s something wrong with the naturalness argument, perhaps because quantum field theory itself, on which the argument rests, or some other essential assumption, is breaking down.

Some of the most important issues at this conference are: how can we determine experimentally which of these possibilities is correct (or whether another we haven’t thought of is correct)? In this regard, what measurements do we need to make at the LHC in 2015 and beyond? What theoretical directions concerning naturalness have been underexplored, and might any of them suggest new measurements at LHC (or elsewhere) that have not yet been attempted?

I am afraid my time is too limited to report on highlights. Most of the progress reported at this conference has been incremental rather than major steps; there weren’t any big new solutions to the naturalness problem proposed.  But it has been a good opportunity for an exchange of ideas among theorists and experimentalists, with a number of new approaches to LHC measurements being presented and discussed, and with some interesting conversation regarding the theoretical and conceptual issues surrounding naturalness, selection bias (sometimes called “anthropics”), and the behavior of quantum field theory.

Filed under: LHC News, Particle Physics Tagged: atlas, cms, Higgs, LHC, naturalness

## November 16, 2014

### The n-Category Cafe

Jaynes on Mathematical Courtesy

In the last years of his life, fierce Bayesian Edwin Jaynes was working on a large book published posthumously as Probability Theory: The Logic of Science (2003). Jaynes was a lively writer. In an appendix on “Mathematical formalities and style”, he really let rip, railing against modern mathematical style. Here’s a sample:

Nowadays, if you introduce a variable $x$ without repeating the incantation that it is in some set or ‘space’ $X$, you are accused of dealing with an undefined problem. If you differentiate a function $f(x)$ without first having stated that it is differentiable, you are accused of lack of rigor. If you note that your function $f(x)$ has some special property natural to the application, you are accused of lack of generality. In other words, every statement you make will receive the discourteous interpretation.

Discuss.

This is taken from the final section of this appendix, on “Mathematical courtesy”. Here’s most of the rest of it:

Obviously, mathematical results cannot be communicated without some decent standards of precision in our statements. But a fanatical insistence on one particular form of precision and generality can be carried so far that it defeats its own purpose; 20th century mathematics often degenerates into an idle adversary game instead of a communication process.

The fanatic is not trying to understand your substantive message at all, but only trying to find fault with your style of presentation. He will strive to read nonsense into what you are saying, if he can possibly find any way of doing so. In self-defense, writers are obliged to concentrate their attention on every tiny, irrelevant, nit-picking detail of how things are said rather than on what is said. The length grows; the content shrinks.

Mathematical communication would be much more efficient and pleasant if we adopted a different attitude. For one who makes the courteous interpretation of what others write, the fact that $x$ is introduced as a variable already implies that there is some set $X$ of possible values. Why should it be necessary to repeat that incantation every time a variable is introduced, thus using up two symbols where one would do? (Indeed, the range of values is usually indicated more clearly at the point where it matters, by adding conditions such as ($0 \lt x \lt 1$) after an equation.)

For a courteous reader, the fact that a writer differentiates $f(x)$ twice already implies that he considers it twice differentiable; why should he be required to say everything twice? If he proves proposition $A$ in enough generality to cover his application, why should he be obliged to use additional space for irrelevancies about the most general possible conditions under which $A$ would be true?

A scourge as annoying as the fanatic is his cousin, the compulsive mathematical nitpicker. We expect that an author will define his technical terms, and then use them in a way consistent with his definitions. But if any other author has ever used the term with a slightly different shade of meaning, the nitpicker will be right there accusing you of inconsistent terminology. The writer has been subjected to this many times; and colleagues report the same experience.

Nineteenth century mathematicians were not being nonrigorous by their style; they merely, as a matter of course, extended simple civilized courtesy to others, and expected to receive it in return. This will lead one to try to read sense into what others write, if it can possibly be done in view of the whole context; not to pervert our reading of every mathematical work into a witch-hunt for deviations from the Official Style.

Therefore […] we issue the following:

Emancipation Proclamation

Every variable $x$ that we introduce is understood to have some set $X$ of possible values. Every function $f(x)$ that we introduce is understood to be sufficiently well-behaved so that what we do with it makes sense. We undertake to make every proof general enough to cover the application we make of it. It is an assigned homework problem for the reader who is interested in the question to find the most general conditions under which the result would hold.

We could convert many 19th century mathematical works to 20th century standards by making a rubber stamp containing this Proclamation, with perhaps another sentence using the terms ‘sigma-algebra, Borel field, Radon-Nikodym derivative’, and stamping it on the first page.

Modern writers could shorten their works substantially, with improved readability and no decrease in content, by including such a Proclamation in the copyright message, and writing thereafter in the 19th century style. Perhaps some publishers, seeing these words, may demand that they do this for economic reasons; it would be a service to science.

### Michael Schmitt - Collider Blog

Quark contact interactions at the LHC

So far, no convincing sign of new physics has been uncovered by the CMS and ATLAS collaborations. Nonetheless, the scientists continue to look using a wide variety of approaches. For example, a monumental work on the coupling of the Higgs boson to vector particles has been posted by the CMS Collaboration (arXiv:1411.3441). The authors conducted a thorough and very sophisticated statistical analysis of the kinematic distributions of all relevant decay modes, with the conclusion that the data for the Higgs boson are fully consistent with the standard model expectation. The analysis and article are too long for a blog post, however, so please see the paper if you want to learn the details.

The ATLAS Collaboration posted a paper on generic searches for new physics signals based on events with three leptons (e, μ and τ). This paper (arXiv:1411.2921) is a longish one describing a broad-based search with several categories of events defined by lepton flavor and charge and other event properties. In all categories the observation confirms the predictions based on standard model processes: the smallest p-value is 0.05.

A completely different search for new physics based on a decades-old concept was posted by CMS (arXiv:1411.2646). We all know that the Fermi theory of weak interactions starts with a so-called contact interaction characterized by an interaction vertex with four legs. The Fermi constant serves to parametrize the interaction, and the participation of a vector boson is immaterial when the energy of the interaction is low compared to the boson mass. This framework is the starting point for other effective theories, and has been employed at hadron colliders when searching for deviations in quark-quark interactions, as might be observable if quarks were composite.

The experimental difficulty in studying high-energy quark-quark scattering is that the energies of the outgoing quarks are not so well measured as one might like. (First, the hadronic jets that materialize in the detector do not precisely reflect the quark energies, and second, jet energies cannot be measured better than a few percent.) It pays, therefore, to avoid using energy as an observable and to get the most out of angular variables, which are well measured. Following analyses done at the Tevatron, the authors use a variable χ = exp(|y1-y2|), which is a simple function of the quark scattering angle in the center-of-mass frame. The distribution of events in χ can be unambiguously predicted in the standard model and in any other hypothetical model, and confronted with the data. So we have a nice case for a goodness-of-fit test and pairwise hypothesis testing.
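For concreteness, the angular variable can be built from the two leading jets like this (a sketch using the standard definition of rapidity; the variable names and example kinematics are mine):

```python
import math

def rapidity(E, pz):
    """Rapidity y = 0.5 * ln((E + pz) / (E - pz)) of a jet with energy E and
    longitudinal momentum pz (same units)."""
    return 0.5 * math.log((E + pz) / (E - pz))

def chi(y1, y2):
    """Dijet angular variable chi = exp(|y1 - y2|); chi = 1 for jets at equal
    rapidity, growing for forward-backward (small-angle) scattering."""
    return math.exp(abs(y1 - y2))

# Example with invented jet kinematics (GeV):
y_a = rapidity(500.0, 400.0)    # a fairly forward jet
y_b = rapidity(500.0, -300.0)   # a backward jet
x = chi(y_a, y_b)
```

Rutherford-like t-channel QCD scattering is nearly flat in χ, while a contact interaction piles events up at low χ, which is why the shape of this distribution is such a clean discriminant.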

The traditional parametrization of the interaction Lagrangian is (written here in the standard convention for such searches):

$L_{qq} = \frac{2\pi}{\Lambda^2}\left[\eta_{LL}(\bar{q}_L\gamma^\mu q_L)(\bar{q}_L\gamma_\mu q_L) + \eta_{RR}(\bar{q}_R\gamma^\mu q_R)(\bar{q}_R\gamma_\mu q_R) + 2\eta_{RL}(\bar{q}_R\gamma^\mu q_R)(\bar{q}_L\gamma_\mu q_L)\right]$

where the η parameters have values -1, 0, +1 and specify the chirality of the interaction; the key parameter is the mass scale Λ. An important detail is that this interaction Lagrangian can interfere with the standard model piece, and the interference can be either destructive or constructive, depending on the values of the η parameters.

The analysis proceeds exactly as one would expect: events must have at least two jets, and when there are more than two, the two highest-pT jets are used and the others ignored. Distributions of χ are formed for several ranges of di-jet invariant mass, MJJ, which extends as high as 5.2 TeV. The measured χ distributions are unfolded, i.e., the effects of detector resolution are removed from the distribution on a statistical basis. The main sources of systematic uncertainty come from the jet energy scale and resolution and are based on an extensive parametrization of jet uncertainties.
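The unfolding step can be illustrated in its very simplest “bin-by-bin” form, where each measured bin is scaled by a simulation-derived correction; the real analysis uses a more sophisticated procedure, so this is only a cartoon with invented numbers:

```python
def bin_by_bin_unfold(measured, reco_mc, truth_mc):
    """Scale each measured bin by the MC ratio truth/reco to undo resolution
    effects on average. Adequate only when bin-to-bin migrations are small;
    real analyses use regularized matrix unfolding instead."""
    return [m * t / r for m, r, t in zip(measured, reco_mc, truth_mc)]

# Invented chi-distribution counts: in MC, smearing depletes the first bin
# and inflates the last, so unfolding pushes the data the other way.
unfolded = bin_by_bin_unfold(
    measured=[90.0, 210.0, 300.0],
    reco_mc=[85.0, 205.0, 310.0],
    truth_mc=[95.0, 200.0, 305.0],
)
```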

Since one is looking for deviations with respect to the standard model prediction, it is very important to have an accurate prediction. Higher-order terms must be taken into account; these are available at next-to-leading order (NLO). In fact, even electroweak corrections are important and amount to several percent as a strong function of χ — see the plot on the right. The scale uncertainties are a few percent (again showing that a very precise SM prediction is non-trivial even for pp→2J) and fortunately the PDF uncertainties are small, at the percent level. Theoretical uncertainties dominate for MJJ near 2 TeV, while statistical uncertainties dominate for MJJ above 4 TeV.

The money plot is this one:

Optically speaking, the plot is not exciting: the χ distributions are basically flat and deviations due to a mass scale Λ = 10 TeV would be mild. Such deviations are not observed. Notice, though, that the electroweak corrections do improve the agreement with the data in the lowest χ bins. Loosely speaking, this improvement corresponds to about one standard deviation and therefore would be significant if CMS actually had evidence for new physics in these distributions. As far as limits are concerned, the electroweak corrections are “worth” 0.5 TeV.

The statistical (in)significance of any deviation is quantified by a ratio of log-likelihoods: q = -2 ln(L_SM+NP / L_SM), where SM stands for standard model and NP for new physics (i.e., one of the distinct possibilities given in the interaction Lagrangian above). Limits are derived on the mass scale Λ depending on assumed values for the η parameters; they are very nicely summarized in this graph:
The limits for contact interactions are roughly at the 10 TeV scale — well beyond the center-of-mass energy of 8 TeV. I like this way of presenting the limits: you see the expected value (black dashed line) and an envelope of expected statistical fluctuations from this expectation, with the observed value clearly marked as a red line. All limits are slightly more stringent than the expected ones (these are not independent of course).
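The log-likelihood ratio q can be sketched for a single counting experiment with Poisson statistics (the real analysis sums over all χ and MJJ bins and includes nuisance parameters; every number below is invented):

```python
import math

def poisson_loglike(n_obs, mu):
    """Poisson log-likelihood ln L = n ln(mu) - mu, dropping the n! constant."""
    return n_obs * math.log(mu) - mu

def q_statistic(n_obs, mu_sm, mu_np):
    """q = -2 ln( L(SM+NP) / L(SM) ): large positive q means the data prefer
    the standard model alone over the new-physics hypothesis."""
    return -2.0 * (poisson_loglike(n_obs, mu_sm + mu_np)
                   - poisson_loglike(n_obs, mu_sm))

q_no_excess = q_statistic(n_obs=100, mu_sm=100.0, mu_np=20.0)  # data match SM
q_excess    = q_statistic(n_obs=120, mu_sm=100.0, mu_np=20.0)  # data match SM+NP
```

When the data sit on the SM expectation, q is positive and the new-physics hypothesis is disfavored, which is how an observed limit on Λ tighter than expected comes about.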

The authors also considered models of extra spatial dimensions and place limits on the scale of the extra dimensions at the 7 TeV level.

So, absolutely no sign of new physics here. The LHC will turn on in 2015 at a significantly higher center-of-mass energy (13 TeV), and given the ability of this analysis to probe mass scales well above the proton-proton collision energy, a study of the χ distribution will be interesting.

### Clifford V. Johnson - Asymptotia

Nerd-Off Results
So I'm supposed to be writing 20 slides for a colloquium so let me see if I get this right really fast:- First round, the Koch Brothers bested the Justice League and Ultron was beaten up by Inspector Gadget meanwhile Ice Cube trumped Mr. Rogers and Stephen Hawking battled Charles Darwin but the audience loved them so much that they were asked to team up for the next round (before which Jon Snow did standup in the break) and in which they lost to Inspector Gadget who [...]

### Lubos Motl - string vacua and pheno

CMS: locally 2.6 or 2.9 sigma excess for another $$560\GeV$$ Higgs boson $$A$$
And there are theoretical reasons why this could be the right mass

Yesterday, the CMS Collaboration at the LHC published the results of a new search:
Search for a pseudoscalar boson $$A$$ decaying into a $$Z$$ and an $$h$$ boson in the $$\ell^+\ell^- \bar b b$$ final state
They look at collisions with the $$\ell\ell bb$$ final state and interpret them within two-Higgs-doublet model scenarios.

There are no stunning excesses in the data.

But I think it's always a good idea to point out what is the most significant excess they see in the data, and the CMS folks do just that in this paper, too.

On page 10, one may see Figure 4 and Figure 5 that show the main results.

According to Figure 4, a new Higgs boson with $$\Gamma=0$$ has some cross section (multiplied by the branching ratio) that stays within the 2-sigma band but reveals a deficit "slightly exceeding 2 sigma" for $$m_A=240\GeV$$ and slight 2-sigma excesses for $m_A = 260\GeV, \quad 315\GeV, \quad 560 \GeV.$ And let's not forget about a different CMS search that suggested $$m_H=137\GeV$$.

The excess for $$m_A=560\GeV$$ has the local significance of 2.6 sigma which reduces to just 1.1 sigma "globally", after the look-elsewhere-effect correction.

As Figure 5 (which is similar but fuzzier) shows, this excess for $$m_A=560\GeV$$ becomes even larger, 2.9 sigma (or 1.6 sigma globally) if we assume a larger decay width of this $$A$$ boson, namely $$\Gamma=30\GeV$$. The significance levels are mentioned in the paper, too.

That is somewhat intriguing. If there's another search for such bosons, don't forget to look for similar excesses at this mass. But it's nothing to lose your sleep over, of course.

Recall that the minimal supersymmetric standard model – a special, more motivated subclass of the two-Higgs-doublet model – predicts five Higgs particles because $$8-3=5$$: eight a priori real scalar degrees of freedom minus the three eaten by the broken symmetry generators.

These 5 bosons may be denoted $$h,H,A,H^\pm$$. The first three bosons are neutral, the last two are charged. $$A$$ is the only CP-odd CP-eigenstate.

If you want to get excited by a paper/talk that "predicted" this $$m_A=560\GeV$$ while $$m_h=125\GeV$$, open this June 2014 talk
The post-Higgs MSSM scenario
by Abdelhak Djouadi of CNRS Paris. On page 13, he deduces that a "best fit" in the MSSM has $$\tan\beta=1, \quad m_A = 560\GeV, \quad m_h = 125\GeV, \quad m_H = 580\GeV, \quad m_{H^\pm} = 563\GeV,$$ although the sentence right beneath that indicates that the author thinks that many other points are rather good fits, too. Good luck to that prediction, anyway. ;-)

The very same scenario with the same values of the masses is also defended in this May 2014 paper by Jérémie Quevillon who argues that these values of the new Higgses are almost inevitable consequences of supersymmetry given the superpartner masses' being above $$1\TeV$$.

It sounds cool despite the fact that the simplest, truly MSSM-based scenarios corresponding to their "best fit" involve superpartners around $$100\TeV$$. The discovery of the Higgses near $$560\GeV$$ in 2015 would be circumstantial evidence in favor of supersymmetry, nevertheless.

Update: Abdelhak Djouadi told me that their scenario only predicts a cross section of some 0.5 fb (with the factors added) but one needs about 5 fb to explain the excess above. So it's bad news.

### ZapperZ - Physics and Physicists

"Should I Go Into Physics Or Engineering?"
I get asked that question a lot, and I also see similar questions on Physics Forums. Kids who are either still in high school or starting their undergraduate years ask which area of study they should pursue. In fact, I've seen cases where students ask whether they should do "theoretical physics" or "engineering", as if there is nothing in between those two extremes!

My response has always been consistent. I ask them why they can't have their cake and eat it too.

This question often arises out of ignorance of what physics really encompasses. Many people, especially high school students, still think of physics as being this esoteric subject matter, dealing with elementary particles, cosmology, wave-particle duality, etc., things that they don't see involving everyday stuff. On the other hand, engineering involves things that they use and deal with every day, where the products are often found around them. So obviously, with such an impression, those two areas of study are very different and very separate.

I try to tackle such a question by correcting their misleading understanding of what physics is and what a lot of physicists do. I tell them that physics isn't just the LHC or the Big Bang. It is also your iPhone, your medical x-ray, your MRI, your hard drive, your silicon chips, etc. In fact, the largest fraction of practicing physicists work in the field of condensed matter physics/material science, an area of physics that studies the basic properties of materials, the same ones that are used in modern electronics. I point them to the many Nobel Prizes in Physics awarded to condensed matter physicists or for the invention of practical items (graphene, lasers, etc.). So already, doing physics and doing something "practical and useful" need not be mutually exclusive.

Secondly, I point to different areas of physics in which physics and engineering smoothly intermingle. I've mentioned the field of accelerator physics before, in which you see both physics and engineering come into play. In fact, in this field, you have both physicists and electrical engineers, and they often do the same thing. The same can be said about those in instrumentation/device physics. In fact, I have also seen many high energy physics graduate students working on detectors for particle colliders who look more like electronics engineers than physicists! So for those working in this field, the line between doing physics and doing engineering is sufficiently blurred. You can do exactly what you want, leaning as heavily towards the physics side or the engineering side as you want, or straddling exactly in the middle. And you can approach these fields either from a physics major or an electrical engineering major. The point here is that there are areas of study in which you can do BOTH physics and engineering!

Finally, the reason you don't have to choose to major in either physics or engineering is that many schools offer a major in BOTH! My alma mater, the University of Wisconsin-Madison (Go Badgers!) has a major called AMEP - Applied Mathematics, Engineering, and Physics - where, with your advisor, you can tailor a major that straddles two or more of the areas of math, physics, and engineering. There are other schools that offer majors in Engineering Physics or something similar. In other words, you don't have to choose between physics or engineering. You can just do BOTH!

Zz.

### Tommaso Dorigo - Scientificblogging

A New Search For The A Boson With CMS
I am quite happy to report today that the CMS experiment at the CERN Large Hadron Collider has just published a new search which fills a gap in studies of extended Higgs boson sectors. It is a search for the decay of the A boson into Zh pairs, where the Z in turn decays to an electron-positron or a muon-antimuon pair, and the h is assumed to be the 125 GeV Higgs and is sought for in its decay to b-quark pairs.

If you are short of time, this is the bottom line: no A boson is found in Run 1 CMS data, and limits are set in the parameter space of the relevant theories. But if you have a bit more time to spend here, let's start at the beginning: what's the A boson, you might wonder.

read more

## November 15, 2014

### Lubos Motl - string vacua and pheno

Is our galactic black hole a neutrino factory?
When I was giving a black hole talk two days ago, I would describe Sagittarius A*, a black hole in the center of the Milky Way, our galaxy, as our "most certain" example of an astrophysical black hole that is actually observed in the telescopes. Its mass is 4 million solar masses – the object is not a negligible dwarf.

Incidentally, a term paper and presentation I did at Rutgers more than 15 years ago was about Sgr A*. Of course, I had no doubt it was a black hole at that time.

Today, science writers affiliated with all the usual suspects (e.g. RT) would run the story that Sgr A* is a high-energy neutrino factory.

Why now? Well, a relevant paper got published in Physical Review D. Again, it wasn't today, it was almost 2 months ago, but a rational justification of the explosion of hype in the middle of November 2014 simply doesn't exist. Someone at NASA helped the media explode – by this press release – and they did explode, copying from each other in the usual way.

The actual paper was published as the July 2014 preprint
Neutrino Lighthouse at Sagittarius A*
by Bai, Barger squared, Lu, Peterson, and Salvado. Their main argument in favor of the bizarrely sounding claim that "Sgr A* produces high-energy neutrinos" comes from something that looks like a timing coincidence.

The Chandra X-ray Observatory and its NuSTAR and Swift friends – all in space – detected some outbursts or flares between 2010 and 2013. And the timing and (limited data about the) locations seemed remarkably close to some detections of high-energy neutrinos by IceCube at the South Pole.

IceCube saw an exceptional neutrino 2-3 hours before a remarkable X-ray flare seen in the space X-ray telescopes, and so on. The confidence level is just around 99%. Yes, the word "before" sounds like the stories about OPERA that would detect "faster than light" neutrinos.

To my taste, the confidence level supporting the arguments is lousy. But even if I accept the possibility that the neutrinos are coming from the direction of Sgr A*, they're almost certainly not due to the black hole itself. Or at least, I would be stunned if the event horizon – which is what allows us to call the object a black hole – were needed for the emission of these high-energy neutrinos.

In particular, I emphasize that the Hawking radiation of such macroscopic black holes should be completely negligible, emitting virtually no massive particles (and neutrinos are light from some viewpoints but very massive relative to the typical Hawking quanta).

It seems much more likely to me that the X-rays as well as (possibly) the neutrinos are due to some messy astrophysical effects in the vicinity of the black hole. What are these astrophysical effects?

They propose that the neutrinos are created by decays of charged pions – which seems like a very likely origin of neutrinos to me (at least if one assumes that physics beyond the Standard Model is not participating). But these charged pions are there independently of the event horizon, aren't they? If the neutrinos arise from decaying charged pions near the black hole, there should also be neutral pions, and their decays should produce gamma rays (near a TeV) which should be visible to the CTA, HAWC, H.E.S.S. and VERITAS experiments, they say.

At this moment, the paper has 3 citations.

The first one, by Brian Vlček et al. (sorry, it is vastly easier to choose the Czech name and write this complicated disclaimer than to remember the non-Czech name), refers to an IceCube statement that the origin of the neutrinos could be LS 5039, a binary object, which is clearly distinct from Sgr A*, but I guess it's close enough. Correct me if I misunderstood something about the apparent identification of these two explanations.

Murase talks about the neutrino flux around the Fermi bubbles in the complicated galactic central environment. These thoughts have the greatest potential to be relevant for fundamental physics, I think. Esmaili et al. count the paper about the "neutrino lighthouse" among 15 or so "speculative" papers ignited by IceCube's surprising observation of high-energy neutrinos.

So I do think that this lighthouse neutrino paper was overhyped, much like most papers that attract the journalists' attention, but sometimes it's good if random papers are reported in the media as long as they are not completely pathetic, and this one arguably isn't "quite" pathetic.

### Clifford V. Johnson - Asymptotia

Nerd Judgement
I’ve judged poetry battles a number of times, essay competitions, art displays… but never Nerd-offs. Until tonight. Come to the Tournament of Nerds around midnight tonight at the Upright Citizens Brigade. I’ll be one of the guest judges. I’ve no idea what I’m supposed to do, and my core “nerd” and … Click to continue reading this post

### John Baez - Azimuth

A Second Law for Open Markov Processes

guest post by Blake Pollard

What comes to mind when you hear the term ‘random process’? Do you think of Brownian motion? Do you think of particles hopping around? Do you think of a drunkard staggering home?

Today I’m going to tell you about a version of the drunkard’s walk with a few modifications. Firstly, we don’t have just one drunkard: we can have any positive real number of drunkards. Secondly, our drunkards have no memory; where they go next doesn’t depend on where they’ve been. Thirdly, there are special places, such as entrances to bars, where drunkards magically appear and disappear.

The second condition says that our drunkards satisfy the Markov property, making their random walk into a Markov process. The third condition is really what I want to tell you about, because it makes our Markov process into a more general ‘open Markov process’.

There is a collection of places the drunkards can be, for example:

$V= \{ \text{bar},\text{sidewalk}, \text{street}, \text{taco truck}, \text{home} \}$

We call this set $V$ the set of states. There are certain probabilities associated with traveling between these places. We call these transition rates. For example it is more likely for a drunkard to go from the bar to the taco truck than to go from the bar to home so the transition rate between the bar and the taco truck should be greater than the transition rate from the bar to home. Sometimes you can’t get from one place to another without passing through intermediate places. In reality the drunkard can’t go directly from the bar to the taco truck: he or she has to go from the bar to sidewalk to the taco truck.

This information can all be summarized by drawing a directed graph where the positive numbers labelling the edges are the transition rates:

For simplicity we draw only three states: home, bar, taco truck. Drunkards go from home to the bar and back, but they never go straight from home to the taco truck.

We can keep track of where all of our drunkards are using a vector with 3 entries:

$\displaystyle{ p(t) = \left( \begin{array}{c} p_h(t) \\ p_b(t) \\ p_{tt}(t) \end{array} \right) \in \mathbb{R}^3 }$

We call this our population distribution. The first entry $p_h$ is the number of drunkards that are at home, the second $p_b$ is how many are at the bar, and the third $p_{tt}$ is how many are at the taco truck.

There is a set of coupled, linear, first-order differential equations we can write down using the information in our graph that tells us how the number of drunkards in each place changes with time. This is called the master equation:

$\displaystyle{ \frac{d p}{d t} = H p }$

where $H$ is a 3×3 matrix which we call the Hamiltonian. The off-diagonal entries are nonnegative:

$H_{ij} \geq 0, i \neq j$

and the columns sum to zero:

$\sum_i H_{ij}=0$

We call a matrix satisfying these conditions infinitesimal stochastic. Stochastic matrices have columns that sum to one. If we take the exponential of an infinitesimal stochastic matrix we get a stochastic matrix, whose columns sum to one; hence the label ‘infinitesimal’.

The Hamiltonian for the graph above is

$H = \left( \begin{array}{ccc} -2 & 5 & 10 \\ 2 & -12 & 0 \\ 0 & 7 & -10 \end{array} \right)$
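As a quick sanity check (my own Python sketch, not part of the original post), we can verify that this matrix is infinitesimal stochastic and that the master equation it generates conserves the total number of drunkards:

```python
import numpy as np

# Hamiltonian from the graph; states ordered (home, bar, taco truck)
H = np.array([[-2.0,   5.0,  10.0],
              [ 2.0, -12.0,   0.0],
              [ 0.0,   7.0, -10.0]])

# Infinitesimal stochastic: off-diagonal entries are nonnegative...
off_diag = H[~np.eye(3, dtype=bool)]
assert (off_diag >= 0).all()
# ...and every column sums to zero.
assert np.allclose(H.sum(axis=0), 0.0)

# Evolve the master equation dp/dt = H p with small Euler steps.
p = np.array([30.0, 10.0, 5.0])   # initial populations (home, bar, taco truck)
dt, steps = 1e-4, 20000
for _ in range(steps):
    p = p + dt * (H @ p)

# The total population is conserved because the columns of H sum to zero.
print(round(p.sum(), 6))  # 45.0
```

Conservation of the total population is exactly the statement that the columns of $H$ sum to zero: summing the master equation over states gives $\frac{d}{dt}\sum_i p_i = \sum_{i,j} H_{ij} p_j = 0$.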

John has written a lot about Markov processes and infinitesimal stochastic Hamiltonians in previous posts.

Given two vectors $p,q \in \mathbb{R}^3$ describing the populations of drunkards which obey the same master equation, we can calculate the relative entropy of $p$ relative to $q$:

$\displaystyle{ S(p,q) = \sum_{ i \in V} p_i \ln \left( \frac{p_i}{q_i} \right) }$

This is an example of a ‘divergence’. In statistics, a divergence is a way of measuring the distance between probability distributions; it may not be symmetric and may not even obey the triangle inequality.

The relative entropy is important because it decreases monotonically with time, making it a Lyapunov function for Markov processes. Indeed, it is a well-known fact that

$\displaystyle{ \frac{dS(p(t),q(t) ) } {dt} \leq 0 }$

This is true for any two population distributions which evolve according to the same master equation, though you have to allow infinity as a possible value for the relative entropy and negative infinity for its time derivative.

Why is entropy decreasing? Doesn’t the Second Law of Thermodynamics say entropy increases?

Don’t worry: the reason is that I have not put a minus sign in my definition of relative entropy. Put one in if you like, and then it will increase. Without the minus sign it’s sometimes called the Kullback–Leibler divergence. It decreases with the passage of time, saying that any two population distributions $p(t)$ and $q(t)$ get ‘closer together’ as they get randomized.
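We can watch this happen numerically. Here is a minimal sketch (my own illustration, not from the post) that evolves two population distributions under the same Hamiltonian from the drunkards' graph and checks that their relative entropy never increases:

```python
import numpy as np

# Hamiltonian from the drunkards' graph (home, bar, taco truck)
H = np.array([[-2.0,   5.0,  10.0],
              [ 2.0, -12.0,   0.0],
              [ 0.0,   7.0, -10.0]])

def relative_entropy(p, q):
    """S(p, q) = sum_i p_i ln(p_i / q_i)."""
    return float(np.sum(p * np.log(p / q)))

# Two different distributions evolving under the same master equation.
p = np.array([30.0, 10.0, 5.0])
q = np.array([5.0, 25.0, 15.0])

dt, steps = 1e-4, 5000
entropies = []
for _ in range(steps):
    entropies.append(relative_entropy(p, q))
    p = p + dt * (H @ p)
    q = q + dt * (H @ q)

# S(p(t), q(t)) decreases at every step: it is a Lyapunov function.
print((np.diff(entropies) < 0).all())  # True
```

The two distributions relax toward (scaled copies of) the same equilibrium, so their relative entropy drains away toward zero.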

That itself is a nice result, but I want to tell you what happens when you allow drunkards to appear and disappear at certain states. Drunkards appear at the bar once they’ve had enough to drink, and once they are home for long enough they can disappear. The set of places where drunkards can appear or disappear, $B$, is called the set of boundary states. So for the above process

$B = \{ \text{home},\text{bar} \}$

is the set of boundary states. This changes the way in which the population of drunkards changes with time!

The drunkards at the taco truck obey the master equation. For them,

$\displaystyle{ \frac{dp_{tt}}{dt} = 7p_b -10 p_{tt} }$

still holds. But because the populations can appear or disappear at the boundary states the master equation no longer holds at those states! Instead it is useful to define the flow of drunkards into the $i^{th}$ state by

$\displaystyle{ \frac{Dp_i}{Dt} = \frac{dp_i}{dt}-\sum_j H_{ij} p_j}$

This quantity describes how much the rate of change of the populations at the boundary states differs from that given by the master equation.
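For a concrete feel, here is a small sketch (my own; the "measured" rates of change below are made-up numbers) computing the flow at each state. It is just the observed rate of change minus the master-equation prediction, and it vanishes at internal states:

```python
import numpy as np

# Hamiltonian from the drunkards' graph (home, bar, taco truck)
H = np.array([[-2.0,   5.0,  10.0],
              [ 2.0, -12.0,   0.0],
              [ 0.0,   7.0, -10.0]])

p = np.array([30.0, 10.0, 5.0])      # populations (home, bar, taco truck)

# Hypothetical measured rates of change: drunkards disappear at home and
# appear at the bar; the taco truck (internal) obeys the master equation,
# so its rate is exactly (H p)_tt = 7*10 - 10*5 = 20.
dp_dt = np.array([-15.0, 25.0, 20.0])

# Flow into each state: Dp_i/Dt = dp_i/dt - sum_j H_ij p_j
flow = dp_dt - H @ p
print(flow.tolist())  # [-55.0, 85.0, 0.0]
```

The nonzero entries live only at the boundary states (home and bar); the internal state's flow is zero because the master equation still holds there.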

The reason we are interested in open Markov processes is that you can take two open Markov processes and glue them together along some subset of their boundary states to get a new open Markov process! This allows us to build up or break down complicated Markov processes using open Markov processes as the building blocks.

For example we can draw the graph corresponding to the drunkards’ walk again, only now we will distinguish boundary states from internal states by coloring internal states blue and having boundary states be white:

Consider another open Markov process with states

$V=\{ \text{home},\text{work},\text{bar} \}$

where

$B=\{ \text{home}, \text{bar}\}$

are the boundary states, leaving

$I=\{\text{work}\}$

as an internal state:

Since the boundary states of this process overlap with the boundary states of the first process we can compose the two to form a new Markov process:

Notice the boundary states are now internal states. I hope any Markov process that could approximately model your behavior has more interesting nodes! There is a nice way to figure out the Hamiltonian of the composite from the Hamiltonians of the pieces, but we will leave that for another time.

We can ask ourselves, how does relative entropy change with time in open Markov processes? You can read my paper for the details, but here is the punchline:

$\displaystyle{ \frac{dS(p(t),q(t) ) }{dt} \leq \sum_{i \in B} \frac{Dp_i}{Dt}\frac{\partial S}{\partial p_i} + \frac{Dq_i}{Dt}\frac{\partial S}{\partial q_i} }$

This is a version of the Second Law of Thermodynamics for open Markov processes.

It is important to notice that the sum is only over the boundary states! This inequality tells us that relative entropy still decreases inside our process, but depending on the flow of populations through the boundary states, the relative entropy of the whole process could either increase or decrease! This inequality will be important when we study how the relative entropy changes in different parts of a bigger, more complicated process.
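Since the internal states obey the master equation, subtracting the boundary terms from $dS/dt$ leaves exactly the closed-process expression $\sum_i (Hp)_i \, \partial S/\partial p_i + (Hq)_i \, \partial S/\partial q_i$, with $\partial S/\partial p_i = \ln(p_i/q_i) + 1$ and $\partial S/\partial q_i = -p_i/q_i$; the inequality says this is nonpositive. Here is a small numerical sketch of that rearranged form (my own check, using the three-state Hamiltonian from earlier in the post, not code from the paper):

```python
import numpy as np

# Hamiltonian from the drunkards' graph (home, bar, taco truck)
H = np.array([[-2.0,   5.0,  10.0],
              [ 2.0, -12.0,   0.0],
              [ 0.0,   7.0, -10.0]])

def internal_entropy_production(H, p, q):
    """Closed-process expression
         sum_i (Hp)_i dS/dp_i + (Hq)_i dS/dq_i
       with dS/dp_i = ln(p_i/q_i) + 1 and dS/dq_i = -p_i/q_i.
       The open-process Second Law is equivalent to this being <= 0."""
    dS_dp = np.log(p / q) + 1.0
    dS_dq = -p / q
    return float((H @ p) @ dS_dp + (H @ q) @ dS_dq)

# The inequality holds for any positive populations, normalized or not.
rng = np.random.default_rng(0)
worst = max(internal_entropy_production(H,
                                        rng.uniform(0.1, 10.0, 3),
                                        rng.uniform(0.1, 10.0, 3))
            for _ in range(1000))
print(worst <= 1e-9)  # True
```

Equality is approached only when $p$ is proportional to $q$, which is why the randomly drawn worst case stays strictly below zero in practice.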

That is all for now, but I leave it as an exercise for you to imagine a Markov process that describes your life. How many states does it have? What are the relative transition rates? Are there states you would like to spend more or less time in? Are there states you would like to visit?

Here is my paper, which proves the above inequality:

• Blake Pollard, A Second Law for open Markov processes.

If you have comments or corrections, let me know!

## November 14, 2014

### CERN Bulletin

CHIS - Information concerning the health insurance of frontalier workers who are family members of a CHIS main member

We recently informed you that the Organization was still in discussions with the Host State authorities to clarify the situation regarding the health insurance of frontalier workers who are family members (as defined in the Staff Rules and Regulations) of a CHIS main member, and that we were hoping to arrive at a solution soon.

After extensive exchanges, we finally obtained a response a few days ago from the Swiss authorities, with which we are fully satisfied and which we can summarise as follows:

1) Frontalier workers who are currently using the CHIS as their basic health insurance can continue to do so.

2) Family members who become frontalier workers, or those who have not yet exercised their “right to choose” (droit d’option) can opt to use the CHIS as their basic health insurance. To this end, they must complete the form regarding the health insurance of frontaliers, ticking the LAMal box and submitting their certificate of CHIS membership (available from UNIQA).

3) For family members who joined the LAMal system since June 2014, CERN is in contact with the Swiss authorities and the Geneva Health Insurance Service with a view to securing an exceptional arrangement allowing them to leave the LAMal system and use the CHIS as their basic health insurance.

4) People who exercised their “right to choose” and opted into the French Sécurité sociale or the Swiss LAMal system before June 2014 can no longer change, as the decision is irreversible. As family members, however, they remain beneficiaries of the CHIS, which then serves as their complementary insurance.

5) If a frontalier family member uses the CHIS as his or her basic health insurance and the main member concerned ceases to be a member of the CHIS or the relationship between the two ends (divorce or dissolution of a civil partnership), the frontalier must join LAMal.

We hope that this information satisfies your expectations and concerns. We would like to thank the Host State authorities for their help in clarifying these highly complex issues.

We remind you that staff members, fellows and beneficiaries of the CERN Pension Fund must declare the professional situation and health insurance cover of their spouse or partner, as well as any changes in this regard, pursuant to Article III 6.01 of the CHIS Rules. In addition, in cases where a spouse or partner wishes to use the CHIS as his or her basic insurance and receives income from a professional activity or a retirement pension, the main member must pay a supplementary contribution based on the income of the spouse or partner, in accordance with Article III 5.07 of the CHIS Rules. For more information, see www.cern.ch/chis/DCSF.asp.

The CHIS team is on hand to answer any questions you may have on this subject, which you can submit to Chis.Info@cern.ch. The above information, as well as the Note Verbale from the Permanent Mission of Switzerland, is available in the frontaliers section of the CHIS website: www.cern.ch/chis/frontaliers.asp

### CERN Bulletin

Micro club
Opération NEMO   To round off in style the special activities that the CMC has organised during 2014, to commemorate CERN's 60th anniversary and the Micro Club's 30th, this year's Opération NEMO will have a very special character. We will be featuring six first-rate manufacturers, each offering two or three products at exceptional prices. The operation begins on Monday, 17 November 2014 and will run until Saturday, 6 December inclusive. Delivery times will be two to three weeks, depending on the manufacturer, so orders placed during the last week, from 1 to 6 December, may not arrive until the beginning of January 2015. The manufacturers taking part in this final operation of the year are: Apple Computer, Lenovo, Toshiba, Brother, LaCie and Western Digital. For Apple, for example, only the MacBook Pro 15” Retina, in all configurations and with all keyboard options, is part of the operation. For the other manufacturers mentioned, we will have details of their offers from Monday onwards. For any information or to place an order, please send an e-mail to: cmc.orders@cern.ch. Best regards, Your CMC Team.

### CERN Bulletin

France @ CERN | Come and meet 37 French companies at the 2014 “France @ CERN” Event | 1-3 December
The 13th “France @ CERN” event will take place from 1 December to 3 December 2014. Thanks to Ubifrance, the French agency for international business development, 37 French firms will have the opportunity to showcase their know-how at CERN.   These companies are looking forward to meeting you during the B2B sessions which will be held on Tuesday, 2 December (afternoon) and on Wednesday, 3 December (afternoon) in buildings 500 and 61 or at your convenience in your own office. The fair’s opening ceremony will take place on Tuesday, 2 December (morning) in the Council Chamber in the presence of Rolf Heuer, Director-General of CERN and Nicolas Niemtchinow, Ambassador, Permanent Representative of France to the United Nations in Geneva and to international organisations in Switzerland. For more information about the event and the 37 participating French firms, please visit: http://www.la-france-au-cern.com/

### CERN Bulletin

Upcoming renovations in Building 63
La Poste will close its doors in Building 63 on Friday, 28 November and move to Building 510, where it will open on 1 December (see picture).   UNIQA will close its HelpDesk in Building 63 on Wednesday, 26 November and will re-open the next day in Building 510. La Poste and UNIQA are expected to return to their renovated office space between April and May 2015.

### The Great Beyond - Nature blog

Energy outlook sees continuing dominance of fossil fuels

Just as the United States and China agreed on a landmark deal to curb greenhouse-gas emissions, the world’s leading energy think tank says that demand for fossil fuels is likely to keep growing for at least another 20 years.

IEA

In its latest World Energy Outlook, released on 12 November, the Paris-based International Energy Agency (IEA) estimates that global consumption of primary energy — the energy contained in raw fossil fuels — will increase by 37% by 2040, driven mostly by growing demand in Asia, Africa, the Middle East and Latin America.

Crude-oil consumption is expected to rise from the current 90 million barrels a day to 104 million barrels a day, but demand for oil will plateau by 2040, according to IEA scenarios. Coal demand will peak as early as the 2020s, thanks to efforts such as China's to reduce air pollution and carbon emissions. But demand for natural gas, the only fossil fuel still growing after 2040 in the IEA's scenarios, will rise by more than half, the report says.

The output from US shale projects, which has been booming — propelling the country to become the world’s largest producer of oil and gas — is expected to decline in the 2020s, the IEA says. Even so, there are sufficient untapped resources to meet the growth in consumption. And despite a recent slump in the prices of oil and gas, the IEA warns that rising tensions in parts of the Middle East and in Ukraine pose incalculable threats to global energy security.

“A well-supplied oil market in the short-term should not disguise the challenges that lie ahead, as the world is set to rely more heavily on a relatively small number of producing countries,” the IEA’s chief economist Fatih Birol said when the report was released in London. “The apparent breathing space provided by rising output in the Americas over the next decade provides little reassurance.”

Widespread safety concerns over the use of nuclear power mean that few countries — including China, India, Korea and Russia — are planning to increase their installed nuclear capacity. Nearly 200 of the 434 reactors that were operational at the end of 2013 are set to be retired in the period to 2040. Germany and other countries that decided after the Fukushima-Daiichi accident in 2011 to phase out nuclear power altogether are facing the challenge of addressing the resulting shortfall in electricity generation.

No country has as yet found a long-term solution to the problem of disposing of radioactive waste, the IEA notes.

The IEA reckons that renewable sources — mainly wind and solar — will provide nearly half of the global increase in power generation to 2040. By then, low-carbon sources, including nuclear, are expected to supply about a quarter of the global energy consumption.

However, the IEA also predicts that between now and 2040 the world will add 1 trillion tonnes of carbon dioxide to the atmosphere – using up the budget that climate scientists say gives the world a reasonable chance of limiting the rise in global average temperatures to 2˚C or less.

That calculation will sound cynical to the more than half a billion people in sub-Saharan Africa — the regional focus of the report — who live without access to modern energy. Africa’s poorest in fact suffer the most extreme form of energy insecurity in the world, says the IEA.

### ZapperZ - Physics and Physicists

The Physics of Thor's Hammer
Not that you should take any of these seriously, but sometimes, entertainment reading like this can be "fun".

Jim Kakalios, the author of The Physics of Superheroes, has written an article on the physics of Thor's hammer. What I am more interested in are the attempts to explain the initial inconsistencies of what was seen (such as the hammer appearing to be too heavy for anyone to lift, yet not so heavy that it crushed the books and table it was resting on). I find that more fascinating because in many storylines, such inconsistencies are either overlooked or simply brushed aside. To me, that is where the physics is, because someone who notices such inconsistencies is very aware of the physics, i.e. if such-and-such is true, then how come so-and-so doesn't also occur?

Zz.

### Tommaso Dorigo - Scientificblogging

PhD Positions For Chinese Students in Padova
I am using my blog to advertise the opening of PhD positions at Padova University, to work on several research projects and obtain a PhD in Physics. These are offered to Chinese students through the China Scholarship Council. More information is available at this link.
If you are a bright Chinese student who speaks at least some English and is willing to spend three years working in data analysis for Higgs physics in the CMS experiment, I will take you - so what are you waiting for? Applications close soon!

Below is a table with deadlines and information.

read more

## November 13, 2014

### Quantum Diaries

Dark Matters: Creation from Annihilation

Hanging around a pool table might seem like an odd place to learn physics, but a couple of hours on our department’s slanted table could teach you a few things about asymmetry. The third time a pool ball flew off the table and hit the far wall I knew something was broken. The pool table’s refusal to obey the laws of physics gives aspiring physicists a healthy distrust of the simplified mechanics they learnt in undergrad. Whether in explaining why pool balls bounce sideways off lumpy cushions or why galaxies exist, asymmetries are vital to understanding the world around us. Looking at dark matter theories that interact asymmetrically with visible matter can give us new clues as to why matter exists.

Alternatives to the classic WIMP (weakly interacting massive particle) dark matter scenario are becoming increasingly important. Natural supersymmetry is looking less and less likely, and could be ruled out in 2015 by the Large Hadron Collider. Asymmetric dark matter theories provide new avenues to search for dark matter and help explain where the material in our universe comes from: baryogenesis. Baryogenesis is in some ways a more important cosmological problem than dark matter. The Standard Model of particle physics describes all the matter that you are familiar with, from trees to stars, but fails to explain how this matter came to be. In fact, the Standard Model predicts a sparsely populated universe, where most of the matter and antimatter has long since annihilated. In particle colliders, whenever a particle of matter is created, an opposing particle of antimatter is also created. Antimatter is matter with all its charges reversed, like a photo negative. While it is often said that opposites attract, in the particle physics world opposites annihilate. But when we look at the universe around us, all we see is matter. There are no antistars and antiplanets, no antihumans living on some distant world. So if matter and antimatter are always created together, how did this happen? If there were equal amounts of matter and antimatter, each would annihilate the other in the first fractions of a second and our universe would be stillborn. The creation of this asymmetry between matter and antimatter is known as baryogenesis, and is one of the strongest cosmological confirmations of physics beyond the Standard Model. The exact amount of asymmetry determines how much matter, and consequently how many stars and galaxies, exists now.

And what about the other 85% of matter in the universe? This dark matter has only shown itself through gravitational interactions, but it has shaped the evolution of the universe. Dark matter keeps galaxies from tearing themselves apart, and outnumbers visible matter five to one. Five to one is a curious ratio. If dark and visible matter were entirely different substances with completely independent histories, you would not expect almost the same amount of each. This is like counting the number of trees in the world and finding that it’s the same as the number of pebbles. While we know that dark and visible matter are not the same substance (the Standard Model does not include any dark matter candidates), this similarity cannot be ignored. The similarity in abundances suggests that dark and visible matter were created by the same mechanism. As the abundance of matter is determined by the asymmetry between matter and antimatter, this leads us to a relationship between baryogenesis and dark matter.

Asymmetric dark matter theories have attracted significant attention in the last few years, and are now studied by physicists across the world. This has given us a cornucopia of asymmetric dark matter theories. Despite this, there are several common threads and predictions that allow us to test many of them at once. In asymmetric dark matter theories baryogenesis is caused by interactions between dark and normal matter. By having dark matter interact differently with matter and antimatter, we can get marginally more matter in the universe than antimatter. After the matter and antimatter annihilate each other, there is some minuscule amount of matter left standing. These leftovers go on to become the universe you know. Typically, a similar asymmetry between dark matter and its antiparticle is also made, so there is a similar amount of dark matter left over as well. This promotes dark matter from being a necessary yet boring spectator in the cosmic tango to an active participant, saving our universe from desolation. Asymmetric dark matter also provides new ways to search for dark matter, such as neutrinos generated from dark matter in the sun. As asymmetric dark matter interacts with normal matter, large bodies like the sun and the earth can capture a reservoir of dark matter sitting at their core. This can generate ghostlike neutrinos, or provide an obstacle for dark matter in direct detection experiments. Asymmetric dark matter theories can also tell us where we do not expect to see dark matter. A large effort has been made to see tell-tale signs of dark matter annihilating with its antiparticle throughout the universe, but it has yet to meet with success. While experiments like the Fermi space telescope have found potential signals (such as a 130 GeV line in 2012), these signals are ambiguous or fail to survive the test of time. The majority of asymmetric dark matter theories predict that there is no such signal, as all the anti-dark matter has long since been destroyed.

As on the pool table, even little asymmetries can have a profound effect on what we see. While much progress is made from finding new symmetries, we can’t forget the importance of imperfections in science. Asymmetric dark matter can explain where the matter in our universe came from, and gives dark and normal matter a common origin. Dark matter is no longer a passive observer in the evolution of our universe; it plays a pivotal role in the world around us.