Particle Physics Planet

May 27, 2016

Emily Lakdawalla - The Planetary Society Blog

Three-peat! SpaceX sticks another drone ship landing
SpaceX continued its impressive string of first stage recoveries today, sticking a Falcon 9 drone ship landing during the successful launch of THAICOM 8, a communications satellite.

May 27, 2016 11:03 PM

Christian P. Robert - xi'an's og

a bone of contention

“In an age in which ancient genomes can reveal startling links between historical populations, we should ask not just whether remains should be reburied, but who decides and on what grounds.”

An article in Nature described the story of fairly old remains (those of the Kennewick Man) in North America that were claimed for reburial by several Native American groups, even though they were found to be genetically closer to geographically distant groups (from South America, and even to Aboriginal Australians). What I find difficult to understand (while it stands at the centre of the legal dispute) is how any group of individuals can advance a claim on bones that are 8,000 years old. With such a time gap (and assuming the DNA analysis is trustworthy), the number of individuals who count the owner of the bones among their ancestors is presumably very large, and it is hard to imagine all those descendants coming to an agreement about the management of the said bones. Or even that any descendant has any right to the said bones after so many generations, which may have seen major changes in the way deceased members of the community are treated. I am thus surprised that a judicial court or the US government could even consider such requests.

Filed under: pictures Tagged: aborigines, Australia, DNA, Kennewick Man, Native Americans, Nature, USA

by xi'an at May 27, 2016 10:16 PM

Emily Lakdawalla - The Planetary Society Blog

With retry scheduled tomorrow, NASA and Bigelow say BEAM will work—it's just a question of when
NASA will try again tomorrow to expand BEAM, the Bigelow Expandable Activity Module. During a press teleconference this afternoon, officials said they were confident the module was going to expand—it's just a question of when.

May 27, 2016 08:44 PM

Jester - Resonaances

CMS: Higgs to mu tau is going away
One interesting anomaly in LHC run-1 was a hint of Higgs boson decays to a muon and a tau lepton. Such a process is forbidden in the Standard Model by the conservation of muon and tau lepton numbers. Neutrino masses violate individual lepton numbers, but their effect is far too small to affect the Higgs decays in practice. On the other hand, new particles do not have to respect the global symmetries of the Standard Model, and they could induce lepton flavor violating Higgs decays at an observable level. Surprisingly, CMS found a small excess in the Higgs to tau mu search in their 8 TeV data, with the measured branching fraction Br(h→τμ)=(0.84±0.37)%. The analogous measurement in ATLAS is 1 sigma above the background-only hypothesis, Br(h→τμ)=(0.53±0.51)%. Together this merely corresponds to a 2.5 sigma excess, so it's not too exciting in itself. However, taken together with the B-meson anomalies in LHCb, it has raised hopes for lepton flavor violating new physics just around the corner. For this reason, the CMS excess inspired a few dozen theory papers, with Z' bosons, leptoquarks, and additional Higgs doublets pointed out as possible culprits.

Alas, the wind is changing. CMS made a search for h→τμ in their small stash of 13 TeV data collected in 2015. This time they were hit by a negative background fluctuation, and they found Br(h→τμ)=(-0.76±0.81)%. The accuracy of the new measurement is worse than that of run-1, but it nevertheless lowers the combined significance of the excess below 2 sigma. Statistically speaking, the situation hasn't changed much, but psychologically this is very discouraging. A true signal is expected to grow as more data are added, and when it's the other way around it's usually a sign that we are dealing with a statistical fluctuation...
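Out of curiosity, the quoted significances can be roughly reproduced with a naive inverse-variance combination of the three published numbers (a sketch assuming Gaussian, symmetric, uncorrelated errors, which a proper experimental combination would not):

```python
# Naive inverse-variance combination of Br(h -> tau mu) measurements (in %).
# Assumes Gaussian, symmetric and uncorrelated uncertainties.

def combine(measurements):
    """Each measurement is a (central value, 1-sigma error) pair."""
    weights = [1.0 / err ** 2 for _, err in measurements]
    mean = sum(w * val for (val, _), w in zip(measurements, weights)) / sum(weights)
    error = (1.0 / sum(weights)) ** 0.5
    return mean, error

run1 = [(0.84, 0.37), (0.53, 0.51)]          # CMS and ATLAS, 8 TeV
mean, err = combine(run1)
print(f"run-1 alone: {mean:.2f} +- {err:.2f} %, {mean/err:.1f} sigma")

mean, err = combine(run1 + [(-0.76, 0.81)])  # adding the 13 TeV CMS result
print(f"with run-2:  {mean:.2f} +- {err:.2f} %, {mean/err:.1f} sigma")
```

The naive run-1 combination lands near the quoted 2.5 sigma, and adding the 13 TeV point pulls the significance below 2 sigma, matching the discussion above.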

So, if you have a cool model explaining the h→τμ  excess be sure to post it on arXiv before more run-2 data is analyzed ;)

by Jester at May 27, 2016 03:24 PM

Clifford V. Johnson - Asymptotia

Of Spies and Spacetime

Stephanie DeMarco interviewed me a few weeks ago for an article she was writing about the science in the TV show Agent Carter (season two). As you know, I did a lot of work for them on the science, some of which I've mentioned here, and we spoke about some of that and a lot of other interesting things besides. Well, her article appeared in Signal to Noise magazine, a publication all about communicating science, and it's really a nice piece. You can read it here. (The excellent title I used for this post is from her article.)


It is a pity that the show has not been renewed for a third season (I'm trying not [...] Click to continue reading this post

The post Of Spies and Spacetime appeared first on Asymptotia.

by Clifford at May 27, 2016 03:20 PM

astrobites - astro-ph reader's digest

Zeroing in: the life of a mixed-up galaxy

TITLE: Exploring the mass assembly of the early-type disc galaxy NGC 3115 with MUSE
AUTHORS: A. Guérou, E. Emsellem, D. Krajnović, R.M. McDermid, T. Contini, and P.M. Weilbacher
FIRST AUTHOR INSTITUTION: Institut de Recherche en Astrophysique et Planétologie, University of Toulouse
STATUS: Accepted by Astronomy & Astrophysics

In today’s trip to the menagerie of galaxies, we’ll be visiting NGC 3115, a nearby (a mere 32 million light-years distant), somewhat unusual galaxy sometimes called ‘The Spindle’ (although – in a brilliant demonstration of the imaginative nature of astronomers – at least one other galaxy, NGC 5866, has the same name … so we’ll stick to NGC 3115). This galaxy is the nearest S0 (pronounced ‘Ess-Zero’) galaxy to us. An S0, or `lenticular’ galaxy is a strange hybrid of the more famous spiral-type and elliptical-type galaxies. For more on this, see the Astrobites Guide to Galaxy Types, or indeed (if I can include a shameless plug) the first few paragraphs of a previous article, How to Grow a Galaxy. If you’ve ever seen the famous ‘Hubble tuning fork’ diagram (see Figure 1), you’ll know that S0’s live at the end of the ‘handle’.

Fig. 1: The Hubble ‘tuning fork’ diagram of galaxy types. S0s/Lenticular galaxies are a halfway-house between the spirals and the ellipticals (image credit).

What makes these galaxies special? Well, in the current theory of galaxy evolution they represent the end of the line for spiral galaxies like our own Milky Way. They look a bit like spiral galaxies, with a thin disc of material surrounding a central bulge (a bit like a pair of fried eggs stuck back-to-back…), but lack the prominent spiral arms and ongoing star formation that are typical of spiral-type galaxies. Like the ellipticals, they are ‘red and dead’, with an aging population of red giant and red dwarf stars and (usually) no star formation. What we’re seeing is a spiral galaxy which has used up or lost all its spare gas, but which hasn’t been involved in some kind of violent collision with another galaxy (which would have destroyed the delicate disc structure and left a classic elliptical galaxy): it’s the galactic equivalent of an aged retiree.

NGC 3115 is not only nearby, it’s also nearly edge-on (we’re looking sideways at the thin disc, rather than seeing it from above). This makes it ideal for studying the motion of the disc, for reasons which will become clear. It’s an interesting galaxy in a number of other ways, with sharply defined structures and a supermassive black hole with the mass of a billion suns. As such, it makes a spectacular target for the superb MUSE instrument (that’s the Multi Unit Spectroscopic Explorer) attached to the VLT (Very Large Telescope … but I’m sure whoever named it was being ironic. Surely no-one’s that unimaginative?). MUSE is, quite simply, breathtaking in its ambition, giving astronomers the ability to map vast regions of the sky in reasonable resolution, but more importantly yielding a detailed spectrum at every single pixel. Point it at a galaxy like NGC 3115, and if it can’t tell you everything you need to know about it, nothing can (we’ve featured MUSE data before). Today’s paper uses data from the instrument’s commissioning run, meaning it’s really an opportunity to showcase the instrument’s abilities.

So how, exactly, does MUSE tell us so much? The key is in the spectra – the measurement of how much light is emitted at any particular wavelength. Galaxy spectra are the sum of the light from all the stars in the galaxy, or in this case covered by a single MUSE pixel (because it’s so close, NGC 3115 is about 1000 pixels across, so this gives a pretty impressive resolution). Individual features in the spectrum – absorption lines, for example – will be shifted from their expected wavelengths by the Doppler shift (see e.g. this explanation). Because stars in one half of the disc will be moving towards us and the stars in the other half will be moving away (after we subtract off the motion of the galaxy’s centre of mass, anyway), we can measure the wavelength of those features in each pixel to figure out how fast the disc is rotating (this is why it was important to find an edge-on disc). But wait, there’s more! Within each pixel, some stars will be moving away from us and others will be moving towards us, so sharp, narrow features in the spectrum get broadened when you add the light from all those stars together. In a small pixel, this motion has much less to do with the rotation of the galaxy, and much more to do with the random motions of the individual stars. If the motions are nicely ordered (as in spiral galaxy discs), the stars in a small region all move in lockstep, and so the features stay narrow. If their motions are chaotic (as in elliptical galaxies) the features are broad. We call this velocity dispersion (literally just the spread of velocities), and it can tell us a lot.
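As a toy illustration of the first step (illustrative numbers only, not the paper's pipeline), the line-of-sight velocity in a pixel follows from the measured shift of an absorption line via the non-relativistic Doppler formula v = c Δλ/λ₀:

```python
# Toy example: line-of-sight velocity from the Doppler shift of a single
# absorption line. The line and shift below are illustrative numbers.

C_KM_S = 299792.458   # speed of light in km/s

def doppler_velocity(lambda_obs, lambda_rest):
    """Non-relativistic Doppler shift: positive means receding (redshifted)."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

# A Ca II triplet line (rest wavelength 8542.09 Angstrom) observed slightly
# redshifted on the receding side of the disc:
v = doppler_velocity(8545.0, 8542.09)
print(f"{v:.0f} km/s")   # about 100 km/s, moving away from us
```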

One way to think about this is that the galaxy spectrum is like a star’s spectrum convolved with a Gaussian function, a.k.a. a bell curve (better get used to the fact that those crop up everywhere), that describes the distribution of stellar velocities in a pixel. The mean value (where the bell curve peaks) tells you the velocity shift, and the variance (the width of the bell curve) tells you about the velocity dispersion. Now, it turns out you can go one step further and modify the Gaussian by stretching one side (skewing it), or flattening the top – doing this has a real physical meaning, and we can measure the skewness (which will turn out to be important in NGC 3115) and the kurtosis (the flattening – which isn’t so important, but which I include just so I can write ‘kurtosis’).
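A rough sketch of how those four numbers fall out of a pixel (the paper actually fits Gauss-Hermite terms to the spectrum itself, and the velocity values here are invented): take the distribution of stellar line-of-sight velocities and compute its mean, dispersion, skewness and kurtosis directly.

```python
# Sketch: the four moments of a simulated distribution of stellar
# line-of-sight velocities in one pixel. All velocity values are invented:
# a fast, cold rotating-disc component plus a slower, hotter spheroid,
# which is exactly the kind of mix that produces a skewed profile.
import random

random.seed(1)
velocities = ([random.gauss(150, 30) for _ in range(8000)]    # disc stars
              + [random.gauss(60, 90) for _ in range(2000)])  # spheroid stars

n = len(velocities)
mean = sum(velocities) / n                                    # velocity shift
var = sum((v - mean) ** 2 for v in velocities) / n
sigma = var ** 0.5                                            # velocity dispersion
skew = sum(((v - mean) / sigma) ** 3 for v in velocities) / n
kurt = sum(((v - mean) / sigma) ** 4 for v in velocities) / n - 3

print(f"v = {mean:.0f} km/s, sigma = {sigma:.0f} km/s")
print(f"skewness = {skew:.2f}, excess kurtosis = {kurt:.2f}")
```

The slow non-rotating component drags one tail of the distribution out, so the skewness comes out clearly negative, just as the diffuse stellar sphere does for the disc pixels of NGC 3115.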

The authors measure these four things in every single pixel and produce the maps shown in Figure 2. The top panel shows the rotation of the disc; the second shows that the disc is very nice and ordered, while the core of the galaxy is chaotic; the third shows that the stellar motions are skewed in the disc, which actually tells us that this disc is embedded in a diffuse sphere of stars which doesn’t rotate as quickly as the disc. More subtly, right in the middle the map makes a ‘butterfly’ shape above and below the disc — the skewness reverses. The authors suggest this might be a sign of a breakdown in the disc into a central bar structure (see this for why that’s interesting; googling ‘galaxy bars’ will get you nowhere, I can tell you). Oh, and the bottom panel is the kurtosis. Which tells you nothing.

Fig. 2: The four panels (from top to bottom) show us the velocity, the velocity dispersion, the velocity skewness, and the velocity kurtosis (don't worry about that one) respectively. The bands stretching left and right are the galaxy disc, while the central blob is the galaxy bulge (Fig. 3 from the paper).

Moving on, there’s quite a bit more that spectra can tell you. Instead of using the spectral features to probe stellar motions, they can also be used to tell you something about the other properties of the stars. For example, some features are only strong in young stars, so their strength in a given pixel tells you how many young stars are in that pixel. Using a few different features can get you reasonably precise age estimates (i.e. you can work out how long ago the stars in a pixel were formed, on average). Other features are sensitive to the metallicity, which just means the amount of elements other than hydrogen and helium that are found in the stars (that’s right – everything else is a ‘metal’ to an astronomer). This nets you Figure 3, the top panel of which is a map of the stellar ages: it neatly shows that the disc is relatively young (only nine billion years old!) while the bulge formed its many, many stars early in the life of the universe. Meanwhile, the bulge is clearly rather chemically enriched, whereas the stars in the disc are more similar to our own sun. This distinction between the stars in the bulge and the disc is typical of spiral galaxies, so seeing it in a lenticular galaxy supports our picture of these as aged spirals.

Fig. 3: The top panel shows how old the stars in the galaxy are, demonstrating that the disc is relatively young. The bottom panel shows how chemically enriched the different parts of the galaxy are, clearly showing that the bulge is quite distinct from the disc (Fig. 5 from the paper).

In summary, we are entering a golden age of astronomical observation, with lots of high-quality detailed data available. With instruments like MUSE we can learn more about the universe than ever before. In particular, the sheer level of detail in the data is precisely what we need to get enigmatic galaxies like NGC 3115 to spill their secrets.

by Paddy Alton at May 27, 2016 01:47 PM

Emily Lakdawalla - The Planetary Society Blog

Lunar Farside Landing Plans
Phil Stooke describes a research trip to the Regional Planetary Image Facility at the USGS in Flagstaff, where he discovered Jack Schmitt's proposed plans for a farside landing site for Apollo 17.

May 27, 2016 11:48 AM

Peter Coles - In the Dark

In My Dreams

In my dreams I am always saying goodbye and riding away,
Whither and why I know not nor do I care.
And the parting is sweet and the parting over is sweeter,
And sweetest of all is the night and the rushing air.

In my dreams they are always waving their hands and saying goodbye,
And they give me the stirrup cup and I smile as I drink,
I am glad the journey is set, I am glad I am going,
I am glad, I am glad, that my friends don’t know what I think.

by Stevie Smith (1902-1971)

by telescoper at May 27, 2016 09:52 AM

May 26, 2016

Christian P. Robert - xi'an's og

another riddle with a stopping rule

A puzzle on The Riddler last week that is rather similar to an earlier one. Given the probability (1/2,1/3,1/6) on {1,2,3}, what is the mean of the number N of draws to see all possible outcomes and what is the average number of 1’s in those draws? The second question is straightforward, as the proportions of 1’s, 2’s and 3’s in the sequence till all values are observed remain 3/6, 2/6 and 1/6. The first question follows from the representation of the average

\mathbb{E}[N]=\sum_{n=3}^\infty \mathbb{P}(N>n) + 3

as the probability to exceed n is the probability that at least one value is not observed by the n-th draw, namely, by inclusion-exclusion,

\mathbb{P}(N>n)=(2/3)^n+(5/6)^n-(1/3)^n-(1/6)^n

which leads to an easy summation for the expectation, namely

\mathbb{E}[N]=3+\frac{8}{9}+\frac{125}{36}-\frac{1}{18}-\frac{1}{180}=\frac{73}{10}=7.3
Checking the results hold is also straightforward:

averages <- function(){
    # draw from {1,2,3} with probabilities (1/2,1/3,1/6) until all three
    # values appear, returning the numbers of 3's, 2's and 1's in the sequence
    x <- sample(1:3, 3, prob=c(1/2,1/3,1/6), replace=TRUE)
    while (length(unique(x)) < 3)
        x <- c(x, sample(1:3, 1, prob=c(1/2,1/3,1/6)))
    c(sum(x==3), sum(x==2), sum(x==1))
}

since this gives

mumbl <- matrix(0, 1e5, 3)
for (t in 1:1e5) mumbl[t,] <- averages()
> apply(mumbl,2,mean)
[1] 1.21766 2.43265 3.64759
> sum(apply(mumbl,2,mean))
[1] 7.2979
> apply(mumbl,2,mean)*c(6,3,2)
[1] 7.30596 7.29795 7.29518

Filed under: Books, Kids, R Tagged: 538, FiveThirtyEight, stopping rule, The Riddler

by xi'an at May 26, 2016 10:16 PM

Emily Lakdawalla - The Planetary Society Blog

Three bright planets: Portraits from the Pyrenees
It's a great time to go outdoors and look at planets. I have three glorious planetary portraits to share today, sent to me by amateur astronomer Jean-Luc Dauvergne.

May 26, 2016 07:40 PM

Emily Lakdawalla - The Planetary Society Blog

Space station module expansion called off after BEAM doesn't budge
NASA and Bigelow Aerospace weren't able to get the space station's newest module up and running this morning. Another attempt could come as early as Friday.

May 26, 2016 03:58 PM

The n-Category Cafe

Good News

Various bits of good news concerning my former students Alissa Crans, Derek Wise and Jeffrey Morton.

Alissa Crans did her thesis on Lie 2-Algebras back in 2004. She got hired by Loyola Marymount University, got tenure there in 2011… and a couple of weeks ago she got promoted to full professor! Hurrah!

Derek Wise did his thesis on Topological Gauge Theory, Cartan Geometry, and Gravity in 2007. After a stint at U. C. Davis he went to Erlangen in 2010. When I was in Erlangen in the spring of 2014 he was working with Catherine Meusberger on gauge theory with Hopf algebras replacing groups, and a while back they came out with a great paper on that: Hopf algebra gauge theory on a ribbon graph. But the good news is this: last fall, he got a tenure-track job at Concordia University St Paul!

Jeffrey Morton did his thesis on Extended TQFT’s and Quantum Gravity in 2007. After postdocs at the University of Western Ontario, the Instituto Superior Técnico, Universität Hamburg, Mount Allison University and a visiting assistant professorship at Toledo University, he has gotten a tenure-track job at SUNY Buffalo State! I guess he’ll start there in the fall.

They’re older and wiser now, but here’s what they looked like once:

From left to right it’s Derek Wise, Jeffrey Morton and Alissa Crans… and then two more students of mine: Toby Bartels and Miguel Carrión Álvarez.

by john at May 26, 2016 02:10 PM

Symmetrybreaking - Fermilab/SLAC

Low-mass particles that make high-mass stars go boom

Simulations are key to showing how neutrinos help stars go supernova.

When some stars much more massive than the sun reach the end of their lives, they explode in a supernova, fusing lighter atoms into heavier ones and dispersing the products across space—some of which became part of our bodies. As Joni Mitchell wrote and Crosby Stills Nash & Young famously sang, “We are stardust, we are golden, we are billion-year-old carbon.” 

However, knowing this and understanding all the physics involved are two different things. We can’t make a true supernova in the lab or study one up close, even if we wanted to. For that reason, computer simulations are the best tool scientists have. Researchers program the equations that govern the behavior of the ingredients inside the core of a star, then check whether the outcomes reproduce the behavior we see in real supernovae. There are many ingredients, which makes the simulations extraordinarily complicated—but one type of particle could ultimately drive the supernova explosion: the humble neutrino.

Neutrinos are well known for being hard to detect because they barely interact with other particles. However, the core of a dying star is a remarkably dense environment, and the nuclear reactions produce vast numbers of neutrinos. Both these things increase the likelihood of neutrinos hitting other particles and transferring energy. 

“We can estimate on a sheet of paper roughly how much energy neutrinos may deliver,” says Hans-Thomas Janka, a supernova researcher at the Max Planck Institute for Astrophysics in Garching, Germany. “The question still remains: Is that compatible with the detailed picture? What we need is to combine all the physics ingredients which play a role in the core of a collapsing star.”

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Things fall apart, the center cannot hold

Typically, all the nuclear fusion in a star happens in its core: That’s the only place hot and dense enough. In turn, the nuclear fusion supplies enough energy to keep the core from compressing under its own gravity. But when a star heavier than eight times the mass of our sun exhausts its nuclear fuel and fusion halts, the core collapses catastrophically. The result is a core-collapse supernova: a shock wave from the collapse tears the star apart while the core shrinks into a neutron star or black hole. The explosion leads to more nuclear fusion and the spread of nuclei into interstellar space, where they can eventually be used in making new stars and planets. (The other major supernova type involves an exploding white dwarf, the source of many other common atoms.)

Core-collapse supernovae are rare and extremely violent phenomena, sometimes outshining whole galaxies at their peak. The last relatively close-by supernova appeared in the sky in 1987, in the neighboring galaxy known as the Large Magellanic Cloud. Even if a supernova exploded close enough to observe in detail (while being far enough to be safe), we can't see deep inside to where the action is.

However, 24 neutrinos from the 1987 supernova showed up in particle detectors (built for studying proton decay). These neutrinos were likely born in nuclear reactions deep in the exploding star's interior and confirmed theoretical predictions from the 1960s, when astrophysicists first began to study exploding stars.

Supernova research really took off in the 1980s with growing computer power and the realization that a full understanding of core collapse would need to incorporate a lot of complicated physics.

“Core-collapse supernovae involve a huge variety of effects involving all four fundamental forces,” says Joshua Dolence of the US Department of Energy’s Los Alamos National Laboratory. “The predicted outcome of collapse—even the most basic question of ‘Does this star explode?’—can depend on how these effects are incorporated into simulations.”

In other words, if you don’t do the simulations right, the supernova never happens. While some stars may collapse directly into black holes instead of exploding, astronomers see both supernova explosions and their aftermaths (the most famous example being the Crab Nebula). Yet some simulations never show a kaboom, which is a problem: the energy carried away by the burst of neutrinos is enough to stall the supernova before it explodes.

If neutrinos cause the problem, they may also solve it. They carry energy away from one part of the dying star, but they may also transfer it to the stalled-out shockwave, breaking the stalemate and making the supernova happen. It’s not the only hypothesis, but currently it’s the best guess astrophysicists have, and most of the large computer simulations seem to support it so far. However, some of the most energetic supernovae—known as hypernovae—don’t seem to abide by the same rules, so it’s possible that something other than neutrinos is responsible. What that something else might be is anyone’s guess.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Explosions in the sky

Core-collapse supernovae are natural laboratories for extreme physics. They involve particle physics, strong gravity as described by general relativity and nuclear physics, all mixed up with strong magnetic fields. All of those aspects must be implemented in computer code, which necessarily involves tough decisions about what details to include and what to leave out. 

“The major open questions revolve around understanding which physical effects are crucial to a quantitative understanding of supernova explosions,” Dolence says. His own work at Los Alamos involves testing the assumptions going into the various theoretical models for explosions and developing faster code to save on precious computer time. Janka’s work in Europe, by contrast, involves modeling the neutrino behavior as exactly as possible. 

Currently, both detailed and simplified approaches are needed, until researchers know exactly what physical processes are involved deep inside the dying star. Both methods use tens of millions of hours of computer time, distributed across multiple computers working in parallel. Even with certain simplifying assumptions, these simulations are some of the biggest around, meaning they require supercomputers at large research centers: the Leibniz Computing Center in Germany; the Barcelona Supercomputing Center in Spain; Los Alamos, Oak Ridge National Laboratory and Princeton University in the United States; and just a handful of others.

“We have no proof so far except our calculations that neutrinos are the cause of the explosion,” Janka says. “We need to compare models with [astronomical] observations in the future.”

The world’s current neutrino experiments are poised to catch neutrinos from the next event and are connected by the Supernova Early Warning System. But in the absence of a nearby supernova, massive supercomputer simulations are all we have. In the meantime, those simulations could still teach us about the extreme physics of dying stars and what role neutrinos play in their deaths.

by Matthew R. Francis at May 26, 2016 01:42 PM

Peter Coles - In the Dark

Jazz and Physics

No time for a full post today, so I’ll just share this intriguing picture I found on the interwebs of two great figures from very different fields: jazz trumpet legend Louis Armstrong and pioneering quantum physicist Niels Bohr.


When I first saw this I assumed it had been photoshopped, but I’m reliably informed that the picture is genuine and that it was taken in Copenhagen in 1959. Other than that I know nothing of the circumstances in which it was taken. I’d love to hear from anyone who knows the full story!

by telescoper at May 26, 2016 12:09 PM

astrobites - astro-ph reader's digest

Searching for signs of life with the James Webb Space Telescope

Title: Habitable worlds with JWST: transit spectroscopy of the TRAPPIST-1 system?

Authors: Joanna K. Barstow, Patrick G. J. Irwin

First Author’s Institution: University College London, London, UK

Status: Accepted as a letter in MNRAS


Figure 1: The mirror assembly of the James Webb Space Telescope. JWST will be the largest space telescope to date, and is scheduled for launch in late 2018. Image: NASA

For a few days at the beginning of May, astronomers around the world watched a webcam feed in a mix of excitement and trepidation. One by one, technicians at NASA’s Goddard Space Flight Center were carefully removing eighteen hexagonal covers, revealing the huge, golden primary mirror of the James Webb Space Telescope (JWST).

Scheduled for launch in 2018, JWST will be the largest space observatory built to date. Deployed 1.5 million kilometres from the Earth, its 6.5-metre mirror will peer deep into the infrared sky, cooled to just 50 degrees above absolute zero by a tennis-court-sized sunshield.

But what is JWST for? The authors of today’s paper suggest one possible use for the giant new telescope: searching for signs of life on other planets.

Life on Earth has left unmistakable marks on the planet’s atmosphere, most notably the vast quantities of oxygen produced by photosynthetic organisms. Astronomers have therefore suggested that searching for oxygen, along with other “biomarkers” such as methane and ozone, in the atmospheres of planets beyond our Solar System could be the key to uncovering alien life. Recently, a planetary system has been discovered that Barstow & Irwin show may be a good target for such observations.

Trappist-1 is a tiny, ultracool red star just 40 light years from the Earth. Less than a month ago, the discovery of not one but three Earth-sized planets around Trappist-1 was announced. Intriguingly, each of these planets orbits near the habitable zone, the region of space around a star where the temperatures are suitable for a planet to have liquid water on its surface. Whether or not the Trappist-1 planets actually have water is already being actively discussed.

To see if JWST could be used to find biomarkers on the Trappist-1 planets, the authors turn to spectroscopy. The technique would be to observe the system as each planet transits (passes in front of the star), splitting the light into its component colours, or wavelengths. Although most of the signal would come from Trappist-1 itself, a small amount of light would shine through the (hypothetical) atmosphere of the planet. The molecules in that atmosphere would each absorb light at specific wavelengths, leaving a distinctive mark on the spectrum.
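The appeal of a star as small as Trappist-1 can be seen with some rough numbers (all values here are illustrative assumptions, not from the paper): the transit depth scales as (Rp/R*)², and the extra absorption from an atmosphere is roughly the area of an annulus a few scale heights thick.

```python
# Rough transit-spectroscopy signal for an Earth-sized planet around a
# Sun-like star versus a small red dwarf. All numbers are illustrative.

R_SUN, R_EARTH = 696_000.0, 6_371.0    # radii in km

def depth(r_planet, r_star):
    """Fraction of starlight blocked during transit."""
    return (r_planet / r_star) ** 2

def atmosphere_signal(r_planet, r_star, scale_height, n_heights=5):
    """Extra depth from an atmospheric annulus ~n scale heights thick."""
    return 2 * n_heights * scale_height * r_planet / r_star ** 2

# Assume ~0.12 R_sun for an ultracool dwarf and an Earth-like 8.5 km scale height.
for name, r_star in [("Sun-like star", R_SUN),
                     ("ultracool dwarf", 0.12 * R_SUN)]:
    print(f"{name}: depth {depth(R_EARTH, r_star):.1e}, "
          f"atmosphere {atmosphere_signal(R_EARTH, r_star, 8.5):.1e}")
```

Shrinking the star from 1 to ~0.12 solar radii boosts both signals by a factor of (1/0.12)² ≈ 70, which is what makes Earth-sized planets around ultracool dwarfs such attractive targets.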


Figure 2: Simulated spectrum of a Trappist-1 planet in the infrared wavelengths observed by JWST, assuming an atmosphere broadly similar to that of Earth. The various coloured lines show the spectrum for different amounts of ozone, ranging from 100 to 0.000001 times the amount in Earth’s atmosphere.

The authors first produced a theoretical spectrum of the Trappist-1 planets’ atmospheres (Figure 2). At the wavelengths observed by JWST, the best biomarker to search for is ozone, which produces the strong signal seen at 9 microns in Figure 2. The authors find that, as long as the planet has more than one-hundredth the amount of ozone in Earth’s atmosphere, it has a chance of showing up in the spectrum.


Figure 3: Simulated JWST spectra for the three Trappist-1 planets after, from left to right, 30, 60 and 90 transits. The temperature of the planets is uncertain, so the red and blue lines show the predicted results for hot and cold planets respectively.

But how long will it take to observe it? Figure 3 shows the next step, simulating the spectra after many JWST observations. For the innermost planet, Trappist-1b, the ozone feature shows up only after 60 transits. However, the other two planets are much more promising, with a clear ozone feature after just 30 transits. The authors get the best result for the outermost planet, Trappist-1d, with not just ozone but also carbon dioxide seen after 30 transits. This is also thought to be the planet most likely to have liquid water on its surface. Is Trappist-1d then the ideal target for a search for life with JWST?

The authors point out a hitch in the plan. To get the full simulated spectrum, the transit will need to be observed 30 times each by two of JWST’s scientific instruments. Trappist-1d has a (roughly) 18-day orbit, and JWST will only be able to see the system for about 100 days a year. That means it will take at least ten years to get the full 60 transits needed! The second planet, Trappist-1c, wouldn’t take as long, but is less likely to be habitable.
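The arithmetic behind that estimate is easy to reproduce, assuming (as above) a roughly 18-day orbit, a 100-day annual visibility window, and 30 transits for each of two instruments:

```python
# How long would 60 transits of Trappist-1d take to collect?
# Assumes an ~18-day orbit and ~100 days of JWST visibility per year.
import math

period_days = 18
visible_days_per_year = 100
transits_needed = 60          # 30 for each of two instruments

transits_per_year = visible_days_per_year / period_days
years = math.ceil(transits_needed / transits_per_year)
print(f"~{transits_per_year:.1f} observable transits per year -> ~{years} years")
```

About 5.6 observable transits per year means roughly 11 years of campaigning, consistent with the "at least ten years" quoted above.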

Is that it then for biomarkers? Maybe not. The authors finish by pointing out that the search for nearby systems like Trappist-1 is ongoing, and systems even closer to Earth may soon be discovered. If they are, this study shows that JWST will be able to search for signs of life in their atmospheres as well.


by David Wilson at May 26, 2016 11:21 AM

Tommaso Dorigo - Scientificblogging

B0 Meson Lifetime Difference Measured By ATLAS
I feel one could describe the new B-physics result by ATLAS as "stalking". A very subtle detail of the behavior of neutral B mesons has been recently measured, in search of deviations from Standard Model predictions - or for a confirmation of the model. 
First off I should give some background on what ATLAS is, and what neutral B mesons are. ATLAS is one of the big multi-purpose experiments of the Large Hadron Collider at CERN, the machine that discovered the Higgs boson in 2012 and which is poised to search for new physics for the next two decades, studying proton-proton collisions at 13 TeV in the center of mass.

read more

by Tommaso Dorigo at May 26, 2016 09:56 AM

Geraint Lewis - Cosmic Horizons

On The Relativity of Redshifts: Does Space Really “Expand”?
I've written an article 'On the Relativity of Redshifts: Does Space Really "Expand"?' which has appeared in Australian Physics (2016, 53(3), 95-100). For some reason, the arXiv has put the article on hold, so you can download it here.

I like it :)

by Cusp at May 26, 2016 01:52 AM

Clifford V. Johnson - Asymptotia



I'm trying to make the characters somewhat expressive, since you, the book's reader, will be spending a lot of time with them. This means constructing lots of hands doing things. Lots of hands. Hands take time, but are actually rather fun to construct from scratch. I start mine as two or three planes hinged together, and then go from there, subdividing until I'm done.

-cvj Click to continue reading this post

The post Gestures appeared first on Asymptotia.

by Clifford at May 26, 2016 01:38 AM

May 25, 2016

Christian P. Robert - xi'an's og

Computing the variance of a conditional expectation via non-nested Monte Carlo

Fushimi Inari-Taisha shrine, Kyoto, June 27, 2012

The recent arXival by Takashi Goda of Computing the variance of a conditional expectation via non-nested Monte Carlo led me to read it as I could not be certain of the contents from only reading the title! The short paper considers the issue of estimating the variance of a conditional expectation when able to simulate the joint distribution behind the quantity of interest. The second moment E(E[f(X)|Y]²) can be written as a triple integral with two versions of x given y and one marginal y, which means that it can be approximated in an unbiased manner by simulating a realisation of y then conditionally two realisations of x. The variance requires a third simulation of x, which the author seems to deem too costly and which he hence replaces with another unbiased version based on two conditional generations only. (He notes that a faster biased version is available with bias going down faster than the Monte Carlo error, which makes the alternative somewhat irrelevant, as it is also costly to derive.) An open question after reading the paper is the optimal version of the generic estimator (5), although finding the optimum may require more computing time than it is worth spending. Another is whether or not this version of the expected conditional variance is more interesting (computation-wise) than the difference between the variance and the expected conditional variance as reproduced in (3), given that both quantities can equally be approximated by unbiased Monte Carlo…
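The core idea — one y, two conditional x's, and the product f(x₁)f(x₂) as an unbiased estimate of E(E[f(X)|Y]²) — can be illustrated on a toy model where the answer is known (a minimal sketch, not the paper's generic estimator (5); the model and f are chosen for convenience):

```python
import random

# Toy model: Y ~ N(0,1), X | Y ~ N(Y,1), f(x) = x.
# Then E[f(X)|Y] = Y, so Var(E[f(X)|Y]) = Var(Y) = 1.

def sample_pair():
    y = random.gauss(0, 1)
    x1 = random.gauss(y, 1)   # two independent draws of X given the same Y
    x2 = random.gauss(y, 1)
    return x1, x2

random.seed(42)
n = 200_000
pairs = [sample_pair() for _ in range(n)]

# Unbiased estimate of E[ E[f(X)|Y]^2 ]: average of f(x1) * f(x2)
second_moment = sum(a * b for a, b in pairs) / n

# Unbiased estimate of (E[f(X)])^2: pair f(x1) from one draw with f(x2)
# from the next, so the two factors are independent (a simple debiasing
# trick, not necessarily the scheme of the paper).
mean_sq = sum(pairs[i][0] * pairs[i + 1][1] for i in range(n - 1)) / (n - 1)

var_cond_exp = second_moment - mean_sq
print(round(var_cond_exp, 2))   # close to the true value 1.0
```

Only two conditional generations per y are needed, which is the point of the non-nested construction.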

Filed under: Books, pictures, Statistics, University life Tagged: conditional probability, debiasing, Monte Carlo approximations, Monte Carlo Statistical Methods, Rao-Blackwellisation

by xi'an at May 25, 2016 10:16 PM

astrobites - astro-ph reader's digest

KELT: The Extremely Little Telescope

In the era of extremely large telescopes, let’s take a look at the opposite end: the extremely little telescopes. KELT, or the Kilodegree Extremely Little Telescope, is one of them.


Figure 1: The KELT-South telescope at the South African Astronomical Observatory. Figure 1 from the paper.

KELT is a transiting exoplanet survey telescope. There are actually two KELTs: KELT-North at Winer Observatory in Arizona, and KELT-South at the South African Astronomical Observatory. Having telescopes in both hemispheres allows the KELT team to achieve complete coverage of the sky over the course of the year. Both telescopes are composed of a science-grade detector attached to a commercial camera lens with a 42mm aperture, on a computer-controlled telescope mount, inside a smart, weather-aware dome (see Figure 1).

KELT’s main science goal is to detect exoplanets using the transit method: KELT monitors stars for periodic brightness dips caused by orbiting planets passing in front of them. KELT has a large field of view, allowing for observations of a myriad of stars simultaneously. KELT focuses on breadth rather than depth, its search space being the nearest and brightest stars (specifically, KELT focuses on stars with 8 < V mag < 12).
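The transit method can be illustrated with a toy phase-folding search (purely illustrative numbers; real survey pipelines like KELT's use far more sophisticated algorithms, such as box-least-squares fitting):

```python
import random

# Toy transit search: a star's flux dips by 1% for 0.1 d once per 3-day
# period; phase-folding at trial periods recovers the true period.
random.seed(1)
true_period, depth = 3.0, 0.01
times = [0.02 * i for i in range(5000)]   # 100 days, one point every ~29 min

def flux(t):
    in_transit = (t % true_period) < 0.1
    return 1.0 - (depth if in_transit else 0.0) + random.gauss(0, 0.002)

lc = [flux(t) for t in times]

def deepest_folded_bin(period, nbins=30):
    """Phase-fold the light curve at `period` and return the faintest bin mean."""
    bins = [[] for _ in range(nbins)]
    for t, f in zip(times, lc):
        bins[int((t % period) / period * nbins)].append(f)
    return min(sum(b) / len(b) for b in bins if b)

# At the wrong period the dip smears across phase bins; at the right one it stacks up.
best_depth, best_period = min((deepest_folded_bin(p), p) for p in [2.0, 2.5, 3.0, 3.5, 4.0])
print(best_period)   # 3.0
```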

A widely distributed approach to finding planets

Detecting planets around the nearest and brightest stars is especially interesting as these stars are easily accessible for follow-up verification using other telescopes. For this purpose, the KELT team has built a vast network of partners at other observatories all over the world (see Figure 2). Once the KELT science team has detected enough transits to accurately predict when the next transit of a given planet candidate will happen, they call up the KELT follow-up network to see if their prediction is correct, and to further characterize the planet’s orbit. If the prediction holds up, KELT has discovered a new planet.


Figure 2: KELT and the follow-up network: Locations of KELT-North, and KELT-South are shown in red. The location of follow-up observatories (blue pins) are distributed all over the world. Figure 2 from the paper.

Then, as the stars KELT focuses on are so bright, a plethora of other methods to characterize the planet are available. Larger, more precise telescopes can be used to photometrically follow up future transits to precisely measure the radius of the planet. Precise spectroscopic follow-up can be used to measure the radial velocity (RV) variations of the host star, giving a measure of the planet’s mass. Combining radius and mass gives us the planet’s density, a handle on the planet’s bulk composition (is it a gas giant, or is it rocky?). Observing RV variations during a transit allows us to measure the Rossiter-McLaughlin effect (see astrobites on this effect here, here) to determine the orientation of the planet’s orbit with respect to the spin axis of its host star. And with detailed enough spectra during either primary (planet eclipses the star) or secondary transit (star eclipses the planet), we can measure the broad atmospheric composition of the planet.
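The radius-plus-mass step reduces to a one-line density estimate (the numbers below are for a hypothetical inflated hot Jupiter, not any particular KELT planet):

```python
import math

M_jup = 1.898e27   # Jupiter mass, kg
R_jup = 6.9911e7   # Jupiter radius, m

mass = 1.2 * M_jup     # hypothetical mass from radial-velocity follow-up
radius = 1.3 * R_jup   # hypothetical radius from the transit depth

density = mass / (4 / 3 * math.pi * radius ** 3)   # bulk density, kg/m^3
print(round(density))  # ~720 kg/m^3: less dense than water, so a puffy gas giant
```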

A future of more planet discoveries

KELT has already detected a number of planets (see e.g. the discussion on KELT 1b, 2Ab, 3b, 4Ab, 6b, 7b, 8b, 10b, 14b, 15b in today’s paper), and the team has more discoveries on the way. Efforts such as KELT (check out also SuperWASP, HATNet, MINERVA, MEarth, and Evryscope) prove that exciting astronomy can be done by smaller-scale telescopes. In fact, small ground-based wide-field transit surveys are very important in the exoplanet detection hierarchy: they allow us to single out the interesting stars that have evidence of orbiting planets, allowing us to allocate observational resources much more efficiently.

Disclaimer: I know people in the KELT science team. I just find small telescopes, and their capability to do interesting science so fascinating. This astrobite is for all the small telescopes—the exoplanet scouts—out there.

by Gudmundur Stefansson at May 25, 2016 04:45 PM

Peter Coles - In the Dark

The Price of Jackson

The chance conjunction on this blog of a post about the death of Professor  J.D. Jackson with another about the greed of academic publishers caught the attention of one Ian Jackson (son of the aforementioned Professor) and prompted him to forward me some correspondence between his father and the publisher of the famous textbook, Classical Electrodynamics (published by John Wiley & Sons).

I won’t copy it all here, but here is an excerpt:

The Letter of Agreement of 1996 stipulates that Wiley should not increase the net price more than 5% in any two year period without the author’s permission. A month or so ago I found out from the physics Editor that the US net price was $87, a big jump from the last number I knew. By knowing that the list price is closely 1.3 times the net, I could look at my records of the single copy list price on Wiley’s web site to find that they had increased the price by 5% at least once and probably twice beyond what was permitted by our agreement. I wrote a strong letter, citing chapter and verse about their obvious violation.

John David Jackson was obviously a generous man: the royalties for this book were divided among his four children (including my correspondent Ian). He goes on to add in a letter to all four of them, after the publishers agreed to reduce the list price:

Sorry to be keeping your royalties in check, but I was thinking of the poor students who are paying 1.3 x $82 = $106.60.

They do keep the book for the rest of their lives, so perhaps it is an OK investment.

I don’t remember how much I paid for my copy, but I don’t begrudge the amount because it’s an excellent book. You should always remember, however, that the author of a textbook typically only gets a small percentage (usually ~10%) of the net receipts.

The correspondence sent by Ian includes this hand-drawn graph by the late Professor Jackson:


It seems Professor Jackson shared my (low) opinion of academic publishers!

For the record, my textbook on Cosmology (co-authored with Francesco Lucchin) was also published by Wiley. A representative of the publisher explained to me that their pricing strategy involved trying to keep the revenue constant in time, so that as sales went down the price went up. My book is now very much out of date so I can understand why the sales have fallen off, but I find it hard to believe that the same is true of an enduring classic. Professor Jackson seems to have agreed; he described Wiley’s pricing strategy as “gouging”…

by telescoper at May 25, 2016 01:02 PM

Christian P. Robert - xi'an's og

the end of Series B!

I received this news from the RSS today that all the RSS journals are turning 100% electronic. No paper version any longer! I deeply regret this move, about which, as an RSS member, I would have appreciated being consulted, as I find it much easier to browse through the current issue when it arrives in my mailbox, rather than being at best reminded by an email that I will most likely ignore and erase. And as I consider the production of the journals the prime goal of the Royal Statistical Society. And as I read that only 25% of the members had opted so far for the electronic format, which does not sound to me like a majority. In addition, moving to electronic-only journals does not bring the perks one would expect from electronic journals:

  • no bonuses like supplementary material, code, open or edited comments
  • no reduction in the subscription rate of the journals and penalty fees if one still wants a paper version, which amounts to a massive increase in the subscription price
  • no disengagement from the commercial publisher, whose role becomes even less relevant
  • no access to the issues of the years one has paid for, once one stops subscribing.

“The benefits of electronic publishing include: faster publishing speeds; increased content; instant access from a range of electronic devices; additional functionality; and of course, environmental sustainability.”

The move is sold with typical marketing noise. But I do not buy it: publishing speeds will remain the same as driven by the reviewing part, I do not see where the contents are increased, and I cannot seriously read a journal article from my phone, so this range of electronic devices remains a gadget. Not happy!

Filed under: Books, pictures, Statistics, University life Tagged: academic journals, Electronic Journal of Statistics, JRSSB, Royal Statistical Society, RSS

by xi'an at May 25, 2016 12:18 PM

Lubos Motl - string vacua and pheno

Higgs to mu-tau decays would encourage \(B-L\) SSM
While Egyptian airplanes keep on crashing (President el-Sisi's foes decorated the same aircraft with the graffiti "we will bring this aircraft down" some two years ago!) and Erdogan keeps on blackmailing Europe (and proving the cluelessness of the politicians who wanted or want to promote Turkey into a key solver of Europe's problems), a new Turkish-Egyptian hep-ph paper looks unexpectedly sexy:
Large \(BR(h \to \tau \mu)\) in Supersymmetric Models
Hammad, Khalil, and Un analyze how compatible simple supersymmetric models are with a possibly emerging result of the LHC analyses – namely the decay of the Higgs to the flavor-violating pair\[
h\to\mu^\pm \tau^\mp
\] which seems to appear in a roughly 2-sigma excess of events at both ATLAS and CMS, with a suggested branching ratio of around 1%.

The trio says that it's very unlikely that the normal minimal supersymmetric standard model could generate this flavor-violating decay. The supersymmetric seesaw model generates the flavor mixing radiatively and the predicted branching ratio is much smaller than 1%.

On the other hand, they find out that the supersymmetric models with the \(B-L\) gauge symmetry predict the large branching ratio rather generically and naturally. That's cool because I have had other reasons to love the \(B-L\) models.

Note that \(B-L\) naturally arises as a gauge symmetry in the left-right-symmetric models that have been discussed (also on this blog) because of the (now largely abandoned) hints of new gauge bosons around \(2,3\TeV\). In these models, the left-right-asymmetric Standard Model formula for the electric charge \(Q=Y/2+T_3\) is replaced with \(Q=(B-L)/2+T_{3L}+T_{3R}\), a combination of generators of gauge groups \(U(1)_{B-L}\times SU(2)_L\times SU(2)_R\). With the extra \(SU(2)_R\), the left-handed and right-handed 2-component spinor parts of the quark and lepton fields may be reunified into a single doublet again while the \(125\GeV\) Higgs is supposed to be a state in the \((2,2)\) representation.
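The left-right-symmetric charge formula can be sanity-checked against familiar Standard Model charges (a quick numerical check, using the usual quantum-number assignments):

```python
from fractions import Fraction as F

def Q(B_minus_L, T3L, T3R):
    """Electric charge in the left-right-symmetric model: Q = (B-L)/2 + T3L + T3R."""
    return F(B_minus_L) / 2 + T3L + T3R

# (B-L, T3L, T3R) for a few fermion components, as usually assigned:
assert Q(-1, F(-1, 2), 0) == -1             # left-handed electron
assert Q(-1, 0, F(-1, 2)) == -1             # right-handed electron
assert Q(-1, F(1, 2), 0) == 0               # left-handed neutrino
assert Q(F(1, 3), F(1, 2), 0) == F(2, 3)    # left-handed up quark
assert Q(F(1, 3), 0, F(-1, 2)) == F(-1, 3)  # right-handed down quark
print("all charges reproduce the Standard Model values")
```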

Such left-right-symmetric models may naturally be embedded in grand unified theories but \(SU(5)\) isn't enough. You need \(SO(10)\) or \(E_6\) – which I have always preferred, anyway, not only because they unify the representations of quarks and leptons into one (per generation; the three reps may be unified into one if one uses a flavor symmetry). One could argue that there exist several other hints, including the \(750\GeV\) cernette, which are more likely to be explained within left-right-symmetric or \(SO(10)\) or \(E_6\) models than asymmetric or \(SU(5)\)-based models.

The LHC collisions will be restarted tomorrow, on Thursday. Each major detector has collected about 0.8 inverse femtobarns in 2016. Within a month, the integrated luminosity for 2016 could surpass the luminosity for the whole year 2015, which was below 4/fb, and the LHC could accelerate its data collection campaign soon afterwards.

Many things are possible and perhaps likely: the \(750\GeV\) cernette, gluino near \(1.5\TeV\) and various other, less visible superpartners around \(0.5\TeV\), the flavor-violating decay of the Higgs, and others. 2016 could be the year when the experiments will jump ahead of the particle physics phenomenology – in the sense that the phenomenologists will have to work hard to catch up with the experiments. At the same moment, I tend to feel that the explanation of those new discoveries, if any, will be rather conservative – compatible with the scenarios that I have considered the prettiest ones for decades.

by Luboš Motl at May 25, 2016 07:48 AM

Robert Helling - atdotde

Holographic operator ordering?
Believe it or not, at the end of this week I will speak at a workshop on algebraic and constructive quantum field theory. And (I don't know which of these two facts is more surprising) I will advocate holography.

More specifically, I will argue that it seems that holography can be a successful approach to formulate effective low energy theories (similar to other methods like perturbation theory of weakly coupled quasi-particles or minimal models). And I will present this as a challenge to the community at the workshop to show that the correlators computed with holographic methods indeed encode a QFT (according to your favorite set of rules, e.g. Wightman or Osterwalder-Schrader). My [kudos to an anonymous reader for pointing out a typo] guess would be that this has a non-zero chance of being a possible approach to the construction of (new) models in that sense or alternatively to show that the axioms are violated (which would be even more interesting for holography).

In any case, I am currently preparing my slides (I will not be able to post those as I have stolen far too many pictures from the interwebs including the holographic doctor from Star Trek Voyager) and came up with the following question:

In a QFT, the order of insertions in a correlator matters (unless we fix an ordering like time ordering). How is that represented on the bulk side?

Does anybody have any insight about this?

by Robert Helling at May 25, 2016 07:25 AM

May 24, 2016

Peter Coles - In the Dark

In Memoriam – HMS Hood

Today is a solemn anniversary which surprisingly hasn’t been marked in the media. On this day 75 years ago, i.e. 24th May 1941, the Royal Navy battlecruiser HMS Hood was sunk by the German battleship Bismarck in the Battle of the Denmark Strait. Of a ship’s complement of 1418, only three survived the sinking of HMS Hood; it was one of the greatest maritime disasters of the Second World War. I’m not one for dwelling excessively on the past, but I think it’s a shame this event has not been better remembered. We owe a lot to people like the 1415 who gave their lives that day, so I’m glad I remembered in time to pay my respects.


by telescoper at May 24, 2016 06:04 PM

astrobites - astro-ph reader's digest

Is there anything out there? A search for the widest separation planets

Title: High Contrast Imaging with Spitzer: Constraining the Frequency of Giant Planets out to 1000 AU separations

Authors: Stephen Durkan, Markus Janson, Joseph C. Carson

First Author’s Institution: Queen’s University Belfast, UK

Status: Accepted to ApJ

Direct imaging is kind of the “new kid on the block” for exoplanet detection – although planets were first directly imaged 8 years ago, in 2008, even today only ~10 of the ~2000 known exoplanets have been directly imaged. You need the biggest, fanciest new instruments, like SPHERE and GPI, to image new planets . . .

Or, you could go back and reanalyse Spitzer data from 2003.

That’s what the authors of today’s paper have tried to do: the idea is that post-processing techniques for data analysis have got so much better over the last 13 years that new detections can be made even in old data. In fact, there is similar work going on looking at archival Hubble data from the 90s.

Spitzer data are particularly interesting, since they open up a new region in which to look for planets – wider separation planets can be found with Spitzer than with ground based telescopes.

Since Spitzer is in space and firing things into orbit is hard, it’s much smaller than lots of ground-based telescopes. This means the resolution is lower, and planets close to their host stars are impossible to detect. However, Spitzer is in space, meaning that atmospheric distortions aren’t a problem. On the ground, the atmosphere blurs out any light passing through it, and is generally a huge pain if you’re into observational astronomy. Very clever people invented very clever adaptive optics systems for dealing with the atmosphere – but these need a bright light in the sky to model the atmosphere and use it to correct the atmospheric distortions. This restricts the best quality observations to within a fraction of a degree of a bright target. Happily, most planets are orbiting stars, which are bright. However, if you look too far from the bright star, the adaptive optics stops working so well. None of this is a problem for Spitzer, which is in space, and so can actually reach wider separations than ground-based telescopes, without worrying about the pesky atmosphere.

For this planet search, the authors focus on the region between 100AU (roughly twice Pluto’s orbit) and 1000AU (half way to the Oort cloud) – Planet IX, if it is real, lies somewhere in the middle of this region. They use a total of 121 stars, from two archival Spitzer programs taken during 2003 and 2004. They’re mostly very close stars, and are fairly similar to the Sun.

Old Data, New Planets?

I mentioned earlier that data reduction and post-processing of images have improved dramatically in the last decade: one big new tool at our disposal is something called PCA, or Principal Component Analysis. The maths of this technique was figured out by Karl Pearson back in 1901, but it wasn’t applied to exoplanet imaging until as recently as 2012.

Figure 1: The top image shows the standard output of the Spitzer data processing pipeline. The bottom one shows the same star, after the team’s reanalysis: far more of the stellar light has been removed within the central grey box, by using a library of images to understand and then subtract the noise pattern. In this image, several planetary candidates are revealed above the star.

The objective is to remove as much light from the star as possible, and only leave the planetary signal in our images. It works like this: first, you build a library of ‘reference images’ that characterise the noise (both random noise and leaking starlight) in your image that you’d like to remove. These references could be a whole series of images of the same star taken at different orientations – the noise is fairly constant between the images, but the planet moves between each one, so it doesn’t get subtracted out.

Then, you test how similar each library star is to your current target, and build a custom reference image that’s as similar as possible to your target. This reference image is split into its ‘principal components’ – the set of simple patterns that add up to create your image. These components are subtracted one at a time to find the best noise removal solution (we’re balancing ‘remove all the noise’ with ‘don’t remove any of the planet signal’).
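A stripped-down version of this subtraction can be sketched with NumPy (a toy one-dimensional "image", not the authors' pipeline; the frame counts, noise levels and planet position are all made up):

```python
import numpy as np

# Toy PCA starlight subtraction on 1-D "images" of 400 pixels each.
rng = np.random.default_rng(0)
npix = 400

speckles = rng.normal(size=npix)                         # quasi-static stellar noise
library = speckles + 0.1 * rng.normal(size=(50, npix))   # 50 reference frames
planet = np.zeros(npix)
planet[123] = 5.0                                        # point source in the target only
target = speckles + 0.1 * rng.normal(size=npix) + planet

# Principal components of the mean-subtracted reference library via SVD
mean_frame = library.mean(axis=0)
_, _, components = np.linalg.svd(library - mean_frame, full_matrices=False)

# Model the target as mean + projection onto the first k components, then subtract
k = 5
basis = components[:k]
model = mean_frame + basis.T @ (basis @ (target - mean_frame))
residual = target - model

print(int(np.argmax(residual)))   # the planet pixel should dominate the residual
```

The starlight pattern, being common to all frames, is captured by the leading components and removed, while the planet signal barely projects onto them and survives.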

For this to work, you want your reference images to be as similar as possible to each other – ideally, photographs of the same star taken on the same night. The idea of turning your telescope to create the reference images is known as Angular Differential Imaging, and was first published in 2006. In other words, it was published three years after the Spitzer data was taken. So observing sequences weren’t set up like this, and ADI wasn’t an option with this data.

As such, for this work, all the targets act as reference images for all the other targets. That makes this survey an interesting test of PCA in very non-optimal conditions – as you can imagine, the noise in each image varies a lot more over the course of a year than in an hour, especially if each image is of a different star.

And the algorithm passes with flying colors! The authors put a lot of care into matching the images to each other very carefully, and then let PCA work its magic. The final results are like those in figure 1.

So, what’s out there?

Well, nothing. 34 candidates are initially found, but follow-up studies show that they are all either distant background stars, or data quirks – bad pixels on the chip, for example.

Figure 2: This plot shows the sensitivity of the survey. The bottom axis shows the angle from the star in arcseconds, while the top axis shows this converted into AU (the distance between the Earth and the Sun) for the average distance of target stars. Bear in mind that Pluto orbits at ~40AU! The left and right axes show planet detection limits for brightness (in magnitudes) and mass (in multiples of Jupiter mass) respectively.

It’s still an interesting result though, because no-one has really tested planet frequency this far out before. In the final part of the paper, the authors calculate that the survey would have been able to detect 42% of planets on such wide orbits. The undetectable 58% would be too low in mass, or unfavourably aligned – hidden behind the host star, for example.

The number of stars observed, the detection fraction listed above, and the survey non-detections lead the authors to calculate that less than 9% of stars host massive planets at such super-wide separations. Lots of work is going on currently to understand the formation of planetary systems – and this paper adds one more constraint: a successful planet formation theory has to predict this dearth of very-wide-separation planets. We don’t yet understand the formation of planets, but we’re certainly inching closer!



by Elisabeth Matthews at May 24, 2016 05:29 PM

Sean Carroll - Preposterous Universe

The Big Picture: The Talk

I’m giving the last lecture on my mini-tour for The Big Picture tonight at the Natural History Museum here in Los Angeles. If you can’t make it, here’s a decent substitute: video of the talk I gave last week at Google headquarters in Mountain View.

I don’t think I’ve quite worked out all the kinks in this talk, but you get the general idea. My biggest regret was that I didn’t have the time to trace the flow of free energy from the Sun to photosynthesis to ATP to muscle contractions. It’s a great demonstration of how biological organisms are maintained through the creation of entropy.

by Sean Carroll at May 24, 2016 03:44 PM

Symmetrybreaking - Fermilab/SLAC

Of bison and bosons

What are all of the symbols in Fermilab’s unofficial seal?

When talking about Fermilab’s distinct visual and artistic aesthetic, it’s impossible not to mention Angela Gonzales. The artist – Fermilab’s 11th employee – joined the lab in 1967 and immediately began connecting the lab’s cutting-edge science with an artistic flair to match. She picked a color palette of bold blues and oranges and reds that would go on to adorn the campus’ buildings, and illustrated hundreds of posters, signs and report covers for the lab.

She also designed the iconic logo and a beautiful graphic that has become an unofficial seal for the laboratory, most commonly found on the back of T-shirts. But what do all of the symbols mean?

Wilson Hall

Perhaps the most iconic image related to Fermilab is the silhouette of the 16-story Wilson Hall. The building is named after Fermilab’s founding director, Robert Wilson, who (among many other responsibilities, such as overseeing the design and construction of the new particle accelerator complex) took a helicopter up to plot out the best aesthetic height for the central building on Fermilab’s 6800-acre campus. The concrete building has two independent, freestanding towers that are connected by a series of crossovers that rest on rollers.


These three boxes show different kinds of sights researchers at Fermilab encounter as they explore the building blocks of the universe. In the bottom left are the swirls of tracks created as particles smash into a fixed target and the resulting debris passes through a particle detector (such as a bubble chamber, pictured). In the lower right are tracks from a head-on particle collision created by Fermilab’s Tevatron collider and recorded by its CDF and DZero experiments. In the upper right box are the swirling arms of a galaxy, representing Fermilab’s particle astrophysics program. The smallest particles of matter build up to create the largest structures in the universe.

Accelerator complex

Hidden within this piece of art is something that even some Fermilab employees might not have picked up on: the lab’s particle accelerator chain. It started with protons in the Fermilab Linear Accelerator, the straight line leading into the small circle representing the Booster accelerator. From there, protons entered the Main Ring and then the Tevatron, or were fed to the triangular Antiproton Source, a nickel target that produced antiprotons when struck. Those protons (p) and antiprotons (p̄) would then collide in one of the two detectors on the Tevatron or head down the beamlines to the fixed-target areas, where researchers studied protons, neutrinos and mesons. Today Fermilab’s largest and most powerful accelerator is the Main Injector (pictured), a 2-mile ring that powers a new suite of particle experiments.

Prairie symbols

Fermilab is one of a handful of National Environmental Research Parks in the United States. The site contains hundreds of acres of ecosystems native to the Midwest, including tall grass prairie, oak savanna, woodlands and wetlands. The site is also home to coyotes, deer, birds (including the Canada geese depicted in the artwork), and, of course, bison. Wilson was a cowboy from Frontier, Wyoming, and he brought the first members of Fermilab’s bison herd to the site to represent the “frontier” of physics and the lab’s strong ties to the prairie.


Quarks (q) are some of the fundamental building blocks of matter. The six known kinds are up, down, charm, strange, top and bottom (u, d, c, s, t, b), which combine to form different subatomic particles. (There is also an antiquark version (q̄) of each quark variety). The protons and neutrons that make up your atoms comprise a combination of up and down quarks. Researchers at Fermilab discovered the bottom quark in 1977 and the top quark in 1995. The above image shows then-director John Peoples discussing the top quark discovery with reporters. Today scientists use the SeaQuest experiment at Fermilab to study the presence of strange quarks in protons.


Leptons make up another class of elementary particles. This class includes the electron (e) that powers your electronics and its heavier cousins, the muon (μ) and the tau (τ). It also includes the neutrino (ν), a lightweight, electrically neutral particle. Neutrinos are among the most abundant particles in the universe, second only to photons (particles of light), and pass through you all the time without interacting. Neutrinos have been a part of Fermilab’s fixed-target experiments (where they left tracks such as the pictured bubble chamber event) for decades and are the focus of the upcoming Deep Underground Neutrino Experiment. Fermilab is also constructing two new muon experiments, Muon g-2 and Mu2e.


Our world is governed by various subatomic forces, which are transmitted by particles called bosons. The charged W+ and W- bosons and the neutral Z boson are the carriers of the weak force, which is responsible for how particles decay. Researchers were able to learn more about those carriers first by using fixed-target experiments and later by colliding protons and antiprotons in the CDF (pictured) and DZero detectors. Better understanding these particles and their characteristics (such as mass) helped physicists hunt for other particles predicted by theory, such as the Higgs boson.


The two symbols at the top of the artwork represent types of mesons, which are particles made of one quark and one antiquark. The left symbol is the J/Ψ, pronounced jay-psi, made of a charm quark and an anticharm quark. It was given two different names, both of which stuck, by the discoverers at Brookhaven and SLAC national laboratories. It revealed the existence of a fourth type of quark, the charm, and became an important part of research projects exploring theories of quark physics at laboratories around the world. The right symbol is the Υ, pronounced upsilon, a meson made of a bottom quark and an antibottom quark. The discovery of the Υ at Fermilab in 1977 by a team led by Leon Lederman (pictured) was the first experimental proof for the existence of the bottom quark.

Explore the symbols in this piece of Fermilab artwork

  • Wilson Hall
  • Tracks
  • Accelerator complex
  • Prairie symbols
  • Quarks
  • Leptons
  • Bosons
  • Mesons

If all this symbolism isn’t enough for you—or if you’re a part of the coloring book craze and want to shade in a science drawing—fear not. Gonzales made an expanded version of this graphic. The buildings (clockwise from the top left) are the Meson Lab (now the Fermilab Test Beam Facility), the Geodesic Dome (now part of the Silicon Detector Facility), the CDF building (now part of the Illinois Accelerator Research Center) and the Pagoda (a small building that hosted a control room). She also incorporated four of the outdoor sculptures on the Fermilab site (clockwise from top): Tractricious, the Mobius Strip, Acqua Alle Funi and Broken Symmetry. You’ll also find some of the particle symbols from the core graphic, along with the symbols for π mesons, K mesons and gluons (g).

Download a high resolution version of the expanded artwork.

by Lauren Biron at May 24, 2016 01:00 PM

Peter Coles - In the Dark

From Sappho to Babbage

The English mathematician Charles Babbage, who designed the first programmable calculating machine, wrote to the then-young poet Tennyson, whose poem The Vision of Sin he had recently read:


I like to think Babbage was having a laugh with Tennyson here, rather than expressing the view that poetry should be taken so literally, but you never know…

Anyway, I was reminded of the above letter by the much-hyped recent story of the alleged astronomical “dating” of this ancient poem (actually just a fragment) by Sappho:

Tonight I’ve watched
the moon and then
the Pleiades
go down

The night is now
half-gone; youth
goes; I am

in bed alone

It is a trivial piece of astronomical work to deduce that if the “Pleiades” does indeed refer to the star cluster, and “the night is now half-gone” means sometime around midnight, then the scene described in the fragment happened, if it happened at all, between January and March. However, as an excellent rebuttal piece by Darin Hayton points out, the assumptions needed to arrive at a specific date are all questionable.

More important, poetry is not and never has been intended for such superficial interpretation.  That goes for modern works, but is even more true for ancient verse. Who knows what the imagery and allusions in the text would have meant to an audience when it was composed, over 2500 years ago, but which are lost on a modern reader?

I’m not so much saddened that someone thought to study the possible astronomical interpretation of an ancient text, even if they didn’t do a very thorough job of it. At least that means they are interested in poetry, although I doubt they were joking as Babbage may have been.

What does sadden me, however, is the ludicrous hype generated by the University of Texas publicity machine. There’s far too much of that about, and it’s getting worse.



by telescoper at May 24, 2016 12:54 PM

May 23, 2016

astrobites - astro-ph reader's digest

Exploring the law of star-formation through a spatially resolved study of two spiral galaxies

Title: The super-linear slope of the spatially resolved star formation law in NGC 3521 and NGC 5194 (M51a)

Authors: Guilin Liu, Jin Koda, Daniela Calzetti, Masayuki Fukuhara, and Reiko Momose

First Author’s Institution: Astronomy Department, University of Massachusetts, Amherst, MA, USA

Paper status: Published in ApJ


Nimisha Kumari

This is a guest post by Nimisha Kumari, a graduate student at the Institute of Astronomy, Cambridge (UK). Her current research involves the spatially-resolved studies of nearby spiral and blue compact dwarf galaxies. She received her bachelor’s degree from the University of Delhi (India) and master’s degree from Ecole Polytechnique (France).



The Schmidt Law, formulated by Maarten Schmidt in 1959, relates the volume or surface density of the star-formation rate (SFR) to that of the gas as a power law: Σ_SFR = A Σ_gas^γ, where Σ_SFR and Σ_gas are the surface densities of the SFR and the gas (atomic and molecular) respectively, A is the average global star-formation efficiency of the system studied (e.g. galaxies, galactic disks, star-forming regions), and γ is the power-law index. Because volume densities are difficult to measure, the law is generally expressed in terms of the more easily observable surface densities.
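The power law is easy to play with numerically. Here is a minimal sketch; the normalization A ≈ 2.5 × 10⁻⁴ (in M⊙ yr⁻¹ kpc⁻² with Σ_gas in M⊙ pc⁻²) is the Kennicutt 1998 disk-averaged value, quoted only for illustration:

```python
def sfr_surface_density(sigma_gas, A=2.5e-4, gamma=1.4):
    """Schmidt-Kennicutt power law: Sigma_SFR = A * Sigma_gas**gamma.

    sigma_gas : gas surface density in M_sun / pc^2
    A, gamma  : illustrative normalization and the Kennicutt slope 1.4
    Returns Sigma_SFR in M_sun / yr / kpc^2 (units set by A).
    """
    return A * sigma_gas ** gamma

# Doubling the gas surface density raises the SFR density by 2**1.4 ~ 2.64,
# the hallmark of a super-linear (gamma > 1) law.
ratio = sfr_surface_density(20.0) / sfr_surface_density(10.0)
```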

The value γ = 1.4 ± 0.1 was found empirically by Robert Kennicutt from data on normal spirals and starbursts. This established the disk-averaged star-formation law, called the Schmidt-Kennicutt (S–K) law. The law can explain plausible scenarios of galaxy formation and evolution, and hence is used as an essential prescription in various models and simulations. However, a disk-averaged star-formation law smooths over the enormous local variations in the stellar population (which depends on age and the initial mass function) and in the gas/dust geometry. This means the currently established law might not be a fundamental physical relationship. To understand the physics of star formation, we therefore need a spatially-resolved study of the star-formation law.


In their paper, Liu et al. address two questions using a sub-kiloparsec study of two nearby spiral galaxies, NGC 5194 and NGC 3521. The first concerns how the data are processed before any analysis is conducted. The second looks at how the size of the star-forming regions studied affects the star-formation law.

The data used are images of the two galaxies in the Hα, far-ultraviolet (FUV) and mid-infrared (24 μm) wavebands to trace the SFR, and in CO and H I to trace the gas present in the galaxies. The observed starlight (traced by Hα and FUV) and dust emission (traced by the mid-infrared) in a star-forming region contain not only the contribution from the young stars and their associated dust, but also an underlying diffuse component of stellar/dust emission unassociated with current star formation. The question is: is it necessary to remove this diffuse component? Astronomers have investigated this question before and do not yet know the answer. Liu et al. statistically subtract the diffuse component from their data (Hα, FUV and mid-infrared) using the astronomical software HIIphot, and study the S–K law for both subtracted and unsubtracted data.

Tracing SFR: To understand how these data can be used to trace the SFR, let us look at a typical star-forming region. It contains young, hot, massive stars emitting mostly in the FUV, at energies above the ionisation energy of the neutral hydrogen in the interstellar medium. The FUV radiation from the hot stars ionizes the gas around them, producing H II regions that are traced by Hα emission. The stellar environment also contains a huge amount of dust, which absorbs nearly half of the UV and optical radiation and re-emits it at longer, infrared wavelengths. The Hα or FUV luminosity is therefore combined with the infrared luminosity to account for the absorption by dust, and then converted to an SFR.
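As a concrete sketch of this last step, one widely used calibration combines the two luminosities linearly. The coefficients below follow the Calzetti et al. (2007) form; treat them as illustrative assumptions rather than the exact values used in this paper:

```python
def sfr_halpha_24um(L_halpha, L_24um, a=5.3e-42, b=0.031):
    """Estimate an SFR (M_sun/yr) from observed H-alpha and 24-micron
    luminosities (erg/s):  SFR = a * (L_Halpha + b * L_24um).

    The 24-micron term compensates for H-alpha light absorbed by dust.
    a and b are illustrative Calzetti-style coefficients, not taken
    from Liu et al.
    """
    return a * (L_halpha + b * L_24um)

# A dust-free region with L_Halpha = 1e41 erg/s would give ~0.53 M_sun/yr.
sfr = sfr_halpha_24um(1e41, 0.0)
```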


Figure 1: Results for M51a studied at 750 pc resolution. For the left panel in each pair, the diffuse backgrounds in the Hα, 24μm, and FUV images are not removed (denoted by “BG+”), but are removed in the right panel (“BG−”). Solid black dots indicate data points with sufficient signal-to-noise (S/N) ratio used for analysis, while light gray dots are points with low S/N. The fitted slopes are indicated at the bottom right of each panel. (a) The molecular-only S–K law with SFR derived from Hα+24μm; (c) the correlation between SFRs derived from Hα+24μm and FUV+24μm, respectively; (d) the relation of the Hα+24μm SFR vs. the H I surface density; (e) the total hydrogen S–K law with SFR derived from Hα+24μm. (Caption adapted from Figure 4 of Liu et al. 2011)



Figure 1 answers the first question of this study (for NGC 5194): whether subtraction of the diffuse background is necessary for studying the S–K law. The two SFR indicators (Hα and FUV) can be used interchangeably only when the diffuse background unrelated to current star formation is subtracted. This is consistent with our knowledge that FUV traces older star formation (10–100 Myr) than Hα, which traces recent star formation (< 5 Myr). Subtracting the diffuse background leads to a super-linear slope (i.e. γ > 1) of the S–K law, for molecular gas as well as for total gas; however, no apparent correlation is found between the SFR and the atomic gas. These results hold for NGC 3521 as well. Liu et al. hence conclude that the diffuse background matters considerably for studies of star formation, yielding a super-linear S–K law when subtracted and a linear S–K law when not.

Figure 2 shows the influence of spatial scale on the slope of the S–K law (γ_H) for background-subtracted data. NGC 3521, because of its higher inclination angle than M51a, has a much larger projected area and physical scale; the limit of the study in NGC 3521 is therefore set to 700 pc, which corresponds to a physical scale of ∼2 kpc. Figure 2 (right) shows negligible variation of γ_H with spatial scale in NGC 3521, though with large error bars, attributed to unreliable measurements at high inclination. For M51a (Figure 2, left), γ_H decreases with increasing spatial scale. Both galaxies, however, show a super-linear slope (γ > 1) of the S–K law. Liu et al. emphasize that their result at the smallest spatial scales in M51a is consistent with Galactic studies, hinting at an intrinsically super-linear S–K law for spiral galaxies.


Figure 2: Effect of spatial scale δ (in kiloparsec) on the power-law index γ_H in M51a (left) and NGC 3521 (right).

by Guest at May 23, 2016 02:08 PM

CERN Bulletin

CERN Bulletin Issue No. 20-21/2016
Link to e-Bulletin Issue No. 20-21/2016
Link to all articles in this issue

May 23, 2016 11:39 AM

May 22, 2016

Clifford V. Johnson - Asymptotia

A New Era

Many years ago, even before the ground was broken on phase one of the Expo line and arguments were continuing about whether it would ever happen, I started saying that I was looking forward to the days when I could put my pen down, step out of my office, get on the train a minute away, and take it all the way to the beach and finish my computation there. Well, Friday, the first such day arrived. Phase two of the Expo line is now complete and has opened to the public, with newly finished stations from Culver City through Santa Monica. It joins the already running (since April 2012) Expo phase one, which I've been using every day to get to campus after changing from the Red line (connecting downtown).

On Friday I happened to accidentally catch the first Expo Line train heading all the way out to Santa Monica! (I mean the first one for the plebs - there had been a celebratory one earlier with the mayor and so forth, I was told). I was not planning to do so and was just doing my routine trip to campus, thinking I'd try the new leg out later (as I did when phase one opened - see here). But there was a cheer when the train pulled up at Metro/7th downtown and the voice over the overhead speakers [...] Click to continue reading this post

The post A New Era appeared first on Asymptotia.

by Clifford at May 22, 2016 04:50 PM

May 21, 2016

John Baez - Azimuth

The Busy Beaver Game

This month, a bunch of ‘logic hackers’ have been seeking to determine the precise boundary between the knowable and the unknowable. The challenge has been around for a long time. But only now have people taken it up with the kind of world-wide teamwork that the internet enables.

A Turing machine is a simple model of a computer. Imagine a machine that has some finite number of states, say N states. It’s attached to a tape, an infinitely long tape with lots of squares, with either a 0 or 1 written on each square. At each step the machine reads the number where it is. Then, based on its state and what it reads, it either halts, or it writes a number, changes to a new state, and moves either left or right.

The tape starts out with only 0’s on it. The machine starts in a particular ‘start’ state. It halts if it winds up in a special ‘halt’ state.

The Busy Beaver Game is to find the Turing machine with N states that runs as long as possible and then halts.

The number BB(N) is the number of steps that the winning machine takes before it halts.
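The game is concrete enough to code. Below is a minimal sketch of a Turing machine simulator; the transition table bb2 is the well-known 2-state champion, which halts after exactly 6 steps (so BB(2) = 6, as mentioned further down):

```python
def run_turing(machine, max_steps=10**6):
    """Simulate a Turing machine on an all-0 tape.

    machine maps (state, symbol) -> (write, move, next_state), with
    move in {-1, +1} and next_state 'H' meaning halt.  Returns the
    number of steps taken before halting.
    """
    tape, pos, state, steps = {}, 0, 'A', 0
    while state != 'H':
        if steps >= max_steps:
            raise RuntimeError("gave up -- this machine may never halt")
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps

# The 2-state busy beaver champion: halts after BB(2) = 6 steps.
bb2 = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
       ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H')}
```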

In 1961, Tibor Radó introduced the Busy Beaver Game and proved that the sequence BB(N) is uncomputable. It grows faster than any computable function!

A few values of BB(N) can be computed, but there’s no way to figure out all of them.

As we increase N, the number of Turing machines we need to check increases faster than exponentially: it’s

\displaystyle{ (4(N+1))^{2N} }

since each of the 2N entries in a machine’s table independently chooses a symbol to write (2 ways), a direction to move (2 ways), and a next state among the N states plus the halt state (N+1 ways).
Of course, many could be ruled out as potential winners by simple arguments. But the real problem is this: it becomes ever more complicated to determine which Turing machines with N states never halt, and which merely take a huge time to halt.
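For reference, under the conventions above each of the 2N table entries chooses among 4(N+1) options (2 symbols × 2 directions × N+1 next states, counting halt), giving (4(N+1))^{2N} machines with N states. A quick sketch of how fast this grows:

```python
def num_machines(N):
    """Count N-state Turing machines: each of the 2*N (state, symbol)
    table entries picks a bit to write (2 ways), a direction (2 ways),
    and a next state among the N states plus halt (N+1 ways)."""
    return (4 * (N + 1)) ** (2 * N)

# Faster than exponential: 64 machines for N = 1, but already
# 63,403,380,965,376 for N = 5.
counts = [num_machines(n) for n in range(1, 6)]
```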

Indeed, no matter what axiom system you use for math, as long as it has finitely many axioms and is consistent, you can never use it to correctly determine BB(N) for more than some finite number of cases.

So what do people know about BB(N)?

For starters, BB(0) = 0. At this point I should admit that people don’t count the halt state as one of our N states. This is just a convention. So, when we consider BB(0), we’re considering machines that only have a halt state. They instantly halt.

Next, BB(1) = 1.

Next, BB(2) = 6.

Next, BB(3) = 21. This was proved in 1965 by Tibor Radó and Shen Lin.

Next, BB(4) = 107. This was proved in 1983 by Allan Brady.

Next, BB(5). Nobody knows what BB(5) equals!

The current 5-state busy beaver champion was discovered by Heiner Marxen and Jürgen Buntrock in 1989. It takes 47,176,870 steps before it halts. So, we know

BB(5) ≥ 47,176,870.

People have looked at all the other 5-state Turing machines to see if any does better. But there are 43 machines that do very complicated things that nobody understands. It’s believed they never halt, but nobody has been able to prove this yet.

We may have hit the wall of ignorance here… but we don’t know.

That’s the spooky thing: the precise boundary between the knowable and the unknowable is unknown. It may even be unknowable… but I’m not sure we know that.

Next, BB(6). In 1996, Marxen and Buntrock showed it’s at least 8,690,333,381,690,951. In June 2010, Pavel Kropitz proved that

\displaystyle{ \mathrm{BB}(6) \ge 7.412 \cdot 10^{36,534} }

You may wonder how he proved this. Simple! He found a 6-state machine that runs for about

\displaystyle{ 7.412 \cdot 10^{36,534} }

steps and then halts!

Of course, I’m just kidding when I say this was simple. The machine is easy enough to describe, but proving it takes exactly this long to run takes real work! You can read about such proofs here:

• Pascal Michel, The Busy Beaver Competition: a historical survey.

I don’t understand them very well. All I can say at this point is that many of the record-holding machines known so far behave like the process in the famous Collatz conjecture. The idea there is that you can start with any positive integer and keep doing two things:

• if it’s even, divide it by 2;

• if it’s odd, triple it and add 1.

The conjecture is that this process will always eventually reach the number 1. Here’s a graph of how many steps it takes, as a function of the number you start with:

Nice pattern! But this image shows how it works for numbers up to 10 million, and you’ll see it doesn’t usually take very long for them to reach 1. Usually less than 600 steps is enough!
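The Collatz process is a two-line function, and counting its steps reproduces the kind of behavior the graph shows; even 27, a notoriously slow starter among small numbers, reaches 1 in only 111 steps:

```python
def collatz_steps(n):
    """Count the steps for n to reach 1 under the Collatz map:
    halve if even, otherwise triple and add 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# collatz_steps(6)  -> 8    (6, 3, 10, 5, 16, 8, 4, 2, 1)
# collatz_steps(27) -> 111
```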

So, to get a Turing machine that takes a long time to halt, you have to take this kind of behavior and make it much more long and drawn-out. Conversely, to analyze one of the potential winners of the Busy Beaver Game, people must take that long and drawn-out behavior and figure out a way to predict much more quickly when it will halt.

Next, BB(7). In 2014, someone who goes by the name Wythagoras showed that

\displaystyle{ \textrm{BB}(7) > 10^{10^{10^{10^{10^7}}}} }

It’s fun to prove lower bounds on BB(N). For example, in 1964 Milton Green constructed a sequence of Turing machines that implies

\textrm{BB}(2N) \ge 3 \uparrow^{N-2} 3

Here I’m using Knuth’s up-arrow notation, which is a recursively defined generalization of exponentiation, so for example

\textrm{BB}(10) \ge 3 \uparrow^{3} 3 = 3 \uparrow^2 3^{3^3} = 3^{3^{3^{3^{\cdot^{\cdot^\cdot}}}}}

where there are 3^{3^3} threes in that tower.
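Up-arrow notation translates directly into a recursion; a small sketch (only tiny inputs are feasible, of course, since the values explode):

```python
def up(a, n, b):
    """Knuth's up-arrow a ^(n) b: n = 1 is ordinary exponentiation,
    and a ^(n) b = a ^(n-1) (a ^(n) (b-1)), with a ^(n) 1 = a."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up(a, n - 1, up(a, n, b - 1))

# up(3, 2, 3) = 3^(3^3) = 7625597484987, while up(3, 3, 3) is already a
# tower of 7625597484987 threes -- hopelessly beyond evaluation.
```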

But it’s also fun to seek the smallest N for which we can prove BB(N) is unknowable! And that’s what people are making lots of progress on right now.

Sometime in April 2016, Adam Yedidia and Scott Aaronson showed that BB(7910) cannot be determined using the widely accepted axioms for math called ZFC: that is, Zermelo–Fraenkel set theory together with the axiom of choice. It’s a great story, and you can read it here:

• Scott Aaronson, The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me, Shtetl-Optimized, 3 May 2016.

• Adam Yedidia and Scott Aaronson, A relatively small Turing machine whose behavior is independent of set theory, 13 May 2016.

Briefly, Yedidia created a new programming language, called Laconic, which lets you write programs that compile down to small Turing machines. They took an arithmetic statement created by Harvey Friedman that’s equivalent to the consistency of the usual axioms of ZFC together with a large cardinal axiom called the ‘stationary Ramsey property’, or SRP. And they created a Turing machine with 7910 states that seeks a proof of this arithmetic statement using the axioms of ZFC.

Since ZFC can’t prove its own consistency, much less its consistency when supplemented with SRP, their machine will only halt if ZFC+SRP is inconsistent.

Since most set theorists believe ZFC+SRP is consistent, this machine probably doesn’t halt. But we can’t prove this using ZFC.

In short: if the usual axioms of set theory are consistent, we can never use them to determine the value of BB(7910).

The basic idea is nothing new: what’s new is the explicit and rather low value of the number 7910. Poetically speaking, we know the unknowable starts here… if not sooner.

However, this discovery set off a wave of improvements! On the Metamath newsgroup, Mario Carneiro and others started ‘logic hacking’, looking for smaller and smaller Turing machines that would only halt if ZF—that is, Zermelo–Fraenkel set theory, without the axiom of choice—is inconsistent.

By just May 15th, Stefan O’Rear seems to have brought the number down to 1919. He found a Turing machine with just 1919 states that searches for an inconsistency in the ZF axioms. Interestingly, this turned out to work better than using Harvey Friedman’s clever trick.

Thus, if O’Rear’s work is correct, we can only determine BB(1919) if we can determine whether ZF set theory is consistent. However, we cannot do this using ZF set theory—unless we find an inconsistency in ZF set theory.

For details, see:

• Stefan O’Rear, A Turing machine Metamath verifier, 15 May 2016.

I haven’t checked his work, but it’s available on GitHub.

What’s the point of all this? At present, it’s mainly just a game. However, it should have some interesting implications. It should, for example, help us better locate the ‘complexity barrier’.

I explained that idea here:

• John Baez, The complexity barrier, Azimuth, 28 October 2011.

Briefly, while there’s no limit on how much information a string of bits—or any finite structure—can have, there’s a limit on how much information we can prove it has!

This amount of information is pretty low, perhaps a few kilobytes. And I believe the new work on logic hacking can be used to estimate it more accurately!

by John Baez at May 21, 2016 05:24 PM

The n-Category Cafe

Castles in the Air

The most recent issue of the Notices includes a review by Slava Gerovitch of a book by Amir Alexander called Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World. As the reviewer presents it, one of the main points of the book is that science was advanced the most by the people who studied and worked with infinitesimals despite their apparent formal inconsistency. The following quote is from the end of the review:

If… maintaining the appearance of infallibility becomes more important than exploration of new ideas, mathematics loses its creative spirit and turns into a storage of theorems. Innovation often grows out of outlandish ideas, but to make them acceptable one needs a different cultural image of mathematics — not a perfectly polished pyramid of knowledge, but a freely growing tree with tangled branches.

The reviewer makes parallels to more recent situations such as quantum field theory and string theory, where the formal mathematical justification may be lacking but the physical theory is meaningful, fruitful, and made correct predictions, even for pure mathematics. However, I couldn’t help thinking of recent examples entirely within pure mathematics as well, and particularly in some fields of interest around here.

Here are a few; feel free to suggest others in the comments (or to take issue with mine).

  • Informal arguments in higher category theory. For example, Lurie’s original paper On infinity topoi lacked a rigorous formal foundation, but contained many important insights. Because quasicategories had already been invented, he was able to make the ideas rigorous in reasonably short order; but I think it’s fair to say the price is a minefield of technical lemmas. Nowadays one finds people wanting to say “we work with (∞,1)-categories model-independently” to avoid all the technicalities, but it’s unclear whether this quite makes sense. (Although I have some hope now that a formal language closer to the informal one may come out of the Riehl-Verity theory of ∞-cosmoi.)

  • String diagrams for monoidal categories. Joyal and Street’s original paper “The geometry of tensor calculus” carefully defined string diagrams as topological graphs and proved that any labeled string diagram could be interpreted in a monoidal category. But since then, string diagrams have proven so useful that many people have invented variants of them that apply to many different kinds of monoidal categories, and in many (perhaps most) cases they proceed to use them without a similar justifying theorem. Kate and I proved the justifying theorem for our string diagrams for bicategories with shadows, but we didn’t even try it with our string diagrams for monoidal fibrations.

  • Combining higher category theory with string diagrams, we have the recent “graphical proof assistant” Globular, which formally works with a certain kind of semistrict n-category for n ≤ 4. It’s known that semistrict 3-categories (Gray-categories) suffice to model all weak 3-categories, but no such theorem is yet known for 4-categories. So officially, doing a proof about 4-categories in Globular tells you nothing more than that it’s true about semistrict 4-categories, and I suspect that few naturally-occurring 4-categories are naturally semistrict. However, such an argument clearly has meaning and applicability much more generally.

  • And, of course, there is homotopy type theory. Plenty of it is completely rigorous, of course (and even formally verified in a computer), but I’m thinking particularly of its conjectural higher-categorical semantics. Pretty much everyone agrees that HoTT should be an internal language for (∞,1)-topoi, but with present technology this depends on an initiality theorem for models of type theories in general that is universally believed to be true but is very fiddly to prove correctly and has only been written down carefully in one special case. Moreover, even granting the initiality theorem there are various slight mismatches between the formal theories in current use and what we can construct in higher toposes to model them, e.g. the universes are not strict enough and the HITs are too big. Nevertheless, this relationship has been very fruitful to both sides of the subject already (the type theory and the category theory).

The title of this post is a reference to a classic remark by Thoreau:

“If you have built castles in the air, your work need not be lost; that is where they should be. Now put the foundations under them.”

by shulman at May 21, 2016 12:53 AM

May 20, 2016

Clifford V. Johnson - Asymptotia

Gut Feeling…

Still slowly getting back up to speed (literally) on page production. I've made some major tweaks in my desktop workflow (I mostly move back and forth between Photoshop and Illustrator at this stage), and finally have started keeping track of my colours in a more efficient way (using global process colours, etc), which will be useful if I have to do big colour changes later on. My workflow improvement also now includes [...] Click to continue reading this post

The post Gut Feeling… appeared first on Asymptotia.

by Clifford at May 20, 2016 04:38 PM

Tommaso Dorigo - Scientificblogging

Prescaled Jet Triggers: The Rationale Of Randomly Picking Events
In a chapter of the book I have written, "Anomaly! - Collider physics and the quest for new phenomena at Fermilab" (available from September this year), I made an effort to explain a rather counter-intuitive mechanism at the basis of data collection in hadron colliders: the trigger prescale. I would like to have a dry run of the text here, to know if it is really too hard to understand - I still have time to tweak it if needed. So let me know if you understand the description below!

The text below is maybe hard to read as it is taken out of context; however, let me at least spend one

read more

by Tommaso Dorigo at May 20, 2016 02:57 PM

May 19, 2016

Sean Carroll - Preposterous Universe

Give the People What They Want

And what they want, apparently, is 470-page treatises on the scientific and philosophical underpinnings of naturalism. To appear soon in the Newspaper of Record:


Happy also to see great science books like Lab Girl and Seven Brief Lessons on Physics make the NYT best-seller list. See? Science isn’t so scary at all.

by Sean Carroll at May 19, 2016 05:56 PM

Jester - Resonaances

A new boson at 750 GeV?
ATLAS and CMS presented today a summary of the first LHC results obtained from proton collisions with 13 TeV center-of-mass energy. The most exciting news was of course the 3.6 sigma bump at 750 GeV in the ATLAS diphoton spectrum, roughly coinciding with a 2.6 sigma excess in CMS. When there's an experimental hint of new physics signal there is always this set of questions we must ask:

0. WTF ?
0. Do we understand the background?
1. What is the statistical significance of  the signal?
2. Is the signal consistent with other data sets?
3. Is there a theoretical framework to describe it?
4. Does it fit in a bigger scheme of new physics?

Let us go through these questions one by one.

The background.  There are several boring ways to make photon pairs at the LHC, but they are expected to produce a spectrum smoothly decreasing with the invariant mass of the pair. This expectation was borne out in run-1, where the 125 GeV Higgs resonance could be clearly seen on top of a nicely smooth background, with no breaks or big wiggles. So it is unlikely that any Standard Model process (as opposed to a statistical fluctuation) could produce a bump such as the one seen by ATLAS.

The stats.  The local significance is 3.6 sigma in ATLAS and 2.6 sigma in CMS. Naively combining the two, we get a more than 4 sigma excess. That is a very large effect, but we have already seen large fluctuations at the LHC vanish into thin air (remember the 145 GeV Higgs?). Next year's LHC data will be crucial to confirm or exclude the signal. In the meantime, we have a perfect right to be excited.

The consistency.  For this discussion, the most important piece of information is the diphoton data collected in run-1 at 8 TeV center-of-mass energy. Both ATLAS and CMS see a small 1 sigma excess around 750 GeV in the run-1 data, but no clear bump there. If a new 750 GeV particle is produced in gluon-gluon collisions, then the signal cross section grows by roughly a factor of 5 going from 8 TeV to 13 TeV. On the other hand, ATLAS collected about 6 times more data at 8 TeV (20 fb-1) than at 13 TeV (3.2 fb-1). This means the number of signal events produced in ATLAS at 13 TeV should be only about 75% of that at 8 TeV, and the ratio is even worse for CMS (who used only 2.6 fb-1 at 13 TeV). However, the background may grow more slowly than the signal, so the power of the 13 TeV and 8 TeV data sets is comparable. All in all, there is some tension between the run-1 and run-2 data sets; however, a mild downward fluctuation of the signal at 8 TeV and/or a mild upward fluctuation at 13 TeV is enough to explain it. One can also try to explain the lack of a run-1 signal by making the 750 GeV particle the decay product of a heavier resonance (in which case the cross-section gain can be much larger). More careful study with next year's data will be needed to test this possibility.
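The back-of-the-envelope comparison in this paragraph can be written out explicitly. The factor-5 cross-section gain and the luminosities are the numbers quoted above; using 20 fb-1 as the CMS run-1 luminosity as well is my assumption for illustration:

```python
def signal_ratio(xsec_gain, lumi_run2, lumi_run1):
    """Expected ratio (run-2 signal events) / (run-1 signal events):
    the cross-section gain times the ratio of integrated luminosities."""
    return xsec_gain * lumi_run2 / lumi_run1

atlas = signal_ratio(5, 3.2, 20.0)  # 0.8: roughly the "about 75%" above
cms = signal_ratio(5, 2.6, 20.0)    # 0.65: even worse, as the text says
```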

The model.  This is the easiest part :)  A resonance produced in gluon-gluon collisions and decaying to 2 photons?  We've seen that already... that's how the Higgs boson was first spotted.  So all we need to do is to borrow from the Standard Model. The simplest toy model for the resonance would be a new singlet scalar with mass of 750 GeV coupled to new heavy vector-like quarks that carry color and electric charges. Then quantum effects will produce, in analogy to what happens for the Higgs boson, an effective coupling of the new scalar to gluons and photons:

By a judicious choice of the effective couplings (which depend on the masses, charges, and couplings of the vector-like quarks) one can easily fit the diphoton excess observed by ATLAS and CMS; this is shown as the green region in the plot. If the vector-like quark is a T', that is, if it has the same color and electric charge as the Standard Model top quark, then the effective couplings must lie along the blue line. The exclusion limits from the run-1 data (mesh) cut through the best-fit region, but do not disfavor the model completely. Variations on this minimal toy model will appear in a hundred papers this week.

The big picture.  Here the sky is the limit. The situation is completely different from three years ago, when there was one strongly preferred (and ultimately correct) interpretation of the 125 GeV diphoton and 4-lepton signals: the Higgs boson of the Standard Model. Scalars coupled to new quarks, by contrast, appear in countless models of new physics. We may be seeing the radial Higgs partner predicted by little Higgs or twin Higgs models, or a dilaton arising from spontaneous conformal symmetry breaking, or a composite state bound by new strong interactions. It could be part of an extended Higgs sector in many different contexts, e.g. the heavy scalar or pseudo-scalar of two-Higgs-doublet models. For more spaced-out possibilities, it could be the KK graviton of the Randall-Sundrum model, or it could fit some popular supersymmetric models such as the NMSSM. All these scenarios face some challenges. One is to explain why the branching ratio into two photons is large enough to be observed, and why the 750 GeV scalar is not seen in other decay channels, e.g. decays to W boson pairs, which should be the dominant mode for a Higgs-like scalar. However, these challenges are nothing that an average theorist could not resolve by tomorrow morning. Most likely, this particle would be just a small part of a larger structure, possibly having something to do with electroweak symmetry breaking and the hierarchy problem of the Standard Model. If the signal is real, it may be the beginning of a new golden era in particle physics…

by Jester at May 19, 2016 03:44 PM

Jester - Resonaances

Higgs force awakens
The Higgs boson couples to the particles that constitute matter around us, such as electrons, protons, and neutrons. Its virtual quanta are constantly being exchanged between these particles. In other words, it gives rise to a force - the Higgs force. I'm surprised that this PR-friendly aspect is not explored in our outreach efforts. Higgs bosons mediate the Higgs force in the same fashion as gravitons, gluons, photons, and W and Z bosons mediate the gravitational, strong, electromagnetic, and weak forces. Just like gravity, the Higgs force is always attractive, and its strength is proportional, in the first approximation, to the particle's mass. It is a force in the common sense; for example, if we bombarded a detector long enough with a beam of particles interacting only via the Higgs force, they would eventually knock atoms out of the detector.

There is of course a reason why the Higgs force is less discussed: it has never been detected directly. Indeed, in the absence of midi-chlorians it is extremely weak. First, like the weak interaction, it is short-ranged: since the mediator is massive, the interaction strength is exponentially suppressed at distances larger than an attometer (10^-18 m), about 0.1% of the diameter of a proton. Moreover, for ordinary matter the weak force is more important, because of the tiny Higgs couplings to light quarks and electrons. For example, for the proton the Higgs force is a thousand times weaker than the weak force, and for the electron it is a hundred thousand times weaker. Finally, there are no known particles interacting only via the Higgs force and gravity (though dark matter in some hypothetical models has this property), so in practice the Higgs force is always a tiny correction to more powerful forces that shape the structure of atoms and nuclei. This is again in contrast to the weak force, which is particularly relevant for neutrinos, which are immune to the strong and electromagnetic forces.
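As a back-of-the-envelope check of the attometer figure above (my rounded numbers, not from the post), the range of a Yukawa force is set by the Compton wavelength of the mediator:

```python
# Range of a Yukawa force mediated by a particle of mass m: r ~ hbar*c / (m*c^2).
# Constants are rounded; this is an order-of-magnitude estimate only.
HBARC_GEV_M = 1.97327e-16   # hbar*c in GeV*m
M_HIGGS_GEV = 125.0         # Higgs boson mass in GeV

higgs_range = HBARC_GEV_M / M_HIGGS_GEV
print(f"Higgs force range ~ {higgs_range:.1e} m")  # ~1.6e-18 m, about an attometer
```

Plugging in the W mass (~80 GeV) instead gives a slightly longer range, which is why the weak and Higgs forces operate at comparable distances.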

Nevertheless, this new paper argues that the situation is not hopeless, and that the current experimental sensitivity is good enough to start probing the Higgs force. The authors propose to do it by means of atomic spectroscopy. Frequency measurements of atomic transitions have reached a stunning accuracy of order 10^-18. The Higgs force creates a Yukawa-type potential between the nucleus and the orbiting electrons, which leads to a shift of the atomic levels. The effect is tiny; in particular, it is always smaller than the analogous shift due to the weak force. This is a serious problem, because calculations of the leading effects may not be accurate enough to extract the subleading Higgs contribution. Fortunately, there may be tricks to reduce the uncertainties. One is to measure the isotope shift of transition frequencies for several isotope pairs. The theory says that the leading atomic interactions should give rise to a universal linear relation (the so-called King's relation) between isotope shifts for different transitions. The Higgs and weak interactions should lead to a violation of King's relation. Given the many uncertainties plaguing calculations of atomic levels, it may still be difficult to ever claim a detection of the Higgs force. More realistically, one can try to set limits on the Higgs couplings to light fermions which are better than the current collider limits.
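The logic of a King-relation test can be sketched with entirely made-up numbers (this is an illustration of the idea, not the paper's analysis): isotope shifts of one transition are fit as a straight line against those of another, and a new Yukawa-type force shows up as nonzero residuals.

```python
# Toy King-relation test: for two transitions, mass-scaled isotope shifts of several
# isotope pairs should lie on a straight line; a new force adds a small nonlinearity.
# All numbers below are hypothetical.
mu = [1.0, 1.8, 2.5, 3.1]                           # inverse-mass factors per pair
shift1 = [4.0 + 2.0 * x for x in mu]                # transition 1: exactly linear
shift2 = [1.5 + 0.7 * x + 1e-4 * x**2 for x in mu]  # transition 2: tiny violation

# Least-squares straight-line fit of shift2 vs shift1, then inspect the residuals
n = len(mu)
sx, sy = sum(shift1), sum(shift2)
sxx = sum(x * x for x in shift1)
sxy = sum(x * y for x, y in zip(shift1, shift2))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

residuals = [y - (slope * x + intercept) for x, y in zip(shift1, shift2)]
print(max(abs(r) for r in residuals))  # nonzero: a line cannot absorb the x^2 term
```

In the real proposal the challenge is that the residuals from the Higgs force are far smaller than those from the weak force and from nuclear-structure effects.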

Atomic spectroscopy is way above my head, so I cannot judge if the proposal is realistic. There are a few practical issues to resolve before the Higgs force is mastered into a lightsaber. However, it is possible that a new front to study the Higgs boson will be opened in the near future. These studies will provide information about the Higgs couplings to light Standard Model fermions, which is complementary to the information obtained from collider searches.

by Jester at May 19, 2016 03:43 PM

Jester - Resonaances

750 ways to leave your lover
A new paper last week straightens out the story of the diphoton background in ATLAS. Some confusion was created because theorists misinterpreted the procedures described in the ATLAS conference note, which could lead to a different estimate of the significance of the 750 GeV excess. However, once the correct phenomenological and statistical approach is adopted, the significance quoted by ATLAS can be reproduced, up to small differences due to incomplete information available in public documents. Anyway, now that this is all behind us, we can safely continue being excited at least until summer. Today I want to discuss different interpretations of the diphoton bump observed by ATLAS. I will take a purely phenomenological point of view, leaving for next time the question of the bigger picture that the resonance may fit into.

Phenomenologically, the most straightforward interpretation is the so-called everyone's model: a 750 GeV singlet scalar particle produced in gluon fusion and decaying to photons via loops of new vector-like quarks. This simple construction perfectly explains all publicly available data, and can be easily embedded in more sophisticated models. Nevertheless, many more possibilities were pointed out in the 750 papers so far, and here I review a few that I find most interesting.

Spin Zero or More?  
For a particle decaying to two photons there are not that many possibilities: the resonance has to be a boson and, by the Landau-Yang theorem, it cannot have spin 1. This leaves spin 0, 2, or higher on the table. Spin 2 is an interesting hypothesis, as this kind of excitation is predicted in popular models like the Randall-Sundrum one. Higher-than-two spins are disfavored theoretically. When more data is collected, the spin of the 750 GeV resonance can be tested by looking at the angular distribution of the photons. The rumor is that the data so far somewhat favor spin 2 over spin 0, although the statistics are certainly insufficient for any serious conclusions. Concerning the parity, it is practically impossible to determine it by studying the diphoton final state, and both the scalar and the pseudoscalar options are equally viable at present. Discrimination may be possible in the future, but only if multi-body decay modes of the resonance are discovered. If the true final state is more complicated than two photons (see below), then the 750 GeV resonance may have any spin, including spin 1 and spin 1/2.

Narrow or Wide? 
The total width is the inverse of the particle's lifetime (in our funny units). From the experimental point of view, a width larger than the detector's energy resolution will show up as a smearing of the resonance due to the uncertainty principle. Currently, the ATLAS run-2 data prefer a width about 10 times larger than the experimental resolution (which is about 5 GeV in this energy ballpark), although the preference is not very strong in the statistical sense. On the other hand, from the theoretical point of view it is much easier to construct models where the 750 GeV resonance is a narrow particle. Therefore, confirmation of the large width would have profound consequences, as it would significantly narrow down the scope of viable models. The most exciting interpretation would then be that the resonance is a portal to a dark sector containing new light particles very weakly coupled to ordinary matter.
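For orientation (my numbers, not the post's): in natural units the width and lifetime are related by τ = ħ/Γ, so the wide and narrow scenarios correspond to dramatically different lifetimes.

```python
# Width-lifetime relation: tau = hbar / Gamma (hbar ~ 6.582e-25 GeV*s).
HBAR_GEV_S = 6.582e-25
gamma_wide = 45.0     # ~10x the ~5 GeV experimental resolution, in GeV
gamma_narrow = 0.1    # a hypothetical "narrow" width for comparison, in GeV

tau_wide = HBAR_GEV_S / gamma_wide
tau_narrow = HBAR_GEV_S / gamma_narrow
print(f"wide: {tau_wide:.1e} s, narrow: {tau_narrow:.1e} s")  # ~1.5e-26 s vs ~6.6e-24 s
```

Either way the particle decays essentially at the production vertex; the width matters for the line shape and for the total decay rate a model has to supply, not for displaced decays.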

How many resonances?  
One resonance is enough, but a family of resonances tightly packed around 750 GeV may also explain the data. As a bonus, this could explain the seemingly large width without opening new dangerous decay channels. It is quite natural for particles to come in multiplets with similar masses: the pion is an example, where the small mass splitting between π± and π0 arises due to electromagnetic quantum corrections. For Higgs-like multiplets the small splitting may naturally arise after electroweak symmetry breaking, and the familiar 2-Higgs-doublet model offers a simple realization. If the mass splitting of the multiplet is larger than the experimental resolution, this possibility can be tested by precisely measuring the profile of the resonance and searching for a departure from the Breit-Wigner shape. On the other side of the spectrum is the idea that there is no resonance at all at 750 GeV, but rather at another mass, with the bump at 750 GeV appearing due to some kinematic accident.
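A minimal sketch with toy numbers shows what such a departure would look like: a pair of narrow resonances straddling 750 GeV develops a two-peak structure that no single Breit-Wigner can mimic.

```python
# Toy comparison: two narrow resonances 10 GeV apart vs one wide resonance at 750 GeV.
# Widths and masses are illustrative choices, not fits to any data.
def breit_wigner(m, m0, gamma):
    """Unnormalized relativistic Breit-Wigner line shape."""
    return 1.0 / ((m * m - m0 * m0) ** 2 + m0 * m0 * gamma * gamma)

masses = [700.0 + 0.1 * i for i in range(1001)]  # scan 700-800 GeV
doublet = [breit_wigner(m, 745.0, 5.0) + breit_wigner(m, 755.0, 5.0) for m in masses]

peak = max(doublet)
at_750 = doublet[500]          # masses[500] == 750.0
print(at_750 < peak)           # True: the doublet dips between its two peaks
```

With the current resolution and statistics the dip would be washed out, which is exactly why the doublet can fake a single wide resonance for now.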
Who made it? 
The most plausible production process is definitely gluon-gluon fusion. Production in collisions of light quarks and antiquarks is also theoretically sound; however, it leads to a more acute tension between the run-2 and run-1 data. Indeed, even for gluon fusion, the production cross section of a 750 GeV resonance in 13 TeV proton collisions is only 5 times larger than at 8 TeV. Given the larger amount of data collected in run-1, we would expect a similar excess there, contrary to observations. For a resonance produced from u-ubar or d-dbar the analogous ratio is only 2.5 (see the table), leading to much more tension. The ratio climbs back to 5 if the initial state contains the heavier quarks: strange, charm, or bottom (which can also be found sometimes inside a proton); however, I haven't yet seen a neat model that makes use of that. Another possibility is to produce the resonance via photon-photon collisions. This way one could cook up a truly minimal and very predictive model where the resonance couples only to photons among all the Standard Model particles. However, in this case the ratio between the 13 and 8 TeV cross sections is very unfavorable, merely a factor of 2, and the run-1 vs run-2 tension comes back with more force. More options open up when associated production (e.g. with t-tbar, or in vector boson fusion) is considered. The problem with these ideas is that, according to what was revealed during the talk last December, there aren't any additional energetic particles in the diphoton events. Similar problems face models where the 750 GeV resonance appears as a decay product of a heavier resonance, although in this case some clever engineering or fine-tuning may help to hide the additional particles from experimentalists' eyes.

Two-body or more?
While a simple two-body decay of the resonance into two photons is a perfectly plausible explanation of all existing data, a number of interesting alternatives have been suggested. For example, the decay could be 3-body, with another soft visible or invisible  particle accompanying two photons. If the masses of all particles involved are chosen appropriately, the invariant mass spectrum of the diphoton remains sharply peaked. At the same time, a broadening of the diphoton energy due to the 3-body kinematics may explain why the resonance appears wide in ATLAS. Another possibility is a cascade decay into 4 photons. If the  intermediate particles are very light, then the pairs of photons from their decay are very collimated and may look like a single photon in the detector.
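To see why a light intermediate particle yields photon pairs that merge in the detector (a rough kinematic estimate with hypothetical masses): a particle of mass m and lab energy E >> m decays to two photons with a typical opening angle of about 2m/E.

```python
# Opening angle of a photon pair from a boosted light particle: theta ~ 2*m/E for E >> m.
# The masses/energies below are illustrative, not taken from any specific model.
m_light = 1.0    # hypothetical intermediate particle mass, GeV
e_lab = 375.0    # each intermediate carries ~half of the 750 GeV resonance energy

theta_mrad = 2.0 * m_light / e_lab * 1e3
print(f"opening angle ~ {theta_mrad:.1f} mrad")  # ~5.3 mrad
```

A few milliradians is below the angular granularity of a typical electromagnetic calorimeter cell, so the pair can be reconstructed as a single photon.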
 ♬ The problem is all inside your head   and the possibilities are endless. The situation is completely different than during the process of discovering the  Higgs boson, where one strongly favored hypothesis was tested against more exotic ideas. Of course, the first and foremost question is whether the excess is really new physics, or just a nasty statistical fluctuation. But if that is confirmed, the next crucial task for experimentalists will be to establish the nature of the resonance and get model builders on the right track.  The answer is easy if you take it logically ♬ 

All ideas discussed above appeared in recent articles by various authors addressing the 750 GeV excess. If I were to include all references the post would be just one giant hyperlink, so you need to browse the literature yourself to find the original references.

by Jester at May 19, 2016 03:43 PM

Jester - Resonaances

April Fools' 16: Was LIGO a hack?

This post is an April Fools' joke. LIGO's gravitational waves are for real. At least I hope so ;) 

We have recently had a few scientific embarrassments, where a big discovery announced with great fanfare was subsequently overturned by new evidence. We still remember OPERA's faster-than-light neutrinos, which turned out to be a loose cable, or BICEP's gravitational waves from inflation, which turned out to be galactic dust emission... It seems that another such embarrassment is coming our way: LIGO's recent discovery of gravitational waves emitted in a black hole merger may share a similar fate. There are reasons to believe that the experiment was hacked, and the signal was injected by a prankster.

From the beginning, one reason to be skeptical about LIGO's discovery was that the signal  seemed too beautiful to be true. Indeed, the experimental curve looked as if taken out of a textbook on general relativity, with a clearly visible chirp signal from the inspiral phase, followed by a ringdown signal when the merged black hole relaxes to the Kerr state. The reason may be that it *is* taken out of a  textbook. This is at least what is strongly suggested by recent developments.

On EvilZone, a well-known hackers' forum, a hacker using the nickname Madhatter was boasting that it was possible to tamper with scientific instruments, including the LHC, the Fermi satellite, and the LIGO interferometer. When challenged, he or she uploaded a piece of code that allows one to access LIGO computers. Apparently, the hacker took advantage of the same backdoor that allows selected members of the LIGO team to inject a fake signal in order to test the analysis chain. This was brought to the attention of the collaboration members, who decided to test the code. To everyone's bewilderment, the effect was to reproduce exactly the same signal in the LIGO apparatus as the one observed in September last year!

Even though the traces of a hack cannot be discovered, there is little doubt now that foul play was involved. It is not clear what the hacker's motive was: was it just a prank, or maybe an elaborate plan to discredit the scientists? What is even more worrying is that the same thing could happen in other experiments. The rumor is that the ATLAS and CMS collaborations are already checking whether the 750 GeV diphoton resonance signal could also have been injected by a hacker.

by Jester at May 19, 2016 03:42 PM

ZapperZ - Physics and Physicists

The Curse Of Being A Physicist
When do you speak up in a social setting and set someone straight?

I think I've mentioned a few times on here being in a social setting and then being found out to be a physicist. Most of the time this was a good thing, because I get curious questions about what was on the news related to physics (the LHC was a major story for months).

But what if you hear something, and it clearly wasn't quite right? Do you speak up and possibly cause embarrassment to the other person?

I attended the annual Members Night at the Adler Planetarium last night here in Chicago. It was a very enjoyable evening. Their new show on "Planet Nine", which is about to open, was very, VERY informative and entertaining. I highly recommend it. We got to be among the first to see it before it opens to the public.

Well, anyway, towards the end of the evening, before we left, we decided to walk around the back of the facility and visit the Doane Observatory. The telescope was looking at Jupiter, which was prominent in the night sky last night. There was a line, so we waited for our turn.

As we moved up the line, I and my companions heard these two gentlemen chatting away with the visitors, and then with each other, about their enthusiasm for astronomy and science, etc. This is always good to see, especially at an event like this. As I got closer, it turned out that they were either volunteers or staff at the Adler Planetarium, because they were wearing name tags or something similar. One of them identified himself as an astronomer, which wasn't surprising considering the event and the location.

But then, things got a bit sour, at least for me. In trying to pump up their enthusiasm about astronomy and science, they started quoting Carl Sagan's famous phrase that we are all made up of star stuff. This wasn't the bad part, but then they took it further by claiming that hydrogen is the "lego blocks" of the universe, and that everything can be thought of as being built out of hydrogen. One of them started giving an example by saying that you take two hydrogen and put them together, and you get helium!

OK, by then I was no longer amused by these two guys, and was tempted to say something. I wanted to say that hydrogen is not the "lego blocks" of our universe, not if the Standard Model of particle physics has anything to say about that. And secondly, you don't get helium when you put two hydrogen atoms together. After all, where would the two extra neutrons in helium come from?

But I stopped myself from saying anything. These people were working pretty hard for this event, they were trying to show their enthusiasm for the subject matter, and we were surrounded by other people, members of the general public, who obviously were also interested in the topic. Anything I might have said to correct these two men would not have looked good, at least that was my assessment at that moment. It might easily have led to an awkward, embarrassing moment.

I get that when we try to talk to the public about science, we might overextend ourselves. I used to give tours and participate in outreach programs, so I've been in this type of situation before. While I tried to make sure everything I said was accurate, there was always the possibility that someone in the audience might know more about something I said and find certain aspects of it not entirely accurate. I get that.

So that was why I didn't say anything to these two gentlemen. I think that what they told the people within earshot was wrong. Maybe their enthusiasm made them forget some basic facts. That might be forgivable. Still, it is obvious that I'm still thinking about this the next morning, and second-guessing whether I should have quietly told them that what they said wasn't quite right. Maybe it might stop them from saying it out loud next time?

On the other hand, how many of these people who heard what was said actually (i) understood it and (ii) remembered it?


by ZapperZ at May 19, 2016 01:44 PM

ZapperZ - Physics and Physicists

Still No Sterile Neutrinos
IceCube has not found any indication of the presence of sterile neutrinos after looking for them for two years, at least not in the energy range where they were expected.

In the latest research, the IceCube collaboration performed independent analyses on two sets of data from the observatory, looking for sterile neutrinos in the energy range between approximately 320 GeV and 20 TeV. If present, light sterile neutrinos with a mass of around 1 eV/c² would cause a significant disappearance in the total number of muon neutrinos that are produced by cosmic-ray showers in the atmosphere above the northern hemisphere and then travel through the Earth to reach IceCube. The first set of data included more than 20,000 muon-neutrino events detected between 2011 and 2012, while the second covered almost 22,000 events observed between 2009 and 2010.

I think there are other facilities that are looking for them as well. But this result certainly excludes a large portion of the "search area".


by ZapperZ at May 19, 2016 01:17 PM

Symmetrybreaking - Fermilab/SLAC

The Planck scale

The Planck scale sets the universe’s minimum limit, beyond which the laws of physics break down.

In the late 1890s, physicist Max Planck proposed a set of units to simplify the expression of physics laws. Using just five constants in nature (including the speed of light and the gravitational constant), you, me and even aliens from Alpha Centauri could arrive at these same Planck units.

The basic Planck units are length, mass, temperature, time and charge.

Let’s consider the unit of Planck length for a moment. The proton is about 100 million trillion times larger than the Planck length. To put this into perspective, if we scaled the proton up to the size of the observable universe, the Planck length would be a mere trip from Tokyo to Chicago. The 14-hour flight may seem long to you, but to the universe, it would go completely unnoticed.
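The scaling in this analogy can be checked with round numbers (my own rough estimates for the sizes involved):

```python
# Blow the proton up to the size of the observable universe and see how big
# the Planck length becomes under the same magnification (all sizes rounded).
planck_length = 1.6e-35        # m
proton_diameter = 1.7e-15      # m
universe_diameter = 8.8e26     # m (observable universe)

magnification = universe_diameter / proton_diameter
scaled_planck_km = planck_length * magnification / 1e3
print(f"{scaled_planck_km:.0f} km")  # roughly 8000 km, comparable to Tokyo-Chicago
```

The great-circle distance from Tokyo to Chicago is about 10,000 km, so the analogy holds to within the precision of these round numbers.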

The Planck scale was invented as a set of universal units, so it was a shock when those limits also turned out to be the limits where the known laws of physics applied. For example, a distance smaller than the Planck length just doesn’t make sense—the physics breaks down.

Physicists don’t know what actually goes on at the Planck scale, but they can speculate. Some theoretical particle physicists predict all four fundamental forces—gravity, the weak force, electromagnetism and the strong force—finally merge into one force at this energy. Quantum gravity and superstrings are also possible phenomena that might dominate at the Planck energy scale.

The Planck scale is the universal limit, beyond which the currently known laws of physics break down. In order to comprehend anything beyond it, we need new, unbreakable physics.

by Rashmi Shivni at May 19, 2016 01:00 PM

Lubos Motl - string vacua and pheno

Particles are vibrations
The music analogy is much more accurate than most people want to believe

Tetragraviton is a postdoc at the Perimeter Institute who has written several papers on multiloop amplitudes in gauge theory. Even though none of these papers depends on string theory in any tangible way, I had thought that he's a guy close enough to string theory that he could potentially work on it, which is why I was surprised by his blog post a week ago,
Particles Aren’t Vibrations (at Least, Not the Ones You Think)
which indicates that I was wrong. The first sentence tells you what kind of popularizers are supposed to be a target:
You’ve probably heard this story before, likely from Brian Greene.
I was imagining that there was something subtle. People may dislike the overabundant comments about "music and string theory" etc. But I didn't find anything too subtle in the blog post. While there's always some room for interpretation of what a somewhat vague sentence addressed to laymen could have meant, I think it's right to conclude that Tetragraviton is just flatly wrong.

Needless to say, the claim that (in weakly coupled string theory) different particle species are vibration modes of a string isn't just some fairy-tale used by Brian Greene. It's a translation of an actual defining fact of string theory into plain English. Brian Greene has in no way a monopoly over such a thing. Pretty much everyone else who has talked about string theory agrees that this is the right summary of string theory's ingenious description of the diversity of particle species.

Clearly, you may add people like Michio Kaku:
In string theory, all particles are vibrations on a tiny rubber band; physics is the harmonies on the string; chemistry is the melodies we play on vibrating strings; the universe is a symphony of strings, and the 'Mind of God' is cosmic music resonating in 11-dimensional hyperspace.
Kaku and even Greene may sometimes be presented as "just some popularizers". But they have made highly nontrivial contributions to the field, too. And almost all other string theorists who talk about string theory use very similar formulations. I could give you dozens of examples. But because of his widely respected technical credentials, let me pick Edward Witten:
String theory is an attempt at a deeper description of nature by thinking of an elementary particle not as a little point but as a little loop of vibrating string. One of the basic things about a string is that it can vibrate in many different shapes or forms, which gives music its beauty. If we listen to a tuning fork, it sounds harsh to the human ear. And that's because you hear a pure tone rather than the higher overtones that you get from a piano or violin that give music its richness and beauty.

So in the case of one of these strings it can oscillate in many different forms—analogously to the overtones of a piano string. And those different forms of vibration are interpreted as different elementary particles: quarks, electrons, photons. All are different forms of vibration of the same basic string. Unity of the different forces and particles is achieved because they all come from different kinds of vibrations of the same basic string. In the case of string theory, with our present understanding, there would be nothing more basic than the string.
The fact that particle species are types of vibrations isn't just a truth. It's pretty much "the defining truth", the very reason why string theory is unifying forces and matter. If you allow me to quote Barton Zwiebach's undergraduate textbook, A First Course in String Theory:
Why is string theory a truly unified theory? The reason is simple and goes to the heart of the theory. In string theory, each particle is identified as a particular vibrational mode of an elementary microscopic string. A musical analogy is very apt. Just as a violin string can vibrate in different modes and each mode corresponds to a different sound, the modes of vibration of a fundamental string can be recognized as the different particles we know. One of the vibrational states of strings is the graviton, the quantum of the gravitational field. Since there is just one type of string, and all particles arise from string vibrations, all particles are naturally incorporated into a single theory. When we think in string theory of a decay process...
Everyone who understands string theory agrees with the essence of the statement that string theory explains particles as vibrations.

It's always amazing to see how many people like to pick an important truth, completely negate it, and claim that the result is a very important truth. It looks like they want to prove Niels Bohr's famous quote
The opposite of a correct statement is a false statement. But the opposite of a profound truth may well be another profound truth.
Well, he only says that the opposite of a profound truth may be another profound truth. It usually isn't.

OK, so how did Tetragraviton argue that particles aren't vibrations?

We were shown the higher harmonics on a string with a claim that this is not how string theory produces the list of particle species. Except that it is a totally valid sketch of how string theory does it.

In a flat spacetime background, a single string really has possible higher harmonics \(\alpha^\mu_{\pm n}\) along the string – the \(n\)-th Fourier component in the expansion of a combination of \(x^{\prime \mu}(\sigma)\) and \(p^\mu(\sigma)\) – and \(\alpha_n,\alpha_{-n}\) obey the algebra of annihilation and creation operators, respectively.

A general excited open string state is obtained by the action of these harmonics on the ground state (usually a tachyonic ground state) \(\ket 0\):\[

\dots (\alpha_{-3})^{N_3} (\alpha_{-2})^{N_2} (\alpha_{-1})^{N_1} \ket 0

\] where the exponents \(N_j\) are non-negative integers (only finitely many are nonzero). For each higher harmonic, the string may be excited by the corresponding vibration – an integer number of times because the string obeys the laws of quantum mechanics and the quantum harmonic oscillator has an equally spaced spectrum. Such an excited string behaves as a particle whose squared mass is\[

m^2 = m_0^2 (\dots + 3N_3 + 2N_2 + 1N_1 - 1)

\] The more excitations you include, the heavier the particle you get; the higher harmonics increase the mass more quickly. The term \(-1\) is a contribution from the zero-point energies of all these oscillators. You may derive this negative shift as a term proportional to the renormalized sum of integers\[

1+2+3+\dots \to -\frac{1}{12}

\] I've replaced \(=\) by \(\to\) just because I want to reduce the number of angry clueless critics by 70% but be sure that \(=\) would be more accurate.
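As a concrete illustration of this spectrum (a sketch for a single transverse oscillator tower, suppressing the \(\mu\) index as above): the number of distinct states at level \(N = \sum_n n N_n\) is the number of integer partitions of \(N\), since each state is a choice of occupation numbers \(N_1, N_2, \dots\).

```python
from functools import lru_cache

# Count states at level N of one bosonic oscillator tower: each state is a choice of
# occupation numbers N_1, N_2, ... with sum(n * N_n) = N, i.e. a partition of N.
@lru_cache(maxsize=None)
def num_states(n, max_part):
    """Number of partitions of n into parts no larger than max_part."""
    if n == 0:
        return 1
    if n < 0 or max_part == 0:
        return 0
    # either use one part of size max_part, or forbid that size entirely
    return num_states(n - max_part, max_part) + num_states(n, max_part - 1)

levels = [num_states(n, n) for n in range(6)]
print(levels)  # [1, 1, 2, 3, 5, 7] -- the familiar partition numbers
```

For the realistic superstring one multiplies in the eight transverse towers and the fermionic oscillators, and the degeneracies grow exponentially with the level, but the counting principle is the same.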

The characteristic scale \(m\) may be close to the GUT scale if not the Planck scale. But there also exist low-string-scale models (brane worlds) where \(m\) is comparable to a few \({\rm TeV}\)s, the energies marginally accessible by the LHC. I was surprised that Tetragraviton didn't have a clue about the possibility of a low string scale.

In the formula above, I suppressed the \(\mu\) index so I was only adding vibrations in one transverse dimension. A realistic 10D superstring requires 8 copies of such oscillators, all of them may excite the string by the same amount, and there may also be similar fermionic oscillators living on the string. Their contributions to the masses are analogous – except that the corresponding operators "mostly anticommute" and the occupation numbers are therefore \(0\) or \(1\).

For closed strings, we have two sets of oscillators – left-moving and right-moving oscillators \(\alpha\) and \(\tilde\alpha\). Both of them may be added to excite the string. The total \(m^2\) calculated from the left-movers must agree with the total \(m^2\) calculated from the right-movers. The requirement is known as the level-matching condition, \(L_0=\tilde L_0\), and it is basically equivalent to the statement that the choice of the \(\sigma=0\) "origin" of a closed string must be unphysical (the total momentum along/around the closed string must vanish).

Note that our formula calculated \(m^2\) and not \(m\) as the integer. This is due to a rather elementary kinematic technicality that boils down to relativity. In relativity, things simplify when the strings are highly boosted or described in the "light cone gauge". In that case, the component \(p^-\) of the energy-momentum vector – a light-cone gauge edition of "energy" – turns out to contain a term proportional to \(m^2\). (Explanations without the light-cone gauge are possible, too.)

You may have been afraid that in relativity the energy formula would unavoidably contain lots of square roots from \(E=\sqrt{M^2+P^2}\), which would make all the oscillators anharmonic. But this trap may be avoided by a choice of coordinates on the world sheet. In particular, in the light-cone gauge (really, a conformal gauge is enough), the internal energy of the string is linked to \(m^2\) of the corresponding particle, and the formula for \(m^2\) reduces to simple harmonic oscillators without square roots. None of these things may be clear to anyone "without any calculations", but the students learn and verify the reasons before the 5th lecture of string theory. The result is that even though the string is a relativistic object (the vibration equations are Lorentz-covariant), the relevant Hamiltonians may be written with simple formulae involving harmonic oscillators and no square roots.

So the squared masses \(m^2\) of allowed vibrating strings are literally integers in certain units.

The squared masses of known particle species are not equally spaced in this way. It's mostly because
  1. the strings generally vibrate in a curved spacetime background
  2. the particles – vibrations of strings – interact with each other (because strings split and join) and this has a similar effect on the masses as field theory phenomena such as the Higgs mechanism; in fact, the Higgs mechanism and all similar things work in string theory "just like" in field theory
Tetragraviton says that the statement "particles are vibrations" is invalidated in some way because string theory also has extra dimensions and supersymmetry. But neither extra dimensions nor supersymmetry invalidates the picture above. In fact, neither extra dimensions nor supersymmetry implies that even the simple equally spaced spectrum based on the higher harmonics has to be generalized.

Extra dimensions may be flat (torus or its orbifolds) and supersymmetry may be expressed in terms of free fields (whose spectrum is exactly gotten by adding the energy of the higher harmonics).

Moreover, Tetragraviton's extra comments about extra dimensions and supersymmetry are absolutely demagogic given the fact that he claimed to show something inaccurate about Brian Greene's statements about string theory. Brian Greene has always discussed extra dimensions and supersymmetry in much more detail than Tetragraviton. For example, several full chapters are dedicated to these topics in The Elegant Universe.

I want to emphasize that there are actually semirealistic models of string theory – which basically produce the minimal supersymmetric standard model or something like that consistently coupled to quantum gravity – which still build the spectrum pretty much by the simple addition of the higher harmonics that I discussed above. In particular, I mean the orbifolds of tori and the heterotic models in the free fermionic formulation.

A novelty of such orbifolds is that some of the states come from twisted sectors. A twisted sector has some new boundary conditions. A round trip around the closed string doesn't return you to the same point in the space (or configuration space) but to one related by a global symmetry (an isometry of the compactification manifold or a generalization of an isometry). Consequently, the indices \(n\) of \(\alpha_{n}\) are no longer integers but are shifted by a fractional shift such as \(1/2\) or \(1/4\) away from an integer. This doesn't change the story qualitatively. It's still true that the squared masses are integer multiples of a quantum. Also, the negative additive shift in the formula for \(m^2\) – the ground state energy – depends on which (twisted or untwisted) sector you consider.
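The sector-dependent ground state energy can be made explicit with a standard free-field result (my addition, not spelled out in the post): a single real boson whose modes are shifted to \(n+a\) contributes the zeta-regularized zero-point energy \(\tfrac{1}{2}\zeta(-1,a) = -\tfrac{1}{24} + \tfrac{a(1-a)}{4}\).

```python
from fractions import Fraction

def ground_energy(a: Fraction) -> Fraction:
    """Zeta-regularized zero-point energy (1/2) * sum_{n>=0} (n + a)
    for one real boson with modes shifted by a, 0 <= a < 1:
    (1/2) * zeta_Hurwitz(-1, a) = -1/24 + a*(1 - a)/4."""
    return Fraction(-1, 24) + a * (1 - a) / 4

# untwisted (periodic) boson: the familiar -1/24 per transverse direction
print(ground_energy(Fraction(0)))        # -1/24
# Z2-twisted (antiperiodic) boson, moding shifted by 1/2:
print(ground_energy(Fraction(1, 2)))     # 1/48
# a Z4 twist shifts the moding by 1/4:
print(ground_energy(Fraction(1, 4)))     # 1/192
```

The \(a=0\) value reproduces the \(-1/24\) per transverse boson that, summed over 24 directions, gives the bosonic string its tachyonic ground state.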

Let me discuss Tetragraviton's claims in more detail.
It’s a nice story. It’s even partly true. But it [the claim that increasingly heavy particle species are obtained from the addition of higher harmonics etc.] gives a completely wrong idea of where the particles we’re used to come from.
Sorry, but it gives a completely correct qualitative idea of where all particle species in string theory come from. All of them come from string vibrations, and it's always the case that (as long as one ignores subleading corrections to the masses from field theory effects etc.) the more vibrations are added to a string, the heavier a particle species we obtain.

Experimentally, we have only observed a few dozen particle species. But they come from the tower of vibrating strings, too. In some approximation, they usually come from states with \(m^2=0\). But that does not mean that the counting of the higher harmonics and their contributions to \(m^2\) may be avoided.


It's because of the negative shift in the \(m^2\). The ground state of a (closed or open) string is normally a tachyon with \(m^2\lt 0\). This state is projected out by the so-called GSO projection. In the end, spacetime supersymmetry is a sufficient (but not necessary!) condition to get rid of all the tachyons. But there are always numerous massless states – in the approximation of free strings. And these states are massless because the negative ground state contribution to \(m^2\) is cancelled by the positive contributions from the oscillators. This cancellation may take a different numerical form for different states – and especially for states in different twisted sectors.

But again, the counting of the basic frequency's and higher harmonics' energy is unavoidable even if you want to understand the origin of the massless states. If all the non-constant modes along the string could be completely ignored and omitted, the whole added value of string theory would be "redundant garbage" and we could just work with the equally consistent massless truncation of string theory.

However, we just can't. In particular, one of the massless (i.e. \(m^2=0\)) states of the vibrating string is the graviton, the quantum of the gravitational wave (or field), the messenger of the gravitational force. Even in the simplest \(D=26\) bosonic string theory, the spin-two graviton states are obtained from a closed string by the action of two oscillators:\[

\alpha^\mu_{-1} \tilde \alpha^\nu_{-1} \ket{0}

\] Similarly, in the RNS \(D=10\) definition of superstring theory, it is\[

\alpha^\mu_{-1/2} \tilde \alpha^\nu_{-1/2} \ket{0}_{NS,NS}

\] The ground state is a tachyon (which survives in bosonic string theory, a source of infrared inconsistencies equivalent to an instability, but is removed by the GSO projections in superstring theory). But its negative \(m^2\) is exactly cancelled by one left-moving excitation of the "basic frequency wave" on the string, and one right-moving one (note that the level-matching condition holds). There is no way to get the same results without the non-constant "sinusoidal waves" on the string.
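As a sanity check on those level-(1,1) closed string states (a counting exercise of mine, not from the post), the \(24\times 24\) matrix of states \(\alpha^i_{-1}\tilde\alpha^j_{-1}\ket{0}\) with transverse indices decomposes into the symmetric traceless graviton, the antisymmetric \(B\)-field, and the trace (the dilaton):

```python
# Decompose the level-(1,1) closed bosonic string states
# alpha^i_{-1} alphatilde^j_{-1}|0>, i, j transverse (D - 2 = 24 values),
# into graviton + B-field + dilaton.
D = 26
d = D - 2  # transverse directions in light-cone gauge

total = d * d                       # all i x j combinations
graviton = d * (d + 1) // 2 - 1     # symmetric traceless part
b_field = d * (d - 1) // 2          # antisymmetric 2-form part
dilaton = 1                         # trace part

assert graviton + b_field + dilaton == total
print(f"{total} states = {graviton} (graviton) + "
      f"{b_field} (B-field) + {dilaton} (dilaton)")
```

Every one of these 576 massless states owes its masslessness to the two frequency-one oscillators cancelling the ground state energy.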

The point that 4gravitons is missing is that massless states (in the free-string approximation) coming from the quantized strings are massless "by accident". Most states have positive masses, some states happen to have zero masses when the terms are added. But the latter aren't separated from the rest in any a priori way. The massive excitations are in no way added artificially to some massless starting point. There is no massless starting point in string theory. String theory unavoidably generates the massless and massive states simultaneously, with no consistent way to divide them. It is not quite trivial to derive the massless spectrum in a general string compactification. It's about as hard as to derive the states at any massive level.

OK, back to the graviton state that had two (minimal nonzero frequency) wave excitations around the closed string.

Once you allow the "basic frequency" of the wave on the string, you automatically allow all of them: splitting and joining strings can produce truncated sines on a shorter interval, and these may only be Fourier-expanded on the shorter string if you allow all the higher harmonics as well. And a consistent theory of quantum gravity may only be obtained if you incorporate all of them. There's just no way to consistently truncate the higher harmonics, because even the "simple" graviton depends on the non-constant modes along the string.
Again, even for massless states, the careful counting of the energy from nontrivial sinusoidal excitations of the string is essential to get the correct mass.
Disappointingly, the only interpretation of Tetragraviton's claim that "the vibrating string picture with harmonics is a completely wrong explanation of the well-known particle species" is that he just doesn't have a clue how string theory explains the massless and light states.

But I believe that this is not the only problem with his views about string theory. Another paragraph says:
String theory’s strings are under a lot of tension, so it takes a lot of energy to make them vibrate. From our perspective, that energy looks like mass, so the more complicated harmonics on a string correspond to extremely massive particles, close to the Planck mass!
I've discussed that; it's not necessarily the case. I do believe that the string scale is close to the Planck mass, but there do exist low-string-scale models where it's as low as a few \({\rm TeV}\). This is just a technical difference. The heavier excited string vibrations are equally real in both scenarios.

But it's primarily the following paragraph that I believe to be seriously flawed:
Those aren’t the particles you’re used to. They’re not electrons, they’re not dark matter. They’re particles we haven’t observed, and may never observe. They’re not how string theory explains the fundamental particles of nature.
Electrons and particles of dark matter (if the latter is composed of particles) are excited strings as well, and even for those, the addition of energies from vibrations on the string is needed to get the correct mass (despite its being zero in the free-string approximation). There just doesn't exist any sense in which the statement that "the states in the infinite tower of arbitrarily excited strings don't describe the electron or dark matter" could be correct. It's just wrong, wrong, wrong.

But the generalization of this statement, "they [strings excited by the harmonics] are not how string theory explains the fundamental particles of nature" is surely the opposite of a deep truth. That is exactly how string theory explains the fundamental particles of Nature. Barton Zwiebach's quote above may be used as the best explanation in this blog post why this observation is both right and essential.

There may also be some confusion about "what counts as a fundamental particle of Nature". Tetragraviton seems to count the electron but not some heavy states near the string scale. But both of them are fundamental particles of Nature. Moreover, both of them have masses whose essential contribution comes from the energy of vibrations added to the string. There is no qualitative difference between the electron and the graviton on one side and the heavier string states on the other. We may have detected some particles and not others but all of them are equally real and equally fundamental.
So how does string theory go from one fundamental type of string to all of the particles in the universe, if not through these vibrations? As it turns out, there are several different ways it can happen. I’ll describe a few.

The first and most important trick here is supersymmetry. ...
Again, it's simply not true that supersymmetry replaces or invalidates the fact that the main contribution to the mass of particles in string theory comes from the vibrations of a string. Supersymmetry is a special feature of a subset of the string vacua (and similarly quantum field theories). But the elements of this subset are constructed in the same way as elements outside this subset. In string theory, they are constructed by counting the energy that vibrations on a quantum relativistic string may carry. Supersymmetry almost always requires some fermionic degrees of freedom but they may be viewed as extra coordinates of the (super)space and they add vibrations and energy through (fermionic) harmonic oscillators just like their bosonic friends (well, they're not just Platonic friends, they're superpartners). They also have higher harmonics with \(n=2,3\) etc., only the occupation numbers are \(N_a=0,1\).
Supersymmetry relates different types of particles to each other. In string theory, it means that along with vibrations that go higher and higher, there are also low-energy vibrations that behave like different sorts of particles.
Supersymmetry makes it more likely that there will be massless or light particles but it is not a necessary condition. There exist non-supersymmetric (yet tachyon-free) string vacua with the analogous massless portion of the spectrum (massless is meant at the level of the free string, the string scale). Despite the absence of supersymmetry, the number of massless bosonic and fermionic particle species – massless states of a vibrating string – is basically the same as in the similar supersymmetric models (I am talking about the tachyon-free non-SUSY heterotic strings). The states are just different, not less numerous or "worse".
Even with supersymmetry, string theory doesn’t give rise to all of the right sorts of particles. You need something else, like compactifications or branes.
Yup, except that Brian Greene and many others have explained all these things with extra dimensions etc. far more accurately and pedagogically than Tetragraviton. Incidentally, a compactification is always needed to obtain at least a semi-realistic string vacuum.
In string theory, the particles we’re used to aren’t just higher harmonics, or vibrations with more and more energy. They come from supersymmetry, from compactifications and from branes.
Again, there is absolutely no "contradiction" between vibrations on one side and compactifications, SUSY, or branes on the other side. They're independent concepts. Strings vibrate even when they're placed in a compactified spacetime manifold, even when they're supersymmetric, even when there are D-branes around, and even if the strings are open strings attached to these D-branes. It's similar to the bass strings: they also vibrate even when they are surrounded by an instrument whose shape resembles Meghan Trainor, no treble. The shape of the instrument, like the shape of the compactification manifold, influences the sound (and spectrum) of the vibrations but it in no way invalidates or removes the vibrations.

But a point that he repeats all the time and that annoys me is the one about the "particles you're used to". String theory isn't primarily a theory meant to discuss "just the particles you're used to". String theory is a theory of everything – which includes all particles, including those that no one is used to because we haven't observed them (yet).

If someone isn't interested in the full list of particles in Nature, I find it obvious that he has no reason to be interested in string theory, either – because string theory is almost by definition a theory going well beyond the technical limitations of current experiments. If someone isn't interested in what is hiding beneath the surface, an effective field theory is the easier attitude for such a narrow-minded interest. Effective field theories are really defined to be the answer to questions that never try to go beyond a certain regime set by practical limitations. But in that case, if someone is interested in these low-energy things only, I don't see why he would be reading blog posts or books about string theory at all.

It just makes no sense whatsoever. The person isn't interested in these questions, so he probably doesn't study them and hasn't studied them. He almost certainly knows nothing about the things he could have learned (about string theory) but hasn't, and he had better shut his mouth.
The higher harmonics are still important: there are theorems that you can’t fix quantum gravity with a finite number of extra particles, so the infinite tower of vibrations allows string theory to exploit a key loophole.
Right. All the excited string modes are totally needed for the consistency of the quantum gravity, as I said. Also, as I discussed in a comment on the 4gravitons blog, when you gradually increase the value of the string coupling constant, the excited string states are gradually turning to black hole microstates. The exponential increase of the number of excited string states is a precursor or an approximation to the quasi-exponential increase of the number of black hole microstates we need in a consistent quantum theory of gravity.
They just don’t happen to be how string theory gets the particles of the Standard Model.
If the world is described by a weakly coupled string theory, string theory does derive all the particles of the Standard Model exactly by the same algorithm that 4gravitons irrationally denounces.
The idea that every particle is just a higher vibration is a common misconception, and I hope I’ve given you a better idea of how string theory actually works.
It is not a misconception, and Tetragraviton has only brought confusion and falsehoods to this topic.

Quite generally, a popularizer of science always runs the risk of being separated from the big shots who do the best research, and so on. People realize this (true) general fact and that's also why popularizers are sometimes attacked with similar words. However, in the case of string theory, almost all these attacks are just plain rubbish.

In particular, Brian Greene has been extremely careful about what he says about string theory. His explanations of these topics correspond pretty much to the most accurate sketch that is accessible to a large enough subset of the lay public. And people who are criticizing some basic claims such as the deep insight that "in string theory, particles are vibrations" are simply full of šit. The identification of the particle species (all of them) with the vibration states of a string is a profound truth (of weakly coupled string theory).

by Luboš Motl ( at May 19, 2016 12:53 PM

The n-Category Cafe

The HoTT Effect

Martin-Löf type theory has been around for years, as have category theory, topos theory and homotopy theory. Bundle them all together within the package of homotopy type theory, and philosophy suddenly takes a lot more interest.

If you’re looking for places to go to hear about this new interest, you are spoilt for choice:

For an event which delves back also to pre-HoTT days, try my

CFA: Foundations of Mathematical Structuralism

12-14 October 2016, Munich Center for Mathematical Philosophy, LMU Munich

In the course of the last century, different general frameworks for the foundations of mathematics have been investigated. The orthodox approach to foundations interprets mathematics in the universe of sets. More recently, however, there have been other developments that call into question the whole method of set theory as a foundational discipline. Category-theoretic methods that focus on structural relationships and structure-preserving mappings between mathematical objects, rather than on the objects themselves, have been in play since the early 1960s. But in the last few years they have found clarification and expression through the development of homotopy type theory. This represents a fascinating development in the philosophy of mathematics, where category-theoretic structural methods are combined with type theory to produce a foundation that accounts for the structural aspects of mathematical practice. We are now at a point where the notion of mathematical structure can be elucidated more clearly and its role in the foundations of mathematics can be explored more fruitfully.

The main objective of the conference is to reevaluate the different perspectives on mathematical structuralism in the foundations of mathematics and in mathematical practice. To do this, the conference will explore the following research questions: Does mathematical structuralism offer a philosophically viable foundation for modern mathematics? What role do key notions such as structural abstraction, invariance, dependence, or structural identity play in the different theories of structuralism? To what degree does mathematical structuralism as a philosophical position describe actual mathematical practice? Does category theory or homotopy type theory provide a fully structural account for mathematics?

Confirmed Speakers:

  • Prof. Steve Awodey (Carnegie Mellon University)
  • Dr. Jessica Carter (University of Southern Denmark)
  • Prof. Gerhard Heinzmann (Université de Lorraine)
  • Prof. Geoffrey Hellman (University of Minnesota)
  • Prof. James Ladyman (University of Bristol)
  • Prof. Elaine Landry (UC Davis)
  • Prof. Hannes Leitgeb (LMU Munich)
  • Dr. Mary Leng (University of York)
  • Prof. Øystein Linnebo (University of Oslo)
  • Prof. Erich Reck (UC Riverside)

Call for Abstracts:

We invite the submission of abstracts on topics related to mathematical structuralism for presentation at the conference. Abstracts should include a title, a brief abstract (up to 100 words), and a full abstract (up to 1000 words), blinded for peer review. Authors should send their abstracts (in pdf format), together with their name, institutional affiliation and current position to We will select up to five submissions for presentation at the conference. The conference language is English.

Dates and Deadlines:

  • Submission deadline: 30 June, 2016
  • Notification of acceptance: 31 July, 2016
  • Registration deadline: 1 October, 2016
  • Conference: 12 - 14 October, 2016

by david ( at May 19, 2016 12:07 PM

May 18, 2016

CERN Bulletin

Croquet club
The CERN Croquet season started Saturday 7 May with the annual opening tournament, with a total of 14 very happy players in the spring sunshine. It was a lovely day in all senses – friendly competition, a lot of laughter and catching up with one another. Players are divided into PROs (low-handicap) and AMs (high-handicap), and all matches are played as doubles. The pairings are changed during the day and the individual points go towards determining the winner. Congratulations to Ian Sexton for winning the Pros and to Beryl Allardyce, who won the Ams. Many of the games were very close and Ian seemed to have some good challenges in his block! Overall results – Pros: 1st Ian, 2nd Brian, 3rd Angelina, 4th Jean; Ams: 1st Beryl, 2nd Frank, 3rd Peter (+ Margaret), 4th Roberta (+ Jenny). Special thanks to the manager Danny Davids for making this tournament such a smooth and well-run affair. The CERN croquet club holds club tournaments and hosts Swiss Opens, Swiss Championships and international matches during the year. Anyone (beginners or confirmed players) who is interested in playing this intriguing game, please contact Ian Sexton, Norman Eatough, or Dave Underhill.

by Croquet club at May 18, 2016 11:47 AM

Lubos Motl - string vacua and pheno

Weak gravity conjecture linked to many fields of maths, physics: an essay
Ben Heidenreich, Matthew Reece, and Tom Rudelius (Harvard) have won 5th place in the 2016 Gravity Research Foundation Essay Contest (I will avoid a general rating of this kind of essay contest):
Axion Experiments to Algebraic Geometry: Testing Quantum Gravity via the Weak Gravity Conjecture
They discuss a refinement of our conjecture that for any type of "charge" similar to electromagnetism, there must always exist sources for which the non-gravitational force donalds the gravitational one.

The essay shows that the inequality has implications for inflation (naively excluding a long enough inflation and maybe forcing one to talk about specific types of inflation), for AdS/CFT (charged operators with low enough dimensions should exist), and for pure mathematics (because the inequality should hold for compactifications on complicated enough manifolds, and such an inequality therefore sometimes turns into a nontrivial geometric theorem about those).

They start with my #1 favorite motivation for the weak gravity conjecture – the absence of global Lie symmetries in quantum gravity. It's been something I was emphasizing long before our paper but I vaguely remember that it wasn't new for some or several co-authors, either.

You know, the important pre-WGC lore – which I may have known from my adviser Tom Banks since the late 1990s – was that there are no global continuous symmetries in a consistent quantum theory of gravity. In general relativity, even translations are made "local" (diffeomorphism group) and things that are not "local" become unnatural.

However, gauge theories with tiny couplings \(g\to 0\) may seemingly emulate global symmetries as accurately as you want. That should better be impossible as well. If something (global symmetries) is forbidden, physical situations or vacua that are "infinitesimally close" to the forbidden thing should better be banned as well, right? Otherwise the ban would be operationally vacuous. There should exist a finite value of some quantity that tells you how far from the forbidden point you have to be.

And that's what the weak gravity conjecture does (and many types of evidence – from problems with extremal black hole remnants to lots of stringy examples – support the conjecture, at least in "some" form). A light charged particle with \[

m \leq \sqrt{2}\, e q\, M_{Planck}

\] must exist. Heidenreich et al. promote their belief in a stronger "detailed" version of the weak gravity conjecture, one that we had conjectured but ran into some counter-arguments about. They call it the lattice weak gravity conjecture (LWGC): for every allowed vector \(\vec Q\) in the lattice of charges, there must exist an object that is lighter than 0.0001% of the mass of a black hole with the charge one million times \(\vec Q\).

(I have inserted the factor of one million and the millionth to make sure that you omit the corrections from the smallness of the black hole – you work with the semiclassical estimate of the mass.)
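For orientation, one can check numerically how comfortably the electron satisfies the bound. The rounded SI constants below are my own choice; the essay contains no such computation.

```python
import math

# Rough numerical check (rounded SI values; not from the essay) that the
# electron satisfies the weak gravity bound by a huge margin.
G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s
c    = 2.998e8      # speed of light, m/s
k_e  = 8.988e9      # Coulomb constant, N m^2 C^-2
e    = 1.602e-19    # elementary charge, C
m_e  = 9.109e-31    # electron mass, kg

# Ratio of electrostatic repulsion to gravitational attraction
# between two electrons (distance-independent):
ratio = k_e * e**2 / (G * m_e**2)
print(f"F_electric / F_gravity ~ {ratio:.2e}")

# Equivalent statement in Planck units: m_e versus sqrt(2) * q * M_Planck,
# where q = sqrt(alpha) is the dimensionless charge.
M_pl = math.sqrt(hbar * c / G)          # Planck mass, ~ 2.18e-8 kg
q = math.sqrt(k_e / (hbar * c)) * e     # ~ sqrt(1/137) ~ 0.085
print("WGC satisfied by the electron:", m_e <= math.sqrt(2) * q * M_pl)
```

The ratio comes out near \(4\times 10^{42}\), which is the usual quantitative face of "gravity is the weakest force".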

This sounds too strong. I thought that for larger charges, the statement actually isn't true – only several "elementary", low-charge light particles are required by WGC, I thought. If they replaced the word "particle" by a "state" (which may be a collection of many separate particles whose charges and masses add), I think that my doubts would go away.

It's difficult to decide whether the new light states should exist for every \(\vec Q\), every direction in the charge space, almost every direction, every direction in a basis of directions, every direction in a (near) orthogonal basis in some metric, or something else. There may exist some "very natural" specific version of the inequality that would be as provable as e.g. the Heisenberg uncertainty principle but I don't think that the "best, most accurate yet strong one" has been pinpointed yet.

The analogy with the Heisenberg uncertainty principle is meant to be an exaggeration. At least I still believe that the WGC is vastly less fundamental than the Heisenberg uncertainty principle. The uncertainty principle may be connected to many – in some sense "all" – situations in physics. WGC has been "connected" to many things. But I still don't see in what sense it could be considered a principle that "changes the rules of the game" in a way that is at least qualitatively analogous to the change of physics implied by the Heisenberg uncertainty principle.

There may be similarities between the inequalities but there are also differences. One of them is that the Heisenberg uncertainty principle strictly disagreed with the class of theories that had been considered before Heisenberg and pals revolutionized physics. On the other hand, WGC tells you to consider a subset of the theories of gravity-coupled-to-matter that were previously allowed.

The amount of activity dedicated to WGC is greater than what I used to assume a decade ago. (And I surely believe that e.g. matrix string theory is much more fundamentally important than WGC.) On the other hand, I can imagine that this line of research on WGC will turn into something that will be self-evidently fundamental in its implications.

As we said at the beginning, WGC talks about some "minimal difference between two situations" – too decoupled new forces (with too weak couplings and/or too heavy charged particles) are forbidden etc. So this WGC-dictated "minimum distance" could be a consequence of some new kind of "orthogonality" that is indeed analogous to (if not a special case of) the orthogonality of mutually exclusive states in quantum mechanics – which is an assumption that may be used to derive the uncertainty principle.

By saying that the lightest charged particles have to be light enough, the WGC also quantifies the intuition that all the "engines" responsible for a force etc. can never be squeezed into a too small region of space. You need rather long distances – the Compton wavelength of the light enough particle – for this force to arise. That's a way to say that WGC may be said to be "somewhat similar" to the holographic principle, too. All these things suggest that the information can't hide in too small volumes, or in too inaccessible physical phenomena.

If something seems to be nontrivially correct – it doesn't seem to be quite a coincidence that gravity is the weakest force – the reasons should better be understood well. So people's thinking about it is clearly desirable. On the other hand, no one is guaranteed that it will lead to a full-fledged revolution. If WGC really implied that no model of inflation may exist, I would personally not believe such a conclusion, anyway (except if someone gave me a truly convincing full definition of quantum gravity with all the proofs; or at least some viable alternative to inflation). Maybe it's a mistake of mine but I still happen to think that the "case of inflation" is still much stronger than the "case for any particular strong version of WGC applied to instantons". (Whether the inequality for 1-forms may really be applied to 0-forms seems disputable to me, too. Note that the energy-time "uncertainty principle" must be interpreted differently, if it is possible at all, than the momentum-position uncertainty principle, and one must often be very careful when he generalizes things to "related situations".)

The subtitle "Testing Quantum Gravity via the Weak Gravity Conjecture" must be provocative for assorted Šmoits. Not only does the essay dare to talk about the testing of quantum gravity. It's worse than that: quantum gravity and string theory are being tested according to a conjecture co-authored by a guy who insists that Šmoits and their apologists are just stinky piles of feces. ;-)

by Luboš Motl ( at May 18, 2016 08:17 AM

May 17, 2016

Clifford V. Johnson - Asymptotia


Actually, I’m super-excited…! There is a New Hope coming. I’m daring to dream… ok just a little bit. (Sorry to be cryptic…More later.) -cvj

The post Excited! appeared first on Asymptotia.

by Clifford at May 17, 2016 08:47 PM

CERN Bulletin

Elections to the Mutual Aid Fund

Every two years, according to Article 6 of the Regulations of the Mutual Aid Fund, the Committee of the Mutual Aid Fund must renew one third of its membership. This year three members are outgoing. Of these three, two will stand again and one will not.


Candidates should be ready to give approximately two hours a month during working time to the Fund, whose aim is to assist colleagues in financial difficulties.

We invite CERN staff members who wish to stand for election as a member of the CERN Mutual Aid Fund to send in their application before 17 June 2016, by email to the Fund's President, Connie Potter.

May 17, 2016 05:05 PM

Symmetrybreaking - Fermilab/SLAC

Why do objects feel solid?

The way you think about atoms may not be quite right.

A reader asks: "If atoms are mostly empty space, then why does anything feel solid?" James Beacham, a post-doctoral researcher with the ATLAS Experiment group of The Ohio State University, explains.

[Video: bVrQw_Cdxyw]

Have a burning question about particle physics? Let us know via email or Twitter (using the hashtag #AskSymmetry). We might answer you in a future video!

by Sarah Charley at May 17, 2016 01:00 PM

CERN Bulletin

Arts@CERN | ACCELERATE Austria | 19 May | IdeaSquare
Arts@CERN welcomes you to a talk by architects Sandra Manninger and Matias Del Campo, at IdeaSquare (Point 1) on May 19 at 6:00 p.m.   Sensible Bodies – architecture, data, and desire. Sandra and Matias are the winning architects for ACCELERATE Austria. Focusing on the notion of geometry, they are at CERN during the month of May as artists in residence. Their research highlights how to go beyond beautiful data to discover something that could be described as voluptuous data. This coagulation of numbers, algorithms, procedures and programs uses the forces of thriving nature and, passing through the calculation of a multi-core processor, knits them with human desire. Read more. ACCELERATE Austria is supported by The Department of Arts of the Federal Chancellery of Austria. Thursday, May 19 at 6:00 p.m. at IdeaSquare.  See event on Indico.

May 17, 2016 08:35 AM

May 16, 2016

The n-Category Cafe

E8 as the Symmetries of a PDE

My friend Dennis The recently gave a new description of the Lie algebra of \(\mathrm{E}_8\) (as well as all the other complex simple Lie algebras, except \(\mathfrak{sl}(2,\mathbb{C})\)) as the symmetries of a system of partial differential equations. Even better, when he writes down his PDE explicitly, the exceptional Jordan algebra makes an appearance, as we will see.

This is a story with deep roots: it goes back to two very different models for the Lie algebra of \(\mathrm{G}_2\), one due to Cartan and one due to Engel, which were published back-to-back in 1893. Dennis figured out how these two results are connected, and then generalized the whole story to nearly every simple Lie algebra, including \(\mathrm{E}_8\).

Let’s begin with that model of \(\mathrm{G}_2\) due to Cartan: the Lie algebra \(\mathfrak{g}_2\) is formed by the infinitesimal symmetries of the system of PDE \[ u_{xx} = \frac{1}{3} (u_{yy})^3, \quad u_{xy} = \frac{1}{2} (u_{yy})^2 . \] What does it mean to be an infinitesimal symmetry of a PDE? To understand this, we need to see how PDE can be realized geometrically, using jet bundles.

A jet bundle over \(\mathbb{C}^2\) is a bundle whose sections are given by holomorphic functions \(u \colon \mathbb{C}^2 \to \mathbb{C}\) and their partials, up to some order. Since we have a 2nd-order PDE, we need the 2nd jet bundle: \[ \begin{matrix} J^2(\mathbb{C}^2, \mathbb{C}) \\ \downarrow \\ \mathbb{C}^2 \end{matrix} \] This is actually the trivial bundle whose total space is \(\mathbb{C}^8\), but we label the coordinates suggestively: \[ J^2(\mathbb{C}^2, \mathbb{C}) = \left\{ (x,y,u,u_x,u_y,u_{xx},u_{xy},u_{yy}) \in \mathbb{C}^8 \right\} . \] The bundle projection just picks out \((x,y)\).

For the moment, \(u_x\), \(u_y\) and so on are just the names of some extra coordinates and have nothing to do with derivatives. To relate them, we choose some distinguished 1-forms on \(J^2\), called the contact 1-forms, spanned by holomorphic combinations of \[ \begin{array}{rcl} \theta_1 & = & du - u_x \, dx - u_y \, dy, \\ \theta_2 & = & du_x - u_{xx} \, dx - u_{xy} \, dy, \\ \theta_3 & = & du_y - u_{xy} \, dx - u_{yy} \, dy . \end{array} \] These are chosen so that, if our suggestively named variables really were partials, these 1-forms would vanish.

For any holomorphic function \(u \colon \mathbb{C}^2 \to \mathbb{C}\) we get a section \(j^2 u\) of \(J^2\), called the prolongation of \(u\). It simply takes those variables that we named after the partial derivatives seriously, and gives us the actual partial derivatives of \(u\) in those slots: \[ (j^2 u)(x,y) = (x, y, u(x,y), u_x(x,y), u_y(x,y), u_{xx}(x,y), u_{xy}(x,y), u_{yy}(x,y)) . \] Conversely, an arbitrary section \(s\) of \(J^2\) is the prolongation of some \(u\) if and only if it annihilates the contact 1-forms. Since the contact 1-forms are spanned by \(\theta_1\), \(\theta_2\) and \(\theta_3\), it suffices that \[ s^\ast \theta_1 = 0, \quad s^\ast \theta_2 = 0, \quad s^\ast \theta_3 = 0 . \] Such sections are called holonomic. This correspondence between prolongations and holonomic sections is the key to thinking about jet bundles.
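As a quick numerical aside (my own check, not from the post): take the concrete function \(u(x,y) = x^3 y + y^2\), hand-code its partials, and verify that the resulting prolongation really does annihilate \(\theta_1\), \(\theta_2\), \(\theta_3\), approximating the exterior derivatives by central differences.

```python
# Sanity check (my own, not from the post): the prolongation of
# u(x,y) = x^3*y + y^2 annihilates the contact 1-forms theta_1..theta_3.
# Pulling theta_i back along j^2(u) gives components like (du/dx - u_x),
# which we evaluate with central finite differences.
h = 1e-5

u    = lambda x, y: x**3 * y + y**2
u_x  = lambda x, y: 3 * x**2 * y
u_y  = lambda x, y: x**3 + 2 * y
u_xx = lambda x, y: 6 * x * y
u_xy = lambda x, y: 3 * x**2
u_yy = lambda x, y: 2.0

def d_dx(f, x, y): return (f(x + h, y) - f(x - h, y)) / (2 * h)
def d_dy(f, x, y): return (f(x, y + h) - f(x, y - h)) / (2 * h)

def pullback_components(x, y):
    return [
        d_dx(u, x, y)   - u_x(x, y),  d_dy(u, x, y)   - u_y(x, y),   # theta_1
        d_dx(u_x, x, y) - u_xx(x, y), d_dy(u_x, x, y) - u_xy(x, y),  # theta_2
        d_dx(u_y, x, y) - u_xy(x, y), d_dy(u_y, x, y) - u_yy(x, y),  # theta_3
    ]

residual = max(abs(c) for c in pullback_components(0.7, -1.3))
print(residual)  # tiny (finite-difference rounding): the section is holonomic
```

A non-holonomic section, by contrast, fails this test: perturb any one slot and the corresponding pullback component becomes visibly nonzero.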

Our PDE \[ u_{xx} = \frac{1}{3} (u_{yy})^3, \quad u_{xy} = \frac{1}{2} (u_{yy})^2 \] carves out a submanifold \(S\) of \(J^2\). Solutions correspond to local holonomic sections that land in \(S\). In general, PDE give us submanifolds of jet spaces.
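To make this concrete, here is a small check (mine, not from the post): for any constant \(c\), the quadratic polynomial \(u(x,y) = \tfrac{c}{2} y^2 + \tfrac{c^2}{2} xy + \tfrac{c^3}{6} x^2\) is an exact solution of Cartan’s system, so its prolongation is a holonomic section landing in \(S\).

```python
# A concrete solution family (my own check, not from the post): for any
# constant c,  u(x,y) = c*y^2/2 + c^2*x*y/2 + c^3*x^2/6  solves
#   u_xx = (1/3) u_yy^3,   u_xy = (1/2) u_yy^2.
# u is quadratic, so central second differences recover its partials
# essentially exactly.
h = 1e-3

def residuals(c, x, y):
    u = lambda x, y: c * y**2 / 2 + c**2 * x * y / 2 + c**3 * x**2 / 6
    uxx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h**2
    uyy = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / h**2
    uxy = (u(x + h, y + h) - u(x + h, y - h)
           - u(x - h, y + h) + u(x - h, y - h)) / (4 * h**2)
    return abs(uxx - uyy**3 / 3), abs(uxy - uyy**2 / 2)

worst = max(e for c in (0.5, 1.0, 2.0) for e in residuals(c, 0.3, -0.8))
print(worst)  # rounding-level: each member of the family solves the PDE
```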

The external symmetries of our PDE are those diffeomorphisms of \(J^2\) that send contact 1-forms to contact 1-forms and send \(S\) to itself. The infinitesimal external symmetries are vector fields that preserve \(S\) and the contact 1-forms. There are also things called internal symmetries, but I won’t need them here.

So now we’re ready for:

Amazing theorem 1. The algebra of infinitesimal external symmetries of our PDE is the Lie algebra \(\mathfrak{g}_2\).

Like I said above, Dennis takes this amazing theorem of Cartan and connects it to an amazing theorem of Engel, and then generalizes the whole story to nearly all simple complex Lie algebras. Here’s Engel’s amazing theorem:

Amazing theorem 2. \(\mathfrak{g}_2\) is the Lie algebra of infinitesimal contact transformations on a 5-dimensional contact manifold preserving a field of twisted cubic varieties.

This theorem lies at the heart of the story, so let me explain what it’s saying. First, it requires us to become acquainted with contact geometry, the odd-dimensional cousin of symplectic geometry. A contact manifold \(M\) is a \((2n+1)\)-dimensional manifold with a contact distribution \(C\) on it. This is a smoothly varying family of \(2n\)-dimensional subspaces \(C_m\) of each tangent space \(T_m M\), satisfying a certain nondegeneracy condition.

In Engel’s theorem, \(M\) is 5-dimensional, so each \(C_m\) is 4-dimensional. We can projectivize each \(C_m\) to get a 3-dimensional projective space \(\mathbb{P}(C_m)\) over each point. Our field of twisted cubic varieties is a curve in each of these projective spaces, the image of a cubic map \[ \mathbb{C}\mathbb{P}^1 \to \mathbb{P}(C_m) . \] This gives us a curve \(\mathcal{V}_m\) in each \(\mathbb{P}(C_m)\), and taken together these curves form our field of twisted cubic varieties, \(\mathcal{V}\). Engel gave explicit formulas for a contact structure on \(\mathbb{C}^5\) with a twisted cubic field \(\mathcal{V}\) whose symmetries are \(\mathfrak{g}_2\), and you can find these formulas in Dennis’s paper.
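For readers who haven’t met the twisted cubic before: it is the image of the degree-3 Veronese map \([s:t] \mapsto [s^3 : s^2 t : s t^2 : t^3]\), and it is cut out of \(\mathbb{P}^3\) by three quadrics. A tiny numerical illustration (my own aside, not from the post):

```python
# My illustration, not from the post: the twisted cubic is the image of the
# degree-3 Veronese map [s:t] -> [s^3 : s^2 t : s t^2 : t^3], and that image
# satisfies the three standard quadrics cutting the curve out of P^3.
def veronese3(s, t):
    return (s**3, s**2 * t, s * t**2, t**3)

for s, t in [(1.0, 0.5), (2.0, -3.0), (0.0, 1.0), (1.7, 0.2)]:
    w0, w1, w2, w3 = veronese3(s, t)
    assert abs(w0 * w2 - w1**2) < 1e-9
    assert abs(w1 * w3 - w2**2) < 1e-9
    assert abs(w0 * w3 - w1 * w2) < 1e-9
print("the twisted cubic lies on its three defining quadrics")
```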

How are these two theorems related? The secret is to go back to thinking about jet spaces, except this time, we’ll start with the 1st jet space: \[ J^1(\mathbb{C}^2, \mathbb{C}) = \left\{ (x, y, u, u_x, u_y) \in \mathbb{C}^5 \right\} . \] This comes equipped with a space of contact 1-forms, spanned by a single 1-form: \[ \theta = du - u_x \, dx - u_y \, dy . \] And now we see where contact 1-forms get their name: this contact 1-form defines a contact structure on \(J^1\), given by \(C = \mathrm{ker}(\theta)\).

Many of you may know Darboux’s theorem in symplectic geometry, which says that any two symplectic manifolds of the same dimension look the same locally. In contact geometry, the analogue of Darboux’s theorem holds, and goes by the name of Pfaff’s theorem. By Pfaff’s theorem, there’s an open set in \(J^1\) which is contactomorphic to an open set in \(\mathbb{C}^5\) with Engel’s contact structure. And we can use this map to transfer our twisted cubic field \(\mathcal{V}\) to \(J^1\), or at least to an open subset of it. This gives us a twisted cubic field on \(J^1\), one that continues to have \(\mathfrak{g}_2\) symmetry.

We are getting tantalizingly close to a PDE now. We have a jet space \(J^1\), with some structure on it. We just lack a submanifold of that jet space. Our twisted cubic field \(\mathcal{V}\) gives us a curve in each \(\mathbb{P}(C_m)\), not in \(J^1\) itself.

To these ingredients, add a bit of magic. Dennis found a natural construction that takes our twisted cubic field \(\mathcal{V}\) and gives us a submanifold of a space that, at least locally, looks like \(J^2(\mathbb{C}^2, \mathbb{C})\), and hence describes a PDE. This PDE is the \(\mathrm{G}_2\) PDE.

It works like this. Our contact 1-form \(\theta\) endows each \(C_m\) with a symplectic structure, \(d\theta_m\). Starting from the contact structure \(C\) alone, this symplectic structure is only defined up to rescaling, because \(C\) determines \(\theta\) only up to rescaling. Nonetheless, it makes sense to look for subspaces of \(C_m\) that are Lagrangian: subspaces of maximal dimension on which \(d\theta_m\) vanishes. The space of all Lagrangian subspaces of \(C_m\) is called the Lagrangian–Grassmannian, \(\mathrm{LG}(C_m)\), and we can form a bundle \[ \begin{matrix} \mathrm{LG}(J^1) \\ \downarrow \\ J^1 \end{matrix} \] whose fiber over each point \(m\) is \(\mathrm{LG}(C_m)\). It turns out \(\mathrm{LG}(J^1)\) is locally the same as \(J^2(\mathbb{C}^2, \mathbb{C})\), complete with the latter’s contact 1-forms.

Dennis’s construction takes \(\mathcal{V}\) and gives us a submanifold of \(\mathrm{LG}(J^1)\), as follows. Remember, each \(\mathcal{V}_m\) is a curve in \(\mathbb{P}(C_m)\). The tangent space to \(\mathcal{V}_m\) at a point \(p \in \mathcal{V}_m\) is thus a line in the projective space \(\mathbb{P}(C_m)\), and this corresponds to a 2-dimensional subspace of the 4-dimensional contact space \(C_m\). This subspace turns out to be Lagrangian! Thus, points \(p\) of \(\mathcal{V}_m\) give us points of \(\mathrm{LG}(C_m)\), and letting \(m\) and \(p\) vary, we get a submanifold of \(\mathrm{LG}(J^1)\). Locally, this is our PDE.
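You can see the Lagrangian claim in coordinates. The sketch below is my own illustration, not from the post: I take an affine chart \(v(s) = (1, s, s^2, s^3)\) of the twisted cubic and one standard normalization of the \(\mathrm{SL}(2)\)-invariant symplectic form on binary cubics (an assumption on my part; any rescaling behaves the same), and check that the 2-plane spanned by \(v(s)\) and \(v'(s)\) is isotropic, hence Lagrangian in a 4-dimensional symplectic vector space.

```python
# My own coordinate check, not from the post.  Parametrize (an affine chart
# of) the twisted cubic as v(s) = (1, s, s^2, s^3).  omega below is one
# normalization of the SL(2)-invariant symplectic form on binary cubics
# (my assumption; rescaling it changes nothing).  The 2-plane spanned by
# v(s) and v'(s) is isotropic for omega, hence Lagrangian in C^4.
def omega(u, v):
    return u[0] * v[3] - u[3] * v[0] - 3 * (u[1] * v[2] - u[2] * v[1])

def point(s):
    return (1.0, s, s**2, s**3)

def tangent(s):  # v'(s)
    return (0.0, 1.0, 2 * s, 3 * s**2)

vals = [omega(point(s), tangent(s)) for s in (-2.0, 0.0, 0.5, 1.0, 3.7)]
print(vals)  # all numerically zero
```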

Dennis then generalizes this story to all simple Lie algebras besides \(\mathfrak{sl}(2,\mathbb{C})\). For simple Lie groups other than those in the \(A\) and \(C\) series, there is a homogeneous space with a natural contact structure that has a field of twisted varieties living on it, called the field of “sub-adjoint varieties”. The same construction that worked for \(\mathrm{G}_2\) now gives PDE for these. The \(A\) and \(C\) cases take more care.

Better yet, Dennis builds on work of Landsberg and Manivel to get explicit descriptions of all these PDE in terms of cubic forms on Jordan algebras! Landsberg and Manivel describe the field of sub-adjoint varieties using these cubic forms. For \(\mathrm{G}_2\), the Jordan algebra in question is the complex numbers \(\mathbb{C}\) with the cubic form \[ \mathfrak{C}(t) = \frac{t^3}{3} . \]

Given any Jordan algebra \(W\) with a cubic form \(\mathfrak{C}\) on it, first polarize \(\mathfrak{C}\): \[ \mathfrak{C}(t) = \mathfrak{C}_{abc} t^a t^b t^c , \] and then cook up a PDE for a function \[ u \colon \mathbb{C} \oplus W \to \mathbb{C} \] as follows: \[ u_{00} = \mathfrak{C}_{abc} t^a t^b t^c, \quad u_{0a} = \frac{3}{2} \mathfrak{C}_{abc} t^b t^c, \quad u_{ab} = 3 \mathfrak{C}_{abc} t^c , \] where \(t \in W\), and I’ve used the indices \(a\), \(b\) and \(c\) for coordinates in \(W\), and 0 for the coordinate in \(\mathbb{C}\). For \(\mathrm{G}_2\), this gives us the PDE \[ u_{00} = \frac{t^3}{3}, \quad u_{01} = \frac{t^2}{2}, \quad u_{11} = t , \] which is clearly equivalent to the PDE we wrote down earlier. Note that this PDE is determined entirely by the cubic form \(\mathfrak{C}\); the product on our Jordan algebra plays no role.
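The \(\mathrm{G}_2\) case of the recipe is easy to check by hand, or by machine (my own check, not from the paper): polarizing \(\mathfrak{C}(t) = t^3/3\) gives the single coefficient \(\mathfrak{C}_{111} = 1/3\), and eliminating the parameter via \(t = u_{11}\) recovers Cartan’s system.

```python
# My quick check, not from the paper: polarizing C(t) = t^3/3 gives the
# single coefficient C_111 = 1/3, and the recipe
#   u_00 = C_111 t^3,  u_01 = (3/2) C_111 t^2,  u_11 = 3 C_111 t
# collapses to u_00 = t^3/3, u_01 = t^2/2, u_11 = t.  Eliminating the
# parameter via t = u_11 recovers Cartan's system
#   u_00 = (1/3) u_11^3,  u_01 = (1/2) u_11^2.
C111 = 1.0 / 3.0

for t in (-1.5, 0.0, 0.25, 2.0):
    u00 = C111 * t**3
    u01 = 1.5 * C111 * t**2
    u11 = 3.0 * C111 * t
    assert abs(u00 - u11**3 / 3) < 1e-12
    assert abs(u01 - u11**2 / 2) < 1e-12
print("the cubic-form recipe reproduces the G2 PDE")
```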

Now we’re ready for Dennis’s amazing theorem.

Amazing theorem 3. Let \(W = \mathbb{C} \otimes \mathfrak{h}_3(\mathbb{O})\), the exceptional Jordan algebra, and let \(\mathfrak{C}\) be the cubic form on \(W\) given by the determinant. Then the following PDE on \(\mathbb{C} \oplus W\), \[ u_{00} = \mathfrak{C}_{abc} t^a t^b t^c, \quad u_{0a} = \frac{3}{2} \mathfrak{C}_{abc} t^b t^c, \quad u_{ab} = 3 \mathfrak{C}_{abc} t^c , \] has external symmetry algebra \(\mathfrak{e}_8\).


Thanks to Dennis The for explaining his work to me, and for his comments on drafts of this post.

by huerta at May 16, 2016 08:59 PM

Lubos Motl - string vacua and pheno

ATLAS: an amusing 2.1-sigma gluino-muon-multijet island excess
ATLAS released an interesting preprint
Search for gluinos in events with an isolated lepton, jets and missing transverse momentum at \(\sqrt{s} = 13\TeV\) with the ATLAS detector
in which gluino pairs were searched for in final states with MET, many jets, and a single lepton. There were six signal regions. The last, sixth one showed a mild but interesting excess: \(2.5\pm 0.7\) events with a muon were expected (thanks, Bill), but eight events were observed.
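For a rough feel for the numbers (my own back-of-envelope, not the ATLAS statistical treatment), one can compute the plain Poisson probability of seeing eight or more events when \(2.5\) are expected:

```python
import math

# Back-of-envelope (mine, not the ATLAS treatment): Poisson probability of
# observing >= 8 events when mu = 2.5 are expected.  This ignores the quoted
# +/- 0.7 background uncertainty, which is part of why the significance
# quoted in the title is lower.
mu, n_obs = 2.5, 8

def poisson_pmf(k, mu):
    return math.exp(-mu) * mu**k / math.factorial(k)

p = 1.0 - sum(poisson_pmf(k, mu) for k in range(n_obs))  # P(N >= 8)
print(p)  # about 0.004, a between-2-and-3-sigma one-sided fluctuation
```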

The excess looks intriguing when it's visualized on Figure 6.

The plot has two parts:

The observed thick red exclusion line is generally similar to the dashed black expected exclusion line. But in the upper picture, there is a clear "downward tooth" of the red line around a gluino mass of \(1200-1300\GeV\) and a lightest neutralino mass around \(400-600\GeV\), potentially the properties of particles that may exist according to this hint.

On the second diagram, the excess looks like an island with \(m_{\tilde g} \sim 1250\GeV\) and the ratio of two mass differences (lightest chargino minus lightest neutralino, over gluino minus lightest neutralino) equal to about \(0.75\). However, the plot isn't quite showing the neighborhood of the most interesting values indicated by the upper plot, because in the lower plot the lightest neutralino is assumed to weigh \(60\GeV\).

The island-like shape of the exclusion line on the lower picture is interesting, nevertheless. Note that this is what the exclusion lines look like when all the wrong values of the mass are excluded and the correct mass is discovered. In this sense, the lower picture could already be a sketch of a discovery paper.

At any rate, if you go through the LHC category or search for a gluino on this blog, I think you will agree that this is far from the first hint of a gluino close to \(1200\GeV\) with a lightest neutralino in the \(600\GeV\) category. I am extremely far from any form of certainty that the gluino has to be found near these masses, but if you offer me 100-to-1 odds like Jester did, I will happily make the bet again (or increase the existing one, if you wish).

by Luboš Motl at May 16, 2016 05:09 PM

May 15, 2016

CERN Bulletin

Federico Antinori elected as the new ALICE Spokesperson

On 8 April 2016 the ALICE Collaboration Board elected Federico Antinori from INFN Padova (Italy) as the new ALICE Spokesperson.


During his three-year mandate, starting in January 2017, he will lead a collaboration of more than 1500 people from 154 physics institutes across the globe.

Antinori has been a member of the collaboration ever since it was created and he has already held many senior leadership positions. Currently he is the experiment’s Physics Coordinator, responsible for overseeing the whole physics analysis programme; during his mandate ALICE has produced many of its most prominent results. Before that he was the Coordinator of the Heavy Ion First Physics Task Force, charged with the analysis of the first Pb-Pb data samples. In 2007 and 2008 Federico served as ALICE Deputy Spokesperson. He was also the first ALICE Trigger Coordinator, playing a central role in defining the experiment’s trigger menus from the first run in 2009 until the end of his mandate in 2011. He also played an important role in the commissioning of the experiment before the start of its operation.

Being entrusted by the Collaboration with its leadership makes Antinori feel honoured. “ALICE is a unique scientific instrument, built with years of dedication and labour of hundreds of colleagues. We have practically only begun to exploit its possibilities. As Spokesperson I can play a key role in making ALICE ever more efficient and successful and this is a truly exciting prospect for me.”

May 15, 2016 10:05 PM

ZapperZ - Physics and Physicists

Grandfather Paradox - Resolved?
This Minute Physics video claims to have "resolved" the infamous grandfather paradox. Well, OK, they don't actually say that, but they basically indicate why this might be a never-ending loop.

Still, let's think about it this way instead. During your grandfather's time, presumably, ALL the atoms or energy that will make you are already there; they just haven't yet come together to form you. That only happens later on. But they are all there!

But here you come along from another time, popping into existence in your grandfather's time. Aren't you violating conservation of energy by adding MORE energy to the universe that is not accounted for? Unless there is a quid pro quo, where an equal amount of energy in your grandfather's time was siphoned off to the future you came from, this violation of conservation of energy is hard to explain away, especially if you invoke Noether's theorem.

I haven't come across a popular account of this issue.


by ZapperZ at May 15, 2016 01:58 PM

Geraint Lewis - Cosmic Horizons

How Far Can We Go? A long way, but not that far!
Obligatory "sorry it's been a long time since I posted" comment. Life, grants, students, etc. All of the usual excuses! But I plan (i.e. hope) to do more writing here in the future.

But what's the reason for today's post? Namely, this video posted on YouTube.
The conclusion is that humans are destined to explore the Local Group of galaxies, and that is it. And this video has received a bit of circulation on the inter-webs, promoted by a few sciency people.

The problem, however, is that it is wrong. The basic idea is that accelerating expansion due to the presence of dark energy means that the separation of objects will get faster and faster, and so it will be a little like chasing after a bus; the distance between the two of you will continue to get bigger and bigger. This part is correct, and in the very distant future, there will be extremely isolated bunches of galaxies whose own gravitational pull overcomes the cosmic expansion. But the rest, just how much we can explore, is wrong.

Why? Because they seem to have forgotten something key. Once we are out there travelling in the expanding universe, the expansion works to our advantage as well: it increases not only the distance between us and where we want to get to, but also the distance between us and home. We effectively "ride" the expansion.

So, how far could we get? Well, time to call on (again, sorry) Tamara Davis's excellent cosmological work, in particular this paper on misconceptions about the Big Bang. I've spoken about this paper many times (and do read it, it is quite excellent) but for this post, what we need to look at is the "conformal" picture of our universe. I don't have time to go into the details here, but the key thing is that you manipulate space and time so light rays travel at 45 degrees in the picture. Here's our universe.

The entire (infinite!) history of the universe is in this picture, mapped onto "conformal time". We're in the middle, on the line marked "now". If we extend our past light cone into the future, we can see the volume of the universe accessible to us, given the continued accelerating expansion. We can see that it encompasses objects that are currently not far from 20 billion light years away from us. This means that light rays fired out today will get this far, much, much further than the Local Group of galaxies.
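You can reproduce the rough size of that accessible volume yourself. The sketch below is my own numerical estimate, not from the post or the paper: with assumed round-number flat-ΛCDM parameters (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7), the comoving distance a light ray fired today can ever cover is c ∫ da/(a²H(a)) from a = 1 to infinity, which lands in the same ballpark as the figure above.

```python
import math

# My own estimate, not from the post: comoving distance reachable by a light
# ray fired today, d = c * Int_1^inf da / (a^2 H(a)), for an assumed flat
# LCDM model with H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_L = 0.7.
H0, om, ol = 70.0, 0.3, 0.7
c = 299792.458  # km/s

def f(a):
    # dimensionless integrand: 1 / (a^2 * H(a)/H0) = 1 / sqrt(om*a + ol*a^4)
    return 1.0 / math.sqrt(om * a + ol * a**4)

# trapezoid rule on [1, A], plus the analytic tail ~ 1/(sqrt(ol)*A)
A, N = 1.0e4, 400000
h = (A - 1.0) / N
integral = 0.5 * (f(1.0) + f(A)) + sum(f(1.0 + i * h) for i in range(1, N))
integral = integral * h + 1.0 / (math.sqrt(ol) * A)

hubble_gly = (c / H0) * 3.2616e-3  # Hubble distance in Gly (1 Mpc ~ 3.2616e-3 Gly)
reach_gly = hubble_gly * integral
print(reach_gly)  # roughly 16 Gly: far, far beyond the Local Group
```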

But ha! you scoff, that's a light ray. Puny humans in rockets have no chance!

Again, wrong, as you need to think about relativity again. How do I know? I wrote a paper about this with two smart students, Juliana Kwan (who is now at the University of Pennsylvania) and Berian James, at Square. The point is that if you accelerate off into the universe, even at a nice gentle acceleration similar to what we experience here on Earth, you still get to explore much of the universe accessible to light rays.

Here's our paper. The key point is not just how far you want to get, but whether or not you want to get home again. I am more than happy to acknowledge Jeremy Heyl's earlier work that inspired ours.

One tiny last point is the question of whether our (or maybe not our) descendants will realise that there is dark energy in the universe. Locked away in Milkomeda (how I hate that name), the view of the dark universe in the future might lead you to conclude that there is no more to the universe than ourselves, and that it would appear static and unchanging; but anything thrown "out there", such as rocket ships (as per above) or high velocity stars, would still reveal the presence of dark energy.

There's plenty of universe we could potentially explore!

by Cusp at May 15, 2016 08:10 AM

May 13, 2016

Sean Carroll - Preposterous Universe

Big Picture Part Six: Caring

One of a series of quick posts on the six sections of my book The Big Picture: Cosmos, Understanding, Essence, Complexity, Thinking, Caring.

Chapters in Part Six, Caring:

  • 45. Three Billion Heartbeats
  • 46. What Is and What Ought to Be
  • 47. Rules and Consequences
  • 48. Constructing Goodness
  • 49. Listening to the World
  • 50. Existential Therapy

In this final section of the book, we take a step back to look at the journey we’ve taken, and ask what it implies for how we should think about our lives. I intentionally kept it short, because I don’t think poetic naturalism has much prescriptive advice to give along these lines. Resisting the temptation to give out a list of “Ten Naturalist Commandments,” I instead offer a list of “Ten Considerations,” things we can keep in mind while we decide for ourselves how we want to live.

A good poetic naturalist should resist the temptation to hand out commandments. “Give someone a fish,” the saying goes, “and you feed them for a day. Teach them to fish, and you feed them for a lifetime.” When it comes to how to lead our lives, poetic naturalism has no fish to give us. It doesn’t even really teach us how to fish. It’s more like poetic naturalism helps us figure out that there are things called “fish,” and perhaps investigate the various possible ways to go about catching them, if that were something we were inclined to do. It’s up to us what strategy we want to take, and what to do with our fish once we’ve caught them.

There are nevertheless some things worth saying, because there are a lot of untrue beliefs to which we all tend to cling from time to time. Many (most?) naturalists have trouble letting go of the existence of objective moral truths, even if they claim to accept the idea that the natural world is all that exists. But you can’t derive ought from is, so an honest naturalist will admit that our ethical principles are constructed rather than derived from nature. (In particular, I borrow the idea of “Humean constructivism” from philosopher Sharon Street.) Fortunately, we’re not blank slates, or computers happily idling away; we have aspirations, desires, preferences, and cares. More than enough raw material to construct workable notions of right and wrong, no less valuable for being ultimately subjective.

Of course there are also incorrect beliefs on the religious or non-naturalist side of the ledger, from the existence of divinely-approved ways of being to the promise of judgment and eternal reward for good behavior. Naturalists accept that life is going to come to an end — this life is not a dress rehearsal for something greater, it’s the only performance we get to give. The average person can expect a lifespan of about three billion heartbeats. That’s a goodly number, but far from limitless. We should make the most of each of our heartbeats.
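The arithmetic behind that number is easy to check (my own round figures: an average heart rate near 70 beats per minute over roughly 80 years):

```python
# Rough arithmetic behind "three billion heartbeats" (my assumed round
# numbers: ~70 beats per minute over an ~80-year lifespan).
beats_per_minute = 70
years = 80
total_beats = beats_per_minute * 60 * 24 * 365 * years
print(total_beats)  # 2943360000, i.e. about three billion
```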


The finitude of life doesn’t imply that it’s meaningless, any more than obeying the laws of physics implies that we can’t find purpose and joy within the natural world. The absence of a God to tell us why we’re here and hand down rules about what is and is not okay doesn’t leave us adrift — it puts the responsibility for constructing meaningful lives back where it always was, in our own hands.

Here’s a story one could imagine telling about the nature of the world. The universe is a miracle. It was created by God as a unique act of love. The splendor of the cosmos, spanning billions of years and countless stars, culminated in the appearance of human beings here on Earth — conscious, aware creatures, unions of soul and body, capable of appreciating and returning God’s love. Our mortal lives are part of a larger span of existence, in which we will continue to participate after our deaths.

It’s an attractive story. You can see why someone would believe it, and work to reconcile it with what science has taught us about the nature of reality. But the evidence points elsewhere.

Here’s a different story. The universe is not a miracle. It simply is, unguided and unsustained, manifesting the patterns of nature with scrupulous regularity. Over billions of years it has evolved naturally, from a state of low entropy toward increasing complexity, and it will eventually wind down to a featureless equilibrium condition. We are the miracle, we human beings. Not a break-the-laws-of-physics kind of miracle; a miracle in that it is wondrous and amazing how such complex, aware, creative, caring creatures could have arisen in perfect accordance with those laws. Our lives are finite, unpredictable, and immeasurably precious. Our emergence has brought meaning and mattering into the world.

That’s a pretty darn good story, too. Demanding in its own way, it may not give us everything we want, but it fits comfortably with everything science has taught us about nature. It bequeaths to us the responsibility and opportunity to make life into what we would have it be.

I do hope people enjoy the book. As I said earlier, I don’t presume to be offering many final answers here. I do think that the basic precepts of naturalism provide a framework for thinking about the world that, given our current state of knowledge, is overwhelmingly likely to be true. But the hard work of understanding the details of how that world works, and how we should shape our lives within it, is something we humans as a species have really only just begun to tackle in a serious way. May our journey of discovery be enlivened by frequent surprises!

by Sean Carroll at May 13, 2016 04:04 PM

Tommaso Dorigo - Scientificblogging

Catching The 750 GeV Boson With Roman Pots ?!
I am told by a TOTEM manager that this is public news and so it can be blogged about - so here I would like to explain a rather cunning plan that the TOTEM and the CMS collaborations have put together to enhance the possibilities of a discovery, and a better characterization, of the particle that everybody hopes is real, the 750 GeV resonance seen in photon pairs data by ATLAS and CMS in their 2015 data.

read more

by Tommaso Dorigo at May 13, 2016 12:56 PM

Lubos Motl - string vacua and pheno

Cernette: a bound state of 12 top quarks?
Willmutt reminded me of a paper I saw in the morning,
Production and Decay of \(750\GeV\) state of 6 top and 6 anti top quarks
by two experienced physicists, Froggatt and (co-father of string theory) Nielsen, that proposes that the \(750\GeV\) cernette could be real – and it could be a part of the Standard Model. They've been talking about the bound state (now proposed to be the cernette) since 2003.

At that time, the particle was conjectured to be so heavily bound that it would be a tachyon, \(m^2\lt 0\). I actually think that composite tachyons can't exist in tachyon-free theories, can they? (You better believe that such a tachyonic particle is impossible because such a man-made Cosmos-eating tachyonic toplet would be even worse than an Earth-eating strangelet LOL.)

The zodiac, a similarly strange bound state of 12 particles.

Unlike my numerologically driven weakly bound states of new particles, they propose that the particle could be a heavily bound state of 12 top quarks in total.

More precisely, they say that there should be 6 top quarks and 6 top antiquarks in the beast. The number 6 is preferred because all \(2\times 3 = 6\) arrangements of the spin-and-color are represented – both for quarks and antiquarks. So this complete list could potentially make a particle that is as stable as the atom of helium; or the helium-4 nucleus (the alpha-particle). The whole low-lying "shell" is occupied in all these cases!

The binding energy could come from the exchange of the virtual Higgs quanta. Note that for the odd messenger spins, \(J=1,3,5,\dots\), i.e. for electromagnetism, the like charges repel. For the \(J=2\) gravity, the like charges (positive masses) attract. For \(J=0\), the like charges must attract, too. A closer analysis of the signs in the Dirac fermionic bilinears implies that the opposite sources of the Higgs field actually attract as well – so the "sign of the top quark" is ignored. An ironic side effect of this rule is that when a top quark-antiquark pair is created, the total field they produce jumps discontinuously. But unlike the electric charge, the "charge sourcing the Higgs field" isn't conserved, so this jump isn't contradicting anything.

Twelve top quarks have the mass of \(12\times 173\GeV=2076\GeV\), so you need the interaction energy \(-1326\GeV\) to get down to \(750\GeV\). There are \(12\times 11/2=66\) pairs of "tops" (or antitops) in the proposed bound state. If each of them contributes \(-20\GeV\) on average, you will be fine. But do they contribute \(-20\GeV\) in such bound states? Cannot someone just calculate these things, e.g. with some lattice QCD methods? Cannot one see this \(-20\GeV\) in the toponium?
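The back-of-envelope numbers above can be laid out explicitly. The top mass and pair count come from the post itself; the roughly \(-20\GeV\) per pair is the implied ansatz, not a computed binding energy:

```python
# Arithmetic behind the 12-top bound-state proposal.
m_top = 173        # GeV, approximate top quark mass
n_quarks = 12      # 6 tops + 6 antitops

constituent_mass = n_quarks * m_top        # 2076 GeV of bare quark mass
binding_needed = 750 - constituent_mass    # -1326 GeV to reach 750 GeV
n_pairs = n_quarks * (n_quarks - 1) // 2   # 66 distinct quark pairs
per_pair = binding_needed / n_pairs        # about -20 GeV per pair

print(constituent_mass, binding_needed, n_pairs, round(per_pair, 1))
```

Whether QCD plus Higgs exchange actually delivers \(-20\GeV\) per pair is exactly the question the post raises.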

Both authors claim that \(pp\to SS\) where \(S\) is their 12-particle bound state has the cross section of 0.2 pb and 2 pb at \(8\TeV\) and \(13\TeV\), respectively, which seem good enough. The dominant decay modes should be (in this order) \(S\to t\bar t,gg,hh,W^+W^-,ZZ,\) and \(\gamma\gamma\). Given the low status of the diphoton, that doesn't look too good, does it? It is pretty hard to imagine how this complicated beast decays at all – twelve particles have to be liquidated almost simultaneously. That only occurs in some very high order, doesn't it? I am actually surprised by the high production cross section for the same reason.

But the simplicity makes the proposal attractive even if the absence of the Beyond the Standard Model physics could be disappointing at the end.

by Luboš Motl at May 13, 2016 12:34 PM

May 12, 2016

Sean Carroll - Preposterous Universe

Big Picture Part Five: Thinking

One of a series of quick posts on the six sections of my book The Big Picture: Cosmos, Understanding, Essence, Complexity, Thinking, Caring.

Chapters in Part Five, Thinking:

  • 37. Crawling Into Consciousness
  • 38. The Babbling Brain
  • 39. What Thinks?
  • 40. The Hard Problem
  • 41. Zombies and Stories
  • 42. Are Photons Conscious?
  • 43. What Acts on What?
  • 44. Freedom to Choose

Even many people who willingly describe themselves as naturalists — who agree that there is only the natural world, obeying laws of physics — are brought up short by the nature of consciousness, or the mind-body problem. David Chalmers famously distinguished between the “Easy Problems” of consciousness, which include functional and operational questions like “How does seeing an object relate to our mental image of that object?”, and the “Hard Problem.” The Hard Problem is the nature of qualia, the subjective experiences associated with conscious events. “Seeing red” is part of the Easy Problem, “experiencing the redness of red” is part of the Hard Problem. No matter how well we might someday understand the connectivity of neurons or the laws of physics governing the particles and forces of which our brains are made, how can collections of such cells or particles ever be said to have an experience of “what it is like” to feel something?

These questions have been debated to death, and I don’t have anything especially novel to contribute to discussions of how the brain works. What I can do is suggest that (1) the emergence of concepts like “thinking” and “experiencing” and “consciousness” as useful ways of talking about macroscopic collections of matter should be no more surprising than the emergence of concepts like “temperature” and “pressure”; and (2) our understanding of those underlying laws of physics is so incredibly solid and well-established that there should be an enormous presumption against modifying them in some important way just to account for a phenomenon (consciousness) which is admittedly one of the most subtle and complex things we’ve ever encountered in the world.

My suspicion is that the Hard Problem won’t be “solved,” it will just gradually fade away as we understand more and more about how the brain actually does work. I love this image of the magnetic fields generated in my brain as neurons squirt out charged particles, evidence of thoughts careening around my gray matter. (Taken by an MEG machine in David Poeppel’s lab at NYU.) It’s not evidence of anything surprising — not even the most devoted mind-body dualist is reluctant to admit that things happen in the brain while you are thinking — but it’s a vivid illustration of how closely our mental processes are associated with the particles and forces of elementary physics.


The divide between those who doubt that physical concepts can account for subjective experience and those who think they can is difficult to bridge precisely because of the word “subjective” — there are no external, measurable quantities we can point to that might help resolve the issue. In the book I highlight this gap by imagining a dialogue between someone who believes in the existence of distinct mental properties (M) and a poetic naturalist (P) who thinks that such properties are a way of talking about physical reality:

M: I grant you that, when I am feeling some particular sensation, it is inevitably accompanied by some particular thing happening in my brain — a “neural correlate of consciousness.” What I deny is that one of my subjective experiences simply is such an occurrence in my brain. There’s more to it than that. I also have a feeling of what it is like to have that experience.

P: What I’m suggesting is that the statement “I have a feeling…” is simply a way of talking about those signals appearing in your brain. There is one way of talking that speaks a vocabulary of neurons and synapses and so forth, and another way that speaks of people and their experiences. And there is a map between these ways: when the neurons do a certain thing, the person feels a certain way. And that’s all there is.

M: Except that it’s manifestly not all there is! Because if it were, I wouldn’t have any conscious experiences at all. Atoms don’t have experiences. You can give a functional explanation of what’s going on, which will correctly account for how I actually behave, but such an explanation will always leave out the subjective aspect.

P: Why? I’m not “leaving out” the subjective aspect, I’m suggesting that all of this talk of our inner experiences is a very useful way of bundling up the collective behavior of a complex collection of atoms. Individual atoms don’t have experiences, but macroscopic agglomerations of them might very well, without invoking any additional ingredients.

M: No they won’t. No matter how many non-feeling atoms you pile together, they will never start having experiences.

P: Yes they will.

M: No they won’t.

P: Yes they will.

I imagine that close analogues of this conversation have happened countless times, and are likely to continue for a while into the future.

by Sean Carroll at May 12, 2016 03:58 PM

Symmetrybreaking - Fermilab/SLAC

Mommy, Daddy, where does mass come from?

The Higgs field gives mass to elementary particles, but most of our mass comes from somewhere else.

The story of particle mass starts right after the big bang. During the very first moments of the universe, almost all particles were massless, traveling at the speed of light in a very hot “primordial soup.” At some point during this period, the Higgs field turned on, permeating the universe and giving mass to the elementary particles.  

The Higgs field changed the environment when it was turned on, altering the way that particles behave. Some of the most common metaphors compare the Higgs field to a vat of molasses or thick syrup, which slows some particles as they travel through.

Others have envisioned the Higgs field as a crowd at a party or a horde of paparazzi. As famous scientists or A-list celebrities pass through, people surround them, slowing them down, but less-known faces travel through the crowds unnoticed. In these cases, popularity is synonymous with mass—the more popular you are, the more you will interact with the crowd, and the more “massive” you will be. 

But why did the Higgs field turn on? Why do some particles interact more with the Higgs field than others? The short answer is: We don’t know.

“This is part of why finding the Higgs field is just the beginning—because we have a ton of questions,” says Matt Strassler, a theoretical physicist and associate of the Harvard University physics department. 

The strong force and you

The Higgs field gives mass to fundamental particles—the electrons, quarks and other building blocks that cannot be broken into smaller parts. But these still only account for a tiny proportion of the universe’s mass.

The rest comes from protons and neutrons, which get almost all their mass from the strong nuclear force. These particles are each made up of three quarks moving at breakneck speeds that are bound together by gluons, the particles that carry the strong force. The energy of this interaction between quarks and gluons is what gives protons and neutrons their mass. Keep in mind Einstein’s famous E=mc2, which equates energy and mass. That makes mass a secret storage facility for energy.
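The imbalance is striking when you put numbers to it. A minimal sketch, using ballpark current-quark masses (my approximate values, not figures from the article):

```python
# How much of the proton's mass comes from its quarks' rest masses?
m_up = 2.2        # MeV, approximate up-quark mass
m_down = 4.7      # MeV, approximate down-quark mass
m_proton = 938.3  # MeV, proton mass

quark_sum = 2 * m_up + m_down   # proton = up + up + down
fraction = quark_sum / m_proton
print(f"{fraction:.1%} of the proton mass")  # roughly 1%
```

The Higgs-generated quark masses account for only about one percent of the proton's mass; essentially all the rest is strong-force binding and kinetic energy, stored as mass via E=mc².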

“When you put three quarks together to create a proton, you end up binding up an enormous energy density in a small region in space,” says John Lajoie, a physicist at Iowa State University. 

A proton is made of two up quarks and a down quark; a neutron is made of two down quarks and an up quark. Their similar composition makes the mass they acquire from the strong force nearly identical. However, neutrons are slightly more massive than protons—and this difference is crucial. The process of neutrons decaying into protons promotes chemistry, and thus biology. If protons were heavier, they would instead decay into neutrons, and the universe as we know it would not exist.

“As it turns out, the down quarks interact more strongly with the Higgs [field], so they have a bit more mass,” says Andreas Kronfeld, a theoretical physicist at Fermilab. This is why the tiny difference between proton and neutron mass exists. 

But what about neutrinos?

We’ve learned that the elementary particles get their mass from the Higgs field—but wait! There may be an exception: neutrinos. Neutrinos are in a class by themselves; they have extremely tiny masses (a million times smaller than the electron, the second lightest particle), are electrically neutral and rarely interact with matter.

Scientists are puzzled as to why neutrinos are so light. Theorists are currently considering multiple possibilities. It might be explained if neutrinos are their own antiparticles—that is, if the antimatter version is identical to the matter version. If physicists discover that this is the case, it would mean that neutrinos get their mass from somewhere other than the Higgs boson, which physicists discovered in 2012.

Neutrinos must get their mass from a Higgs-like field, which is electrically neutral and spans the entire universe. This could be the same Higgs that gives mass to the other elementary particles, or it could be a very distant cousin. In some theories, neutrino mass also comes from an additional, brand new source that could hold the answers to other lingering particle physics mysteries.

“People tend to get excited about this possibility because it can be interpreted as evidence for a brand new energy scale, naively unrelated to the Higgs phenomenon,” says André de Gouvêa, a theoretical particle physicist at Northwestern University.

This new mechanism may also be related to how dark matter, which physicists think is made up of yet undiscovered particles, gets its mass.

“Nature tends to be economical, so it's possible that the same new set of particles explains all of these weird phenomena that we haven't explained yet,” de Gouvêa says.

by Diana Kwon at May 12, 2016 01:39 PM

The n-Category Cafe

The Works of Charles Ehresmann

Charles Ehresmann’s complete works are now available for free here:

There are 630 pages on algebraic topology and differential geometry; 800 pages on local structures and ordered categories, and their applications to topology; 900 pages on structured categories and quotients, internal categories, and fibrations; and 850 pages on sketches and completions, and sketches and monoidal closed structures.

That’s 3180 pages!

On top of this, more issues of the journal he founded, Cahiers de Topologie et Géométrie Différentielle Catégoriques, will become freely available online.

Andrée Ehresmann announced this magnificent gift to the world on the category theory mailing list, writing:

We are pleased to announce that the issues of the Cahiers de Topologie et Géométrie Différentielle Catégoriques, from Volume L (2009) to LV (2014) included, are now freely downloadable from the internet site of the Cahiers:

through the hyperlink to Recent Volumes.

In the future the issues of the Cahiers will become freely available on the site of the Cahiers two years after their paper publication. We recall that papers published up to Volume XLIX are accessible on the NUMDAM site.

Moreover, the 7 volumes of Charles Ehresmann: Oeuvres complètes et commentées (edited by A. Ehresmann from 1980-83 as Supplements to the Cahiers) are now also freely downloadable from the site

These 2 sites are included in the site of Andrée Ehresmann

and they can also be accessed through hyperlinks on its first page.


Andrée Ehresmann, Marino Gran and René Guitart,

Chief-Editors of the Cahiers

by john at May 12, 2016 04:24 AM

May 11, 2016

Sean Carroll - Preposterous Universe

Big Picture Part Four: Complexity

One of a series of quick posts on the six sections of my book The Big Picture: Cosmos, Understanding, Essence, Complexity, Thinking, Caring.

Chapters in Part Four, Complexity:

  • 28. The Universe in a Cup of Coffee
  • 29. Light and Life
  • 30. Funneling Energy
  • 31. Spontaneous Organization
  • 32. The Origin and Purpose of Life
  • 33. Evolution’s Bootstraps
  • 34. Searching Through the Landscape
  • 35. Emergent Purpose
  • 36. Are We the Point?

One of the most annoying arguments a scientist can hear is that “evolution (or the origin of life) violates the Second Law of Thermodynamics.” The idea is basically that the Second Law says things become more disorganized over time, but the appearance of life represents increased organization, so what do you have to say about that, Dr. Smarty-Pants?

This is a very bad argument, since the Second Law only says that entropy increases in closed systems, not open ones. (Otherwise refrigerators would be impossible, since the entropy of a can of Diet Coke goes down when you cool it.) The Earth’s biosphere is obviously an open system — we get low-entropy photons from the Sun, and radiate high-entropy photons back to the universe — so there is manifestly no contradiction between the Second Law and the appearance of complex structures.

As right and true as that response is, it doesn’t quite address the question of why complex structures actually do come into being. Sure, they can come into being without violating the Second Law, but that doesn’t quite explain why they actually do. In Complexity, the fourth part of The Big Picture, I talk about why it’s very natural for such a thing to happen. This covers the evolution of complexity in general, as well as specific questions about the origin of life and Darwinian natural selection. When it comes to abiogenesis, there’s a lot we don’t know, but good reason to be optimistic about near-term progress.

In 2000, Gretchen Früh-Green, on a ship in the mid-Atlantic Ocean as part of an expedition led by marine geologist Deborah Kelley, stumbled across a collection of ghostly white towers in the video feed from a robotic camera near the ocean floor deep below. Fortunately they had with them a submersible vessel named Alvin, and Kelley set out to explore the structure up close. Further investigation showed that it was just the kind of alkaline vent formation that Russell had anticipated. Two thousand miles east of South Carolina, not far from the Mid-Atlantic Ridge, the Lost City hydrothermal vent field is at least 30,000 years old, and may be just the first known example of a very common type of geological formation. There’s a lot we don’t know about the ocean floor.

Lost City

The chemistry in vents like those at Lost City is rich, and driven by the sort of gradients that could reasonably prefigure life’s metabolic pathways. Reactions familiar from laboratory experiments have been able to produce a number of amino acids, sugars, and other compounds that are needed to ultimately assemble RNA. In the minds of the metabolism-first contingent, the power source provided by disequilibria must come first; the chemistry leading to life will eventually piggyback upon it.

Albert Szent-Györgyi, a Hungarian physiologist who won the Nobel Prize in 1937 for the discovery of Vitamin C, once offered the opinion that “Life is nothing but an electron looking for a place to rest.” That’s a good summary of the metabolism-first view. There is free energy locked up in certain chemical configurations, and life is one way it can be released. One compelling aspect of the picture is that it’s not simply working backwards from “we know there’s life, how did it start?” Instead, it’s suggesting that life is the solution to a problem: “we have some free energy, how do we liberate it?”

Planetary scientists have speculated that hydrothermal vents similar to those at Lost City might be abundant on Jupiter’s moon Europa or Saturn’s moon Enceladus. Future exploration of the Solar System might be able to put this picture to a different kind of test.

A tricky part of this discussion is figuring out when it’s okay to say that a certain naturally-evolved organism or characteristic has a “purpose.” Evolution itself has no purpose, but according to poetic naturalism it’s perfectly okay to ascribe purposes to specific things or processes, as long as that kind of description actually provides a useful way of talking about the higher-level emergent behavior.

by Sean Carroll at May 11, 2016 04:01 PM

May 10, 2016

Tommaso Dorigo - Scientificblogging

Scavenging LHC Data: The CMS Data Scouting Technique
With the Large Hadron Collider now finally up and running after the unfortunate weasel incident, physicists at CERN and around the world are eager to put their hands on the new 2016 collisions data. The #MoarCollisions hashtag keeps entertaining the tweeting researchers and their followers, and everybody is anxious to finally ascertain whether the tentative signal of a new 750 GeV particle seen in diphoton decays in last year's data will reappear and confirm an epic discovery, or what.

read more

by Tommaso Dorigo at May 10, 2016 08:29 AM

May 09, 2016

Jon Butterworth - Life and Physics

Symmetrybreaking - Fermilab/SLAC

LHC prepares to deliver six times the data

Experiments at the Large Hadron Collider are once again recording collisions at extraordinary energies.

After months of winter hibernation, the Large Hadron Collider is once again smashing protons and taking data. The LHC will run around the clock for the next six months and produce roughly 2 quadrillion high-quality proton collisions, six times more than in 2015 and just shy of the total number of collisions recorded during the nearly three years of the collider’s first run.

“2015 was a recommissioning year. 2016 will be a year of full data production during which we will focus on delivering the maximum number of data to the experiments,” says Fabiola Gianotti, CERN director general.

The LHC is the world’s most powerful particle accelerator. Its collisions produce subatomic fireballs of energy, which morph into the fundamental building blocks of matter. The four particle detectors located on the LHC’s ring allow scientists to record and study the properties of these building blocks and look for new fundamental particles and forces.

“We’re proud to support more than a thousand US scientists and engineers who play integral parts in operating the detectors, analyzing the data and developing tools and technologies to upgrade the LHC’s performance in this international endeavor,” says Jim Siegrist, associate director of science for high-energy physics in the US Department of Energy’s Office of Science. “The LHC is the only place in the world where this kind of research can be performed, and we are a fully committed partner on the LHC experiments and the future development of the collider itself.”

Between 2010 and 2013 the LHC produced proton-proton collisions with up to 8 tera-electronvolts (TeV) of energy. In the spring of 2015, after a two-year shutdown, LHC operators ramped up the collision energy to 13 TeV. This increase in energy enables scientists to explore a new realm of physics that was previously inaccessible. Run II collisions also produce Higgs bosons—the groundbreaking particle discovered in LHC Run I—25 percent faster than Run I collisions and increase the chances of finding new massive particles by more than 40 percent.

Almost everything we know about matter is summed up in the Standard Model of particle physics, an elegant map of the subatomic world. During the first run of the LHC, scientists on the ATLAS and CMS experiments discovered the Higgs boson, the cornerstone of the Standard Model that helps explain the origins of mass. The LHCb experiment also discovered never-before-seen five-quark particles, and the ALICE experiment studied the near-perfect liquid that existed immediately after the Big Bang. All these observations are in line with the predictions of the Standard Model.

“So far the Standard Model seems to explain matter, but we know there has to be something beyond the Standard Model,” says Denise Caldwell, director of the Physics Division of the National Science Foundation. “This potential new physics can only be uncovered with more data that will come with the next LHC run.”

For example, the Standard Model contains no explanation of gravity, one of the four fundamental forces in the universe. It also does not explain astronomical observations of dark matter, a type of matter that interacts with our visible universe only through gravity, nor does it explain why matter prevailed over antimatter during the formation of the early universe. The small mass of the Higgs boson also suggests that matter is fundamentally unstable.

The new LHC data will help scientists verify the Standard Model’s predictions and push beyond its boundaries. Many predicted and theoretical subatomic processes are so rare that scientists need billions of collisions to find just a small handful of events that are clean and scientifically interesting. Scientists also need an enormous amount of data to precisely measure well-known Standard Model processes. Any significant deviations from the Standard Model’s predictions could be the first step towards new physics.

The United States is the largest national contributor to both the ATLAS and CMS experiments, with 45 US universities and laboratories working on ATLAS and 49 working on CMS.

A version of this article was published by Fermilab.

May 09, 2016 02:04 PM

May 08, 2016

John Baez - Azimuth


One of the big problems with intermittent power sources like wind and solar is the difficulty of storing energy. But if we ever get a lot of electric vehicles, we’ll have a lot of batteries—and at any time, most of these vehicles are parked. So, they can be connected to the power grid.

This leads to the concept of vehicle-to-grid or V2G. In a V2G system, electric vehicles can connect to the grid, with electricity flowing from the grid to the vehicle or back. Cars can help solve the energy storage problem.

Here’s something I read about vehicle-to-grid systems in Sierra magazine:

At the University of Delaware, dozens of electric vehicles sit in a uniform row. They’re part of an experiment involving BMW, power-generating company NRG, and PJM—a regional organization that moves electricity around 13 states and the District of Columbia—that’s examining how electric vehicles can give energy back to the electricity grid.

It works like this: When the cars are idle (our vehicles typically sit 95 percent of the time), they’re plugged in and able to deliver the electricity in their batteries back to the grid. When energy demand is high, they return electricity to the grid; when demand is low, they absorb electricity. One car doesn’t offer much, but 30 of them is another story—worth about 300 kilowatts of power. Utilities will pay for this service, called “load leveling,” because it means that they don’t have to turn on backup power plants, which are usually coal or natural gas burners. And the EV owners get regular checks—approximately $2.50 a day, or about $900 a year.
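The round numbers in that passage hang together. Here is a minimal sketch; the per-car power figure is my inference from the fleet total, not a number stated in the article:

```python
# Checking the V2G figures: 30 cars ~ 300 kW, $2.50/day ~ $900/year.
cars = 30
fleet_kw = 300
per_car_kw = fleet_kw / cars     # implies ~10 kW delivered per vehicle

daily_payment = 2.50             # dollars per car per day
annual_revenue = daily_payment * 365
print(per_car_kw, annual_revenue)  # 10.0 912.5
```

So each plugged-in car contributes about as much as a household-scale generator, and the quoted "about $900 a year" is simply $2.50 a day compounded over a year.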

It’s working well, according to Willett Kempton, a longtime V2G guru and University of Delaware professor who heads the school’s Center for Carbon-Free Power Integration: “In three years hooked up to the grid, the revenue was better than we thought. The project, which is ongoing, shows that V2G is viable. We can earn money from cars that are driven regularly.”

V2G still has some technical hurdles to overcome, but carmakers—and utilities, too—want it to happen. In a 2014 report, Edison Electric Institute, the power industry’s main trade group, called on utilities to promote EVs [electric vehicles], describing EV adoption as a “quadruple win” that would sustain electricity demand, improve customer relations, support environmental goals, and reduce utilities’ operating costs.

Utilities appear to be listening. In Virginia and North Carolina, Dominion Resources is running a pilot project to identify ways to encourage EV drivers to only charge during off-peak demand. In California, San Diego Gas & Electric will be spending $45 million on a vehicle-to-grid integration system. At least 25 utilities in 14 states are offering customers some kind of EV incentive. And it’s not just utilities—the Department of Defense is conducting V2G pilot programs at four military bases.

Paula DuPont-Kidd, a spokesperson for PJM, says V2G is especially useful for what’s called “frequency regulation service”—keeping electricity transmissions at a steady 60 cycles per second. “V2G has proven its ability to be a resource to the grid when power is aggregated,” she says. “We know it’s possible. It just hasn’t happened yet.”

I wonder how much, exactly, this system would help.

My quote comes from here:

• Jim Motavalli, Siri, will connected vehicles be greener?, Sierra, May–June 2016.

Motavalli also discusses vehicle-to-vehicle connectivity and vehicle-to-building systems. The latter could let your vehicle power your house during a blackout—which seems of limited use to me, but maybe I don’t get the point.

In general, it seems good to have everything I own have the ability to talk to all the rest. There will be security concerns. But as we move toward ‘ecotechnology’, our gadgets should become less obtrusive, less hungry for raw power, more communicative, and more intelligent.

by John Baez at May 08, 2016 12:46 AM

May 07, 2016

ZapperZ - Physics and Physicists

"... in America today, the only thing more terrifying than foreigners is…math...."
OK, I'm going to get a bit political here, but with some math! So if this is not something you care to read, skip this.

I've been accused many times of being an "elitist", as if giving someone a label like that is a sufficient argument against what I had presented (it isn't!). But you see, it is hard not to be an "elitist" when you read something like this.

Prominent economist Guido Menzio, who is Italian, was pulled off a plane because his seatmate thought he was writing something suspicious while they waited for the plane to take off. She couldn't read his scribbles and decided they were probably "Arabic" or something (and what if they were?), and since Menzio looks suspiciously "foreign", she reported him to the crew.

That Something she’d seen had been her seatmate’s cryptic notes, scrawled in a script she didn’t recognize. Maybe it was code, or some foreign lettering, possibly the details of a plot to destroy the dozens of innocent lives aboard American Airlines Flight 3950. She may have felt it her duty to alert the authorities just to be safe. The curly-haired man was, the agent informed him politely, suspected of terrorism.

The curly-haired man laughed.

He laughed because those scribbles weren’t Arabic, or some other terrorist code. They were math.

Yes, math. A differential equation, to be exact.
You can't make this up! But what hits home is what Menzio said later in the news article, and what the article writer ended with.

Rising xenophobia stoked by the presidential campaign, he suggested, may soon make things worse for people who happen to look a little other-ish.

“What might prevent an epidemic of paranoia? It is hard not to recognize in this incident, the ethos of [Donald] Trump’s voting base,” he wrote.

In this true parable of 2016 I see another worrisome lesson, albeit one also possibly relevant to Trump’s appeal: That in America today, the only thing more terrifying than foreigners is…math.
During these summer months, many of us travel to conferences all over the place. So, if you look remotely exotic or have slightly darker skin, don't risk it by doing math on an airplane. That ignorant passenger sitting next to you just might rat on you! If being an "elitist" means that I can recognize the difference between "math" and "Arabic", then I'd rather be an elitist than someone who is proud of his/her aggressive ignorance.

How's that? Are you still with me?


by ZapperZ at May 07, 2016 03:47 PM

May 06, 2016

Tommaso Dorigo - Scientificblogging

A Statistics Session At A Particle Physics Conference?

The twelfth edition of “Quark Confinement and the Hadron Spectrum“, a particle physics conference specialized in QCD and Heavy Ion physics, will be held in Thessaloniki this year, from

read more

by Tommaso Dorigo at May 06, 2016 09:49 AM

John Baez - Azimuth

Shelves and the Infinite

Infinity is a very strange concept. Like alien spores floating down from the sky, large infinities can come down and contaminate the study of questions about ordinary finite numbers! Here’s an example.

A shelf is a set with a binary operation \rhd that distributes over itself:

a \rhd (b \rhd c) = (a \rhd b) \rhd (a \rhd c)

There are lots of examples, the simplest being any group, where we define

g \rhd h = g h g^{-1}
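For example, one can verify mechanically that conjugation is self-distributive in a small group. A quick Python sketch (all function names are mine), checking the law over every triple in the symmetric group S_3:

```python
from itertools import permutations

def compose(p, q):
    """Composite permutation p ∘ q, with permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    """Inverse permutation of p."""
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def shelf(g, h):
    """The shelf operation g ▷ h = g h g⁻¹."""
    return compose(compose(g, h), inverse(g))

# Check a ▷ (b ▷ c) = (a ▷ b) ▷ (a ▷ c) for all 216 triples in S_3.
elems = list(permutations(range(3)))
assert all(shelf(a, shelf(b, c)) == shelf(shelf(a, b), shelf(a, c))
           for a in elems for b in elems for c in elems)
```

The same check passes in any group, since a(bcb⁻¹)a⁻¹ = (aba⁻¹)(aca⁻¹)(aba⁻¹)⁻¹.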

They have a nice connection to knot theory.

My former student Alissa Crans, who invented the term ‘shelf’, has written a lot about them, starting here:

• Alissa Crans, Lie 2-Algebras, Chapter 3.1: Shelves, Racks, Spindles and Quandles, Ph.D. thesis, U.C. Riverside, 2004.

I could tell you a long and entertaining story about this, including the tale of how shelves got their name. But instead I want to talk about something far more peculiar, which I understand much less well. There’s a strange connection between shelves, extremely large infinities, and extremely large finite numbers! It was first noticed by a logician named Richard Laver in the late 1980s, and it’s been developed further by Randall Dougherty.

It goes like this. For each n, there’s a unique shelf structure on the numbers \{1,2, \dots ,2^n\} such that

a \rhd 1 = a + 1 \bmod 2^n

So, in our shelf we have


1 \rhd 1 = 2

2 \rhd 1 = 3

and so on, until we get to

2^n \rhd 1 = 1

However, we can now calculate

1 \rhd 1

1 \rhd 2

1 \rhd 3

and so on. You should try it yourself for a simple example! You’ll need to use the self-distributive law. It’s quite an experience.

You’ll get a list of 2^n numbers, but this list will not contain all the numbers \{1, 2, \dots, 2^n\}. Instead, it will repeat with some period P(n).

Here is where things get weird. The numbers P(n) form this sequence:

1, 1, 2, 4, 4, 8, 8, 8, 8, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 
16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, …

It may not look like it, but the numbers in this sequence approach infinity! At least, Laver proved they do if we assume an extra axiom, one which goes beyond the usual axioms of set theory but so far seems consistent.

This axiom asserts the existence of an absurdly large cardinal, called an I3 rank-into-rank cardinal.

I’ll say more about this kind of cardinal later. But, this is not the only case where a ‘large cardinal axiom’ has consequences for down-to-earth math, like the behavior of some sequence that you can define using simple rules.

On the other hand, Randall Dougherty has proved a lower bound on how far you have to go out in this sequence to reach the number 32.

And, it’s an incomprehensibly large number!

The third Ackermann function A_3(n) is roughly 2 to the nth power. The fourth Ackermann function A_4(n) is roughly 2 raised to itself n times:

\underbrace{2^{2^{\cdot^{\cdot^{\cdot^{2}}}}}}_{n \mbox{ twos}}
And so on: each Ackermann function is defined by iterating the previous one.

Dougherty showed that for the sequence P(n) to reach 32, you have to go at least

n = A(9,A(8,A(8,255)))

This is an insanely large number!

I should emphasize that if we use just the ordinary axioms of set theory, the ZFC axioms, nobody has proved that the sequence P(n) ever reaches 32. Neither is it known that this is unprovable if we only use ZFC.

So, what we’ve got here is a very slowly growing sequence… which is easy to define but grows so slowly that (so far) mathematicians need new axioms of set theory to prove it goes to infinity, or even reaches 32.

I should admit that my definition of the Ackermann function is rough. In reality it’s defined like this:

A(m, n) = \begin{cases} n+1 & \mbox{if } m = 0 \\ A(m-1, 1) & \mbox{if } m > 0 \mbox{ and } n = 0 \\ A(m-1, A(m, n-1)) & \mbox{if } m > 0 \mbox{ and } n > 0. \end{cases}

And if you work this out, you’ll find it’s a bit annoying. Somehow the number 3 sneaks in:

A(1,n) = 2 + (n+3) - 3

A(2,n) = 2 \cdot (n+3) - 3

A(3,n) = 2^{n+3} - 3

A(4,n) = 2\uparrow\uparrow(n+3) - 3

where a \uparrow\uparrow b means a raised to itself b times,

A(5,n) = 2 \uparrow\uparrow\uparrow(n+3) - 3

where a \uparrow\uparrow\uparrow b means a \uparrow\uparrow (a \uparrow\uparrow (a \uparrow\uparrow \cdots )) with the number a repeated b times, and so on.
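For small inputs these levels (addition, multiplication, exponentiation, tetration) can be confirmed by machine. A short Python sketch, with the Ackermann recursion transcribed from the definition above and helper names of my own:

```python
def ackermann(m, n):
    """The Ackermann function, exactly as defined in the text."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def tetrate(a, b):
    """a ↑↑ b: a raised to itself b times."""
    result = 1
    for _ in range(b):
        result = a ** result
    return result

# The near-closed forms with the 3's sneaking in, checked for small n:
for n in range(4):
    assert ackermann(1, n) == 2 + (n + 3) - 3      # addition level
    assert ackermann(2, n) == 2 * (n + 3) - 3      # multiplication level
    assert ackermann(3, n) == 2 ** (n + 3) - 3     # exponentiation level
assert ackermann(4, 0) == tetrate(2, 3) - 3        # tetration level: 13
```

Going any further up the hierarchy overflows the stack almost immediately, which is rather the point.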

However, these irritating 3’s scarcely matter, since Dougherty’s number is so large… and I believe he could have gotten an even larger lower bound if he wanted.

Perhaps I’ll wrap up by saying very roughly what an I3 rank-into-rank cardinal is.

In set theory the universe of all sets is built up in stages. These stages are called the von Neumann hierarchy. The lowest stage has nothing in it:

V_0 = \emptyset

Each successive stage is defined like this:

V_{\lambda + 1} = P(V_\lambda)

where P(S) is the power set of S, that is, the set of all subsets of S. For ‘limit ordinals’, that is, ordinals that aren’t of the form \lambda + 1, we define

\displaystyle{ V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha }
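The finite stages are small enough to build explicitly, and doing so shows how explosively they grow. A brief illustrative Python sketch (modeling sets as frozensets; the names are mine):

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

# V_0 = ∅, and V_{k+1} = P(V_k)
V = [set()]
for _ in range(5):
    V.append(power_set(V[-1]))

print([len(stage) for stage in V])  # [0, 1, 2, 4, 16, 65536]
```

|V_5| is already 2^16 = 65536, and |V_6| = 2^65536 is hopelessly beyond any computer; the stages indexed by infinite ordinals are another matter entirely.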

An I3 rank-into-rank cardinal is an ordinal \lambda such that V_\lambda admits a nontrivial elementary embedding into itself.

Very roughly, this means the infinity \lambda is so huge that the collection of sets that can be built by this stage can be mapped, in a one-to-one but not onto way, into itself, with the image being indistinguishable from the original collection when it comes to the validity of anything you can say about sets!

More precisely, a nontrivial elementary embedding of V_\lambda into itself is a one-to-one but not onto function

f: V_\lambda \to V_\lambda

that preserves and reflects the validity of all statements in the language of set theory. That is: for any sentence \phi(a_1, \dots, a_n) in the language of set theory, this statement holds for sets a_1, \dots, a_n \in V_\lambda if and only if \phi(f(a_1), \dots, f(a_n)) holds.

I don’t know why, but an I3 rank-into-rank cardinal, if it’s even consistent to assume one exists, is known to be extraordinarily big. What I mean by this is that it automatically has a lot of other properties known to characterize large cardinals. It’s inaccessible (which is big) and ineffable (which is bigger), and measurable (which is bigger), and huge (which is even bigger), and so on.

How in the world is this related to shelves?

The point is that if

f, g : V_\lambda \to V_\lambda

are elementary embeddings, we can apply f to any set in V_\lambda. But in set theory, functions are sets too: sets of ordered pairs. So, g is a set. It’s not an element of V_\lambda, but its subsets g \cap V_\alpha are, where \alpha < \lambda. So, we can define

f \rhd g = \bigcup_{\alpha < \lambda} f (g \cap V_\alpha)

Laver showed that this operation distributes over itself:

f \rhd (g \rhd h) = (f \rhd g) \rhd (f \rhd h)

And, he showed that if we take one elementary embedding and let it generate a shelf by this operation, we get the free shelf on one generator!

The shelf I started out describing, the numbers \{1, \dots, 2^n \} with

a \rhd 1 = a + 1 \bmod 2^n

also has one generator, namely the number 1. So, it’s a quotient of the free shelf on one generator by one relation, namely the above equation.

That’s about all I understand. I don’t understand how the existence of a nontrivial elementary embedding of V_\lambda into itself implies that the function P(n) goes to infinity, and I don’t understand Randall Dougherty’s lower bound on how far you need to go to reach P(n) = 32. For more, read these:

• Richard Laver, The left distributive law and the freeness of an algebra of elementary embeddings, Adv. Math. 91 (1992), 209–231.

• Richard Laver, On the algebra of elementary embeddings of a rank into itself, Adv. Math. 110 (1995), 334–346.

• Randall Dougherty and Thomas Jech, Finite left distributive algebras and embedding algebras, Adv. Math. 130 (1997), 201–241.

• Randall Dougherty, Critical points in an algebra of elementary embeddings, Ann. Pure Appl. Logic 65 (1993), 211–241.

• Randall Dougherty, Critical points in an algebra of elementary embeddings, II.

by John Baez at May 06, 2016 06:40 AM

May 05, 2016

Andrew Jaffe - Leaves on the Line

Wussy (Best Band in America?)

It’s been a year since the last entry here. So I could blog about the end of Planck, the first observation of gravitational waves, fatherhood, or the horror (comedy?) of the US Presidential election. Instead, it’s going to be rock ’n’ roll, though I don’t know if that’s because it’s too important, or not important enough.

It started last year when I came across Christgau’s A+ review of Wussy’s Attica and the mentions of Sonic Youth, Nirvana and Television seemed compelling enough to make it worth a try (paid for before listening even in the streaming age). He was right. I was a few years late (they’ve been around since 2005), but the songs and the sound hit me immediately. Attica was the best new record I’d heard in a long time, grabbing me from the first moment, “when the kick of the drum lined up with the beat of [my] heart”, in the words of their own description of the feeling of first listening to The Who’s “Baba O’Riley”. Three guitars, bass, and a drum, over beautiful screams from co-songwriters Lisa Walker and Chuck Cleaver.


And they just released a new record, Forever Sounds, reviewed in Spin Magazine just before its release:

To certain fans of Lucinda Williams, Crazy Horse, Mekons and R.E.M., Wussy became the best band in America almost instantaneously…

Indeed, that list nailed my musical obsessions with an almost Google-like creepiness. Guitars, soul, maybe even some politics. Wussy makes me feel almost like the Replacements did in 1985.


So I was ecstatic when I found out that Wussy was touring the UK, and their London date was at the great but tiny Windmill in Brixton, one of the two or three venues within walking distance of my flat (where I had once seen one of the other obsessions from that list, The Mekons). I only learned about the gig a couple of days before, but tickets were not hard to get: the place only holds about 150 people, but there were far fewer on hand that night — perhaps because Wussy also played the night before as part of the Walpurgis Nacht festival. But I wanted to see a full set, and this night they were scheduled to play the entire new Forever Sounds record. I admit I was slightly apprehensive — it’s only a few weeks old and I’d only listened a few times.

But from the first note (and after a good set from the third opener, Slowgun) I realised that the new record had already wormed its way into my mind — a bit more atmospheric, less song-oriented, than Attica, but now, obviously, as good or nearly so. After the 40 or so minutes of songs from the album, they played a few more from the back catalog, and that was it (this being London, even after the age of “closing time”, most clubs in residential neighbourhoods have to stop the music pretty early). Though I admit I was hoping for, say, a cover of “I Could Never Take the Place of Your Man”, it was still a great, sloppy, loud show, with enough of us in the audience to shout and cheer (but probably not enough to make very much cash for the band, so I was happy to buy my first band t-shirt since, yes, a Mekons shirt from one of their tours about 20 years ago…). I did get a chance to thank a couple of the band members for indeed being the “best band in America” (albeit in London). I also asked whether they could come back for an acoustic show some time soon, so I wouldn’t have to tear myself away from my family and instead could bring my (currently) seven-month-old baby to see them some day soon.

They did say UK tours might be a more regular occurrence, and you can follow their progress on the Wussy Road Blog. You should just buy their records and support great music.

by Andrew at May 05, 2016 10:36 PM