Particle Physics Planet

August 01, 2015

Peter Coles - In the Dark

Brighton Pride

Today I’ve been mainly taking part in the 25th Brighton Pride celebrations. The Parade started out 90 minutes late and on a diverted route because of what appears to have been a hoax bomb (in the words of the police, a “suspect package” – no jokes please) but the atmosphere was incredible. Not only was the parade huge, but the streets were lined with thousands and thousands of people. It was all very friendly so my worries that my fear of crowds would resurface were unfounded.

I walked with the Sussex University student society. Hopefully next year there will be an official staff presence!

The Pride Carnival in Preston Park after the Parade wasn’t so interesting for me, so I only stayed a couple of hours before returning to Kemptown for the Village Party, which will go on all night and all day tomorrow. I am just taking a break for a cup of tea and a bite to eat before deciding whether to rejoin the party a bit later. I am, however, a bit old for that sort of thing and may decide to listen to the Proms instead.

Anyway, here are a few pictures of the parade and village party.








by telescoper at August 01, 2015 07:17 PM

astrobites - astro-ph reader's digest

The Small Magellanic Cloud in 3D

Title: The Carnegie Hubble Program: Distance and Structure of the SMC as Revealed by Mid-Infrared Observations of Cepheids

Authors: Victoria Scowcroft, Wendy L. Freedman, Barry F. Madore, Andy Monson, S. E. Persson, Jeff Rich, Mark Seibert, Jane R. Rigby

First Author’s Institution: Observatories of the Carnegie Institution of Washington

Status: Accepted for publication in ApJ

Cepheid variable stars have long been famous for the role they play in the cosmic distance ladder. Intrinsic luminosities are generally difficult to measure, but because the luminosities and periods of variation of Cepheids follow a well-known relationship called the Leavitt Law (LL—you might also see this called the Period-Luminosity or PL relation), we can measure their periods and then calculate their intrinsic luminosities. The LL for a variety of wavelength bands is shown in Figure 1. Once we know their intrinsic luminosities, we can determine their distances. In addition, since Cepheids are supergiant stars, and are therefore very luminous, we can also use them to derive distances to objects much farther than we could with parallax.

Figure 1: Figure 3 from the paper, which shows the Leavitt Law for 10 wavelength bands from U (the shortest) to 4.5 µm (the longest). We can see that the dispersion drops and the slope steepens as we move from the shorter wavelengths to the longer wavelengths. The 3.6 µm band is plotted with darker pink points.

Usually when we look at distances—for example, to other galaxies—that we’ve derived with Cepheids, we consider all of the Cepheids in each galaxy in aggregate. Though Cepheid periods and luminosities are quite closely related, there is still some dispersion intrinsic to the LL, so we get more accurate distance measures when we consider what the LL looks like for all of the Cepheids in a galaxy rather than just using distances derived from individual stars.
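The gain from treating the Cepheids in aggregate is easy to see numerically: each star gives its own noisy distance modulus μᵢ = mᵢ − M(Pᵢ), and averaging over a whole galaxy's worth beats the intrinsic LL scatter down by roughly √N. Here is a minimal sketch with simulated stars; the LL slope and zero-point used are purely illustrative placeholders, not the paper's mid-infrared calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Leavitt Law calibration M = a*log10(P) + b.
# These coefficients are illustrative only, not the paper's values.
a, b = -3.3, -5.8
mu_true = 18.96          # distance modulus we try to recover (mag)
sigma_LL = 0.10          # assumed intrinsic LL dispersion (mag)

# Simulate 90 Cepheids: apparent magnitude = absolute magnitude + modulus + scatter
logP = rng.uniform(0.5, 1.8, size=90)
m_obs = a * logP + b + mu_true + rng.normal(0, sigma_LL, logP.size)

# Per-star moduli scatter by sigma_LL; their mean is ~sqrt(N) times more precise
mu_i = m_obs - (a * logP + b)
print(f"mu = {mu_i.mean():.3f} +/- {mu_i.std(ddof=1) / np.sqrt(mu_i.size):.3f}")
```

The recovered mean lands close to the input 18.96 mag with an uncertainty of order 0.01 mag, even though any single star is only good to ±0.10 mag.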

The authors of today’s paper have used Cepheids in the Small Magellanic Cloud (SMC) in the mid-infrared to derive a distance modulus to the galaxy. Their average distance modulus of 18.96 ± 0.01 (stat) ± 0.03 (sys) mag for the SMC, which corresponds to 62 ± 0.3 kpc, is consistent with previous estimates of the distance. However, they have gone a step further and used the Cepheids not only to determine a mean distance modulus to the SMC, but also to study its structure.
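The quoted conversion follows from the standard definition of the distance modulus, μ = 5 log₁₀(d / 10 pc). A quick sanity check of the numbers above (a sketch, not the paper's code):

```python
def distance_from_modulus(mu_mag):
    """Convert a distance modulus mu = m - M (in magnitudes) to a distance in parsecs."""
    return 10 ** (mu_mag / 5.0 + 1.0)

d_pc = distance_from_modulus(18.96)
print(f"{d_pc / 1000:.1f} kpc")  # ~62 kpc, consistent with the value quoted above
```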

Previous observations of Cepheids in the SMC have indicated that the south-west side of the galaxy is farther away from us than the central or north-east portion. The authors of today’s paper noticed that while the intrinsic dispersion of the LL in the mid-infrared (3.6 µm) for the Large Magellanic Cloud (LMC) and Milky Way is about ±0.10 mag, in the SMC it is 0.16 mag. They focus on this band in particular because it has the lowest intrinsic dispersion (see Figure 1), which also makes it easier to calculate precise distance moduli. This intrinsic dispersion can be traced back to the width of the instability strip—the part of the stellar color-magnitude diagram where Cepheids and other variable stars live. The instability strip looks narrower in the mid-infrared, causing smaller dispersion in the LL at those wavelengths as well.

Figure 2: Figure 6 from the paper, which shows the individual distances derived from the Cepheids. The closest Cepheids are in blue while the farthest are in red; in total, they span about 20 kpc. The white points with error bars show the typical uncertainty in the distance to each Cepheid. The three plots to the side give an alternative view in each of the three planes of a Cartesian coordinate system, making the “sausage”-like shape of the SMC more obvious. The image in the upper right-hand corner shows the SMC and the distribution of Cepheids as they appear on the sky.

They attribute the higher dispersion in the LL for the SMC to the geometry of the galaxy—that is, some of the Cepheids are closer to us than others, which is plausible given the SMC’s known large line-of-sight depth. Since extinction can contribute to this spread in the LL as well, they also correct each Cepheid for extinction. However, even after dereddening, they find that there is still a higher-than-expected dispersion in the Leavitt Law that cannot be attributed to extinction. With this additional dispersion attributed to the geometry of the SMC, they find that the south-west corner of the SMC is about 20 kpc farther away than the north-east corner. Figure 2 shows the individual distance moduli of the SMC Cepheids.

Finally, the authors also compare the distance moduli that they measured for their Cepheids with theoretical models that seek to describe the mechanism that produced the irregular shape of the SMC. In particular, they look at a model of the mechanism that produces the “wing” of the SMC—a portion of the galaxy that is being drawn towards the LMC. They find that the locations of the young stars (like Cepheids) in the galaxy are in good agreement with where the models predict them to be. They suggest that in the future, such measurements of Cepheids could also inform simulations of galaxy dynamics, contributing to our understanding of the dynamical histories of these galaxies. While we will probably have to wait a bit longer for reliable individual Cepheid distance moduli, it’s exciting to think that we could eventually use these stars not only to figure out how far away another galaxy is, but also to understand its structure.

by Caroline Huang at August 01, 2015 03:04 AM

July 31, 2015

Tommaso Dorigo - Scientificblogging

Bang! Meet The Highest-Energy Hadron Collision Ever Imaged!
The 13 TeV data from LHC collisions taken this summer is quickly going through analysis programs and being used for new physics results, and everybody is wondering whether there are surprises in store... Of course that will require some more time to be ascertained.
For the time being, I can offer a couple of very inspiring pictures. CMS recorded a spectacular event featuring two extremely high-energy jets in the first 40 inverse picobarns of data collected and reconstructed by the experiment with all detector components properly working.

read more

by Tommaso Dorigo at July 31, 2015 06:24 PM

Peter Coles - In the Dark

The England Cricket Team – Another Apology

Some time ago I wrote a post on this blog about the 1st Ashes Test between England and Australia at Cardiff which resulted in an England victory. In that piece I celebrated the team spirit of England’s cricketers and some memorable performances with both bat and ball. I also suggested that England had a realistic prospect of regaining the Ashes.

More recently, however, in the light of Australia’s comprehensive victory in the 2nd Ashes Test at Lord’s during which the England bowlers were ineffectual, their batsmen inept and the team spirit non-existent, I accepted that my earlier post was misleading and that England actually had absolutely no chance of regaining the Ashes.

Today England breezed to an emphatic 8-wicket victory over Australia in the 3rd Ashes Test at Edgbaston in the Midlands. The manner of this victory, inside three days, and bouncing back from the crushing defeat in the previous Test, makes it clear that my previous post was wrong and England’s bowlers are far from ineffectual, their batsmen highly capable, and the team not at all lacking in team spirit.

Moreover, with England now leading 2-1 with two matches to play, I now accept that England do indeed have a realistic prospect of regaining the Ashes.

I apologize for my earlier apology and for any inconvenience caused.

I hope this clarifies the situation.

P.S. Geoffrey Boycott is 74 not out.

by telescoper at July 31, 2015 05:30 PM

Sean Carroll - Preposterous Universe

Spacetime, Storified

I had some spare minutes the other day, and had been thinking about the fate of spacetime in a quantum universe, so I took to the internet to let my feelings be heard. Only a few minutes, though, so I took advantage of Twitter rather than do a proper blog post. But through the magic of Storify, I can turn the former into the latter!

Obviously the infamous 140-character limit of Twitter doesn’t allow the level of precision and subtlety one would always like to achieve when talking about difficult topics. But restrictions lead to creativity, and the results can actually be a bit more accessible than unfettered prose might have been.

Anyway, spacetime isn’t fundamental, it’s just a useful approximation in certain regimes. Someday we hope to know what it’s an approximation to.

by Sean Carroll at July 31, 2015 05:27 PM

Emily Lakdawalla - The Planetary Society Blog

Pretty pictures of the Cosmos: Star stream
Award-winning astrophotographer Adam Block presents the first-ever high-resolution color images of the "star stream halo" of the spiral galaxy NGC 4414.

July 31, 2015 03:00 PM

Ben Still - Neutrino Blog

Pentaquark Series 5: Now You See Me, Now You Don't
This is the fifth in a series of posts I am releasing over the next two weeks, aimed at covering the physics behind Pentaquarks, the history of "discovery", and the implications of the latest results from LHCb. Post 4 here. Today we discuss the discovery and subsequent undiscovery of the theta-plus exotic baryon and evidence for the Pc particle seen by LHCb in 2015.

Now You See Me, Now You Don’t

The possible configurations of the theta-plus: either a pentaquark
particle where all quarks are bound together (left) or a
molecule made from a bound Baryon and Meson (right).
With evidence of an excess in data being presented by 10 different experiments, the existence of the Θ+ pentaquark (or Baryon-Meson molecule) was looking ever more like a discovery. Further still, some low-statistics evidence was also mounting for two other pentaquark states. Many other experiments joined the search, but most were coming up empty-handed. One explanation is that they were using different experimental methods, or perhaps different particle energies. It is a difficult task to disentangle all of the differences between experiments, but it is an essential one. Experimental data is always compared to theory, or theory to experiment, so that the behaviour of nature may be written into mathematical language. To interpret experimental results one must understand every aspect of the machinery of an experiment, as well as any known physics which may contribute in some way to a measurement being made.

Many of the experiments claiming to have evidence of the Θ+ particle differed in a number of ways from the experiments claiming to see no evidence for its existence. There were two pairs of experiments, however, which had minimal differences between them, and these proved to be the most compelling cases for comparing results. The DIANA [1] experiment claimed to have seen a particle fitting the description of the Θ+ created via an interaction known as charge exchange. At the same energies, and via the same interaction channel (type), the Belle experiment sat looking at many more interactions than DIANA. With a much larger data set, containing a great deal more charge exchange interactions, Belle could see no evidence [2] of the Θ+ candidate particle seen by DIANA.

The other pair of experiments at odds with one another were SAPHIR and CLAS. The SAPHIR experiment used a type of particle interaction known as photo-production: photons of light are collided with other particles to instigate an interaction, in this case producing two pairs of quark and antiquark. SAPHIR produced some of the most statistically convincing evidence of all ten experiments claiming to have seen a particle looking like the Θ+ [3]. With this in mind, the CLAS experiment repeated the conditions used by SAPHIR and took a great deal more data. If the Θ+ particle was indeed there, then CLAS would have enough data to prove its existence beyond reasonable doubt. CLAS saw no evidence at all for a Θ+-like particle [4]. The benchmark used to compare the experiments was the ratio of the number of possible Θ+ particles produced to the number of Lambda baryons made. This ratio was over 50 times lower in the CLAS result than that reported by SAPHIR. One of the most convincing pieces of evidence for the existence of the Θ+ particle had now been entirely refuted, by an almost identical experimental method with a lot more data backing it up. The 2006 Particle Data Group review of the status of the Θ+ particle remarked of these results that “Combined with the other negative reports, it leaves the reality of the Θ+ in great doubt.”

A LEGO diagram showing the creation of a Pc pentaquark in the decay of a Lambda baryon.
Now, in 2015, the LHCb experiment has claimed statistically very convincing evidence of another particle [5]. This is an exotic (not your usual) baryon which seems to decay into a (not exotic at all) Baryon and a Meson. The results can once again be best understood as the decay of some particle containing four quarks and one antiquark. The quark content of this new particle cannot be the same as the Θ+ particle claimed by previous experiments; it has a much greater mass and produces different particles when it decays. The baryon seen in the decay is a proton, made from two up quarks and a single down quark; the meson is a J/Ψ, made from a charm quark and a charm antiquark. The quark content of this particle is therefore: 2x up quark, 1x down quark, 1x charm quark, and 1x charm antiquark. The pentaquark was seen to be an intermediate state in the decay of a Λ0b baryon (see the LEGO diagram above).
The Pc state measured by LHCb could be a pentaquark (left)
or a Baryon-Meson molecule (right).

The pentaquark was seen to exist in two forms, each with a slightly different mass - something not uncommon for particles made of quarks (explained in a future post). The majority of the mass of Baryons, Mesons, and exotic quark particles comes not from the quarks themselves but from the strong force binding them together. A slight difference in configuration means a difference in energy, and therefore in the mass of a particle. This will be the topic of the next blog post. The individual evidence for each pentaquark state combined to make an ever more convincing body of evidence that the particle, Pc, exists.

by Ben at July 31, 2015 02:16 PM

Ben Still - Neutrino Blog

Pentaquark Series 4: Pentaquark Prediction and Search
This is the fourth in a series of posts I am releasing over the next two weeks, aimed at covering the physics behind Pentaquarks, the history of "discovery", and the implications of the latest results from LHCb. Post 3 here. Today we discuss the prediction of pentaquarks and first tentative sightings.

The pentaquark might be a whole new type of particle
containing 4 quarks and 1 antiquark within itself.
The particle announced by LHCb last week would have to be comprised of four quarks and one antiquark in some currently unknown arrangement. All five quarks could be contained within a single particle - this is a pentaquark - or they could be a bound pair of one Baryon and one Meson: a Baryon-Meson molecule. From what we have discussed so far about the strong force, there should be nothing stopping us from creating pentaquarks or Baryon-Meson molecules. A white strong charge Baryon plus a white strong charge Meson would simply result in a white strong charge bound molecule. And if we have 4 quarks and 1 antiquark, we can create a white charge pentaquark in a number of different ways:

red + green + blue + red + anti-red = white
red + green + blue + green + anti-green = white
red + green + blue + blue + anti-blue = white
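This colour bookkeeping can be made explicit with a toy check: count each colour, with anticolours counting negatively, and call a combination "white" when it carries equal amounts of red, green, and blue. This is only an illustrative sketch of the counting rule above, not a real group-theory calculation:

```python
# Toy colour-charge bookkeeping: (red, green, blue) counts per quark,
# with anticolours counting negatively.
COLOUR = {"r": (1, 0, 0), "g": (0, 1, 0), "b": (0, 0, 1),
          "anti-r": (-1, 0, 0), "anti-g": (0, -1, 0), "anti-b": (0, 0, -1)}

def is_white(quarks):
    """A combination is colour-neutral ('white') in this toy model when it
    contains equal net amounts of red, green, and blue."""
    totals = [sum(COLOUR[q][i] for q in quarks) for i in range(3)]
    return totals[0] == totals[1] == totals[2]

print(is_white(["r", "g", "b", "r", "anti-r"]))  # pentaquark combination: True
print(is_white(["r", "anti-r"]))                 # meson: True
print(is_white(["r", "g"]))                      # not colour-neutral: False
```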

Or the pentaquark might be a bound
state of a Baryon and Meson.
In 1997 Dmitri Diakonov, Victor Petrov, and Maxim Polyakov [1] employed methods similar to those of Gell-Mann’s Eightfold Way, using the symmetries of the quarks to predict not only the existence but also the expected mass of pentaquark particles. Again like Gell-Mann, they predicted a pattern in these symmetries called an Exotic Baryon anti-decuplet: exotic because these particles (or combinations thereof) are not constructed in the same way as other Baryons; baryon because they have some properties in common with Baryons (there is at least one baryon’s worth of quarks making up these particles); anti-decuplet because there were 10 particles, as in Gell-Mann’s decuplet, but pointing in the opposite direction. I have drawn one representation of this anti-decuplet below using my LEGO analogy*. This is just one of a number of patterns that can be, and have been, drawn from quark symmetries.

The Exotic Baryon Anti-decuplet: an extension of quark symmetries showing the lightest possible pentaquark states. Here I show the states as Baryon-Meson molecules.
The Exotic Baryon Anti-decuplet: an extension of quark symmetries showing the lightest possible pentaquark states. Here I show the pentaquark states as 4 quark, 1 antiquark bound states.

With the prediction out there, it was now the job of the experimentalists to smash particles into one another and sift through the debris to see if any of these particles existed. They chose to focus their searches on those particles at the extreme points of the anti-decuplet triangle. The lighter particles produced when these pentaquarks decay can only be explained by these exotic states. Let us take the Θ+ as an example.

Detection of particles used to reconstruct the
pentaquark state. Borrowed from here.
The Θ+ can be identified experimentally by the fact that it is uniquely strange. The Θ+ contains an anti-strange quark, while three-quark baryons can only contain a strange quark, because no three-quark baryon contains an antiquark. We can say that the Θ+ has the opposite strangeness to all traditional Baryons; this is something that can be identified in particle detectors. The Θ+ is similar to Baryons in that it has the same value of a quantity known as baryon number, related to the colour charge of the quarks and antiquarks. Both pentaquarks and three-quark Baryons have a baryon number of 1: each quark has baryon number +1/3 and each antiquark has baryon number -1/3. Experiments have shown that strangeness and baryon number must be conserved when a particle decays to other lighter particles. By tracking strangeness and baryon number, experiments are able to pick out groups of particles which could only have come from the decay of a pentaquark. As we will discuss in future posts, this shows up in experimental data as a large excess of events around a single particle mass, sitting on top of a broad background.
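The bookkeeping described above is simple enough to write down directly. A toy check for the decay Θ+ → K+ n, using the quark content uudds̄ for the Θ+ (the quantum-number assignments are standard; the code itself is just an illustration):

```python
from fractions import Fraction

# Baryon number: +1/3 per quark, -1/3 per antiquark.
def baryon_number(quarks):
    return sum(Fraction(-1, 3) if q.startswith("anti-") else Fraction(1, 3)
               for q in quarks)

# Strangeness: -1 for a strange quark, +1 for an anti-strange quark, 0 otherwise.
def strangeness(quarks):
    return sum(+1 if q == "anti-s" else -1 if q == "s" else 0 for q in quarks)

theta_plus = ["u", "u", "d", "d", "anti-s"]   # pentaquark candidate, uudd anti-s
kaon_plus  = ["u", "anti-s"]                  # K+ meson
neutron    = ["u", "d", "d"]

# Theta+ -> K+ + n must conserve both quantities
assert baryon_number(theta_plus) == baryon_number(kaon_plus) + baryon_number(neutron) == 1
assert strangeness(theta_plus) == strangeness(kaon_plus) + strangeness(neutron) == +1
print("baryon number and strangeness conserved in Theta+ -> K+ n")
```

Note that the Θ+ carries strangeness +1, which no three-quark baryon can do; that is exactly the "uniquely strange" signature the detectors look for.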

In 2003 the LEPS experiment in Japan published a paper [2] which suggested that a particle with a mass the same as the Θ+ (within errors) had been seen within its detectors. Over the next year this claim was followed by some nine other experiments, all saying that they too had seen an excess in their data around the predicted Θ+ mass. The evidence for this pentaquark seemed compelling, but there were some problems and questions surrounding the data. In some cases the number of background events was underestimated, which exaggerated any excesses there might have been. Some experiments chose specific techniques to enhance data around the predicted mass of the Θ+. When considering the results of all ten experiments, the range of masses determined by each, although similar, varied far more than one would expect from the given theory. It was obvious that further experiments were needed, with much more data, if the existence of the Θ+ were to be confirmed or refuted.

*Notice I have not combined the quarks into a pentaquark particle but instead left them next to one another as a Baryon-Meson molecule.

by Ben at July 31, 2015 02:14 PM

Emily Lakdawalla - The Planetary Society Blog

Our Global Volunteers: July 2015 Update
The Planetary Society's volunteers around the world have been busy these past few months, with all the excitement surrounding Asteroid Day, the LightSail test mission, the New Horizons and Dawn missions, and other space milestones!

July 31, 2015 02:00 PM

ATLAS Experiment

From ATLAS around the world: Brief history of Morocco in ATLAS

In 1996, Morocco officially became a member of the ATLAS collaboration. The eagerly awaited day had finally arrived, and the first Arabic and African country signed a collaborative agreement with CERN to participate in the great scientific adventure of particle physics. This achievement was possible thanks to the efforts of a small group of physicists that recognised the potential benefits of collaborating with large accelerator centres.

Motivated to improve science, technology and innovation, the Moroccan High Energy Physics Cluster (RUPHE) was founded in 1996 to enhance the scientific training of young people and advances in pure scientific knowledge. RUPHE includes ATLAS collaborators from the University of Hassan II Casablanca, Mohammed V University (Rabat), Mohamed I University (Oujda), Cadi Ayyad University (Marrakech) and the National Energy Centre of Science and Nuclear Techniques (CNESTEN) in Rabat.

John Ellis, Fairouz Malek and Farida Fassi enjoying afternoon tea during their visit to Fez after successfully training PhD students during the School of High-Energy Physics in Morocco.

Morocco’s participation in ATLAS started even before its membership was approved in 1996. In 1992, Moroccan researchers contributed to the construction of a neutron irradiation station. After that, they continued boosting their contribution by playing a key role in the construction, testing and commissioning of the ATLAS Electromagnetic Calorimeter (ECAL) presampler during the 1998-2003 period. Since then, Moroccan researchers have been working to strengthen the long-standing cooperation with CERN. Currently, there are 27 faculty members and research assistants, including 9 active PhD students.

The research interests focus on these topics: the search for new physics phenomena in association with top physics, Higgs physics and B physics, including significant participation in detector performance studies. During the LHC’s Run 1, Moroccan researchers contributed to the success of the ATLAS experiment. This success has motivated our researchers to look forward to a very successful Run 2.

John Ellis visiting Fez during the Advanced School of Physics in the Maghreb in 2011 in Taza.

In addition, we are involved in the distributed computing effort. During ATLAS data taking periods, user support becomes a challenging task. With many scientists analysing data, user support is becoming crucial to ensure that everyone is able to analyse the collision data distributed among hundreds of computing sites worldwide. The Distributed Analysis Support Team (DAST) is a team of expert shifters who provide the first direct support for all help requests on distributed data analysis. Alden Stradling (University of Texas, Arlington) and I (Mohammed V University) coordinate the overall activity of this team.

In terms of building local expertise, several schools and workshops have been organized. Outstanding worldwide experts have participated, giving lectures on particle physics, nuclear physics, applied physics and grid computing. Most participants are master’s degree or PhD students already working in these fields, or in related fields and seeking a global dimension to their training. Such schools include: “L’Ecole de Physique Avancée au Maghreb 2011” in Taza, “tutorial training on statistics tools for data analysis” and the “Master of High-Energy Physics and Scientific Computing” in Casablanca. High school students from Oujda participated in the International Masterclasses in March 2015, which aimed to encourage them in doing science, and gave them an introduction to what we do in ATLAS and why it is interesting and exciting.

ATLAS Overview Week in Marrakech in 2013.

After the success of the ATLAS Liquid Argon Week organized in Marrakech in 2009, the ATLAS Overview Week for 2013 was hosted in Morocco. It was our great pleasure to invite our ATLAS colleagues to this important event in Marrakech. There were many interesting talks and discussions at the event. We took a brief time out to watch the announcement of the 2013 Nobel Prize in Physics. To our delight, it was awarded to François Englert and Peter Higgs for their pioneering work on the electroweak-symmetry-breaking mechanism in 1964. It was a very exciting moment for me.

The ATLAS Collaboration reacts to the 2013 Nobel Prize in Physics announcement during ATLAS Week in Marrakech.

Farida Fassi is a research assistant professor at Mohammed V University in Rabat. She started working in ATLAS in 1996, doing her PhD at IFIC in Valencia, Spain. She worked on the online and offline test beam data analysis of the first prototypes of the Hadronic Tile Calorimeter modules, in addition to top physics analysis. In 2003, she began working in Grid Computing and Distributed Data Analysis. She held a CNRS post-doctoral research fellowship working on the CMS experiment while based in Lyon, France. She was the coordinator and contact person of the French CMS Tier-1, and continued her search for new physics phenomena. In 2011, she came back to ATLAS, focusing on the search for ttbar resonances. Farida is the co-coordinator of the Distributed Analysis Support Team.

by Farida Fassi at July 31, 2015 01:13 PM

Lubos Motl - string vacua and pheno

CMS bump at \(5\TeV\)?
First, off-topic: You may pre-order a book by Lisa Randall that will be out in 3 months (4 formats). It argues that dark matter is composed of organs of dinosaurs who were labeled reactionary autonomous intelligent weapons and shot into the outer space by mammoths. Well, I know the theory and the very interesting wisdom and stories around a bit more precisely than that because of some 50-hour exposure but there has to be a surprise left for you. ;-)

The ATLAS bump at \(2\TeV\) or so – possibly a new gauge boson – is probably the most attractive excess the LHC teams are seeing in their data. However, Pauline Gagnon of ATLAS has ironically pointed out another pair of cute excesses seen by her competitors at CMS:
The bumpy road to discoveries
Here are the two graphs:

Both graphs show the invariant masses of dijets – a dijet spectrum.

The left graph is brand new, coming from the 2015 \(\sqrt{s}=13\TeV\) data. Only 37 inverse picobarns of data have been collected but that was enough to see a very high-energy event, a dijet with the invariant mass of \(m_{\rm inv} = 5\TeV\).

What makes it even more interesting is that the right graph shows some \(\sqrt{s}=8\TeV\) data from 2012. After analyzing 19.7 inverse femtobarns of data, they saw a bump at \(m_{\rm inv} = 5.15\TeV\) which may be the sign of the same particle (or family of new particles) as the new 2015 bump.

I haven't known about this fun \(5.15\TeV\) event. It only appears in 8 or so uncited experimental articles – and the first article was written by E. Quark and R.S. Graviton, pretty famous experts. ;-) If you search for 5.15 in the paper by Fitzpatrick, Kaplan, Randall, and Wang, you will discover \(5.15\TeV\) in a numerator of a numerical formula for some \(S\) which is hopefully just a coincidence. ;-)

One must always realize that small statistical flukes may occur by chance. But one must be doubly careful if they occur at the highest-energy bin – more generally, in the last point(s) in the graph – because those tend to be the most unreliable and noisy ones. In his popular book, Feynman recalled how he and Gell-Mann knew that the experimenters were wrong when they thought that they had falsified the F-G (Feynman-Gell-Mann) V-A theory (vector minus axial vector) of the weak force:
I went out and found the original article on the experiment that said the neutron-proton coupling is T [tensor], and I was shocked by something. I remembered reading that article once before (back in the days when I read every article in the Physical Review—it was small enough). And I remembered, when I saw this article again, looking at that curve and thinking, “That doesn’t prove anything!”

You see, it depended on one or two points at the very edge of the range of the data, and there’s a principle that a point on the edge of the range of the data—the last point—isn’t very good, because if it was, they’d have another point further along. And I had realized that the whole idea that neutron-proton coupling is T was based on the last point, which wasn’t very good, and therefore it’s not proved. I remember noticing that!

And when I became interested in beta decay, directly, I read all these reports by the “beta-decay experts,” which said it’s T. I never looked at the original data; I only read those reports, like a dope. Had I been a good physicist, when I thought of the original idea back at the Rochester Conference I would have immediately looked up “how strong do we know it’s T?”—that would have been the sensible thing to do. I would have recognized right away that I had already noticed it wasn’t satisfactorily proved.

Since then I never pay any attention to anything by "experts." I calculate everything myself. When people said the quark theory was pretty good, I got two Ph.D.s, Finn Ravndal and Mark Kislinger, to go through the whole works with me, just so I could check that the thing was really giving results that fit fairly well, and that it was a significantly good theory. I'll never make that mistake again, reading the experts' opinions. Of course, you only live one life, and you make all your mistakes, and learn what not to do, and that's the end of you.
Amen to that. But in some situations, the bumps may be more real than in others. ;-)

By the way, at least two supersymmetric explanations of LHC excesses were posted to the hep-ph archive today. Well, one of them is a SUSY explanation and the other is a superstring explanation. ;-)

by Luboš Motl at July 31, 2015 01:07 PM

Lubos Motl - string vacua and pheno

Glimpsed particles that the LHC may confirm
The LHC is back in business. Many of us watched the webcast today. There was a one-hour delay at the beginning, and then they lost the beam once, but things went pretty much smoothly afterwards. After a 30-month coffee break, the collider is collecting actual data at the center-of-mass energy of \(13\TeV\), to be used in future papers.

So far, no black hole has destroyed the Earth.

It's possible that the LHC will discover nothing new, at least for years. But it is in no way inevitable. I would say that it's not even "very likely". We have various theoretical reasons to expect one discovery or another. A theory-independent vague argument is that the electroweak scale has no deep reason to be too special. And every time we added an order of magnitude to the energies, we saw something new.

But in this blog post, I would like to recall some excesses – inconclusive but tantalizing upward deviations from the Standard Model predictions – that have been mentioned on this blog. Most of them emerged from ATLAS or CMS analyses at the LHC. Some of them may be confirmed soon.

Please submit your corrections if some of the "hopeful hints" have been killed. And please submit those that I forgot.

The hints below will be approximately sorted from those that I consider most convincing at this moment. The energy at the beginning is the estimated mass of a new particle.
I omitted LHC hints older than November 2011 but you may see that the number of possible deviations has been nontrivial.

The most accurate photographs of the Standard Model's elementary particles provided by CERN so far. The zoo may have to be expanded.

Stay tuned.

by Luboš Motl at July 31, 2015 12:43 PM

Peter Coles - In the Dark

Half Term Blue Moon

Tonight there’s a Blue Moon, which happens whenever there are two full moons in a calendar month, although the phrase used to mean the third full moon in a season that has four. A Blue Moon isn’t actually all that rare an occurrence. In fact there’s one every two or three years on average. But it does at least provide an excuse to post this again…

Incidentally, today marks the half-way mark in my five-year term as Head of the School of Mathematical and Physical Sciences at the University of Sussex. I started on 1st February 2013, so it’s now been exactly two years and six months. It’s all downhill from here!

by telescoper at July 31, 2015 11:11 AM

CERN Bulletin

High Turbulence
As a member of the EuHIT (European High-Performance Infrastructures in Turbulence - see here) consortium, CERN is participating in fundamental research on turbulence phenomena. To this end, the Laboratory provides European researchers with a cryogenic research infrastructure (see here), where the first tests have just been performed.

by EuHIT, Collaboration at July 31, 2015 09:40 AM


arXiv blog

How Far Can the Human Eye See a Candle Flame?

Answers on the Web vary from a few thousand meters to 48 kilometers. Now a pair of physicists have carried out an experiment to find out.

July 31, 2015 04:17 AM

July 30, 2015

Christian P. Robert - xi'an's og

Judith Rousseau gets Bernoulli Society Ethel Newbold Prize

As announced at the 60th ISI World Meeting in Rio de Janeiro, my friend, co-author, and former PhD student Judith Rousseau got the first Ethel Newbold Prize! Congrats, Judith! And well-deserved! The prize is awarded by the Bernoulli Society on the following basis

The Ethel Newbold Prize is to be awarded biannually to an outstanding statistical scientist for a body of work that represents excellence in research in mathematical statistics, and/or excellence in research that links developments in a substantive field to new advances in statistics. In any year in which the award is due, the prize will not be awarded unless the set of all nominations includes candidates from both genders.

and is funded by Wiley. I very much support this (inclusive) approach of “recognizing the importance of women in statistics”, without creating a prize restricted to women nominees (and hence exclusive). Thanks to the members of the Program Committee of the Bernoulli Society for setting up this prize, and to Nancy Reid in particular.

Ethel Newbold was a British statistician who worked during WWI in the Ministry of Munitions and then became a member of the newly created Medical Research Council, working on medical and industrial studies. She was the first woman to receive the Guy Medal in Silver, in 1928. Just to stress that much remains to be done towards gender balance, the second and, so far, last woman to get a Guy Medal in Silver is Sylvia Richardson, in 2009… (In addition, Valerie Isham, Nicky Best, and Fiona Steele got a Guy Medal in Bronze, out of the 71 awarded so far, while no woman has ever got a Guy Medal in Gold.) Funny coincidences: Ethel May Newbold was educated at Tunbridge Wells, the place where Bayes was a minister, while Sylvia is now head of the Medical Research Council biostatistics unit in Cambridge.

Filed under: Books, Kids, Statistics, University life Tagged: Bayesian non-parametrics, Bernoulli society, Brazil, Cambridge University, compound Poisson distribution, England, Ethel Newbold, Guy Medal, industrial statistics, ISI, Medical Research Council, Rio de Janeiro, Royal Statistical Society, Tunbridge Wells

by xi'an at July 30, 2015 10:15 PM

astrobites - astro-ph reader's digest

Galactic Interlopers

Title: Interloper bias in future large-scale structure surveys
Authors: A. R. Pullen, C. M. Hirata, O. Dore, A. Raccanelli
First Author’s Institution: Department of Physics, Carnegie Mellon University, Pittsburgh, PA
Status: To be submitted to PASJ



We look out into a universe that appears deceivingly two-dimensional. Our favorite constellations are often composed of stars that are separated by distances more immense than their apparent proximity to each other suggests. This artificial two-dimensionality of the observed universe has forever been a bane of astronomy, for it takes a lot to squeeze information about the third dimension out of the universe. Deprojecting our 2D sky into a true 3D map by measuring distances to objects is an astronomical enterprise of its own, built up first from inch-long measuring sticks used exclusively for nearby objects, which are replaced by yardsticks as we move further out, to mile markers even further out, and so on. We can use predictably varying stars called classical Cepheids to determine distances up to about 30 Mpc, a little beyond the nearest galaxy cluster, Virgo. Type Ia supernovae, stellar explosions that achieve the same brightness each and every time, no matter when or where they exploded, help us measure distances as much as 30 times further. Each measuring stick in the sequence is calibrated by the sequence of shorter measuring sticks that came before it, a sequence which astronomers have called the “distance ladder.” Thus errors and uncertainties in calibrating one yardstick can propagate up the sequence, much like falling dominoes. We’ve directly measured the distances of only a small fraction of celestial objects; for the vast majority of the objects in the universe, we must turn to our sequence of sticks.

For objects far beyond the gravitational influence of our galactic neighborhood, the measuring stick of choice is the object’s redshift. This is unique to a universe that’s expanding uniformly and homogeneously, causing things further from you to appear to move away from you faster. Much like how the pitch of an emergency siren falls as it speeds away from you, the wavelength of the light from an object moving away from you becomes longer and longer, causing it to look redder. The amount an object’s light is “redshifted” depends predictably on the object’s distance—a relation so robust that it has been codified into what’s known as Hubble’s law.
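As a toy illustration (not from the paper; the H0 value below is an assumed round number), the low-redshift form of Hubble's law, d ≈ cz/H0, is what turns a measured redshift into a distance:

```python
# Low-redshift Hubble law, d ~ c z / H0 (illustrative sketch only).
C_KM_S = 299792.458   # speed of light in km/s
H0 = 70.0             # Hubble constant in km/s/Mpc (assumed round value)

def hubble_distance_mpc(z):
    """Approximate distance in Mpc for a small redshift z."""
    return C_KM_S * z / H0

print(round(hubble_distance_mpc(0.01), 1))  # a z = 0.01 galaxy sits at ~42.8 Mpc
```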

Hubble’s law has emboldened cosmological cartographers to take up the herculean task of drawing a 3D map of our universe. The feat requires measuring redshifts of a huge sample of galaxies via large spectroscopic surveys. The first such survey, begun in the 1970s, contained a few thousand galaxies. The biggest survey completed to date, the Sloan Digital Sky Survey (SDSS), contains nearly a million galaxies. These maps have revealed that the universe on its largest scales is fascinatingly varied and structured. There are walls of galaxies surrounding vast, empty voids; galaxies are often assembled into fractal-like filamentary strands; at the nodes where the filaments intersect, one can find the densest and largest clusters of galaxies. The maps also contain clues to the physics and the cosmological parameters that govern the past and future evolution of our universe.

Thus even more ambitious surveys are in the works. Our quest for more galaxies requires us to search for ever fainter galaxies, for which reliable redshifts are difficult to measure. But it’s not impossible. One can look for an easy-to-find, strong spectral feature typical in galaxies and measure how much redder it’s become. It would have been a fairly straightforward task, except for one catch—there’s a handful of strong features that can easily be mistaken for each other. These interloping lines could cause a galaxy to be mistakenly given an incorrect redshift, and thus distance.
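To make the catch concrete, here is a toy example (my own numbers, not the paper's analysis; the rest wavelengths are the standard ones) of how a single line is ambiguous: H-alpha emitted at redshift 1 lands at the same observed wavelength as [OIII] 5007 emitted from a galaxy at a rather different redshift:

```python
# A single emission line cannot distinguish between candidate lines:
# the same observed wavelength maps to different redshifts (toy example).
HALPHA = 6563.0   # H-alpha rest wavelength, Angstroms
OIII = 5007.0     # [OIII] rest wavelength, Angstroms

def observed(rest_wavelength, z):
    """Observed wavelength of a line emitted at redshift z."""
    return rest_wavelength * (1.0 + z)

lam = observed(HALPHA, 1.0)      # 13126 Angstroms, in the near-infrared
z_wrong = lam / OIII - 1.0       # redshift if the line is misread as [OIII]
print(round(z_wrong, 2))         # → 1.62, far from the true z = 1.0
```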

The authors of today’s paper thus asked: how much do galaxies with incorrect distances based on a single emission line affect our maps and the physics we infer from them? They looked at how upcoming spectroscopic redshift surveys undertaken with the Prime Focus Spectrograph (PFS), to be installed on the 8.2-meter Subaru Telescope, and the Wide-Field InfraRed Survey Telescope (WFIRST) could be affected by interloping galaxies. In particular, the authors studied how the matter power spectrum, an important measure of the amount of mass found at varying cosmological size scales, derived from the two surveys would be affected. They found that if more than 0.2% of the galaxies were interlopers with incorrect distances, they could increase the total error by 10%. If more than 0.5% of the galaxies were interlopers, they could drastically skew the matter power spectrum at small scales. Such effects have consequences for many other cosmological studies, including those concerning dark energy and modified gravity.

Can the interlopers be weeded out somehow? The authors investigate two methods to identify interlopers. One could repeat the emission line analysis for pairs of strong lines, since each of the strong line pairs that PFS and WFIRST could measure has a unique wavelength separation. Alternatively, one could independently measure the redshift of each galaxy based on the galaxy’s color, derived from a separate photometric survey. The authors tested these two interloper removal methods on a mock sample of galaxies and found that finding strong line pairs alone can remove most of the interlopers in the PFS survey, while a combination of finding pairs and calculating photometric redshifts is needed to remove interlopers in the WFIRST survey.


To see a video of the first author A. Pullen explaining this paper, follow this link.




by Stacy Kim at July 30, 2015 07:06 PM

ZapperZ - Physics and Physicists

Report From 13 TeV
So far so good!

This report briefly describes the achievement of getting to 13 TeV collision energy at the LHC.

At 10.40 a.m. on 3 June, the LHC operators declared "stable beams" for the first time at a beam energy of 6.5 TeV. It was the signal for the LHC experiments to start taking physics data for Run 2, this time at a collision energy of 13 TeV – nearly double the 7 TeV with which Run 1 began in March 2010.

So far, they haven't been swallowed by a catastrophic black hole that is supposed to destroy our world. Darn it! What's next? Sighting of supersymmetry particles? You must be joking!


by ZapperZ at July 30, 2015 06:32 PM

CERN Bulletin

LHC Report: machine development

Machine development weeks are carefully planned in the LHC operation schedule to optimise and further study the performance of the machine. The first machine development session of Run 2 ended on Saturday, 25 July. Despite various hiccoughs, it allowed the operators to make great strides towards improving the long-term performance of the LHC.


The main goals of this first machine development (MD) week were to determine the minimum beam-spot size at the interaction points given existing optics and collimation constraints; to test new beam instrumentation; to evaluate the effectiveness of performing part of the beam-squeezing process during the energy ramp; and to explore the limits on the number of protons per bunch arising from the electromagnetic interactions with the accelerator environment and the other beam.

Unfortunately, a series of events reduced the machine availability for studies to about 50%. The most critical issue was the recurrent trip of a sextupolar corrector circuit – a circuit with 154 small sextupole magnets used to correct errors in the main dipoles – in arc 7-8 at high energy. This problem resulted in the cancellation of the last test runs at high energy and the MD session stopping some 8 hours earlier than planned. However, the time with beam was effective in terms of the results achieved. A large set of instruments were developed or tested, including high-resolution beam position monitors (DOROS), robust beam current monitors and two systems to examine the frequency content of the beam.

Thanks to the MD studies, the beam sizes at the two high-luminosity interaction points (where the ATLAS and CMS detectors are installed) were reduced by a factor of 1.4. The corresponding machine optics were finely tuned to be ready for high-intensity beams. However, before these optics can be used in operation, further studies are mandatory to understand and validate other important parameters, including the machine aperture, new collimator settings, a reduced crossing angle and, possibly, non-linear corrections in the quadrupole triplets next to the interaction points. These topics will be addressed in future MD weeks to pave the way towards higher luminosities in Run 2.

For the first time, operators were able to perform the beam-size squeeze during the energy ramp. This opens up the possibility of saving up to 10 minutes per fill in a slightly more ambitious configuration than that tested last week. Results on higher bunch populations require careful analysis of the collected beam data. These will soon be available in detailed reports to be published as LHC MD notes.

At the end of the MD period, the LHC went into its second scrubbing run, a two-week period that aims to prepare the machine fully for operation with 25-nanosecond bunch spacing, planned for the first weeks of August.

We would like to take this opportunity to thank all the MD teams, system experts, management, operators and physics experiments involved during the MDs for their high flexibility, dedication and endurance.

July 30, 2015 05:07 PM

Clifford V. Johnson - Asymptotia

Almost Done
So Tuesday night, I decided that it was imperative that I pay a visit to one really good restaurant (at least) before leaving Santiago. My duties at ICMP2015 were over, and I was tired, so did not want to go too far, but I'd heard there were good ones in the area, so I asked the main organizer and he made a recommendation. It was an excellent choice. One odd thing: the hotel is in two separate towers, and I'd noticed this upon arrival and started calling it The Two Towers in my mind for the time I was there. Obviously, right? Well, anyway, the restaurant is right around the corner from it plus a two minute walk, and.... wait for it.... it is called Le Due Torri, which translates into The Two Towers, but apparently it has nothing to do with my observation about the hotel, since it got that name from a sister restaurant in a different part of town, I am told. So... an odd coincidence. I will spare you the details of what I had for dinner save to say that if you get the fettuccine con salmone you're on to a sure thing, and to warn you not to accidentally order a whole bottle of wine instead of a glass of it because you're perhaps used to over-inflated wine prices in LA restaurants (I caught it before it was opened and so saved myself having to polish off a whole bottle on my own)... Another amusing note is that one of my problems with getting my rusty Spanish out for use only occasionally is that I get logjams in my head because vocabulary from Spanish, French, and Italian all comes to me mid-sentence and I freeze sometimes. I'd just been getting past doing that by Tuesday, but then got very confused in the restaurant at one point until I realized my waiter was, oddly, speaking to me in Italian at times. I still am not sure why. It was a good conference to come to, I think, because I connected [...] Click to continue reading this post

by Clifford at July 30, 2015 04:35 PM

Emily Lakdawalla - The Planetary Society Blog

Dawn Journal: Descent to HAMO
With a wonderfully rich bounty of pictures and other observations already secured, Dawn is now on its way to an even better vantage point around dwarf planet Ceres.

July 30, 2015 02:45 PM

Tommaso Dorigo - Scientificblogging

(Well)-Paid PhD Position In Physics Offered In Padova, Italy
Are you a post-lauream student in Physics, interested in pursuing a career in particle physics, perhaps with an interest in advanced machine learning applications, and with an eye to a great job after your PhD? Then this posting is for you.

read more

by Tommaso Dorigo at July 30, 2015 02:22 PM


Symmetrybreaking - Fermilab/SLAC

One Higgs is the loneliest number

Physicists discovered one type of Higgs boson in 2012. Now they’re looking for more.

When physicists discovered the Higgs boson in 2012, they declared the Standard Model of particle physics complete; they had finally found the missing piece of the particle puzzle.

And yet, many questions remain about the basic components of the universe, including: Did we find the one and only type of Higgs boson? Or are there more?

A problem of mass

The Higgs mechanism gives mass to some fundamental particles, but not others. It interacts strongly with W and Z bosons, making them massive. But it does not interact with particles of light, leaving them massless.

These interactions don’t just affect the mass of other particles, they also affect the mass of the Higgs. The Higgs can briefly fluctuate into virtual pairs of the particles with which it interacts.

Scientists calculate the mass of the Higgs by multiplying a huge number—related to the maximum energy for which the Standard Model applies—by a number related to those fluctuations. The second number is determined by starting with the effects of fluctuations into force-carrying particles like the W and Z bosons, and subtracting the effects of fluctuations into matter particles like quarks.

The second number cannot be zero, because the Higgs must have some mass; but almost any value it takes, even a very small one, makes the calculated mass of the Higgs gigantic.
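In rough symbols (a standard textbook estimate, not a calculation from this article), the fluctuations described above shift the squared Higgs mass by an amount that grows with the square of the cutoff energy \(\Lambda\):

\[
\delta m_H^2 \;\sim\; \frac{\Lambda^2}{16\pi^2}\left(c_{\text{bosons}} - c_{\text{fermions}}\right),
\]

where the \(c\)'s bundle the couplings to force carriers and matter particles. If \(\Lambda\) is anywhere near the Planck scale of roughly \(10^{19}\) GeV, keeping \(m_H \approx 125\) GeV requires the bracketed difference to cancel to some 30 decimal places.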

But it isn’t. It weighs about 125 billion electronvolts; it’s not even the heaviest fundamental particle.

“Having the Higgs boson at 125 GeV is like putting an ice cube into a hot oven and it not melting,” says Flip Tanedo, a theoretical physicist and postdoctoral researcher at the University of California, Irvine.

A lightweight Higgs, though it makes the Standard Model work, doesn’t necessarily make sense for the big picture. If there are multiple Higgses—much heavier ones—the math determining their masses becomes more flexible.

“There’s no reason to rule out multiple Higgs particles,” says Tim Tait, a theoretical physicist and professor at UCI. “There’s nothing in the theory that says there shouldn’t be more than one.”

The two primary theories that predict multiple Higgs particles are Supersymmetry and compositeness.


Popular in particle physics circles for tying together all the messy bits of the Standard Model, Supersymmetry predicts a heavier (and whimsically named) partner particle, or “sparticle,” for each of the known fundamental particles. Quarks have squarks and Higgs have Higgsinos.

“When the math is re-done, the effects of the particles and their partner particles on the mass of the Higgs cancel each other out and the improbability we see in the Standard Model shrinks and maybe even vanishes,” says Don Lincoln, a physicist at Fermi National Accelerator Laboratory.
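In symbols (standard supersymmetry lore, not a formula from this article), the dangerous quadratic pieces of a fermion loop and its superpartner's loop enter with opposite signs and, thanks to supersymmetry, equal couplings:

\[
\delta m_H^2 \;\supset\; -\frac{y_t^2}{8\pi^2}\,\Lambda^2 \;+\; \frac{y_{\tilde t}^2}{8\pi^2}\,\Lambda^2 \;=\; 0
\quad \text{when } y_{\tilde t} = y_t ,
\]

leaving only corrections that grow logarithmically with the superpartner masses, which is why heavier sparticles make the cancellation less exact.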

The Minimal Supersymmetric Standard Model—the supersymmetric model that most closely aligns with the current Standard Model—predicts four new Higgs particles in addition to the Higgs sparticle, the Higgsino.

While Supersymmetry is maybe the most popular theory for exploring physics beyond the Standard Model, physicists at the LHC haven’t seen any evidence of it yet. If Supersymmetry exists, scientists will need to produce more massive particles to observe it.

“Scientists started looking for Supersymmetry five years ago in the LHC,” says Tanedo. “But we don’t really know where they will find it: 10 TeV? 100 TeV?”


The other popular theory that predicts multiple Higgs bosons is compositeness. The composite Higgs theory proposes that the Higgs boson is not a fundamental particle but is instead made of smaller particles that have not yet been discovered.

“You can think of this like the study of the atom,” says Bogdan Dobrescu, a theoretical physicist at Fermi National Accelerator Laboratory. “As people looked closer and closer, they found the proton and neutron. They looked closer again and found the ‘up’ and ‘down’ quarks that make up the proton and neutron.”

Composite Higgs theories predict that if there are more fundamental parts to the Higgs, it may assume a combination of masses based on the properties of these smaller particles.

The search for composite Higgs bosons has been limited by the scale at which scientists can study given the current energy levels at the LHC.

On the lookout

Physicists will continue their Higgs search with the current run of the LHC.

At 60 percent higher energy, the LHC will produce Higgs bosons more frequently this time around. It will also produce more top quarks, the heaviest particles of the Standard Model. Top quarks interact energetically with the Higgs, making them a favored place to start picking at new physics.

Whether scientists find evidence for Supersymmetry or a composite Higgs (if they find either), that discovery would mean much more than just an additional Higgs.

“For example, finding new Higgs bosons could affect our understanding of how the fundamental forces unify at higher energy,” Tait says.

“Supersymmetry would open up a whole ‘super’ world out there to discover. And a composite Higgs might point to new rules on the fundamental level beyond what we understand today. We would have new pieces of the puzzle to look at it.”


Like what you see? Sign up for a free subscription to symmetry!

by Katie Elyce Jones at July 30, 2015 01:00 PM

Peter Coles - In the Dark

Planning for the Future

Some great news arrived this morning. The Planning Inspectorate has given approval to the University of Sussex’s Campus Masterplan, which paves the way for some much-needed new developments on the Falmer Campus and a potential £500 million investment in the local economy. As a scientist working at the University I’m particularly delighted with this decision, as it will involve new science buildings which should ease the pressure on our existing estate. The planned developments include new state-of-the-art academic and research facilities, the creation of an estimated 2400 new jobs in the local community and 2500 new student rooms on the campus, while still preserving the famous listed buildings designed by architect Sir Basil Spence when the University was founded back in the 1960s. We’re in for an exciting few years as these new developments take shape, especially a new building for Life Sciences and the redevelopment of the East Slope site. The expansion of residential accommodation on campus will take some of the pressure off the housing stock in central Brighton, while the other new buildings will provide replacements and extensions for some older ones that are at the end of their useful life.

Here’s a video fly-through that illustrates the general scale of the development – although the individual buildings shown are just indicative, as detailed designs are still being drawn up and each new building will need further planning permission.

But it is not just as an employee of the University that I am delighted by this news. I also live in Brighton and I honestly believe that the expansion of the University is an extremely good thing for the City, which is already turning into a thriving high-tech economy owing to the presence of so many skilled graduates and spin-out enterprises. There’s a huge amount of work to do in order to turn these plans into reality, but within a couple of years I think we’ll start to see the dividend.

by telescoper at July 30, 2015 12:11 PM

July 29, 2015

Christian P. Robert - xi'an's og

gradient importance sampling

Ingmar Schuster, who visited Paris-Dauphine last Spring (and is soon to return here as a postdoc funded by the Fondation des Sciences Mathématiques de Paris), arXived last week a paper on gradient importance sampling. In this paper, he builds a sequential importance sampling (or population Monte Carlo) algorithm that exploits the additional information contained in the gradient of the target. The proposal or importance function is essentially a MALA move, mixed across the elements of the previous population. When compared with our original PMC mixture of random walk proposals found in e.g. this paper, each term in the mixture thus involves an extra gradient, with a scale factor that decreases to zero as 1/√t. Ingmar compares his proposal with an adaptive Metropolis, an adaptive MALA, and an HMC algorithm, for two mixture distributions and the banana target of Haario et al. (1999) we also used in our paper, as well as a logistic regression. In each case, he finds both a smaller squared error and a smaller bias for the same computing time (evaluated as the number of likelihood evaluations). While we discussed this scheme when he visited, I remain intrigued as to why it works so well when compared with the other solutions. One possible explanation is that the use of the gradient drift is more efficient on a population of particles than on a single Markov chain, provided the population covers all modes of importance on the target surface: the “fatal” attraction of the local mode is then much less of an issue…
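A minimal numerical sketch of my reading of the scheme (a toy, not Ingmar's implementation: it uses per-particle rather than mixture importance weights, a fixed proposal scale, and a standard Gaussian target):

```python
# Sequential importance sampling whose proposal is a Langevin/MALA-style
# move, with a drift scale shrinking over iterations (toy sketch).
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):              # toy target: standard 2-D Gaussian
    return -0.5 * np.sum(x**2, axis=-1)

def grad_log_target(x):         # gradient of the log target, used as drift
    return -x

def gradient_is(n_particles=500, n_iter=50, sigma=1.0):
    x = rng.normal(size=(n_particles, 2)) * 3.0       # diffuse start
    for t in range(1, n_iter + 1):
        h = 1.0 / np.sqrt(t)                          # drift scale -> 0
        mean = x + 0.5 * h * sigma**2 * grad_log_target(x)
        prop = mean + sigma * rng.normal(size=x.shape)
        # self-normalised importance weights (per-particle correction)
        log_q = -0.5 * np.sum((prop - mean) ** 2, axis=-1) / sigma**2
        log_w = log_target(prop) - log_q
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        x = prop[rng.choice(n_particles, size=n_particles, p=w)]  # resample
    return x

sample = gradient_is()
print(sample.mean(axis=0))  # close to (0, 0) for this target
```

On this toy target the gradient drift pulls the initially diffuse population towards the mode, after which the weights and resampling correct the remaining discrepancy.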

Filed under: Books, pictures, Statistics, University life Tagged: adaptive importance sampling, Fondation Sciences Mathématiques de Paris, Langevin MCMC algorithm, Leipzig, population Monte Carlo, sequential Monte Carlo, Université Paris Dauphine

by xi'an at July 29, 2015 10:15 PM

Emily Lakdawalla - The Planetary Society Blog

Field Report from Mars: Sol 4060 - June 26, 2015
Larry Crumpler gives an update on the Opportunity rover's activities in Spirit of St. Louis crater.

July 29, 2015 06:04 PM

astrobites - astro-ph reader's digest

Radio Crickets: A New Probe of General Relativity


This November marks the 100th anniversary of Einstein’s Theory of General Relativity (GR), our modern theory of gravity that describes its true nature and intimate connection with space and time. A century after the formulation of GR, one of the phenomena predicted by this theory has evaded detection – ripples of gravitational energy propagating through spacetime like waves. These gravitational waves have remained elusive for good reason.

Spacetime is very stiff, and even extremely massive objects accelerating through spacetime produce feeble gravitational wave signals (so feeble that when Einstein predicted their existence he believed we would never be able to detect their minuscule effects on spacetime). Coincidentally, the centennial year of GR is also when the upgraded and unbelievably sensitive Advanced Laser Interferometer Gravitational Wave Observatory (aLIGO) will commence science runs. This machine is predicted to make the first direct detections of these ripples in spacetime over the next few years and open up a new window to the Universe through multi-messenger astronomy.

Though the first detection of gravitational waves will be more than enough to celebrate, a true goldmine of scientific wealth will come from finding an electromagnetic counterpart of a gravitational wave signal, allowing these astrophysical objects to be accessed by two completely independent forms of information. Today’s paper considers a possible electromagnetic counterpart of what is thought to be the loudest gravitational wave event in the Universe – the merging of two supermassive black holes (SMBHs). These events would be screaming in gravitational wave radiation, and may reach gravitational wave luminosities of about 10^50 watts right before they merge. For comparison, this is about as luminous as all the stars shining in all the galaxies in the observable Universe! Though aLIGO is not sensitive to these frequencies of gravitational waves, pulsar timing arrays and future space-based interferometers like eLISA will be.

SMBH binaries are believed to emerge via the collision of two large galaxies, each hosting a massive black hole at its center. After the galaxies collide, the SMBHs lose angular momentum through dynamical friction, creeping close enough to the remnant galaxy’s center to form a gravitationally bound binary. After entering their orbital dance, the black holes continue to lose angular momentum by scattering gas and stars, causing their orbit to shrink (though this phase of binary SMBH evolution is up for debate, since theoretical models have a hard time making the orbits shrink when their orbital separation is on the order of 1 parsec, or about 200,000 astronomical units, an issue known as the final parsec problem). When they reach a separation of about 1/1000 of a parsec (a couple hundred astronomical units… pretty close, given that the event horizon of a billion-solar-mass black hole situated at our Sun would stretch 20 astronomical units, or all the way to the orbit of Uranus), gravitational wave emission becomes the key player in angular momentum loss, quickly diminishing the orbital separation until the two SMBHs merge. It is this final phase of orbital evolution that may be probed with future space-based gravitational wave observatories. But alternatively, as today’s paper suggests, we may be able to gain insight about this period of evolution from electromagnetic radiation as well.


Figure 1. The jet and orbital configuration, assuming equal-mass black holes. L_orb is the orbital angular momentum, v_jet is the velocity of the jet without orbital motion, v_orb is the orbital velocity, and the red vector v~jet is v_jet + v_orb. v~jet precesses about v_jet as the black hole orbits. Figure 1 in the paper.


The key to the electromagnetic counterpart presented in today’s paper is that the black holes are able to hold onto an accretion disk and continue accreting gas during this gravitational wave dominated stage of orbital evolution, shown to be possible in recent studies. With accretion disks come jets of highly relativistic particles, and charged particles spiraling in the strong magnetic field of a SMBH emit synchrotron radiation detectable by radio telescopes. As the binary orbits, the jet will trace out a conical surface. This is easily seen by looking at figure 1 and recalling simple vector addition (remember, for an observer very far away, the solid black jet vector, which represents the velocity of the jet neglecting orbital motion, is essentially fixed, while the orbital velocity vector is constantly changing). The red jet vector, which is the combination of both the jet velocity and orbital velocity, therefore precesses about the black jet vector as the black hole orbits. This would be the end of the story if these binaries were not emitting gravitational waves.

Since the system emits gravitational waves during this phase, the orbital separation decreases, causing the orbital speed to increase. Imagine the black orbital velocity vector from figure 1 increasing. The red vector, which is the sum of the orbital and jet velocities, will therefore have an increasing contribution from the orbital velocity, causing it to precess about the black jet vector with an increasing opening angle. The increase in orbital speed and decrease in orbital separation of the binary will also cause the jet to precess faster, winding the jet tighter closer to the source and resulting in a classical “chirping” morphology in the jet (hence the “radio crickets” in the title of this bite). These effects can be seen in figure 2, which simulates the evolution of a SMBH jet during the first 100 years after entering the gravitational wave dominated regime of orbital decay.
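As a rough sketch of this geometry (my own illustration, with assumed numbers: the 10^10 solar-mass equal-mass binary of the paper's Figure 2, a jet moving at essentially the speed of light, a Keplerian orbital speed up to an order-unity factor, and simple non-relativistic vector addition), the cone's half-opening angle grows as the separation shrinks:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
PC = 3.086e16      # m

M_total = 1e10 * M_SUN   # equal-mass binary, total mass as in Figure 2
v_jet = C                # assume an ultra-relativistic jet

half_angles = []
for a_pc in (1e-1, 1e-2, 1e-3):        # shrinking orbital separations, in pc
    a = a_pc * PC
    v_orb = math.sqrt(G * M_total / a)              # Keplerian speed scale
    theta = math.degrees(math.atan(v_orb / v_jet))  # tan(theta) ~ v_orb / v_jet
    half_angles.append(theta)
    print(f"a = {a_pc:.0e} pc: v_orb ~ {v_orb / C:.2f} c, half-angle ~ {theta:.1f} deg")

# the cone opens up as gravitational waves shrink the orbit
assert half_angles[0] < half_angles[1] < half_angles[2]
```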


Figure 2. The jets of an equal-mass binary with a total mass of 10 billion solar masses situated 100 megaparsecs away. The axes are angular size in milli-arcseconds. The left panel shows the bipolar jet pointed toward and away from the observer on the right and left of the box, respectively. The opening angle of the jet increases as one gets closer to the black hole, as this represents material emitted later in time. Also apparent is the chirping morphology; as one gets closer to the source the helicity increases because the orbital frequency increases. The twisting of the jet is due to apparent superluminal motion. The right panel shows the normalized brightness of the inner 30 milli-arcsecond region of the forward jet. Figure 5 in the paper.


Long-baseline radio interferometry with telescope arrays such as the Square Kilometer Array, set to achieve first light in 2020, will reach the resolution necessary to observe these subtle milli-arcsecond jet features caused by gravitational wave inspiral. Such observations would also place a lower bound on the abundance of bright gravitational wave sources detectable by future space-based detectors. Moreover, supplementing gravitational wave observations with electromagnetic observations of compact binary mergers will enable detailed studies of the strong-field regime of GR, where Einstein's theory of gravity might break down and create problems to be solved by future generations of scientists.

by Michael Zevin at July 29, 2015 03:46 PM

arXiv blog

A Programming Language For Robot Swarms

When it comes to robotic flocks, do you control each machine individually or the entire swarm overall? A new programming language allows both.

July 29, 2015 02:00 PM

ZapperZ - Physics and Physicists

Weyl Fermions
This is a bit late, but what the hey....

Here is another triumph from condensed matter physics experiments. This is the first reported discovery of Weyl fermions, long predicted and now found in a tantalum arsenide compound.

Another solution of the Dirac equation – this time for massless particles – was derived in 1929 by the German mathematician Hermann Weyl. For some time it was thought that neutrinos were Weyl fermions, but now it looks almost certain that neutrinos have mass and are therefore not Weyl particles.

Now, a group headed by Zahid Hasan at Princeton University has found evidence that Weyl fermions exist as quasiparticles – collective excitations of electrons – in the semimetal tantalum arsenide (TaAs).

For those who are keeping score, this means that condensed matter systems have so far yielded detections of Majorana fermions and analogous signatures of magnetic monopoles.

And many people still think condensed matter physics is all "applied" and not "fundamental"?


by ZapperZ at July 29, 2015 01:24 PM

Quantum Diaries

Three words to sum up a conference

Impressive, exciting and full of new perspectives. That sums up my impressions as the European Physical Society (EPS) particle physics conference ends today in Vienna.

We were treated to an impressive amount of new data. Not only have the experiments at CERN's Large Hadron Collider (LHC) finalised most of their analyses of the full dataset collected before the shutdown in early 2013, but they have also already begun analysing the new data. This confirms that everything, from the detectors to the reconstruction software, is working perfectly after the vast programme of upgrades and repairs.


Closing dinner of the conference at the magnificent Schönbrunn Palace in Vienna (Photo: Gertrud Konrad)

All the tools needed for physics analyses – simulations, data acquisition systems, triggers, calibrations and analysis algorithms – are already producing high-quality results with the data from collisions at an energy of 13 TeV. The experiments are clearly in a position to pick up the analyses where they left off with the data collected at 8 TeV. Of course, there are no signs of new phenomena yet, but the LHCb, CMS and ATLAS experiments all have small anomalies that should be elucidated with the new LHC data.

The conference also showcased the variety of experiments in operation and the new results already starting to come in on dark matter and dark energy. New avenues are also being explored to broaden the searches, in the hope of discovering the 95% of the content of the Universe that is still unaccounted for. The experiments have made giant strides, and major breakthroughs are expected within just a few years. We can also hope for developments in the neutrino sector, a prolific research area but also one of the most puzzling and confusing for many years.

As Pierre Binetruy, a theorist working in cosmology, pointed out: “The simultaneous discovery of the Higgs boson and the confirmation of some of the characteristics of inflation (the period of extremely rapid expansion just after the Big Bang) have opened a new era in the joint understanding of cosmology and particle physics”. We are clearly on the eve of major breakthroughs and new discoveries in several fields. The next conference will without a doubt be an event not to be missed.

Pauline Gagnon

To be notified when new blog posts appear, follow me on Twitter: @GagnonPauline, sign up to this mailing list to receive an e-mail notification, or visit my website.

by Pauline Gagnon at July 29, 2015 01:18 PM

Quantum Diaries

Three words to summarize a conference

Impressive, exciting and eye-opening. This is how I would summarize the European Physics Society (EPS) particle physics conference that is ending today in Vienna.

The participants were treated to an impressive amount of new data. Not only had the Large Hadron Collider (LHC) experiments at CERN finalised most of their analyses on the entire set of data collected prior to the long shutdown of the last two years, but they had also already started analysing the new data. This confirms that everything, from hardware to software, is up and running after extensive upgrades, repairs and improvements.

All the tools for physics analysis – simulations, data acquisition systems, trigger menus, calibration and analysis algorithms – are already performing beautifully at the new collision energy of 13 TeV. The experiments are clearly in a position to take up the analyses where they had left them with the 8 TeV data. True, there are no signs of new physics anywhere yet, but LHCb, CMS and ATLAS all have little hints that will soon be elucidated with the new data.


Conference dinner in the beautiful Schönbrunn castle in Vienna (Credit: Gertrud Konrad)

A wealth of new experiments and results were also presented at the conference on dark matter and dark energy. New avenues are also explored to broaden the searches in the hope of accounting for the 95% of the content of the Universe that is still completely unknown. Giant steps have already been taken and major breakthroughs are expected in the very near future. Developments are also expected in the neutrino sector, a prolific research domain that has been most puzzling and confusing for many years.

As stated by Pierre Binetruy, a theorist working on cosmology: “The simultaneous discovery of the Higgs and confirmation of some of the basic features of inflation (the rapid expansion that followed the Big Bang) has opened a new era in the common understanding of cosmology and particle physics.” It is clear that we are on the eve of major advances and discoveries. The next conference is sure to be an event not to be missed.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline, or sign up on this mailing list to receive an e-mail notification. You can also visit my website.



by Pauline Gagnon at July 29, 2015 01:12 PM

Peter Coles - In the Dark

Mathematics at Sussex – The Video!

Here’s a nice little promotional video about the Department of Mathematics at the University of Sussex, featuring some of our lovely staff and students along with some nice views of the campus and the city of Brighton. Above all, I think it captures what a friendly place this is to work and study. Enjoy!

by telescoper at July 29, 2015 10:54 AM

Emily Lakdawalla - The Planetary Society Blog

In Pictures: West Virginia from Space
Jason Davis shares five images of his home state, West Virginia, taken by astronauts aboard the International Space Station.

July 29, 2015 09:04 AM

The n-Category Cafe

Internal Languages of Higher Categories

(guest post by Chris Kapulkin)

I recently posted the following preprint to the arXiv:

Btw if you’re not the kind of person who likes to read mathematical papers, I also gave a talk about the above mentioned work in Oxford, so you may prefer to watch it instead. (-:

I see this work as contributing to the idea/program of HoTT as the internal language of higher categories. In the last few weeks, there has been a lot of talk about it, prompted by somewhat provocative posts on Michael Harris’ blog.

My goal in this post is to survey the state of the art in the area, as I know it. In particular, I am not going to argue that internal languages are a solution to many of the problems of higher category theory or that they are not. Instead, I just want to explain the basic idea of internal languages and what we know about them as far as HoTT and higher category theory are concerned.

Disclaimer. The syntactic rules of dependent type theory look a lot like a multi-sorted essentially algebraic theory. If you think of sorts called types and terms, then you can think of rules like Σ-types and Π-types as algebraic operations defined on these sorts. Although the syntactic presentation of type theory does not quite give an algebraic theory (because of complexities such as variable binding), it is possible to formulate dependent type theory as an essentially algebraic theory. However, actually showing that these two presentations are equivalent has proven complicated, and it is the subject of ongoing work. Thus, for the purpose of this post, I will take dependent type theories to be defined in terms of contextual categories (a.k.a. C-systems), which are the models for this algebraic theory (thus leaving aside the Initiality Conjecture). Ultimately, we would certainly like to know that these statements hold for syntactically-presented type theories; but that is a very different question from the ∞-categorical aspects I will discuss here.

A final comment before we begin: this post derives greatly from my (many) conversations with Peter Lumsdaine. In particular, the two of us together went through the existing literature to understand precisely what’s known and what’s not. So big thanks to Peter for all his help!

Internal languages of categories

First off, what is the internal language? Without being too technical, let me say that it is typically understood as a correspondence:

(1) \left\{\text{theories}\right\} \overset{\mathrm{Cl}}{\underset{\mathrm{Lang}}{\rightleftarrows}} \left\{\text{categories}\right\}

On the right hand side of this correspondence, we have a category of categories with some extra structure and functors preserving this structure. On the left hand side, we have certain type theories, which are extensions of a fixed core one, and their interpretations (which are, roughly speaking, maps taking types to types and terms to terms, preserving typing judgements and the constructors of the core theory). Notice that the core theory is the initial object in the category of theories in the above picture.

The functor Cl takes a theory to its initial model, built directly out of the syntax of the theory: the objects are contexts and the morphisms are (sequences of) terms of the theory (this category is often called the classifying category, hence the notation Cl). The functor Lang takes a category to the theory whose types are generated in a suitable sense by the objects and whose terms are generated by the morphisms of the category. In particular, constructing Lang(C) for some category C from the right hand side is the same as establishing a model of the core theory in C.

Finally, these functors are supposed to form some kind of an adjoint equivalence (with Cl ⊣ Lang), be it an equivalence of categories, a biequivalence, or an ∞-equivalence, depending on whether the two sides of the correspondence are categories, 2-categories, or ∞-categories.

The cleanest example of this phenomenon is the correspondence between λ-theories (that is, theories in the simply typed λ-calculus) and cartesian closed categories:

(2) \left\{\text{theories in }\lambda\text{-calculus}\right\} \overset{\mathrm{Cl}}{\underset{\mathrm{Lang}}{\rightleftarrows}} \left\{\text{cartesian closed categories}\right\}

which you can read about in Part I of the standard text by Jim Lambek and Phil Scott.
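The cartesian closed structure behind this correspondence can be seen in any language with higher-order functions: products and function types play the roles of categorical products and exponentials, and currying witnesses the hom-set bijection Hom(A × B, C) ≅ Hom(A, C^B). A minimal illustration (my own analogy, not a formalization of the Lambek–Scott construction):

```python
# The hom-set bijection Hom(A x B, C) ~ Hom(A, C^B) that makes a
# category cartesian closed, written as plain higher-order functions.

def curry(f):
    """Send f : A x B -> C to curry(f) : A -> (B -> C)."""
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    """The inverse direction of the bijection."""
    return lambda a, b: g(a)(b)

# A sample morphism A x B -> C:
add = lambda a, b: a + b

# The two directions compose to the identity (checked on sample inputs):
assert curry(add)(2)(3) == add(2, 3)
assert uncurry(curry(add))(2, 3) == add(2, 3)
```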

Extensional Martin-Löf Type Theory

Unfortunately, as soon as we move to dependent type theory, things get more complicated. Starting with the work of Robert Seely, it has become clear that one should expect Extensional Martin-Löf Type Theory (with dependent products, dependent sums, and extensional Identity types) to be the internal language of locally cartesian closed categories:

(3) \left\{\text{Extensional Martin-Löf type theories}\right\} \overset{\mathrm{Cl}}{\underset{\mathrm{Lang}}{\rightleftarrows}} \left\{\text{locally cartesian closed categories}\right\}

Seely, however, overlooked an important coherence problem: since types are now allowed to depend on other types, we need to make coherent choices of pullbacks. The reason is that, type-theoretically, substitution (into a type) is a strictly functorial operation, whereas its categorical counterpart, pullback, without making any choices, is functorial only up to isomorphism. (If you find the last sentence slightly too brief, I recommend Peter Lumsdaine's talk explaining the problem and known solutions.) The fix was later found by Martin Hofmann; but the resulting pair of functors does not form an equivalence of categories, only a biequivalence of 2-categories, as was proven in 2011 by Pierre Clairambault and Peter Dybjer.

Intensional Martin-Löf Type Theory and locally cartesian closed ∞-categories

Next let us consider Intensional Martin-Löf Type Theory with dependent products and sums, and the identity types; additionally, we will assume the (definitional) eta rule for dependent functions and functional extensionality.

Such a type theory has been, at least informally, conjectured to be the internal language of locally cartesian closed ∞-categories, and thus we expect the following correspondence:

(4) \left\{\text{Intensional Martin-Löf type theories}\right\} \overset{\mathrm{Cl}}{\underset{\mathrm{Lang}}{\rightleftarrows}} \left\{\text{locally cartesian closed }\infty\text{-categories}\right\}

where the functors Cl and Lang should form an adjunction (an adjoint (∞,1)-equivalence? or maybe even an (∞,2)-equivalence?) between the type-theoretic and categorical sides.

Before I summarize the state of the art, let me briefly describe what the two functors ought to be. Starting with a type theory, one can take its underlying category of contexts and regard it as a category with weak equivalences (where the weak equivalences are the syntactically-defined equivalences), to which one can then apply the simplicial localization. This gives a well-defined functor from type theories to ∞-categories. Of course, it is a priori not clear what the (essential) image of such a functor would be.

Conversely, given a locally cartesian closed ∞-category C, one can look for a category with weak equivalences (called a presentation of C) whose simplicial localization is C, and then try to establish the structure of a categorical model of type theory on such a category.

What do we know? The verification that Cl takes values in locally cartesian closed ∞-categories can be found in my paper. The other functor is known only partially. More precisely, if C is a locally presentable locally cartesian closed ∞-category, then one can construct Lang(C). As mentioned above, the construction is done in two steps. The first step, presenting such an ∞-category by a type-theoretic model category (which is, in particular, a category with weak equivalences), was given by Denis-Charles Cisinski and Mike Shulman in these blog comments, and independently in Theorem 7.10 of this paper by David Gepner and Joachim Kock. The second step (the structure of a model of type theory) is given by an example in the local universe model paper by Peter Lumsdaine and Michael Warren.

What don’t we know? First off, how to define <semantics>Lang(C)<annotation encoding="application/x-tex">\mathrm{Lang}(C)</annotation></semantics> when <semantics>C<annotation encoding="application/x-tex">C</annotation></semantics> is not locally presentable and whether the existing definition of <semantics>Lang(C)<annotation encoding="application/x-tex">\mathrm{Lang}(C)</annotation></semantics> for locally presentable quasicategories is even functorial? We also need to understand what the homotopy theory of type theories is (if we’re hoping for an equivalence of homotopy theories, we need to understand the homotopy theory of the left hand side!)? In particular, what are the weak equivalences of type theories? Next in line: what is the relation between <semantics>Cl<annotation encoding="application/x-tex">\mathrm{Cl}</annotation></semantics> and <semantics>Lang<annotation encoding="application/x-tex">\mathrm{Lang}</annotation></semantics>? Are they adjoint and if so, can we hope that they will yield an equivalence of the corresponding <semantics><annotation encoding="application/x-tex">\infty</annotation></semantics>-categories?

Univalence, Higher Inductive Types, and (elementary) ∞-toposes

Probably the most interesting part of this program is the connection between Homotopy Type Theory and higher topos theory (HoTT vs HTT?). Conjecturally, we should have a correspondence:

(5) \left\{\text{Homotopy Type Theories}\right\} \overset{\mathrm{Cl}}{\underset{\mathrm{Lang}}{\rightleftarrows}} \left\{\text{elementary }\infty\text{-toposes}\right\}

This is however not a well-defined problem as it depends on one’s answer to the following two questions:

  1. What is HoTT? Obviously, it should be a system that extends Intensional Martin-Löf Type Theory and includes at least one, but possibly infinitely many, univalent universes and some Higher Inductive Types; but what exactly may largely depend on the answer to the next question…
  2. What is an elementary ∞-topos? While there exist some proposals (for example, the one presented by André Joyal in 2014), this question also awaits a definite answer. By analogy with the 1-categorical case, every Grothendieck ∞-topos should be an elementary ∞-topos, but not the other way round. Moreover, the axioms of an elementary ∞-topos should imply (maybe even explicitly include) that it is locally cartesian closed and has finite colimits, but should not imply local presentability.

What do we know? As of today, only partial constructions of the functor Lang exist. More precisely, there are:

  • Theorem 6.4 of Mike Shulman's paper contains the construction of Lang(C) when C is a Grothendieck ∞-topos that admits a presentation as simplicial presheaves on an elegant Reedy category, and HoTT is taken to be the extension of Intensional Martin-Löf Type Theory with as many univalent universes à la Tarski as there are inaccessible cardinals greater than the cardinality of the site.
  • Remark 1.1 of the same paper can be interpreted as saying that if one instead considers HoTT with weak (univalent) universes, then the construction of Lang(C) works for an arbitrary ∞-topos C.
  • A forthcoming paper by Peter Lumsdaine and Mike Shulman will supplement the above two points: for some reasonable range of higher toposes, the resulting type theory Lang(C) will also be shown to possess certain Higher Inductive Types (e.g. homotopy pushouts and truncations), although the details remain to be seen.

What don’t we know? It still remains to define <semantics>Lang(C)<annotation encoding="application/x-tex">\mathrm{Lang}(C)</annotation></semantics> outside of the presentable setting, as well as give the construction of <semantics>Cl<annotation encoding="application/x-tex">\mathrm{Cl}</annotation></semantics> in this case (or rather, check that the obvious functor from type theories to <semantics><annotation encoding="application/x-tex">\infty</annotation></semantics>-categories takes values in higher toposes). The formal relation between these functors (which are yet to be defined) remains wide open.

by shulman at July 29, 2015 03:42 AM

Clifford V. Johnson - Asymptotia

Existence Proof
The picture is evidence that bug-free Skype seminars are possible! Well, I suppose it only captured an instant, and not the full hour's worth of two separate bug-free talks, each with their own Q&A, but that is what happened. The back story is that two of our invited speakers, Lara Anderson and James Gray, had flight delays that prevented them from arriving in Santiago on time, and so I spent a bit of time (at the suggestion of my co-organizer Wati Taylor, who also could not make the trip) figuring out how we could save the schedule by having them give Skype seminars. (We had already had to make a replacement elsewhere in the schedule, since another of our speakers was ill and had to cancel his trip.) Two Skype talks seemed a long shot back on Sunday when Wati had the idea, but after some local legwork on my part it gradually became more likely, and by lunchtime today I had the local staff fully on board with the idea and we tested it all and it worked! It helps that you can send the whole of your computer screen as the video feed, and so the slides came out nicely (I'd originally planned a more complicated arrangement where we'd have the [...] Click to continue reading this post

by Clifford at July 29, 2015 01:49 AM

July 28, 2015

Christian P. Robert - xi'an's og

Bayesian model averaging in astrophysics

[A 2013 post that somewhat got lost in a pile of postponed entries and referee’s reports…]

In this review paper, now published in Statistical Analysis and Data Mining 6, 3 (2013), David Parkinson and Andrew R. Liddle go over the (Bayesian) model selection and model averaging perspectives. Their argument in favour of model averaging is that model selection via Bayes factors may simply be too inconclusive to favour one model and only one model. While this is a correct perspective, this is about it for the theoretical background provided therein. The authors then move to the computational aspects and the first difficulty is their approximation (6) to the evidence

P(D|M) = E \approx \frac{1}{n} \sum_{i=1}^n L(\theta_i)Pr(\theta_i)\, ,

where they average the likelihood × prior terms over simulations from the posterior, which does not provide a valid (either unbiased or converging) approximation. They surprisingly fail to account for the huge statistical literature on evidence and Bayes factor approximation, including Chen, Shao and Ibrahim (2000), which covers earlier developments like bridge sampling (Gelman and Meng, 1998).

As is often the case in astrophysics, at least since 2007, the authors' description of nested sampling drifts away from perceiving it as a regular Monte Carlo technique, with the same n^{1/2} convergence speed as other Monte Carlo techniques and the same dependence on dimension. It is certainly not the only simulation method where the produced “samples, as well as contributing to the evidence integral, can also be used as posterior samples.” The authors then move to “population Monte Carlo [which] is an adaptive form of importance sampling designed to give a good estimate of the evidence”, a particularly restrictive description of a generic adaptive importance sampling method (Cappé et al., 2004). The approximation of the evidence (9) based on PMC also seems invalid:

E \approx \frac{1}{n} \sum_{i=1}^n \dfrac{L(\theta_i)}{q(\theta_i)}\, ,

is missing the prior in the numerator. (The switch from θ in Section 3.1 to X in Section 3.4 is confusing.) Further, the sentence “PMC gives an unbiased estimator of the evidence in a very small number of such iterations” is misleading in that PMC is unbiased at each iteration. Reversible jump is not described at all (the supposedly higher efficiency of this algorithm is far from guaranteed when facing a small number of models, which is the case here, since the moves between models are governed by a random walk and the acceptance probabilities can be quite low).
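To make the criticism concrete, here is a toy example (my own illustration; the Gaussian model, sample size and choice of proposal are assumptions, not from the paper) on a conjugate model where the evidence is known in closed form. A valid importance-sampling estimator keeps the prior in the numerator, while averaging likelihood × prior over posterior draws, as in (6), converges to a different quantity:

```python
import math
import random

random.seed(0)

# Toy conjugate model: theta ~ N(0, 1), x | theta ~ N(theta, 1), one observation.
x = 1.0

def prior(t):
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def likelihood(t):
    return math.exp(-(x - t) ** 2 / 2) / math.sqrt(2 * math.pi)

# Closed-form evidence: marginally x ~ N(0, 2).
E_true = math.exp(-x * x / 4) / math.sqrt(4 * math.pi)

n = 200_000

# Valid estimator: importance sampling with the prior in the numerator;
# taking q = prior, the weights L(theta)*pi(theta)/q(theta) reduce to L(theta).
E_is = sum(likelihood(random.gauss(0, 1)) for _ in range(n)) / n

# The criticised estimator (6): averaging likelihood * prior over draws from
# the POSTERIOR (here N(x/2, 1/2)); it converges, but to the wrong quantity.
E_bad = 0.0
for _ in range(n):
    t = random.gauss(x / 2, math.sqrt(0.5))
    E_bad += likelihood(t) * prior(t)
E_bad /= n

print(f"true evidence : {E_true:.4f}")
print(f"IS estimate   : {E_is:.4f}")
print(f"estimator (6) : {E_bad:.4f}")
```

In this model the importance-sampling estimate matches the closed-form evidence, while the posterior-averaged version settles well below it.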

The second, quite unrelated, part of the paper covers published applications in astrophysics. Unrelated because the three different methods exposed in the first part are not compared on the same dataset. Model averaging is obviously based on a computational device that explores the posteriors of the different models under comparison (or, rather, averaging); however, no recommendation is found in the paper on how to implement the averaging efficiently, or anything of the kind. In conclusion, I thus find this review somewhat anticlimactic.

Filed under: Books, Statistics, University life Tagged: adaptive importance sampling, Astrophysics, Bayes factor, bridge sampling, computational statistics, evidence, likelihood, model averaging, Monte Carlo technique, population Monte Carlo, statistical analysis and data mining

by xi'an at July 28, 2015 10:15 PM

Symmetrybreaking - Fermilab/SLAC

Is this the only universe?

Our universe could be just one small piece of a bubbling multiverse.

Human history has been a journey toward insignificance.

As we’ve gained more knowledge, we’ve had our planet downgraded from the center of the universe to a chunk of rock orbiting an average star in a galaxy that is one among billions.

So it only makes sense that many physicists now believe that even our universe might be just a small piece of a greater whole. In fact, there may be infinitely many universes, bubbling into existence and growing exponentially. It’s a theory known as the multiverse.

One of the best pieces of evidence for the multiverse was first discovered in 1998, when physicists realized that the universe was expanding at ever increasing speed. They dubbed the force behind this acceleration dark energy. The value of its energy density, also known as the cosmological constant, is bizarrely tiny: 120 orders of magnitude smaller than theory says it should be.

For decades, physicists have sought an explanation for this disparity. The best one they’ve come up with so far, says Yasunori Nomura, a theoretical physicist at the University of California, Berkeley, is that it’s only small in our universe. There may be other universes where the number takes a different value, and it is only here that the rate of expansion is just right to form galaxies and stars and planets where people like us can observe it. “Only if this vacuum energy stayed to a very special value will we exist,” Nomura says. “There are no good other theories to understand why we observe this specific value.”

For further evidence of a multiverse, just look to string theory, which posits that the fundamental laws of physics have their own phases, just like matter can exist as a solid, liquid or gas. If that’s correct, there should be other universes where the laws are in different phases from our own—which would affect seemingly fundamental values that we observe here in our universe, like the cosmological constant. “In that situation you’ll have a patchwork of regions, some in this phase, some in others,” says Matthew Kleban, a theoretical physicist at New York University.

These regions could take the form of bubbles, with new universes popping into existence all the time. One of these bubbles could collide with our own, leaving traces that, if discovered, would prove other universes are out there. We haven't seen one of these collisions yet, but physicists are hopeful that we might in the not so distant future.

If we can’t find evidence of a collision, Kleban says, it may be possible to experimentally induce a phase change—an ultra-high-energy version of coaxing water into vapor by boiling it on the stove. You could effectively prove our universe is not the only one if you could produce phase-transitioned energy, though you would run the risk of it expanding out of control and destroying the Earth. “If those phases do exist—if they can be brought into being by some kind of experiment—then they certainly exist somewhere in the universe,” Kleban says.

No one is yet trying to do this.

There might be a (relatively) simpler way. Einstein’s general theory of relativity implies that our universe may have a “shape.” It could be either positively curved, like a sphere, or negatively curved, like a saddle. A negatively curved universe would be strong evidence of a multiverse, Nomura says. And a positively curved universe would show that there’s something wrong with our current theory of the multiverse, while not necessarily proving there’s only one. (Proving that is a next-to-impossible task. If there are other universes out there that don’t interact with ours in any sense, we can’t prove whether they exist.)

In recent years, physicists have discovered that the universe appears almost entirely flat. But there’s still a possibility that it’s slightly curved in one direction or the other, and Nomura predicts that within the next few decades, measurements of the universe’s shape could be precise enough to detect a slight curvature. That would give physicists new evidence about the nature of the multiverse. “In fact, this evidence will be reasonably strong since we do not know any other theory which may naturally lead to a nonzero curvature at a level observable in the universe,” Nomura says.

If the curvature turned out to be positive, theorists would face some very difficult questions. They would still be left without an explanation for why the expansion rate of the universe is what it is. The phases within string theory would also need re-examining. “We will face difficult problems,” Nomura says. “Our theory of dark energy is gone if it’s the wrong curvature.”

But with the right curvature, a curved universe could reframe how physicists look at values that, at present, appear to be fundamental. If there were different universes with different phases of laws, we might not need to seek fundamental explanations for some of the properties our universe exhibits.

And it would, of course, mean we are tinier still than we ever imagined. “It’s like another step in this kind of existential crisis,” Kleban says. “It would have a huge impact on people’s imaginations.”



by Laura Dattaro at July 28, 2015 08:03 PM

astrobites - astro-ph reader's digest

Gone Without a Bang

Title: Gone without a bang: An archival HST survey for disappearing massive stars
Authors: Thomas Reynolds, Morgan Fraser, Gerard Gilmore
First author’s institution: University of Cambridge
Status: Submitted to MNRAS

It’s well known that stars with a mass about 10 times that of the Sun will explode in a supernova and leave behind a neutron star. We also know that colossal stars, those more than 40 times as massive as our Sun, will also explode as supernovae and leave behind black holes.

So what happens to stars in between? You might guess that they will also explode in a spectacular supernova, following the pattern of their siblings. As it turns out, many of these stars can die by collapsing into a black hole…without their characteristic supernova. How does this work?

Core-collapse supernovae usually explode when the inner iron core of a massive star has reached its Chandrasekhar mass and cannot support itself against its own gravity. The core then collapses until it reaches the density of an atomic nucleus – or about 5 billion tons for a teaspoon of matter. At this point, the infalling material rebounds outwards, producing a shockwave that blasts the outer layers of the star with the help of neutrinos, leading to a supernova. Things can go awry in this last step. If insufficient energy is supplied to the shock, the shockwave may stall before leaving the star. This lapse allows the black hole formed by the inner core of the star to simply gobble up the star before an explosion can occur. And–poof!–just like that, the star is gone.

Well, sorta. It’s predicted that a failed supernova would produce a very dim (about 10,000 times fainter than a supernova), red and long-lived transient, but we have never seen such an event! This is especially odd because these events shouldn’t be that rare; about 1/3 of all core-collapse events may actually result in a failed supernova. The evidence for these events is the fact that red supergiants more massive than about 15 times the mass of our Sun should end their lives as supernovae, but we haven’t found many that have done so. This is known as the “Red Supergiant Problem”. The question then seems obvious: could these red supergiants be disappearing into the night sky as failed supernovae?


Fig. 1: HST image of the Antennae galaxies (NGC 4038/4039) overlaid on an image from a ground-based CTIO telescope. The image shows three snapshots taken by HST in 1996, 2004 and 2010. [Figure 2 of Reynolds et al. 2015.]

The authors of today’s paper explore this question by looking for the culprits themselves. They search for stars which have disappeared using archival data from the Hubble Space Telescope (HST). HST has been orbiting the Earth since 1990, so it has the unique advantage of a large store of high-quality images of galaxies, such as the Antennae galaxies in Figure 1. The authors look for galaxies which have been observed multiple times by HST over the course of its life, searching for any disappearing stars. In theory, you would only need three HST images in sequence to do this: two images before the failed supernova to ensure that the star was not extremely variable, and a third image to capture its disappearance. By narrowing down the possible galaxies using this criterion, as well as some distance and galaxy-type cuts, the authors are able to find six potential failed supernovae in fifteen galaxies.
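The three-epoch selection logic amounts to a simple filter over multi-epoch photometry. A toy illustration (the magnitudes, variability tolerance and detection limit here are hypothetical, not the authors' actual pipeline):

```python
# Toy three-epoch filter for "disappearing star" candidates:
# the star must be steady across the first two epochs (not strongly variable)
# and then drop below the detection limit in the final epoch.
def is_candidate(mags, var_tol=0.3, limit=26.0):
    """mags: magnitudes at three epochs; None = not detected at that epoch."""
    m1, m2, m3 = mags
    if m1 is None or m2 is None:
        return False                    # need two prior detections
    if abs(m1 - m2) > var_tol:
        return False                    # too variable to trust a "disappearance"
    return m3 is None or m3 > limit     # gone, or now fainter than the limit

stars = {
    "steady_then_gone": (24.1, 24.2, None),   # candidate failed supernova
    "variable":         (23.0, 24.5, None),   # likely an ordinary variable star
    "still_there":      (24.1, 24.0, 24.2),   # nothing happened
}
candidates = [name for name, mags in stars.items() if is_candidate(mags)]
```

The variability cut in the first two epochs is what screens out impostors like the RCB stars discussed below, which dim dramatically without dying.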

Of these six candidates, two are actually bright, variable stars, and three others are far too dim to be red supergiants. This leaves a single potential failed supernova! The lightcurve of this mysterious star is shown in Figure 2.

The authors can’t be entirely sure that this is truly a failed supernova based on the HST data alone; other transients or variable objects can mimic the predicted lightcurve of a failed supernova. One notable possibility is the class of R Coronae Borealis (RCB) stars, which are evolved stars lacking hydrogen. RCB stars can dim by many orders of magnitude, likely due to intense clouds of carbon dust in their atmospheres. If such a star dims during the last few observations, it might seem to have disappeared altogether. Future data on this candidate would certainly help to unveil its true identity.


Fig 2: Light curve of a possible failed supernova in NGC 3021. The candidate clearly dims during the last two observations, but it is uncertain whether it has disappeared altogether. [Figure 6 of Reynolds et al. 2015]

This is now the second survey to search for failed supernovae, following a 2014 survey by Gerke et al. which used the ground-based Large Binocular Telescope. Each of these surveys turned up a single candidate, but, unfortunately, neither had sufficient data to build a complete transient lightcurve. In this regard, the mystery surrounding failed supernovae remains unsolved, leaving an extremely exciting open question about the nature of black hole creation when a supernova fails to explode, and about the solution to the Red Supergiant Problem. Lastly, it’s worth emphasizing that this study covered only a small fraction of the thousands of galaxies publicly available in the Hubble archive – there is nothing stopping you, the reader, from trying to find a failed supernova for yourself!


by Ashley Villar at July 28, 2015 06:55 PM

Quantum Diaries

The bumpy road to discoveries

The European Physical Society (EPS) particle physics conference continues in Vienna, with the parallel sessions having given way to plenary sessions. The speakers now face the hard task of summarizing the hundreds of results presented so far at the conference and drawing the big picture from them.

Over the past two years, the Large Hadron Collider (LHC) underwent major upgrades. Experimentalists took the opportunity to examine the full dataset accumulated before the shutdown from every angle (and then some!). With final calibrations and improved algorithms, nearly all analyses now include the complete data collected at an energy of 8 TeV. In most cases, these months of hard work by hundreds of people produced only a slight improvement in the precision of the results. These recent results, though rock solid, have unfortunately revealed nothing new.

That is the bad news. The good news: four times more data is expected in the coming year, and at higher energy, which will make new phenomena accessible.

Here is one example. The CMS and ATLAS experiments are searching, among other things, for heavy but still hypothetical particles that would decay into two known bosons, namely photons, or Z, W or Higgs bosons. The last three bosons can in turn decay into jets of light particles made of quarks.

A particle decay is like making change for a large coin: the initial coin does not contain the smaller coins, but it can be exchanged for coins of equal value, as in the diagram below. The four 50-centime coins could come from one two-euro coin or from two one-euro coins. Likewise, in our detectors, when we find four jets of particles, they can come from two independently produced bosons (in the example above, two Z bosons) or from four quarks produced directly. All of this constitutes the background, while the signal in this case corresponds to the new boson, the one that decayed into two bosons.

A particle decay is like making change for a coin.

A coin has only one value, but a particle carries both mass and energy. When we exchange a large coin for change, the initial value is conserved. With particles, we must take into account the mass and energy of all the decay products to calculate the combined mass of the original particle. One last detail: if the decaying particle is much heavier than the two bosons it produces, the jets coming from those bosons will be barely separated. They will travel side by side. We will then observe not four jets, but only two broader jets.

If these two broad jets come from two independently produced Z bosons, the value of their combined mass will be random, as if we were adding up the value of the change at the bottom of our pockets. If thousands of people plotted the value of their small change on a graph, we would get a distribution like the blue line below. Most people carry only a little change, but some lug around a small fortune in coins.

An excess of events with a mass of 2 TeV found by ATLAS

The horizontal axis gives the combined mass of the two jets for each event collected by the ATLAS Collaboration that contained two of them. The vertical axis shows how many events were found at each mass value. The blue line shows the background contributions, and the other coloured lines correspond to various theoretical hypotheses. The black dots represent the real data and would be distributed like the blue line in the absence of new particles.

A small bump is visible around a mass of 2 TeV: there are more events in the data than expected from known sources. But there is always some blur in any measurement because of experimental errors. If the same measurement were repeated a thousand times, at least one of them would show a similar deviation. It is therefore far too early to say that this could be the first sign of a new particle, such as a hypothetical W′ boson. But it will be something to watch in the new data.

Intriguing events found by CMS in the new (left) and old (right) data

The CMS Collaboration also has a few intriguing events, such as the one above on the left, found in the brand-new data collected since the LHC restarted at 13 TeV. Its two jets have a combined mass of about 5.0 TeV. A similar event with a combined mass of 5.15 TeV (right) was also found in the data accumulated at 8 TeV. There is 500 times less data at 13 TeV than at 8 TeV, but the experiments can already extend the analyses carried out at 8 TeV.

It is much too early to say anything. It is a bit like peering into the distance on a foggy day at dusk, trying to see whether the train is coming. Is the faint shape glimpsed far away real, or just an illusion? No one knows; we must wait for the train to come closer. But not for long, since the LHC is already running. The CMS and ATLAS experiments should soon have enough new data to settle the question. And then, hold on to your hats, things are going to get exciting!

Pauline Gagnon

To be notified when new posts appear, follow me on Twitter: @GagnonPauline, add your name to this mailing list to receive an e-mail notification, or visit my website.

by Pauline Gagnon at July 28, 2015 01:32 PM

Quantum Diaries

The bumpy road to discoveries

Yesterday, at the European Physics Society (EPS) Particle Physics conference in Vienna, we moved from parallel sessions to plenary sessions. The task of the speakers is now to summarize the hundreds of results presented so far at the conference and draw the big picture.

For the past two years, the Large Hadron Collider underwent major upgrade work. Experimentalists used this downtime to look at all the collected data from all possible angles (and a few more!). With final calibrations and improved algorithms everywhere, nearly all analyses now include all data collected at 8 TeV. In most cases, months of hard work by hundreds of people only slightly improved the resolution. These rock-solid results have unfortunately not revealed new discoveries.

That’s the bad news. The good news is that four times more data is expected in the coming year at higher energy, making new phenomena accessible.

Here is one example. Both the CMS and ATLAS experiments are looking for heavy hypothetical particles that would decay into two of the known bosons, namely photons, Z, W or Higgs bosons. In turn, the last three bosons could decay into jets of light particles made of quarks.

A particle decay is very similar to making change for a large coin: the initial coin does not contain the smaller coins but can be exchanged for smaller coins of equal value, as in the diagram below. The four 50-centime coins could come either from one two-euro coin or from two one-euro coins. Likewise, in our detectors, when we find four jets of particles, they can come from two independently produced Z, W or H bosons, or simply from four quarks produced directly. All this is called the background, while the signal in this case would be a new boson that first decayed into two bosons.


A particle decay is like making small change for a large coin.

A coin only has one value, but a particle carries both mass and energy. When one makes change for a large coin, its total value is conserved. With particles, we must take into account the mass and the energy of all the decay products to calculate the combined mass of the original particle. One last detail: when the initial decaying particle is much heavier than the two bosons it produces, the jets coming from these bosons will hardly be separated. They will fly alongside each other. In the end, we will not see four jets but rather two broader jets.
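The combined mass in question is the invariant mass: sum the energies and momentum vectors of the decay products and take m = √((ΣE)² − |Σp|²), in units where c = 1. A minimal sketch with two made-up jet four-momenta (illustrative numbers only, not real detector data):

```python
import math

def invariant_mass(four_momenta):
    """Combined mass of a set of (E, px, py, pz) four-momenta, with c = 1."""
    E = sum(p[0] for p in four_momenta)
    px = sum(p[1] for p in four_momenta)
    py = sum(p[2] for p in four_momenta)
    pz = sum(p[3] for p in four_momenta)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Two back-to-back (nearly massless) jets of 1 TeV each: their combined
# invariant mass is 2 TeV, even though each jet alone is light.
jets = [(1000.0, 1000.0, 0.0, 0.0),
        (1000.0, -1000.0, 0.0, 0.0)]
m = invariant_mass(jets)  # in GeV
```

Because the opposite momenta cancel, all the energy goes into the combined mass; this is the quantity plotted on the horizontal axis of the ATLAS figure below.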

If the two broad jets come from two unrelated Z bosons, their total combined mass will be random, just as if we were to sum up the values of the small coins we carry in our pocket. If thousands of people told us the value of their small change, we would get a distribution like the one shown below by the blue line. Most people have only a little change, but some carry a small fortune in coins.

The horizontal axis gives the combined mass value of each event containing two broad jets found by the ATLAS Collaboration. The vertical axis shows (on a logarithmic scale) how many events were found with a particular value. The blue line shows what is expected from various backgrounds, and the other colourful lines correspond to a few hypotheses. The black dots represent the real data and would look similar to the blue line if nothing new were there.

A small bump shows up around a mass value of 2 TeV; that is, more events are seen in the data than predicted. The excess is 3.4 σ. Since there is always a spread in measured values due to experimental errors, such a difference would occur at least once if we were to measure this quantity 1000 times. Hence, it is too early to say this could be the first sign of something new, like a hypothetical boson denoted W′.


Intriguing events found by CMS with a mass around 5 TeV in the new (left) and old (right) data.

The CMS Collaboration also showed a few intriguing events. One was found in the newest data collected at 13 TeV after the restart of the LHC; its two jets have a combined mass of 5 TeV (left figure). The second event comes from the data collected earlier at 8 TeV and has a mass of 5.15 TeV. With 500 times less data at 13 TeV than at 8 TeV, the experiments are already extending the analyses started with the 8 TeV data.

At this stage, it is way too early to tell. This is similar to looking in the distance on a foggy day, at dusk, trying to see if the train is coming. A faint shape is visible but is this real or just a mirage? No one knows, we must wait for the train to come closer. But not for long since the LHC is on track. Both experiments should soon have enough new data to be more definitive. And then, hold on to your hat, it’s going to get really exciting.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline, or sign up on this mailing list to receive an e-mail notification. You can also visit my website.


by Pauline Gagnon at July 28, 2015 01:21 PM

Lubos Motl - string vacua and pheno

Ask a question to Stephen Hawking
Stephen Hawking believes that artificial intelligence is dangerous: those robots may revolt and become our landlords. He co-authored a new letter with Elon Musk (text) demanding that all man-made machines be at least as stupid as a Tesla car, to avoid “arms races” with the robots. Hawking himself has become much more powerful since his biological underpinnings were enhanced by computer technology.

He must believe that he has become much more effective in answering people's questions. That's why he agreed to answer questions posted at
Science Ama Series: I am Stephen Hawking, theoretical physicist. Join me to talk about making the future of technology more human, reddit. AMA!
So far, there are over 8,000 comments over there.

If you assume that there is approximately one question in each comment and the Sun will go red giant in less than 8 billion years, you may conclude that Stephen Hawking will have slightly less than one million years to address the average comment. Whether it's enough is yet to be seen.
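The back-of-the-envelope arithmetic behind that quip, using the figures quoted above:

```python
comments = 8_000                        # "over 8,000", so each comment gets a bit less
years_until_red_giant = 8_000_000_000   # "less than 8 billion years"
years_per_comment = years_until_red_giant / comments
```

With more than 8,000 comments and less than 8 billion years, the budget per comment indeed comes out slightly under one million years.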

A somewhat more serious topic than the destruction of mankind by malicious, excessively clever robots (sorry, I am still much more afraid of ordinary machines controlled by stupid people!): Alon E. Faraggi and Marco Guzzi have a new paper providing free fermionic heterotic explanations of the \(2\TeV\) excess observed at ATLAS.

It could be due to a new \(Z'\) and/or \(W'\) boson, they say, but they argue that the possible new bosons of this kind offered by their seemingly "special" class of compactifications, the free fermionic heterotic ones, may be divided into seven basic categories:
  1. \(U(1)_{Z'}\in SO(10)\) of the grand unification
  2. \(U(1)_{Z'}\notin SO(10)\) of the grand unification but family-universal
  3. non-universal \(U(1)_{Z'}\)
  4. hidden sector \(U(1)\) symmetries with kinetic mixing
  5. left-right-symmetric models
  6. Pati-Salam models
  7. leptophobic and custodial symmetries
It's pretty cool that the free fermionic heterotic models are capable of producing all these ideas, depending on the choice of the periodicities of the world-sheet fermions and the corresponding GSO projections. All these models are consistent but, depending on technical details, they seem to cover almost all the good ideas considered by phenomenologists.

By the way, you may always watch the plenary talks at EPS-HEP 2015 in Vienna (live).


I have looked at the Hawking-Musk Luddite letter a bit more carefully. They're not the only signatories. The letter is against "autonomous weapons". There are thousands or tens of thousands of signatures including those of Steve Wozniak, Lisa Randall, Frank Wilczek, Max Tegmark, Noam Chomsky, Barbara Grosz (the ex-chairbitch of an anti-Summers femi-Nazi task force) and so on and so on.

Autonomous weapons hysteria must be some new global warming-like hysteria that I must have almost completely missed!

These autonomous weapons are surely dangerous for the targets – which may include innocent targets if something goes wrong – but they're still weapons that can make certain operations more efficient and that are likely to give the civilized parties of various conflicts an edge – because they have a technological edge. Autonomous weapons could be helpful to fight the Islamic State and similar foes which is why I think that they may be a good idea. (And indeed, I can't get rid of the worry that those leftists oppose autonomous weapons exactly because they could be threatening for forces such as the Islamic State – organizations that these leftists semi-secretly root for.)

America builds an army of robots for the future

One must be careful about what one produces and make sure it can't be abused by the wrong overlords, but otherwise it seems better to me when an autonomous robot, and not a 20-year-old soldier, is sacrificed in a fight with bloody savages. So my letter to the robotics experts is: don't listen to these Luddites and keep on doing your job.

by Luboš Motl ( at July 28, 2015 12:09 PM

July 27, 2015

astrobites - astro-ph reader's digest

Shifting the Pillars – Constraining Lithium Production in Big Bang Nucleosynthesis

Title: Constraining Big Bang lithium production with recent solar neutrino data
Authors: Marcell P. Takacs, Daniel Bemmerer, Tamas Szucs, Kai Zuber
First Author’s Institution: Helmholtz-Zentrum, Dresden-Rossendorf
Notes: in press at Phys. Rev. D

Tom McClintock

Guest author Tom McClintock

Today’s post was written by Tom McClintock, a third-year graduate student in Physics at the University of Arizona. His research interests include cosmology and large-scale structure. Tom did his undergrad at Amherst College and an MSc in high-performance computing at the University of Edinburgh. In addition to his research, he is in a long-term relationship with ultimate frisbee and Dungeons & Dragons.

Among the tests passed by the standard cosmological model, Big Bang Nucleosynthesis (BBN) may be the most rigorous, in that its predictions of light-element abundances are consistent with observations over ten orders of magnitude. All of this production occurs within the first fifteen minutes(!) following the Big Bang, and ceases once the weak reactions producing neutrons fall out of equilibrium. However, for over thirty years there has been tension between theoretical BBN calculations of the lithium-7 abundance and measurements of metal-poor stars, known as the Cosmic Lithium Problem (which astrobites has discussed here and here). The numerical simulations of ΛCDM predict an abundance that is over three times that found on the surface of Population II stars. Something has to give.

The authors of today’s paper investigate a nuclear physics solution: the reaction 3He + 4He → γ + 7Be, shortened to 3He(a,g)7Be. Production of beryllium-7 is important because 7Be eventually decays to 7Li through electron capture. Nuclear reactions are described by reaction rates, which in turn are described by interaction cross sections, which can be measured in experiments. In the case of 3He(a,g)7Be, any change in the measured cross section affects the theoretical BBN 7Li yield, and thus the compatibility between the standard cosmological model and abundance observations.

In addition, 3He(a,g)7Be is a critical step in both the pp-2 and pp-3 branches of the pp chain of hydrogen burning in the Sun. Both of these branches also produce electron neutrinos, observable on Earth. The authors use new solar neutrino flux data published by the BOREXINO collaboration to constrain the 3He(a,g)7Be reaction rate. From there they recalculate the theoretical 7Li yield and confirm the significant tension between theory and observation.

The Tricky Part

Nuclear reaction cross sections have a temperature-dependent sweet spot, called the Gamow peak, where the reaction rate is at its maximum. For this reason, it is much easier for experiments to probe cross sections near the Gamow peak; at lower energies there isn’t enough juice to get the nuclei to smash together, and at higher energies they whiz by each other too fast. Unfortunately, the energy range of interest for BBN temperatures (around 500 million K) is 0.1 – 0.5 MeV, which is too low and lies just outside the capabilities of most experiments. Therefore, in order to perform BBN calculations it has been necessary to extrapolate the cross section down to these energies.
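The position of the Gamow peak can be estimated with the standard textbook approximation E₀ ≈ 1.22 (Z₁² Z₂² A T₆²)^(1/3) keV, where A is the reduced mass in atomic mass units and T₆ the temperature in units of 10⁶ K. A rough sketch for 3He + 4He (illustrative only, not the authors' calculation):

```python
def gamow_peak_keV(Z1, Z2, A, T6):
    """Standard textbook estimate of the Gamow peak energy in keV."""
    return 1.22 * (Z1**2 * Z2**2 * A * T6**2) ** (1.0 / 3.0)

# Reduced mass of 3He + 4He in amu (using integer mass numbers as an approximation).
A_red = 3.0 * 4.0 / (3.0 + 4.0)

E0_sun = gamow_peak_keV(2, 2, A_red, 15.0)   # solar core, T ~ 1.5e7 K -> ~22 keV
E0_bbn = gamow_peak_keV(2, 2, A_red, 500.0)  # BBN era, T ~ 5e8 K -> a few hundred keV
```

The solar Gamow peak sits near 20 keV, roughly an order of magnitude below the 0.1–0.5 MeV window relevant for BBN, which is why a solar constraint removes the need to extrapolate downward.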

Takacs et al. sidestepped this limitation by utilizing the solar neutrino data to constrain the reaction rate at an energy lower than that of BBN, thereby removing the need for extrapolation.


By assuming a standard solar model (SSM) as well as the standard neutrino oscillation model, the authors determine that the predicted neutrino flux depends on a variety of parameters, such as the solar luminosity, age, opacity, and nuclear reaction rates. They then use calculations of the sensitivity of the neutrino flux to a variation in the 3He(a,g)7Be reaction rate in order to write this rate in terms of the observed flux, the expected flux from the SSM, and the best theoretical reaction rate from the SSM. As shown in the figure below, their data point was measured at an energy almost a factor of ten below all previous measurements of the cross section.



The cross section for 3He(a,g)7Be came out about 5% lower than the value previously used in several BBN calculations, and the precision improved by almost a factor of three, mostly thanks to the elimination of extrapolation. Using this cross section, the authors updated the reaction rate in a public BBN code and found a small increase in the disagreement between the theoretical 7Li abundance and the abundance observed on the surfaces of metal-poor stars. However, they caution that further work on the SSM may change their error budget for the 3He(a,g)7Be cross section.

This study both confirms and (slightly) exacerbates the cosmic lithium problem, yet it demonstrates how astrophysical processes, even in our own Sun, can serve as probes of fundamental physics. BBN marks the boundary between precision and speculative cosmology, and the lithium problem stands in the way of pushing that boundary further.

by Guest at July 27, 2015 11:40 PM

Tommaso Dorigo - Scientificblogging

New Results From The LHC At 13 TeV!
Well, as some of you may have heard, the restart of the LHC has not been as smooth as we had hoped. In a machine as complex as this, the chance that something gets in the way of a carefully planned schedule is quite significant. So there have been slight delays, but the important thing is that the data at 13 TeV centre-of-mass energy are coming, and the first results are being extracted from them.

read more

by Tommaso Dorigo at July 27, 2015 08:53 PM

Symmetrybreaking - Fermilab/SLAC

W bosons remain left-handed

A new result from the LHCb collaboration weakens previous hints at the existence of a new type of W boson.

A measurement released today by the LHCb collaboration dumped some cold water on previous results that suggested an expanded cast of characters mediating the weak force.

The weak force is one of the four fundamental forces, along with the electromagnetic, gravitational and strong forces. The weak force acts on quarks, fundamental building blocks of nature, through particles called W and Z bosons.

Just like a pair of gloves, particles can in principle be left-handed or right-handed. The new result from LHCb presents evidence that the W bosons that mediate the weak force are all left-handed; they interact only with left-handed quarks.

This weakens earlier hints from the Belle and BaBar experiments of the existence of right-handed W bosons.

The LHCb experiment at the Large Hadron Collider examined the decays of a heavy and unstable particle called Lambda-b—a baryon consisting of an up quark, down quark and bottom quark. Weak decays can change a bottom quark into either a charm quark, about 1 percent of the time, or into a lighter up quark. The LHCb experiment measured how often the bottom quark in this particle transformed into an up quark, resulting in a proton, muon and neutrino in the final state.

“We found no evidence for a new right-handed W boson,” says Marina Artuso, a Professor of Physics at Syracuse University and a scientist working on the LHCb experiment.

If the scientists on LHCb had seen bottom quarks turning into up quarks more often than predicted, it could have meant that a new interaction with right-handed W bosons had been uncovered, Artuso says. “But our measured value agreed with our model’s value, indicating that the right-handed universe may not be there.”

Earlier experiments by the Belle and BaBar collaborations studied transformations of bottom quarks into up quarks in two different ways: in studies of a single, specific type of transformation, and in studies that ideally included all the different ways the transformation occurs.

If nothing were interfering with the process (like, say, a right-handed W boson), then these two types of studies would give the same value of the bottom-to-up transformation parameter. However, that wasn’t the case.

The difference, however, was small enough that it could have come from calculations used in interpreting the result. Today’s LHCb result makes it seem like right-handed W bosons might not exist after all, at least not in a way that is revealed in these measurements.

Michael Roney, spokesperson for the BaBar experiment, says, "This result not only provides a new, precise measurement of this important Standard Model parameter, but it also rules out one of the interesting theoretical explanations for the discrepancy... which still leaves us with this puzzle to solve."

Like what you see? Sign up for a free subscription to symmetry!

by Sarah Charley at July 27, 2015 03:53 PM

ATLAS Experiment

From ATLAS Around the World: Triggers (and dark) matter

To the best of our knowledge, it took the Universe about 13.798 billion years (plus or minus 37 million) to allow funny-looking condensates of mostly oxygen, carbon and hydrogen to ponder their own existence, the fate of the cosmos and all the rest. Some particularly curious specimens became scientists, founded CERN, dug several rings into the ground near Geneva, Switzerland, built the Large Hadron Collider in the biggest ring, and also installed a handful of large detectors along the way. All of that just in order to understand a bit better why we are here in the first place. Well, here we are!

CERN was founded after World War II as a research facility dedicated to peaceful science (in contrast to military research). Germany is one of CERN’s founding members and it is great to be a part of it. Thousands of scientists are associated with CERN from over 100 countries, including some nations that do not have the most relaxed diplomatic relationships with each other. Yet this doesn’t matter at CERN, as we are working hand-in-hand for the greater good of science and technology.

Monitoring and analysing events provided by the LHC. (Picture by R. Stamen)

In the ATLAS collaboration, Germany has institutes from 14 different cities contributing to one of the largest and most complex detectors ever built. My institute, the Kirchhoff-Institut für Physik (KIP) in Heidelberg, was (and is) involved in the development and operation of the trigger mechanism that selects the interesting interactions from the not so interesting ones. Furthermore, we are doing analyses on the data to confirm the Standard Model of Particle Physics or – better yet – to find hints of excess events that point to dark matter particles (although we are still waiting for that…).

But let’s start with the trigger. The interaction rate (that is, the rate at which bunches of LHC protons collide within the ATLAS detector) is way too high to save every single event. That is why a selection process is needed to decide which events to save and which to let go. This trigger mechanism is split into several stages, the first of which handles such high rates that it needs to be implemented in custom hardware, as commercial PCs are not fast enough.
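As a cartoon of what "selecting interesting events" means, here is a toy single-stage trigger. The event model, the steeply falling energy distribution, and the 20 GeV threshold are all invented for illustration; the real level-1 trigger runs in custom hardware on calorimeter and muon signals.

```python
import random

random.seed(1)
# Hypothetical events: each carries one "calorimeter deposit" in GeV,
# drawn from a steeply falling distribution, as in real collisions.
events = [{"deposit": random.expovariate(1 / 5.0)} for _ in range(10000)]

def level1_accept(event, threshold=20.0):
    """Coarse, fast cut -- a stand-in for the custom-hardware stage."""
    return event["deposit"] > threshold

kept = [e for e in events if level1_accept(e)]
rate_reduction = len(events) / max(1, len(kept))
# Only a small fraction of events survives the cut; that saved
# bandwidth is what makes recording the interesting ones possible.
```

In ATLAS the surviving events then pass to slower, software-based stages that apply progressively finer selections.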

This first stage (also called the level-1 trigger) is what we work on here at KIP. For instance, together with a fellow student, I took care of one of the first timing checks after the long shutdown. This was important because we wanted to know whether the extensive maintenance that started after Run 1 (during which we had personally installed new hardware) had somehow changed the timing behaviour of the level-1 trigger. Having a correctly timed system is crucial: if you are off by even a few nanoseconds, your trigger starts misbehaving and you might miss Higgs bosons or other interesting events.

In order to determine the timing of our system we used “splash” events. Instead of collisions at the centre of the detector, a “splash” is an energetic spray of a huge number of particles that comes from only one direction (more information on splashes here). They are great for timing the system, because they light up the entire detector like a Christmas tree. Also, they came from the first LHC beam since Run 1 – so it was the first opportunity to see the detector at work. This work was intense and cool. The beam splashes were scheduled over Easter, but we did not care. We gladly spent our holiday together in the ATLAS control room with other highly motivated people who sacrificed their long weekend for science. To see the first beams live in the control room after a long shutdown was a special experience. Extremely enthusiastic!

Murrough Landon (right) and Manuel (left) discussing results from the beam splashes. (Picture by R. Stamen)

But of course, timing is not the only thing that has to be done. We also write the firmware for our hardware, code software (for instance, to monitor our system in real time), plan future upgrades (in both hardware and software) and do even more calibration. Each of these items is important for the operation of the detector and also very exciting to work on. I find it cool to know that the stuff I worked on helps keep ATLAS running.

Once we have the data – what do we do with it? Each student at KIP can choose which topic he or she wants to work on, yet the majority of us study processes related to electroweak interactions. This part of the Standard Model has become even more interesting since the discovery of the Higgs boson and has potential for the discovery of new physics – for example, dark matter. Many models predict that dark matter interacts electroweakly, which is what I am working on. We can search for it in the data by looking for events in which we know particles escaped the detector without interacting with it (leaving “missing transverse energy“; neutrinos do this too) and then comparing the results to models of electroweak coupling to dark matter. The discovery of dark matter would be awesome. The cosmological evidence for dark matter is convincing (for instance, galactic rotation curves or the agreement between observations from the Planck satellite and models such as ΛCDM). It is just a matter of finding it…
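The "missing transverse energy" idea can be sketched in a few lines. This is a toy calculation with made-up inputs, not ATLAS software: MET is the magnitude of the negative vector sum of the transverse momenta of everything the detector sees.

```python
import math

# Toy illustration: "missing transverse energy" (MET) is the magnitude
# of the negative vector sum of the transverse momenta of all visible
# objects in an event. The event contents below are invented numbers.

def missing_et(visible):
    """visible: list of (pt_gev, phi_rad) pairs; returns MET in GeV."""
    px = -sum(pt * math.cos(phi) for pt, phi in visible)
    py = -sum(pt * math.sin(phi) for pt, phi in visible)
    return math.hypot(px, py)

# Two back-to-back 50 GeV jets balance each other: MET is ~0.
balanced = missing_et([(50.0, 0.0), (50.0, math.pi)])
# A lone 50 GeV jet recoiling against something unseen: MET ~ 50 GeV.
unbalanced = missing_et([(50.0, 0.0)])
```

A large imbalance signals that something invisible, a neutrino or perhaps a dark matter particle, carried momentum away.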

Going back to the beginning – literally. I am extremely curious to see what we – those funny-looking condensates of mostly oxygen, carbon and hydrogen – will find out about the Universe, its beginning, end, in-between, composition, geometry, behaviour and countless other aspects. And CERN, and especially the ATLAS collaboration, is a great environment in which to do so.

Manuel is a PhD student at the Kirchhoff-Institut für Physik at the University of Heidelberg, Germany. He joined ATLAS in 2014 and has since been working on both the level-1 calorimeter trigger and an analysis searching for dark matter. He did his Bachelor’s and Master’s degrees in Physics in Bielefeld, Germany, in the fields of molecular magnetism theory and material science. For his PhD he decided to switch fields and become an experimental particle physicist.

by Manuel Geisler at July 27, 2015 03:25 PM

arXiv blog

How the New Science of Game Stories Could Change the Future of Sports

Every sporting event tells a story. Now the first computational analysis of “game stories” suggests that future sports could be designed to prefer certain kinds of stories over others.

“Serious sport is war minus the shooting.” Many athletes will agree with George Orwell’s famous observation. But many fans might add that the best sport is a form of unscripted storytelling: the dominant one-sided thrashing, the back-and-forth struggle, the improbable comeback, and so on.

July 27, 2015 03:20 PM

Quantum Diaries

Too exciting not to share

Most physicists agree: physics is far too exciting to be reserved for scientists alone. For the first time, the European Physical Society (EPS) devoted an entire session to the subject on Saturday during its ongoing particle physics conference in Vienna. Several speakers reported on a variety of initiatives aimed at sharing the best of particle physics with the general public.

Most of the activities described target students of all ages, from both developed and developing countries. Kate Shaw, a researcher at the International Centre for Theoretical Physics (ICTP) in Trieste, Italy, stressed how science can help solve various environmental and development problems. The world needs more scientists, Kate said. Investing in education, as well as in technological and cultural institutions, plays a key role in developing a knowledge-based economy. Fundamental research stimulates applied science through innovation, technology and engineering. She also underlined the importance of including all minorities and young people from low-income families.

Kate founded the “Physics without Frontiers” programme at ICTP and has organised “Masterclasses” (see below) and other activities in Palestine, Egypt, Nepal, Lebanon, Vietnam and Algeria. Not only does she inspire young people to study science, she also assists them, helping them gain access to master’s and PhD programmes. Kate received the EPS Outreach Award today “for her work spreading particle physics in countries that do not have well-established programmes”.


Students taking part in a Masterclass in Palestine as part of the “Physics without Frontiers” programme

A Masterclass is a full day of interactive activities designed for school students. Scientists first describe particle physics and the experiment they work on. A shared meal encourages exchanges before diving into real analyses of real data. Every year, an international masterclass brings together about 10,000 students from 42 countries. They join scientists from 200 nearby universities or laboratories to perform genuine physics measurements in international collaboration with the other students. Why not take part in a Masterclass?

These students, as well as other groups, can also take part in a virtual visit to a physics experiment. A scientist on site at the laboratory interacts with the group before giving them a tour of the facilities via a live video link.

Looking for an inspiring activity that is simple, cheap and accessible for a special event, a conference or a group? Invite them on a virtual visit to CERN (ATLAS or CMS). In January, for instance, 500 students in Mumbai took advantage of their “visit” to the IceCube experiment, located 12,000 km away at the South Pole, to bombard the scientists with questions.

CERN’s Teacher Programme has already welcomed a thousand people. Secondary-school teachers from all over the world are given an eye-opening experience lasting several weeks, to make sure they share their enthusiasm with their students when they return.

Public lectures and popular science books target a more general audience. Many scientists, myself included, will be happy to come and give a talk near you. Just ask.

Pauline Gagnon

To be notified when new blogs appear, follow me on Twitter: @GagnonPauline, or by e-mail by adding your name to this distribution list, or check my website.


by Pauline Gagnon at July 27, 2015 12:11 PM

July 26, 2015

Clifford V. Johnson - Asymptotia

I'm in Santiago, Chile, for a short stay. My first thought, in a very similar thought process to the one I had over ten years ago in a similar context, is one of surprise as to how wonderfully far south of the equator I now am! Somehow, just like last time I was in Chile (even further south in Valdivia), I only properly looked at the latitude on a map when I was most of the way here (due to being somewhat preoccupied with other things right up to leaving), and it is a bit of a jolt. You will perhaps be happy to know that I will refrain from digressions about the Coriolis force and bathtubs, hurricanes and typhoons, and the like. I arrived too early to check into my hotel and so after leaving my bag there I went wandering for a while using the subway, finding a place to sit and have lunch and coffee while watching the world go by. It happened to be at Plaza de Armas. I sketched a piece of what I saw, and that's what you see in the snap above. I think the main building I sketched is in fact the Central Post Office... And that is a bit of some statuary in front of the Metropolitan Cathedral to the left. I like that the main cathedral and post office are next to each other like that. And yes, [...] Click to continue reading this post

by Clifford at July 26, 2015 09:05 PM

July 24, 2015

Clifford V. Johnson - Asymptotia

Page Samples…!
There's something really satisfying about getting copies of printed pages back from the publisher. Makes it all seem a bit more real. This is a second batch of samples (the first batch had some errors resulting from miscommunication, so it doesn't count), and already I think we are converging. The colours are closer to what I intended, although you can't of course see that since the camera I used to take the snap, and the screen you are using, have made changes to them (I'll spare you lots of mumblings about CMYK vs RGB and monitor profiles and various PDF formats and conventions and so forth) and this is all done with pages I redid to fit the new page sizes I talked about in the last post on the book project. Our next step is to work on more paper choices, keeping in mind that this will adjust colours a bit again - and we must also keep an eye on things like projected production costs. Some samples have been mailed to me and I shall get them next week. Looking forward to seeing them. For those who care, the pages you can see have a mixture of digital colours (most of it in fact) and analogue colours (Derwent watercolour pencils, applied [...] Click to continue reading this post

by Clifford at July 24, 2015 10:21 PM

Tommaso Dorigo - Scientificblogging

I apologize for my lack of posting in the past few days. I will resume it very soon... So as a means of apology I thought I would explain what I have been up to this week.

read more

by Tommaso Dorigo at July 24, 2015 08:22 PM

Sean Carroll - Preposterous Universe

Guest Post: Aidan Chatwin-Davies on Recovering One Qubit from a Black Hole

The question of how information escapes from evaporating black holes has puzzled physicists for almost forty years now, and while we’ve learned a lot we still don’t seem close to an answer. Increasingly, people who care about such things have been taking more seriously the intricacies of quantum information theory, and learning how to apply that general formalism to the specific issues of black hole information.

Now two students and I have offered a small contribution to this effort. Aidan Chatwin-Davies is a grad student here at Caltech, while Adam Jermyn was an undergraduate who has now gone on to do graduate work at Cambridge. Aidan came up with a simple method for getting out one “quantum bit” (qubit) of information from a black hole, using a strategy similar to “quantum teleportation.” Here’s our paper that just appeared on arxiv:

How to Recover a Qubit That Has Fallen Into a Black Hole
Aidan Chatwin-Davies, Adam S. Jermyn, Sean M. Carroll

We demonstrate an algorithm for the retrieval of a qubit, encoded in spin angular momentum, that has been dropped into a no-firewall unitary black hole. Retrieval is achieved analogously to quantum teleportation by collecting Hawking radiation and performing measurements on the black hole. Importantly, these methods only require the ability to perform measurements from outside the event horizon and to collect the Hawking radiation emitted after the state of interest is dropped into the black hole.

It’s a very specific — i.e. not very general — method: you have to have done measurements on the black hole ahead of time, and then drop in one qubit, and we show how to get it back out. Sadly it doesn’t work for two qubits (or more), so there’s no obvious way to generalize the procedure. But maybe the imagination of some clever person will be inspired by this particular thought experiment to come up with a way to get out two qubits, and we’ll be off.

I’m happy to host this guest post by Aidan, explaining the general method behind our madness.

If you were to ask someone on the bus which of Stephen Hawking’s contributions to physics he or she thought was most notable, the answer that you would almost certainly get is his prediction that a black hole should glow as if it were an object with some temperature. This glow is made up of thermal radiation which, unsurprisingly, we call Hawking radiation. As the black hole radiates, its mass slowly decreases and the black hole decreases in size. So, if you waited long enough and were careful not to enlarge the black hole by throwing stuff back in, then eventually it would completely evaporate away, leaving behind nothing but a bunch of Hawking radiation.
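The temperature Hawking predicted can be written down directly: for an uncharged, non-rotating black hole, T = ħc³/(8πGMk_B). A quick sketch using this textbook formula (the helper function name is mine):

```python
import math

# Standard Hawking temperature of a Schwarzschild black hole,
#   T = hbar * c^3 / (8 * pi * G * M * k_B),
# in SI units. Constants and formula are textbook values.
HBAR = 1.054571817e-34   # J s
C = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
KB = 1.380649e-23        # J/K
M_SUN = 1.989e30         # kg

def hawking_temperature(mass_kg):
    """Temperature (K) of a Schwarzschild black hole of given mass."""
    return HBAR * C**3 / (8.0 * math.pi * G * mass_kg * KB)

# A solar-mass black hole sits around 6e-8 K -- far colder than the
# 2.7 K cosmic microwave background, so today it absorbs more than it
# radiates. Lighter black holes are hotter and evaporate faster.
t_solar = hawking_temperature(M_SUN)
```

The inverse dependence on mass is why evaporation runs away: as the hole shrinks it gets hotter and radiates ever faster.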

At first glance, this phenomenon of black hole evaporation challenges a central notion in quantum theory, which is that it should not be possible to destroy information. Suppose, for example, that you were to toss a book, or a handful of atoms in a particular quantum state, into the black hole. As the black hole evaporates into a collection of thermal Hawking particles, what happens to the information that was contained in that book or in the state of (what were formerly) your atoms? One possibility is that the information actually is destroyed, but then we would have to contend with some pretty ugly foundational consequences for quantum theory. Instead, it could be that the information is preserved in the state of the leftover Hawking radiation, albeit highly scrambled and difficult to distinguish from a thermal state. Besides being very pleasing on philosophical grounds, we also have evidence for the latter possibility from the AdS/CFT correspondence. Moreover, if the process of converting a black hole to Hawking radiation conserves information, then a stunning result of Hayden and Preskill says that for sufficiently old black holes, any information that you toss in comes back out almost as fast as possible!

Even so, exactly how information leaks out of a black hole and how one would go about converting a bunch of Hawking radiation to a useful state is quite mysterious. On that note, what we did in a recent piece of work was to propose a protocol whereby, under very modest and special circumstances, you can toss one qubit (a single unit of quantum information) into a black hole and then recover its state, and hence the information that it carried.

More precisely, the protocol describes how to recover a single qubit that is encoded in the spin angular momentum of a particle, i.e., a spin qubit. Spin is a property that any given particle possesses, just like mass or electric charge. For particles that have spin equal to 1/2 (like those that we consider in our protocol), at least classically, you can think of spin as a little arrow which points up or down and says whether the particle is spinning clockwise or counterclockwise about a line drawn through the arrow. In this classical picture, whether the arrow points up or down constitutes one classical bit of information. According to quantum mechanics, however, spin can actually exist in a superposition of being part up and part down; these proportions constitute one qubit of quantum information.


So, how does one throw a spin qubit into a black hole and get it back out again? Suppose that Alice is sitting outside of a black hole, the properties of which she is monitoring. From the outside, a black hole is characterized by only three properties: its total mass, total charge, and total spin. This latter property is essentially just a much bigger version of the spin of an individual particle and will be important for the protocol.

Next, suppose that Alice accidentally drops a spin qubit into the black hole. First, she doesn’t panic. Instead, she patiently waits and collects one particle of Hawking radiation from the black hole. Crucially, when a Hawking particle is produced by the black hole, a bizarro version of the same particle is also produced, but just behind the black hole’s horizon (boundary) so that it falls into the black hole. This bizarro ingoing particle is the same as the outgoing Hawking particle, but with opposite properties. In particular, its spin state will always be flipped relative to the outgoing Hawking particle. (The outgoing Hawking particle and the ingoing particle are entangled, for those in the know.)


The picture so far is that Alice, who is outside of the black hole, collects a single particle of Hawking radiation whilst the spin qubit that she dropped and the ingoing bizarro Hawking particle fall into the black hole. When the dropped particle and the bizarro particle fall into the black hole, their spins combine with the spin of the black hole—but remember! The bizarro particle’s spin was highly correlated with the spin of the outgoing Hawking particle. As such, the new combined total spin of the black hole becomes highly correlated with the spin of the outgoing Hawking particle, which Alice now holds. So, Alice measures the black hole’s new total spin state. Then, essentially, she can exploit the correlations between her held Hawking particle and the black hole to transfer the old spin state of the particle that she dropped into the hole to the Hawking particle that she now holds. Alice’s lost qubit is thus restored. Furthermore, Alice didn’t even need to know the precise state that her initial particle was in to begin with; the qubit is recovered regardless!

That’s the protocol in a nutshell. If the words “quantum teleportation” mean anything to you, then you can think of the protocol as a variation on the quantum teleportation protocol where the transmitting party is the black hole and measurement is performed in the total angular momentum basis instead of the Bell basis. Of course, this is far from a resolution of the information problem for black holes. However, it is certainly a neat trick which shows, in a special set of circumstances, how to “bounce” a qubit of quantum information off of a black hole.
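For readers who like to see the textbook protocol the post alludes to, here is a minimal NumPy simulation of ordinary quantum teleportation. This is the standard protocol, not the black-hole variant from the paper, and every name in it is my own.

```python
import numpy as np

# Minimal simulation of textbook quantum teleportation. Alice holds an
# unknown qubit and half of a Bell pair shared with Bob; measuring her
# two qubits in the Bell basis and telling Bob the outcome lets him
# restore the state with a Pauli correction.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

BELL = [  # the four Bell basis states on two qubits
    np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2),
    np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2),
    np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2),
    np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2),
]
CORRECTION = [I2, X, Z, Z @ X]  # Bob's fix-up for each outcome

def teleport(psi, outcome):
    """Teleport psi given Alice's Bell-measurement outcome (0..3)."""
    total = np.kron(psi, BELL[0])      # qubit order: (A1, A2, B)
    amp = total.reshape(4, 2)          # rows: Alice's pair; cols: Bob
    bob = BELL[outcome].conj() @ amp   # project out Alice's two qubits
    bob /= np.linalg.norm(bob)
    return CORRECTION[outcome] @ bob

psi = np.array([0.6, 0.8j])  # an arbitrary "unknown" qubit state
fidelities = [abs(np.vdot(psi, teleport(psi, k))) for k in range(4)]
# Every measurement outcome recovers the state perfectly after correction.
```

In the black-hole version of the paper, the black hole's total spin plays the role of Alice's measured pair and the measurement is in the total angular momentum basis rather than the Bell basis.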

by Sean Carroll at July 24, 2015 03:51 PM

arXiv blog

Deep Neural Nets Can Now Recognize Your Face in Thermal Images

Matching an infrared image of a face to its visible light counterpart is a difficult task, but one that deep neural networks are now coming to grips with.

One problem with infrared surveillance videos or infrared CCTV images is that it is hard to recognize the people in them. Faces look different in the infrared and matching these images to their normal appearance is a significant unsolved challenge.  

July 24, 2015 08:25 AM

July 23, 2015

Symmetrybreaking - Fermilab/SLAC

A new first for T2K

The Japan-based neutrino experiment has seen its first three candidate electron antineutrinos.

Scientists on the T2K neutrino experiment in Japan announced today that they have spotted their first possible electron antineutrinos.

When the T2K experiment first began taking data in January 2010, it studied a beam of neutrinos traveling 295 kilometers from the J-PARC facility in Tokai, on the east coast, to the Super-Kamiokande detector in Kamioka in western Japan. Neutrinos rarely interact with matter, so they can stream straight through the earth from source to detector.

From May 2014 to June 2015, scientists used a different beamline configuration to produce predominantly the antimatter partners of neutrinos, antineutrinos. After scientists eliminated signals that could have come from other particles, three candidate electron antineutrino events remained.

T2K scientists hope to determine if there is a difference in the behavior of neutrinos and antineutrinos.

“That is the holy grail of neutrino physics,” says Chang Kee Jung of State University of New York at Stony Brook, who until recently served as international co-spokesperson for the experiment.

If scientists caught neutrinos and their antiparticles acting differently, it could help explain how matter came to dominate over antimatter after the big bang. The big bang should have produced equal amounts of each, which would have annihilated one another completely, leaving nothing to form our universe. And yet, here we are; scientists are looking for a way to explain that.

“In the current paradigm of particle physics, this is the best bet,” Jung says.

Scientists have previously seen differences in the ways that other matter and antimatter particles behave, but the differences have never been enough to explain our universe. Whether neutrinos and antineutrinos act differently is still an open question.

Neutrinos come in three types: electron neutrinos, muon neutrinos and tau neutrinos. As they travel, they morph from one type to another. T2K scientists want to know if there’s a difference between the oscillations of muon neutrinos and muon antineutrinos. A possible upgrade to the Super-Kamiokande detector could help with future data-taking.
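The morphing described above can be sketched with the standard two-flavour oscillation formula, P = sin²(2θ)·sin²(1.27·Δm²·L/E). The snippet below is a toy (real T2K analyses use three flavours plus matter effects), with parameter values close to, but not exactly, the measured atmospheric ones.

```python
import math

# Two-flavour toy of neutrino oscillations. Standard formula:
#   P(numu -> nux) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, L in km, E in GeV. Parameter defaults are
# illustrative values near the measured atmospheric ones.

def oscillation_probability(l_km, e_gev, dm2_ev2=2.5e-3, sin2_2theta=1.0):
    """Probability that a muon neutrino has changed flavour."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * l_km / e_gev) ** 2

# T2K's 295 km baseline and ~0.6 GeV beam sit near the first
# oscillation maximum -- the baseline and beam energy were chosen
# precisely so that the effect is close to maximal at the detector.
p_t2k = oscillation_probability(295.0, 0.6)
```

Comparing how this probability differs between neutrino and antineutrino beams is exactly the matter-antimatter test the article describes.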

One other currently operating experiment can look for this matter-antimatter difference: the NOvA experiment, which studies a beam that originates at Fermilab near Chicago with a detector near the Canadian border in Minnesota.

“This result shows the principle of the experiment is going to work,” says Indiana University physicist Mark Messier, co-spokesperson for the NOvA experiment. “With more data, we will be on the path to answering the big questions.”

It might take T2K and NOvA data combined to get scientists closer to the answer, Jung says, and it will likely take until the construction of the even larger DUNE neutrino experiment in South Dakota to get a final verdict.


Like what you see? Sign up for a free subscription to symmetry!

by Kathryn Jepsen at July 23, 2015 05:22 PM

Ben Still - Neutrino Blog

Pentaquark Series 3: Antiquarks and Anti-colour
This is the third in a series of posts I am releasing over the next two weeks, aimed at covering the physics behind Pentaquarks, the history of "discovery", and the implications of the latest results from LHCb. Post 2 here. Today we discuss particles that can be made from less than three quarks.
Antiparticles have opposite properties, such as electric charge.

Antiquarks and Anti-colour

In the last post I mentioned that particles made from quarks must be neutral in strong charge; this can be achieved if each quark carries a colour charge corresponding to a primary colour of light (red, green, or blue), so that the overall colour charge of the particle is white. There is another way to build particles with a neutral, white overall strong charge, but for this we must talk about antiparticles. The three generations of fundamental particles also have mirror versions of themselves: the antiparticles. When you look into a mirror, left becomes right but you still look the same size. A similar thing is true in the particle world - mirror antiparticle versions of particles have the same mass, but they see the world in opposite ways, feeling and interacting with the forces of nature with opposite character. If an electron has a negative electric charge, then its antimatter version, the positron, has a positive electric charge. The anti-electron (positron) was first seen in experiment in 1932 (the same year the neutron was discovered), and since then it has been confirmed that antiparticles do indeed exist for all three generations of particles.

Antiquarks, the antimatter versions of quarks, also have their electric charges mirrored from positive to negative. As antiquarks also feel the strong force, their strong colour charges must be mirrored too - but what is an anti-colour? Let us think about the colours produced when mixing the primary colours of light. If we shine white light through a prism, refraction splits it into a rainbow. Looking at the rainbow spectrum (diagram below), we see that directly in between the primary colours blue and green sits the colour cyan; if we mix pure blue and pure green light, we see cyan as a result. As it is made from two primary colours (green and blue), cyan is said to be a secondary colour. In between green and red in the rainbow is another secondary colour, yellow: we perceive yellow from a mixture of green and red light. What about a third secondary colour?

Rainbow spectrum of white light.

Mixing the three primary colours of light
to make the secondary colours and white.
The only mixture of primary colours not yet mentioned is red and blue - but wait: the colour in the middle of blue (at one end of the spectrum) and red (at the other end) is green. As I have already mentioned, green is a primary colour, so it can't be a secondary colour as well. The third and final secondary colour is not in fact a true rainbow colour at all, but one constructed by our mind. If we see blue and red light mixed, we do not end up at green; instead we perceive the colour magenta. If you were to shine magenta light through a prism to split it into its component rainbow colours, you would see only the blue and red parts of the rainbow; the middle, green part would be entirely missing (see the spectra at the bottom). In this sense magenta is the anti-green - everything that green is not. To demonstrate this, look at the optical illusion below (gif "borrowed" from Steve Mould) and stare at the centre cross. Do you see a green circle appearing? Now look away from the cross: no green circle is present at all. What is actually happening is that a magenta circle is missing from the pattern, not that a green one is appearing. Your mind puts green where there is a lack of magenta!

Magenta: the anti-green. Image "borrowed" from Steve Mould

So magenta is anti-green. It turns out that all three secondary colours are exactly the anti-colours we are looking for as the strong charges of our antiquarks. Cyan, split by a prism, contains only green and blue and no red, so it is anti-red. Yellow contains only red and green and no blue, so it is anti-blue. We can then say that the opposite, antiparticle versions of the red, green, and blue strong charges of quarks are cyan, magenta, and yellow.

The whole set of quarks and antiquarks known to exist; they are one half of the building blocks that make up all particles in our visible Universe.

Now what happens if we combine a quark with an antiquark? Magenta is made from red and blue: add green and you have white light. Yellow is made from red and green: add blue and you get white light. Cyan is made from blue and green: add red and you get white light. So to create white, strong-charge-neutral particles from quarks and antiquarks, you need only one quark and one antiquark: a green quark and a magenta antiquark; a red quark and a cyan antiquark; or a blue quark and a yellow antiquark. These quark-antiquark combinations form a group of particles called Mesons.
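The additive colour arithmetic above can be sketched as a toy model. This is illustrative only - real colour charge is a property of the strong force, not literal light - and the RGB triples are just a convenient bookkeeping device:

```python
# Toy model of strong colour charge as additive RGB triples.
# The labels are an analogy only: real colour charge is an SU(3)
# property, not actual light.

COLOURS = {
    "red":     (1, 0, 0),
    "green":   (0, 1, 0),
    "blue":    (0, 0, 1),
    "cyan":    (0, 1, 1),   # anti-red
    "magenta": (1, 0, 1),   # anti-green
    "yellow":  (1, 1, 0),   # anti-blue
}

WHITE = (1, 1, 1)

def is_colour_neutral(charges):
    """A combination is allowed if its colours add up to white."""
    total = tuple(sum(c[i] for c in charges) for i in range(3))
    return total == WHITE

# A meson: one quark plus one antiquark carrying the anti-colour.
print(is_colour_neutral([COLOURS["green"], COLOURS["magenta"]]))  # True
# A baryon: one quark of each primary colour.
print(is_colour_neutral([COLOURS["red"], COLOURS["green"], COLOURS["blue"]]))  # True
# Two red quarks and a blue one: not white, so forbidden.
print(is_colour_neutral([COLOURS["red"], COLOURS["red"], COLOURS["blue"]]))  # False
```

The same check captures both rules of the series: quark plus matching antiquark (mesons) and one quark of each primary colour (baryons) both add up to white.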

Just like the Baryons, there is a pattern that Gell-Mann theorised in his Eightfold Way for the possible Mesons that can be made from up, down, and strange quarks: the Meson Octet (below). Mesons do not survive very long, because particles and antiparticles are not very stable around one another. Generally, when a particle meets its own antiparticle they annihilate one another to produce pure energy. Mesons, being built from a quark and an antiquark, take the first opportunity available to turn into either pure energy or a number of lighter particles. The middle row of the Meson Octet contains particles called pions (π), which play a role in keeping protons and neutrons together in the nucleus and also in the production of neutrino beams.

The Meson Octet shows all possible Mesons that can be constructed from the up, down, and strange quarks and the anti-up, anti-down, and anti-strange antiquarks.

The refracted spectrum, or 'rainbow',
 of the secondary colours of light.
The refracted spectrum, or 'rainbow',
 of the primary colours of light.

by Ben at July 23, 2015 05:03 PM

Ben Still - Neutrino Blog

Pentaquark Series 2: Rule of Three...
This is the second in a series of posts I am releasing over the next two weeks, aimed at covering the physics behind Pentaquarks, the history of "discovery", and the implications of the latest results from LHCb (previous post here). Today we discuss why quarks like to come in threes.

The two charges of the electromagnetic force and the
three charges of the strong force.

Rule of three …

Protons, neutrons, and other particles that are made up from three quarks are called Baryons. But why do they all have three quarks? Why not four, or six, or ten? It is all down to the way the strong force, responsible for binding the quarks together, works. The electromagnetic force has two possible charges, which we label positive electric charge (like protons) and negative electric charge (like electrons). These opposite charges attract, which is the reason electrons remain orbiting the proton-rich nucleus of an atom. The strong force, it seems, has not two but three possible charges! As there is no clear way to describe this in terms of whole numbers like positive and negative, another analogy had to be found. The best way to think of strong charge is as colours of light.

Overlapping light
**Disclaimer** Before I start talking of colours of light, I want to clarify that I am not talking of the colours and mixing that you may have come across when using paints or other pigments in art. Colours of light add to each other when mixed to create new colours. Colours of paint and other pigments mix subtractively: a pigment absorbs some colours of light and we see only those it reflects.

The three primary colours of light we see are red, green, and blue. The reason we have decided upon these colours is a selfish biological one: our eyes have evolved to be sensitive to these three colours in particular. When these three colours are combined, added together, they form what we perceive as white light. If we assign the three primary colours of light to the three possible strong charges, we can say that a quark has a strong charge of red, green, or blue.

It doesn't matter which of the quarks has which strong charge, just that there is one of each primary colour.
An atom is electrically neutral because it has a balance of positively charged protons in the nucleus and negatively charged electrons surrounding it; a helium nucleus contains two protons and is surrounded by two electrons, so the total electric charge is +2 - 2 = 0. In the same vein, a proton has to be strong-force neutral: it must have a balance of the three strong charges, being composed of one green-charged quark, one red-charged quark, and one blue-charged quark. Which of the two up quarks or the one down quark carries each colour doesn't matter - the point is just that we need one of each to make a stable proton.

We can then say that the stable proton is white, as green plus blue plus red light equals white. The same rule applies to all other particles made in a similar way, the group of particles known as Baryons. Almost any combination of three quarks can create a Baryon, as long as the Baryon is white in strong charge. Remember, I am in no way saying that quarks have colour in the traditional sense, because we cannot see quarks in the traditional sense - assigning them a colour is an analogy that fits the way in which the strong force behaves. Below are diagrams showing Murray Gell-Mann's mathematical scheme for explaining the experimental data of the time, called the Eightfold Way. These two diagrams show all the ways you can create Baryons from up, down, and strange quark building blocks. The particle made of three strange quarks at the very bottom of the second diagram (the Baryon Decuplet) is the Ω particle that Gell-Mann predicted to exist; its discovery helped win him the Nobel Prize in 1969.
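As a toy illustration of the argument above, one can enumerate every way of assigning the three primary colour charges to a baryon's three quarks and keep only the white (colour-neutral) combinations - a quick sketch of the counting, not actual QCD:

```python
# Toy check of the "rule of three": assign each of a baryon's three
# quarks one of the primary colour charges (as additive RGB triples)
# and count which assignments are strong-charge neutral, i.e. white.
from itertools import product

PRIMARIES = {"red": (1, 0, 0), "green": (0, 1, 0), "blue": (0, 0, 1)}

white = []
for combo in product(PRIMARIES, repeat=3):   # 27 possible assignments
    total = tuple(sum(PRIMARIES[c][i] for c in combo) for i in range(3))
    if total == (1, 1, 1):                   # white = colour neutral
        white.append(combo)

# Exactly the 6 permutations of (red, green, blue) survive: one quark
# of each colour is required, but which quark carries which colour
# doesn't matter.
print(len(white))  # 6
```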

The Baryon Octet: the central combination of quarks manifests itself as two distinct types of particle, so there are eight in all, hence the name Octet.

The Baryon Decuplet: shows more possible Baryons built from the up, down, and strange quarks. In the 1960s the heaviest quark known was the strange quark.

by Ben at July 23, 2015 05:03 PM

July 22, 2015

Symmetrybreaking - Fermilab/SLAC

Underground plans

The Super-Kamiokande collaboration has approved a project to improve the sensitivity of the Super-K neutrino detector.

Super-Kamiokande, buried under about 1 kilometer of mountain rock in Kamioka, Japan, is one of the largest neutrino detectors on Earth. Its tank is full of 50,000 tons (about 13 million gallons) of ultrapure water, which it uses to search for signs of notoriously difficult-to-catch particles.

Recently members of the Super-K collaboration gave the go-ahead to a plan to make the detector a thousand times more sensitive with the help of a chemical compound called gadolinium sulfate.

Neutrinos are made in a variety of natural processes. They are also produced in nuclear reactors, and scientists can create beams of neutrinos in particle accelerators. These particles are electrically neutral, have little mass and interact only weakly with matter—characteristics that make them extremely difficult to detect even though trillions fly through any given detector each second.

Super-K catches about 30 neutrinos that interact with the hydrogen and oxygen in the water molecules in its tank each day. It keeps its water ultrapure with a filtration system that removes bacteria, ions and gases.

Scientists take extra precautions both to keep the ultrapure water clean and to avoid contact with the highly corrosive substance.

“Somebody once dropped a hammer into the tank,” says experimentalist Mark Vagins of the University of Tokyo's Kavli Institute for the Physics and Mathematics of the Universe. “It was chrome-plated to look nice and shiny. Eventually we found the chrome and not the hammer.”

When a neutrino interacts in the Super-K detector, it creates other particles that travel through the water faster than light travels in water, creating a blue flash of Cherenkov light. The tank is lined with about 13,000 phototube detectors that can see the light.

Looking for relic neutrinos

On average, several massive stars explode as supernovae every second somewhere in the universe. If theory is correct, all supernovae to have exploded throughout the universe’s 13.8 billion years have thrown out trillions upon trillions of neutrinos. That means the cosmos would glow in a faint background of relic neutrinos—if scientists could just find a way to see even a fraction of those ghostlike particles.

For about half of the year, the Super-K detector is used in the T2K experiment, which produces a beam of neutrinos in Tokai, Japan, some 183 miles (295 kilometers) away, and aims it at Super-K. During the trip to the detector, some of the neutrinos change from one type of neutrino to another. T2K studies that change, which could give scientists hints as to why our universe holds so much more matter than antimatter.

But the T2K beam doesn't run continuously during that half year. Instead, researchers send a beam pulse every few seconds, and each pulse lasts just a few microseconds. Super-K still detects neutrinos from natural processes while scientists are running T2K.

In 2002, at a neutrino meeting in Munich, Germany, experimentalist Vagins and theorist John Beacom of The Ohio State University began thinking of how they could better use Super-K to spy the universe’s relic supernova neutrinos.

“For at least a few hours we were standing there in the Munich subway station somewhere deep underground, hatching our underground plans,” Beacom says.

To pick out the few signals that come from neutrino events, you have to battle a constant clatter of background noise from other particles. Incoming cosmic particles such as muons (the electron's heavier cousin), or even electrons emitted by naturally occurring radioactive substances in rock, can produce signals that look like the ones scientists hope to find from neutrinos. No one wants to claim a discovery that later turns out to be a signal from a nearby rock.

Super-K already guards against some of this background noise by being buried underground. But some unwanted particles can get through, and so scientists need ways to separate the signals they want from deceiving background signals.

Vagins and Beacom settled on an idea—and a name for the next stage of the experiment:  Gadolinium Antineutrino Detector Zealously Outperforming Old Kamiokande, Super! (GADZOOKS!). They proposed to add 100 tons of the compound gadolinium sulfate—Gd2(SO4)3—to Super-K’s ultrapure water.

When a neutrino interacts with a molecule, it releases a charged lepton (a muon, electron, or tau, or one of their antiparticles) along with a neutron. Neutrons are thousands of times more likely to interact with the gadolinium sulfate than with another water molecule. So when a neutrino traverses Super-K and interacts with a molecule, its muon or electron (or antiparticle - Super-K can't see tau particles) will generate a first pulse of light, and the neutron will create a second pulse of light: "two pulses, like a knock-knock," Beacom says.

By contrast, a background muon or electron will make only one light pulse.

To extract only the neutrino interactions, scientists will use GADZOOKS! to focus on the two-signal events and throw out the single-signal events, reducing the background noise considerably.
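The knock-knock coincidence cut can be sketched with a toy simulation. The event counts and the 90% neutron-capture detection efficiency below are made-up illustrative numbers, not Super-K specifications:

```python
# Toy sketch of the two-pulse coincidence cut (hypothetical numbers):
# neutrino events give a prompt flash plus a delayed neutron-capture
# flash; background muons/electrons give only one flash.
import random

random.seed(1)

N_SIGNAL, N_BACKGROUND = 100, 10_000
CAPTURE_EFF = 0.9  # assumed fraction of neutron captures actually seen

def n_pulses(is_signal):
    n = 1  # prompt Cherenkov flash from the charged lepton
    if is_signal and random.random() < CAPTURE_EFF:
        n += 1  # delayed flash from neutron capture on gadolinium
    return n

events = [(True, n_pulses(True)) for _ in range(N_SIGNAL)] + \
         [(False, n_pulses(False)) for _ in range(N_BACKGROUND)]

# Keep only "knock-knock" (two-pulse) events.
kept = [is_sig for is_sig, n in events if n == 2]
print(f"kept {len(kept)} events, all signal: {all(kept)}")
```

In this idealised toy every kept event is a true neutrino event, at the cost of the small fraction of signal whose neutron capture goes unseen; in reality some backgrounds can fake a second pulse, so the rejection is strong but not perfect.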

The prototype

But you can’t just add 100 tons of a chemical compound to a huge detector without doing some tests first. So Vagins and colleagues built a scaled-down version, which they called Evaluating Gadolinium’s Action on Detector Systems (EGADS). At 0.4 percent the size of Super-K, it uses 240 of the same phototubes and 200 tons (52,000 gallons) of ultrapure water.

Over the past several years, Vagins’ team has worked extensively to show the benefits of their idea. One aspect of their efforts has been to build a filtration system that removes everything from the ultrapure water except for the gadolinium sulfate. They presented their results at a collaboration meeting in late June.

On June 27, the Super-K team officially approved the proposal to add gadolinium sulfate but renamed the project SuperK-Gd. The next steps are to drain Super-K to check for leaks and fix them, replace any burned out phototubes, and then refill the tank.

But this process must be coordinated with T2K, says Masayuki Nakahata, the Super-K collaboration spokesperson.

Once the tank is refilled with ultrapure water, scientists will add in the 100 tons of gadolinium sulfate. Once the compound is added, the current filtration system could remove it any time researchers would like, Vagins says.

“But I believe that once we get this into Super-K and we see the power of it, it’s going to become indispensable,” he says. “It’s going to be the kind of thing that people wouldn’t want to give up the extra physics once they’re used to it.”

Like what you see? Sign up for a free subscription to symmetry!

by Liz Kruesi at July 22, 2015 03:18 PM

ZapperZ - Physics and Physicists

The Standard Model Interactive Chart
Symmetry has published an interactive chart of the Standard Model of elementary particles. It is almost like a periodic table, but with only the most basic, necessary information - a rather useful link when you need just the essentials.


by ZapperZ at July 22, 2015 03:08 PM

July 21, 2015

Lubos Motl - string vacua and pheno

The \(2\TeV\) LHC excess could prove string theory
On Friday, I praised the beauty of the left-right-symmetric models that replace the hypercharge \(U(1)_Y\) by a new \(SU(2)_R\) group. They could explain the excess that especially ATLAS but also (in a different search) CMS seems to be seeing at the invariant mass around \(1.9\TeV\), an excess that I placed at the first place of attractiveness among the known bumps at the LHC.

A random picture of intersecting D-branes

Alternatively, if that bump were real, it could have been a sign of compositeness, a heavy scalar (instead of a spin-one boson), or a triboson pretending to be a diboson. However, on Sunday, six string phenomenologists proposed a much more exciting explanation:
Stringy origin of diboson and dijet excesses at the LHC
The multinational corporation (SUNY, Paris, Munich, Taiwan, Bern, Boston) consisting of Anchordoqui, Antoniadis, Goldberg, Huang, Lüst, and Taylor argues that the bump has the required features to grow into the first package of exclusive collider evidence in favor of string theory – yes, I mean the theory that stinky brainless chimps yell to be disconnected from experiments.

Why would such an ambitious conclusion follow from such a seemingly innocent bump on the road? We need just a little bit of patience to understand this point.

They agree with the defenders of the left-right-symmetric explanation of the bump that the particle that decays in order to manifest itself as the bump is a new spin-one boson, namely a \(Z'\). But its corresponding \(U(1)_a\) symmetry may be anomalous: there may exist a mixed anomaly in the triangle\[
U(1)_a \, SU(2)_L \, SU(2)_L
\] with two copies of the regular electroweak \(SU(2)\) gauge group. An anomaly in the gauge group would mean that the field theory is inconsistent. In the characteristic field theory constructions, the right multiplicities and charges of the spectrum are needed to cancel the anomaly. However, string theory has one more trick that may cancel gauge anomalies. It's a trick that actually launched the First Superstring Revolution in 1984.

It's the Green-Schwarz mechanism.

In 1984, Green and Schwarz figured out how the anomaly works in type I superstring theory with the \(SO(32)\) gauge group – which is given by a hexagon diagram in \(d=10\) much like it needs a triangle in \(d=4\) – but the same trick may apply even after compactification. The new spin-one gauge field is told to transform surprisingly nontrivially under a gauge invariance of a seemingly independent field, a two-index field, and the hexagon is then cancelled against a 2+4 tree diagram with the exchange of the two-index field.

In the \(d=4\) case, we may see that this Green-Schwarz mechanism makes the previously anomalous \(U(1)_a\) gauge boson massive – and the "Stückelberg" mass is just an order of magnitude or so lower than the string scale (which they therefore assume to be \(M_s\approx 20\TeV\)). This is normally viewed as an extremely high energy scale which is why these possibilities don't enter the conventional quantum field theoretical models.

But string theory may also be around the corner - in the case of some stringy braneworld models, particularly the intersecting braneworlds. In these braneworlds, which are very concrete stringy realizations of the "old large dimensions" paradigm, the Standard Model fields live on stacks of branes; they take the form of open strings whose basic duty is to stay attached to a D-brane. Some string modes (particles) live near the intersections of the D-brane stacks because one of their endpoints is attached to one stack and the other to the other stack, and the strings always want to be short, so as not to carry insanely high energy.

To make the story short, the anomaly-producing triangle diagram may also be interpreted as the Feynman diagram for a decay of the new \(Z'\) boson of the \(U(1)_a\) group into two \(SU(2)_L\) gauge bosons. When the latter pair is decomposed into the basis of the usual particles we know, the decays may be\[
\begin{aligned}
Z' &\to W^+ W^-,\\
Z' &\to Z^0 Z^0,\\
Z' &\to Z^0 \gamma.
\end{aligned}
\] All these three decays are made unavoidable in the Green-Schwarz-mechanism-based models - and the relative branching ratios are pretty much given. Note that \(W^0\equiv W_3\) is a mixture of \(Z^0\) and \(\gamma\), so all three pairs created from \(Z^0\) and \(\gamma\) would be possible, but the Landau-Yang theorem implies that the \(\gamma\gamma\) decay of \(Z'\) is forbidden (the rate is zero) for symmetry reasons.

Their storyline is so predictive that they may tell you that the new coupling constant is \(g_a\approx 0.36\), too.

So if their explanation is right, the bump near \(2\TeV\) will keep growing - it may already be growing now: the first Run II results will be announced at EPS-HEP in Vienna, a meeting that starts tomorrow (follow the conference website)! Only about 1 inverse femtobarn of \(13\TeV\) data has been accumulated in 2015 so far - much less than the 20-30/fb at \(8\TeV\) in 2012. And if the authors of the paper discussed here are right, one more thing is true: the decay channel \(Z\gamma\) of the new particle will soon be detected as well - and it will be a smoking gun for low-scale string theory!

No known consistent field theory predicts a nonzero \(Z\gamma\) decay rate of the new massive gauge boson. The string-theoretical Green-Schwarz mechanism mixes what looks like a field-theoretical tree-level diagram with a one-loop diagram. Their being on equal footing implies that the regular QFT-like perturbation theory breaks down and instead, there is a hidden loop inside a vertex of the would-be tree-level diagram. This loop can't be expanded in terms of regular particles in a loop, however: it implies some stringy compositeness of the particles and processes.

A smoking gun. This particular one is a smoking gun of something other than string theory, however.

This sounds too good to be true, but it may be true. I still think it's very unlikely, but these smart authors obviously consider it a totally sensible scenario. It's hard to figure out whether they impartially believe that these low-scale intersecting braneworlds are likely, or whether their belief mostly boils down to wishful thinking.

If these ideas were right, we could observe megatons of stringy physics with finite-price colliders!

by Luboš Motl at July 21, 2015 05:13 PM

Clifford V. Johnson - Asymptotia

Ian McKellen on Fresh Air!
I had a major treat last night! While making myself an evening meal I turned on the radio to find Ian McKellen (whose voice and delivery I love so very much I could listen to him slowly reading an arbitrarily long list of random numbers) being interviewed by Dave Davies on NPR's Fresh Air. It was of course delightful, and some of the best radio I've enjoyed in a while (and I listen to a ton of good radio every day, between having either NPR or BBC Radio 4 on most of the time), since it was the medium at its simple best - a splendid conversation with an interesting, thoughtful, well-spoken person. They also played and discussed a number of clips from his work, recent (I've been hugely excited to see Mr. Holmes, just released) and less recent (less well known delights such as Gods and Monsters - you should see it if you have not - and popular material like the first Hobbit film), and spoke at length about his private and public life and the intersection between the two, for example how his coming out as gay in 1988 positively affected his acting, and why... There's so much in that 35 minutes! [...] Click to continue reading this post

by Clifford at July 21, 2015 04:55 PM

Lubos Motl - string vacua and pheno

A new LHC Kaggle contest: discover "\(\tau \to 3 \mu\)" decay
A year ago, the Kaggle machine learning contest server, along with the ATLAS Collaboration at the LHC, organized a contest in which you were asked to determine whether a collision of two protons involved the Higgs boson (which later decayed to a \(\tau^+\tau^-\) pair, with one of the taus leptonic and the other hadronic). To make the story short, there's a new similar contest out there:
Identify an unknown decay phenomenon
Again, you will submit a file in which each "test" collision is labeled as either "interesting" or "uninteresting". But in this case, you may actually discover a phenomenon that is believed not to exist at the LHC, according to the state-of-the-art theory (the Standard Model)!

The Higgs contest was all about the simulated data. They looked real but they were not real and several technicalities were switched off in the simulation, to simplify things. Incredibly enough, here you are going to work with the real data from the relevant detector at the LHC, the LHCb detector: the LHCb collaboration is the co-organizer.

For each test event, you will have to announce a probability \(P_i\) that the event involved the following decay of a tau:\[

\tau^\pm \to \mu^\pm \mu^+\mu^-

\] The tau lepton decayed to three muons. The charge is conserved but the lepton number is not: among the decay products, the negative muon and the positive muon cancel but there's still another muon – and it was created from a tau. \(L_\mu\) and \(L_\tau\) conservation laws were violated.

At the leading orders of the Standard Model, the probability of such a decay is zero. I believe that the actual predicted rate is nonzero but unmeasurably tiny. New physics, however, could allow this "flavor-violating" process to take place at an observable rate.

To show you the unexpected relationships between different TRF blog posts, let me tell you that the blog post right before this one talked about the \(Z'\) boson and this new spin-one particle could actually cause this "so far non-existent" process.

In fact, this option appears in the logo of the contest! The \(\tau^\pm\) lepton decays to one \(\mu^\pm\) and a virtual \(Z'\), and the virtual \(Z'\) decays to \(\mu^+\mu^-\). The first vertex violates the flavor numbers but it's not so shocking for a new heavy particle to couple to leptons in this "non-diagonal" way.

The LHCb contest is harder than the Higgs contest in several respects such as
  1. lower prizes: $7k, $5k, $3k for the winner, silver medal, and bronze medal. It's harder to write difficult programs if you're less financially motivated. But LHCb is smaller than ATLAS so you should have expected that. ;-)
  2. no sharing of scripts: you won't be permitted to share your scripts for this contest so everyone has to start from "scratch". Sadly, you may still use your programs and experience from other projects so the machine learning folks will still have a huge advantage, perhaps a bigger one than in the Higgs contest.
  3. agreement and correlation pre-checks: to make things worse, your submission won't be counted at all if it fails to pass two tests: the agreement test and the correlation test. This feature of the contest, along with the previous one, will make the leaderboard much smaller than in the Higgs contest. The two tests reflect the fact that the dataset is composed of several groups of events – real collisions, simulated realistic ones, and simulated new-physics ones for verification purposes.
  4. larger files to download: in total, you have to download 400 MB worth of ZIP files that decompress to many gigabytes.
  5. messy details of the LHC are kept: lots of the technical details that make the real lives of experimental physicists hard were kept - although translated to machine-learning-friendly conventions. Also, the evaluation metric is more sophisticated - a weighted area under the ROC curve (the graph relating the rate of false positives to the rate of false negatives).
  6. and I forgot about 3 more complications that have scared me...
An ambitious contestant may view all these vices as virtues (or at least some of them). After all, money corrupts and sucks; sharing encourages losers to accidentally mix with the skillful guys; it's good for the submissions to pass some extra tests so that one doesn't coincidentally submit garbage; all these difficulties will keep the leaderboard of true competitors shorter and easier to follow (instead of the 2,000 people in the Higgs contest); I vaguely guess that the final, private leaderboard will be much closer to the preliminary, public one (there was a substantial change in the Higgs contest, sadly for your humble correspondent LOL). The reason for this belief of mine is that the contestants submit a larger number of guesses, they're continuous numbers, and the evaluation metric is a more continuous function of those, too. So the room for overfitting will probably be much lower than in the Higgs contest.
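To get a feel for the ranking metric mentioned above, here is a toy, unweighted area under the ROC curve, computed as the fraction of (signal, background) pairs that the submitted probabilities order correctly. The real contest uses a weighted variant over restricted regions, so this is only a sketch of the idea:

```python
# Toy AUC: the fraction of (signal, background) pairs whose scores
# are ordered correctly (ties count half). The contest's actual
# metric is a weighted AUC; this unweighted version illustrates why
# only the *ranking* of your probabilities matters.

def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    ordered = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return ordered / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]  # one signal scored below a background
print(auc(labels, scores))  # 8 of 9 pairs ordered correctly
```

A perfect submission ranks every signal event above every background event and scores 1.0; random guessing hovers around 0.5.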

So far, there are only 13 people in the leaderboard and it's plausible that the total number will remain very low throughout the contest. If you write a single script that passes the tests at all, chances are high that you will be immediately placed very high in the leaderboard.

At any rate, you have 2 months left to win this contest and proudly announce it to the world on this blog and in The Wall Street Journal. Your solution may be much more useful than in the Higgs case; technicalities weren't eliminated, so your ideas may be used directly. And what you may discover is a genuinely new, surprising process – but one that may actually be already present in the LHCb data (as the hints of a \(Z'\) and flavor-violating Higgs decays suggest).

Good luck.

Correction: the Higgs money was just $7k, $4k, $2k, so this contest actually has better prizes. The money comes from CERN, Intel, two subdivisions of Yandex (a Russian Google competitor), and universities in Zurich, Warwick, Poland, and Russia.

by Luboš Motl at July 21, 2015 01:55 PM

Symmetrybreaking - Fermilab/SLAC

The Standard Model of particle physics

Explore the elementary particles that make up our universe.

The Standard Model is a kind of periodic table of the elements for particle physics. But instead of listing the chemical elements, it lists the fundamental particles that make up the atoms that make up the chemical elements, along with any other particles that cannot be broken down into any smaller pieces.

The complete Standard Model took a long time to build. Physicist J.J. Thomson discovered the electron in 1897, and scientists at the Large Hadron Collider found the final piece of the puzzle, the Higgs boson, in 2012.

Use this interactive model (based on a design by Walter Murch for the documentary Particle Fever) to explore the different particles that make up the building blocks of our universe.





Up Quark
Mass: 2.3 MeV

Up and down quarks make up protons and neutrons, which make up the nucleus of every atom.

Charm Quark
Mass: 1.275 GeV
Discovered at: Brookhaven & SLAC

In 1974, two independent research groups conducting experiments at two independent labs discovered the charm quark, the fourth quark to be found. The surprising discovery forced physicists to reconsider how the universe works at the smallest scale.

Top Quark
Mass: 173.21 GeV

The top quark is the heaviest quark discovered so far. It has about the same weight as a gold atom. But unlike an atom, it is a fundamental, or elementary, particle; as far as we know, it is not made of smaller building blocks.

Down Quark
Mass: 4.8 MeV

Nobody knows why, but a down quark is just a little bit heavier than an up quark. If that weren’t the case, the protons inside every atom would decay and the universe would look very different.

Strange Quark
Mass: 95 MeV
Discovered at: Manchester University

Scientists discovered particles with “strange” properties many years before it became clear that those strange properties were due to the fact that they all contained a new, “strange” kind of quark. Theorist Murray Gell-Mann was awarded the Nobel Prize for introducing the concepts of strangeness and quarks.

Bottom Quark

Mass: 4.18 GeV

This particle is a heavier cousin of the down and strange quarks. Its discovery confirmed that all elementary building blocks of ordinary matter come in three different versions.




Electron

Mass: 0.511 MeV

Discovered at: Cavendish Laboratory

The electron powers the world. It is the lightest particle with an electric charge and a building block of all atoms. The electron belongs to the family of charged leptons.


Muon

Mass: 105.66 MeV

Discovered at: Caltech & Harvard

The muon is a heavier version of the electron. It rains down on us as it is created in collisions of cosmic rays with the Earth’s atmosphere. When it was discovered in 1937, a physicist asked, “Who ordered that?”


Tau

Mass: 1776.82 MeV

The discovery of this particle in 1976 completely surprised scientists. It was the first discovery of a particle of the so-called third generation. It is the third and heaviest of the charged leptons, heavier than both the electron and the muon.

Electron Neutrino

Mass: <2 eV

Discovered at: Savannah River Plant

Measurements and calculations in the 1920s led to the prediction of the existence of an elusive particle without electric charge, the neutrino. But it wasn’t until 1956 that scientists observed the signal of an electron neutrino interacting with other particles. Nuclear reactions in the sun and in nuclear power plants produce electron antineutrinos.

Muon Neutrino

Mass: <0.19 MeV

Neutrinos come in three flavors. The muon neutrino was first discovered in 1962. Neutrino beams from accelerators are typically made up of muon neutrinos and muon antineutrinos.

Tau Neutrino

Mass: <18.2 MeV

Based on theoretical models and indirect observations, scientists expected to find a third generation of neutrino. But it took until 2000 for scientists to develop the technologies to identify the particle tracks created by tau neutrino interactions.





Photon

Mass: <1x10^-18 eV

Discovered at: Washington University

The photon is the only elementary particle visible to the human eye—but only if it has the right energy and frequency (color). It transmits the electromagnetic force between charged particles.

Physicists and their quantum theories treat the photon as a massless particle; so far even the most sophisticated experiments haven’t found any evidence to the contrary.


Gluon

The gluon is the glue that holds quarks together to form protons, neutrons and other particles. It mediates the strong nuclear force.

Z Boson

Mass: 91.1876 GeV

The Z boson is the electrically neutral cousin of the W boson and a heavy relative of the photon. Together, these particles explain the electroweak force.

W Boson

Mass: 80.385 GeV

The W boson is the only force carrier that has an electric charge. It’s essential for weak nuclear reactions: Without it, the sun would not shine.

Higgs Boson

Mass: 125.7 GeV

Discovered in 2012, the Higgs boson was the last missing piece of the Standard Model puzzle. It is a different kind of particle from the other force carriers, and it gives mass to quarks as well as to the W and Z bosons. Whether it also gives mass to neutrinos remains to be discovered.

Launch the interactive model »



by Kurt Riesselmann at July 21, 2015 01:00 PM

ZapperZ - Physics and Physicists

Yoichiro Nambu
This is a bit late, but I will kick myself if I don't acknowledge the passing of Yoichiro Nambu this past week. If you've never heard his name before, he was truly a GIANT in physics, and not just in elementary particle physics. His work transcends any single field of physics and had a significant impact on condensed matter physics.

I wrote an entry on his work when he won the Nobel prize a few years ago. His legacy will live on long after him.


by ZapperZ at July 21, 2015 12:48 PM

July 20, 2015

Sean Carroll - Preposterous Universe

Why is the Universe So Damn Big?

I love reading io9, it’s such a fun mixture of science fiction, entertainment, and pure science. So I was happy to respond when their writer George Dvorsky emailed to ask an innocent-sounding question: “Why is the scale of the universe so freakishly large?”

You can find the fruits of George’s labors at this io9 post. But my own answer went on at sufficient length that I might as well put it up here as well. Of course, as with any “Why?” question, we need to keep in mind that the answer might simply be “Because that’s the way it is.”

Whenever we seem surprised or confused about some aspect of the universe, it’s because we have some pre-existing expectation for what it “should” be like, or what a “natural” universe might be. But the universe doesn’t have a purpose, and there’s nothing more natural than Nature itself — so what we’re really trying to do is figure out what our expectations should be.

The universe is big on human scales, but that doesn’t mean very much. It’s not surprising that humans are small compared to the universe, but big compared to atoms. That feature does have an obvious anthropic explanation — complex structures can only form on in-between scales, not at the very largest or very smallest sizes. Given that living organisms are going to be complex, it’s no surprise that we find ourselves at an in-between size compared to the universe and compared to elementary particles.

What is arguably more interesting is that the universe is so big compared to particle-physics scales. The Planck length, from quantum gravity, is 10^{-33} centimeters, and the size of an atom is roughly 10^{-8} centimeters. The difference between these two numbers is already puzzling — that’s related to the “hierarchy problem” of particle physics. (The size of atoms is fixed by the length scale set by electroweak interactions, while the Planck length is set by Newton’s constant; the two distances are extremely different, and we’re not sure why.) But the universe is roughly 10^{29} centimeters across, which is enormous compared with any scale of microphysics. It’s perfectly reasonable to ask why.
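For concreteness, those orders of magnitude can be tallied with a few lines of Python — a back-of-the-envelope sketch using only the lengths quoted above:

```python
import math

# Length scales quoted above, in centimeters.
planck_length = 1e-33   # from quantum gravity
atom_size = 1e-8        # set by electroweak interactions
universe_size = 1e29    # the observable universe

# Orders of magnitude separating the scales.
hierarchy = math.log10(atom_size / planck_length)
cosmic = math.log10(universe_size / atom_size)

print(f"An atom spans ~10^{hierarchy:.0f} Planck lengths")
print(f"The universe spans ~10^{cosmic:.0f} atomic sizes")
```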

Part of the answer is that “typical” configurations of stuff, given the laws of physics as we know them, tend to be very close to empty space. (“Typical” means “high entropy” in this context.) That’s a feature of general relativity, which says that space is dynamical, and can expand and contract. So you give me any particular configuration of matter in space, and I can find a lot more configurations where the same collection of matter is spread out over a much larger volume of space. So if we were to “pick a random collection of stuff” obeying the laws of physics, it would be mostly empty space. Which our universe is, kind of.

Two big problems with that. First, even empty space has a natural length scale, which is set by the cosmological constant (energy of the vacuum). In 1998 we discovered that the cosmological constant is not quite zero, although it’s very small. The length scale that it sets (roughly, the distance over which the curvature of space due to the cosmological constant becomes appreciable) is indeed the size of the universe today — about 10^26 centimeters. (Note that the cosmological constant itself goes like the inverse square of this length scale — so the question “Why is the cosmological-constant length scale so large?” is the same as “Why is the cosmological constant so small?”)

This raises two big questions. The first is the “coincidence problem”: the universe is expanding, but the length scale associated with the cosmological constant is a constant, so why are they approximately equal today? The second is simply the “cosmological constant problem”: why is the cosmological constant scale so enormously larger than the Planck scale, or even than the atomic scale? It’s safe to say that right now there are no widely accepted answers to either of these questions.

So roughly: the answer to “Why is the universe so big?” is “Because the cosmological constant is so small.” And the answer to “Why is the cosmological constant so small?” is “Nobody knows.”

But there’s yet another wrinkle. Typical configurations of stuff tend to look like empty space. But our universe, while relatively empty, isn’t *that* empty. It has over a hundred billion galaxies, with a hundred billion stars each, and over 10^50 atoms per star. Worse, there are maybe 10^88 particles (mostly photons and neutrinos) within the observable universe. That’s a lot of particles! A much more natural state of the universe would be enormously emptier than that. Indeed, as space expands the density of particles dilutes away — we’re headed toward a much more natural state, which will be much emptier than the universe we see today.

So, given what we know about physics, the real question is “Why are there so many particles in the observable universe?” That’s one angle on the question “Why is the entropy of the observable universe so small?” And of course the density of particles was much higher, and the entropy much lower, at early times. These questions are also ones to which we have no good answers at the moment.

by Sean Carroll at July 20, 2015 06:16 PM

John Baez - Azimuth

The Game of Googol

Here’s a puzzle from a recent issue of Quanta, an online science magazine:

Puzzle 1: I write down two different numbers that are completely unknown to you, and hold one in my left hand and one in my right. You have absolutely no idea how I generated these two numbers. Which is larger?

You can point to one of my hands, and I will show you the number in it. Then you can decide to either select the number you have seen or switch to the number you have not seen, held in the other hand. Is there a strategy that will give you a greater than 50% chance of choosing the larger number, no matter which two numbers I write down?

At first it seems the answer is no. Whatever number you see, the other number could be larger or smaller. There’s no way to tell. So obviously you can’t get a better than 50% chance of picking the hand with the largest number—even if you’ve seen one of those numbers!

But “obviously” is not a proof. Sometimes “obvious” things are wrong!

It turns out that, amazingly, the answer to the puzzle is yes! You can find a strategy to do better than 50%. But the strategy uses randomness. So, this puzzle is a great illustration of the power of randomness.

If you want to solve it yourself, stop now or read Quanta magazine for some clues—they offered a small prize for the best answer:

• Pradeep Mutalik, Can information rise from randomness?, Quanta, 7 July 2015.

Greg Egan gave a nice solution in the comments to this magazine article, and I’ll reprint it below along with two followup puzzles. So don’t look down there unless you want a spoiler.

I should add: the most common mistake among educated readers seems to be assuming that the first player, the one who chooses the two numbers, chooses them according to some probability distribution. Don’t assume that. They are simply arbitrary numbers.

The history of this puzzle

I’d seen this puzzle before—do you know who invented it? On G+, Hans Havermann wrote:

I believe the origin of this puzzle goes back to (at least) John Fox and Gerald Marnie’s 1958 betting game ‘Googol’. Martin Gardner mentioned it in his February 1960 column in Scientific American. Wikipedia mentions it under the heading ‘Secretary problem’. Gardner suggested that a variant of the game was proposed by Arthur Cayley in 1875.

Actually the game of Googol is a generalization of the puzzle that we’ve been discussing. Martin Gardner explained it thus:

Ask someone to take as many slips of paper as he pleases, and on each slip write a different positive number. The numbers may range from small fractions of 1 to a number the size of a googol (1 followed by a hundred 0s) or even larger. These slips are turned face down and shuffled over the top of a table. One at a time you turn the slips face up. The aim is to stop turning when you come to the number that you guess to be the largest of the series. You cannot go back and pick a previously turned slip. If you turn over all the slips, then of course you must pick the last one turned.

So, the puzzle I just showed you is the special case when there are just 2 slips of paper. I seem to recall that Gardner incorrectly dismissed this case as trivial!

There’s been a lot of work on Googol. Julien Berestycki writes:

I heard about this puzzle a few years ago from Sasha Gnedin. He has a very nice paper about this:

• Alexander V. Gnedin, A solution to the game of Googol, Annals of Probability (1994), 1588–1595.

One of the many beautiful ideas in this paper is that it asks what is the best strategy for the guy who writes the numbers! It also cites a paper by Gnedin and Berezowskyi (of oligarchic fame). 

Egan’s solution

Okay, here is Greg Egan’s solution, paraphrased a bit:

Pick some function f : \mathbb{R} \to \mathbb{R} such that:

\displaystyle{ \lim_{x \to -\infty} f(x) = 0 }

\displaystyle{ \lim_{x \to +\infty} f(x) = 1 }

f is monotonically increasing: if x > y then f(x) > f(y)

There are lots of functions like this, for example

\displaystyle{f(x) = \frac{e^x}{e^x + 1} }

Next, pick one of the first player’s hands at random. If the number you are shown is x, compute f(x). Then generate a uniformly distributed random number z between 0 and 1. If z is less than or equal to f(x) guess that x is the larger number, but if z is greater than f(x) guess that the larger number is in the other hand.

The probability of guessing correctly can be calculated as the probability of seeing the larger number initially and then, correctly, sticking with it, plus the probability of seeing the smaller number initially and then, correctly, choosing the other hand.

This is

\frac{1}{2} f(x) + \frac{1}{2} (1 - f(y)) =  \frac{1}{2} + \frac{1}{2} (f(x) - f(y))

This is strictly greater than \frac{1}{2} since x > y so f(x) - f(y) > 0.

So, you have a more than 50% chance of winning! But as you play the game, there’s no way to tell how much more than 50% it is. If the numbers in the other player’s hands are both very large, or both very small, your chance will be just slightly more than 50%.
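Egan’s strategy is easy to check numerically. Here is a quick Monte Carlo sketch in Python; the logistic function is just one admissible choice of f, and the two numbers in the hands are arbitrary values I made up:

```python
import math
import random

def f(x):
    """A monotonically increasing map from R onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def play(x, y, rng):
    """One round: look at a random hand, stick with probability f(seen).
    Returns True if we end up picking the larger of x and y."""
    seen, other = (x, y) if rng.random() < 0.5 else (y, x)
    chosen = seen if rng.random() <= f(seen) else other
    return chosen == max(x, y)

rng = random.Random(42)
trials = 200_000
x, y = 1.0, -0.5  # whatever the first player wrote down
wins = sum(play(x, y, rng) for _ in range(trials))
print(f"win rate: {wins / trials:.3f}")
# Theoretical win rate: 0.5 + 0.5*(f(1.0) - f(-0.5)), strictly above one half.
```

With numbers far out in the tails of f, the simulated win rate creeps down toward 0.5, exactly as the analysis predicts.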

Followup puzzles

Here are two more puzzles:

Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.

Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?

But watch out—here come Egan’s solutions to those!


Egan writes:

Here are my answers to your two puzzles on G+.

Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.

Answer: If we adopt a deterministic strategy, that means there is a function S: \mathbb{R} \to \{0,1\} that tells us whether or not we stick with the number x when we see it. If S(x)=1 we stick with it, if S(x)=0 we swap it for the other number.

If the two numbers are x and y, with x > y, then the probability of success will be:

P = 0.5 + 0.5(S(x)-S(y))

This is exactly the same as the formula we obtained when we stuck with x with probability f(x), but we have specialised to functions S valued in \{0,1\}.

We can only guarantee a more than 50% chance of choosing the larger number if S is monotonically increasing everywhere, i.e. S(x) > S(y) whenever x > y. But this is impossible for a function valued in \{0,1\}. To prove this, define x_0 to be any number in [1,2] such that S(x_0)=0; such an x_0 must exist, otherwise S would be constant on [1,2] and hence not monotonically increasing. Similarly define x_1 to be any number in [-2,-1] such that S(x_1) = 1. We then have x_0 > x_1 but S(x_0) < S(x_1).
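To make this concrete, here is a small Python sketch (the threshold rule is my own illustrative example, not anything from the post): for a deterministic rule the success probability is exactly 0.5 + 0.5(S(x) − S(y)), so whenever the first player’s two numbers sit on the same side of the threshold, we win exactly half the time.

```python
# Illustrative sketch: any deterministic rule S: R -> {0,1} has success
# probability 0.5 + 0.5*(S(x) - S(y)) for hidden numbers x > y.
# A threshold rule is defeated whenever both numbers fall on the same side.

def threshold_strategy(t):
    """Stick with the number we see iff it is at least t."""
    return lambda x: 1 if x >= t else 0

def win_probability(S, x, y):
    """Exact success probability for hidden numbers x > y."""
    assert x > y
    return 0.5 + 0.5 * (S(x) - S(y))

S = threshold_strategy(0.0)
print(win_probability(S, 5.0, 1.0))    # both above the threshold: 0.5
print(win_probability(S, -1.0, -5.0))  # both below the threshold: 0.5
print(win_probability(S, 1.0, -1.0))   # straddling the threshold: 1.0
```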

Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?

Answer: As Philip Gibbs noted, a deterministic pseudo-random number generator is still deterministic. Using a specific sequence of algorithmically random bits

(b_1, b_2, \dots )

to construct a number z between 0 and 1 means z takes on the specific value:

z_0 = \sum_i b_i 2^{-i}

So rather than sticking with x with probability f(x) for our monotonically increasing function f, we end up always sticking with x if z_0 \le f(x), and always swapping if z_0 > f(x). This is just using a function S:\mathbb{R} \to \{0,1\} as in Puzzle 2, with:

S(x) = 0 if x < f^{-1}(z_0)

S(x) = 1 if x \ge f^{-1}(z_0)

So all the same consequences as in Puzzle 2 apply, and we cannot guarantee a more than 50% chance of choosing the larger number.

Puzzle 3 emphasizes the huge gulf between ‘true randomness’, where we only have a probability distribution of numbers z, and the situation where we have a specific number z_0, generated by any means whatsoever.

We could generate z_0 using a pseudorandom number generator, radioactive decay of atoms, an oracle whose randomness is certified by all the Greek gods, or whatever. No matter how randomly z_0 is generated, once we have it, we know there exist choices for the first player that will guarantee our defeat!

This may seem weird at first, but if you think about simple games of luck you’ll see it’s completely ordinary. We can have a more than 50% chance of winning such a game even if for any particular play we make the other player has a move that ensures our defeat. That’s just how randomness works.

by John Baez at July 20, 2015 07:55 AM

July 19, 2015

Ben Still - Neutrino Blog

Pentaquark Series 1: What Are Quarks?
This is the first in a series of posts I will release over the next two weeks aimed at covering the physics behind Pentaquarks, the history of "discovery", and the implications of the latest results from LHCb. We start off today by first answering the question:

What Are Quarks?

Quarks are building blocks that cannot be broken into smaller things.

Quarks are a group of fundamental particles that are indivisible, meaning that they cannot be broken into smaller pieces. They are building blocks that combine in groups to make up a whole zoo of other (composite) particles. They were first thought up by physicists Murray Gell-Mann and George Zweig while attempting to mathematically explain the vast array of new particles popping up in experiments throughout the 1950s and 1960s.
Debris that results from smashing protons into each other was seen in experiments to be a whole lot messier than the debris from two electrons colliding head-on. Gell-Mann and others reasoned that this would happen if the proton were not a single entity like the electron but instead, like a bag of groceries, contained multiple particles within itself.

The menagerie of particles being discovered each week at particle accelerators could, in Gell-Mann's model, all be explained as different composites of just a few types of truly fundamental particles. The number that seemed to fit the data in most cases was three, and Gell-Mann took the spelling of his 'kwork' from a passage in James Joyce's 'Finnegans Wake': "Three quarks for Muster Mark". Proof of Gell-Mann's model came when a particle he predicted in 1962 (which he called the Ω-) was seen in an experiment at Brookhaven National Lab in the US in 1964. Gell-Mann received the Nobel Prize in Physics in 1969 for this work, which was the birth of the quark.

We know today that the proton is made up of three quarks: two up quarks and one down quark. The naming of 'up' and 'down' shows that some poetry disappeared in naming the individual types of quark! The up and down quarks have the lightest masses of all of the quarks (they would weigh the least if we could practically weigh something so small!). The fact that they are so light also means they are the most stable of all of the quarks. Experiment has shown us that the heavier a particle is, the shorter its lifespan. Just like high-fashion models, particles constantly want to become as light as possible.

Protons and neutrons are each made from three quark building blocks.

A neutron is also composed of a group of three quarks: one up quark and two down quarks. The lifetime of a neutron sitting by itself is limited because, although moderately stable (metastable), it can still become a lighter proton. The change from neutron to proton (plus an electron and an antineutrino) is known as radioactive beta decay. Experiments around the world have been looking closely at protons to see if they, like the neutron, change into something lighter. To date not a single experiment has seen a proton decay into anything else, which suggests that the proton is immortal and certainly the most stable composite particle we know of.
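The energy bookkeeping behind beta decay is simple enough to check. The sketch below uses standard particle masses; these numbers are my addition, not quoted in the post:

```python
# Why a free neutron can decay but a free proton cannot (energetically).
# Masses in MeV/c^2, standard values (not from the post itself).
m_neutron = 939.565
m_proton = 938.272
m_electron = 0.511

# Energy released when n -> p + e + antineutrino.
q_value = m_neutron - (m_proton + m_electron)
print(f"Neutron beta decay releases about {q_value:.3f} MeV")

# The proton is lighter than the neutron, so the reverse process would
# need energy from outside and cannot happen for a proton on its own.
```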

The up and down quarks are part of what is known as the first generation of fundamental particles. For reasons we do not know, Nature has presented us with two more generations. The only difference between particles in each generation is their mass. Generation 1 particles are the lightest; generation 2 particles are heavier than generation 1 but lighter than generation 3, which are the heaviest. All of the other properties of the particles, such as the way they feel forces and their electric charge, seem to remain the same. The heavier versions of the down quark are called the strange quark (generation 2) and the bottom quark (generation 3). The heavier versions of the up quark are called the charm quark (generation 2) and the top quark (generation 3).

Heavy particles are made in particle accelerators like the LHC thanks to Einstein's most famous equation, E=mc2, which tells us that the mass (m) of new particles can be created from lots of energy (E). The heavier the particles we want to make, the higher the energy to which we have to accelerate protons in our accelerator before smashing them together. Remember I said heavy particles are unstable; it turns out that the heavier they get, the more unstable they become, which means any heavy particle made with quarks from generations 2 or 3 is usually not around for very long.
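As a rough illustration of that E=mc2 accounting (the masses below are standard values, not quoted in the post; quarks are produced in particle-antiparticle pairs, so I take twice the mass as the minimum):

```python
# E = mc^2 in accelerator units: masses are quoted in GeV/c^2, so the
# energy needed to create a particle at rest is numerically its mass.
# Standard values, my addition rather than numbers from the post.
m_top = 173.21   # GeV/c^2, the heaviest quark
m_up = 0.0023    # GeV/c^2, the lightest quark

# Quarks appear in particle-antiparticle pairs, so the minimum collision
# energy that can create top quarks is twice the top mass.
e_min_top_pair = 2 * m_top
print(f"Minimum energy for a top quark pair: {e_min_top_pair:.1f} GeV")
print(f"A top quark is roughly {m_top / m_up:,.0f} times heavier than an up quark")
```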

Next Post: Rule of Three - Why are there not a different number of quarks in protons and other similar particles?

by Ben at July 19, 2015 09:51 PM

The n-Category Cafe

Category Theory 2015

Just a quick note: you can see lots of talk slides here:

Category Theory 2015, Aveiro, Portugal, June 14-19, 2015.

The Giry monad, tangent categories, Hopf monoids in duoidal categories, model categories, topoi… and much more!

by john at July 19, 2015 08:25 AM

The n-Category Cafe

What's so HoTT about Formalization?

In my last post I promised to follow up by explaining something about the relationship between homotopy type theory (HoTT) and computer formalization. (I’m getting tired of writing “publicity”, so this will probably be my last post for a while in this vein — for which I expect that some readers will be as grateful as I).

As a potential foundation for mathematics, HoTT/UF is a formal system existing at the same level as set theory (ZFC) and first-order logic: it’s a collection of rules for manipulating syntax, into which we can encode most or all of mathematics. No such formal system requires computer formalization, and conversely any such system can be used for computer formalization. For example, the HoTT Book was intentionally written to make the point that HoTT can be done without a computer, while the Mizar project has formalized huge amounts of mathematics in a ZFC-like system.

Why, then, does HoTT/UF seem so closely connected to computer formalization? Why do the overwhelming majority of publications in HoTT/UF come with computer formalizations, when such is still the exception rather than the rule in mathematics as a whole? And why are so many of the people working on HoTT/UF computer scientists or advocates of computer formalization?

To start with, note that the premise of the third question partially answers the first two. If we take it as a given that many homotopy type theorists care about computer formalization, then it’s only natural that they would be formalizing most of their papers, creating a close connection between the two subjects in people’s minds.

Of course, that forces us to ask why so many homotopy type theorists are into computer formalization. I don’t have a complete answer to that question, but here are a few partial ones.

  1. HoTT/UF is built on type theory, and type theory is closely connected to computers, because it is the foundation of typed functional programming languages like Haskell, ML, and Scala (and, to a lesser extent, less-functional typed programming languages like Java, C++, and so on). Thus, computer proof assistants built on type theory are well-suited to formal proofs of the correctness of software, and thus have received a lot of work from the computer science end. Naturally, therefore, when a new kind of type theory like HoTT comes along, the existing type theorists will be interested in it, and will bring along their predilection for formalization.

  2. HoTT/UF is by default constructive, meaning that we don’t need to assert the law of excluded middle or the axiom of choice unless we want to. Of course, most or all formal systems have a constructive version, but with type theories the constructive version is the “most natural one” due to the Curry-Howard correspondence. Moreover, one of the intriguing things about HoTT/UF is that it allows us to prove certain things constructively that in other systems require LEM or AC. Thus, it naturally attracts attention from constructive mathematicians, many of whom are interested in computable mathematics (i.e. when something exists, can we give an algorithm to find it?), which is only a short step away from computer formalization of proofs.

  3. One could, however, try to make similar arguments from the other side. For instance, HoTT/UF is (at least conjecturally) an internal language for higher topos theory and homotopy theory. Thus, one might expect it to attract an equal influx of higher topos theorists and homotopy theorists, who don’t care about computer formalization. Why hasn’t this happened? My best guess is that at present the traditional 1-topos theorists seem to be largely disjoint from the higher topos theorists. The former care about internal languages, but not so much about higher categories, while for the latter it is reversed; thus, there aren’t many of us in the intersection who care about both and appreciate this aspect of HoTT. But I hope that over time this will change.

  4. Another possible reason why the influx from type theory has been greater is that HoTT/UF is less strange-looking to type theorists (it’s just another type theory) than to the average mathematician. In the HoTT Book we tried to make it as accessible as possible, but there are still a lot of tricky things about type theory that one seemingly has to get used to before being able to appreciate the homotopical version.

  5. Another sociological effect is that Vladimir Voevodsky, who introduced the univalence axiom and is a Fields medalist with “charisma”, is also a very vocal and visible advocate of computer formalization. Indeed, his personal programme that he calls “Univalent Foundations” is to formalize all of mathematics using a HoTT-like type theory.

  6. Finally, many of us believe that HoTT is actually the best formal system extant for computer formalization of mathematics. It shares most of the advantages of type theory, such as the above-mentioned close connection to programming, the avoidance of complicated ZF-encodings for even basic concepts like natural numbers, and the production of small easily-verifiable “certificates” of proof correctness. (The advantages of some type theories that HoTT doesn’t yet share, like a computational interpretation, are work in progress.) But it also rectifies certain infelicitous features of previously existing type theories, by specifying what equality of types means (univalence), including extensionality for functions and truth values, providing well-behaved quotient types (HITs), and so on, making it more comfortable for ordinary mathematicians. (I believe that historically, this was what led Voevodsky to type theory and univalence in the first place.)

There are probably additional reasons why HoTT/UF attracts more people interested in computer formalization. (If you can think of others, please share them in the comments.) However, there is more to it than this, as one can guess from the fact that even people like me, coming from a background of homotopy theory and higher category theory, tend to formalize a lot of our work on HoTT. Of course there is a bit of a “peer pressure” effect: if all the other homotopy type theorists formalize their papers, then it starts to seem expected in the subject. But that’s far from the only reason; here are some “real” ones.

  1. Computer formalization of synthetic homotopy theory (the “uniquely HoTT” part of HoTT/UF) is “easier”, in certain respects, than most computer formalization of mathematics. In particular, it requires less infrastructure and library support, because it is “closer to the metal” of the underlying formal system than is usual for actually “interesting” mathematics. Thus, formalizing it still feels more like “doing mathematics” than like programming, making it more attractive to a mathematician. You really can open up a proof assistant, load up no pre-written libraries at all, and in fairly short order be doing interesting HoTT. (Of course, this doesn’t mean that there is no value in having libraries and in thinking hard about how best to design those libraries, just that the barrier to entry is lower.)

  2. Precisely because, as mentioned above, type theory is hard to grok for a mathematician, there is a significant benefit to using a proof assistant that will automatically tell you when you make a mistake. In fact, messing around with a proof assistant is one of the best ways to learn type theory! I posted about this almost exactly four years ago.

  3. I think the previous point goes double for homotopy type theory, because it is an unfamiliar new world for almost everyone. The types of HoTT/UF behave kind of like spaces in homotopy theory, but they have their own idiosyncrasies that it takes time to develop an intuition for. Playing around with a proof assistant is a great way to develop that intuition. It’s how I did it.

  4. Moreover, because that intuition is unique and recently developed for all of us, we may be less confident in the correctness of our informal arguments than we would be in classical mathematics. Thus, even an established “homotopy type theorist” may be more likely to want the comfort of a formalization.

  5. Finally, there is an additional benefit to doing mathematics with a proof assistant (as opposed to formalizing mathematics that you’ve already done on paper), which I think is particularly pronounced for type theory and homotopy type theory. Namely, the computer always tells you what you need to do next: you don’t need to work it out for yourself. A central part of type theory is inductive types, and a central part of HoTT is higher inductive types; both of which are characterized by an induction principle (or “eliminator”) which says that in order to prove a statement of the form “for all <semantics>x:W<annotation encoding="application/x-tex">x:W</annotation></semantics>, <semantics>P(x)<annotation encoding="application/x-tex">P(x)</annotation></semantics>”, it suffices to prove some number of other statements involving the predicate <semantics>P<annotation encoding="application/x-tex">P</annotation></semantics>. The most familiar example is induction on the natural numbers, which says that in order to prove “for all <semantics>n<annotation encoding="application/x-tex">n\in \mathbb{N}</annotation></semantics>, <semantics>P(n)<annotation encoding="application/x-tex">P(n)</annotation></semantics>” it suffices to prove <semantics>P(0)<annotation encoding="application/x-tex">P(0)</annotation></semantics> and “for all <semantics>n<annotation encoding="application/x-tex">n\in \mathbb{N}</annotation></semantics>, if <semantics>P(n)<annotation encoding="application/x-tex">P(n)</annotation></semantics> then <semantics>P(n+1)<annotation encoding="application/x-tex">P(n+1)</annotation></semantics>”. 
When using proof by induction, you need to isolate $P$ as a predicate on $n$, specialize to $n=0$ to check the base case, write down $P(n)$ as the inductive hypothesis, then replace $n$ by $n+1$ to find what you have to prove in the induction step. The students in an intro to proofs class have trouble with all of these steps, but professional mathematicians have learned to do them automatically. However, for a general inductive or higher inductive type, there might instead be four, six, ten, or more separate statements to prove when applying the induction principle, many of which involve more complicated transformations of $P$, and it’s common to have to apply several such inductions in a nested way. Thus, when doing HoTT on paper, a substantial amount of time is sometimes spent simply figuring out what has to be proven. But a proof assistant equipped with a unification algorithm can do that for you automatically: you simply say “apply induction for the type $W$” and it immediately decides what $P$ is and presents you with a list of the remaining goals that have to be proven.
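As a small illustration of the point above, here is a sketch in Lean 4 (the theorem name `zero_add'` is chosen here just to avoid clashing with the library's own lemma): invoking the `induction` tactic on a natural number makes the proof assistant work out the predicate $P$ itself and hand back the base case and inductive step as the remaining goals.

```
-- Proving "for all n, 0 + n = n" by induction on n.
-- The `induction` tactic infers the motive P(n) := (0 + n = n)
-- and presents the two remaining goals automatically.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                           -- base case: P(0)
  | succ k ih => rw [Nat.add_succ, ih]    -- inductive step: P(k) → P(k+1)
```

For an inductive type with many constructors, the same single invocation of `induction` would produce the full list of goals, one per constructor, which is exactly the bookkeeping that takes time on paper.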

To summarize this second list, then, I think it’s fair to say that compared to formalizing traditional mathematics, formalizing HoTT tends to give more benefit at lower cost. However, that cost is still high, especially when you take into account the time spent learning to use a proof assistant, which is often not the most user-friendly of software. This is why I always emphasize that HoTT can perfectly well be done without a computer, and why we wrote the book the way we did.

by shulman at July 19, 2015 08:19 AM




Last updated:
August 02, 2015 12:06 PM
All times are UTC.
