# Particle Physics Planet

## April 26, 2017

### Christian P. Robert - xi'an's og

ABC postdoc in Oslo

Jukka Corander sent me the announcement that he is opening a 3-year postdoctoral position at the University of Oslo, to work with him and his team on ABC projects. This sounds like quite an exciting offer, plus it gives the nominee the opportunity to live in the most enjoyable city of Oslo for several years in fairly comfy conditions! The deadline is May 31. (If I were at a stage of my career where applying made sense, I would definitely apply. Not even waiting for the outcome of the French elections on May 7!)

Filed under: Kids, Mountains, pictures, Travel, University life Tagged: ABC, French elections, Norway, Oslo, postdoc, postdoctoral position, Scandinavia, UiO, University of Oslo

### Emily Lakdawalla - The Planetary Society Blog

The first Space Launch System flight will probably be delayed
NASA's new heavy lift rocket is currently scheduled to launch the Orion spacecraft on a test flight next year. But all signs are pointing to a probable delay.

### Peter Coles - In the Dark

The STFC ‘Breadth of Programme’ Exercise

I suddenly realized this morning that there was a bit of community service I meant to do when I got back from vacations, namely to pass on to astronomers and particle physicists a link to the results of the latest Programmatic Review (actually the ‘Breadth of Programme’ Exercise) produced by the Science and Technology Facilities Council.

It’s a lengthy document, running to 89 pages, but it’s a must-read if you’re in the UK and work in an area of science under the remit of STFC. There was considerable uncertainty about the science funding situation anyway because of Brexit, and that has increased dramatically because of the impending General Election, which will probably kick quite a few things into the long grass, quite possibly delaying the planned reorganization of the research councils. Nevertheless, this document is well worth reading as it will almost certainly inform key decisions that will have to be made whatever happens in the broader landscape. With ‘flat cash’ being the most optimistic scenario, increasing inflation means that some savings will have to be found, so belts will inevitably have to be tightened. Moreover, there are strong strategic arguments that some areas should grow, rather than remain static, which means that others will have to shrink to compensate.

There are 29 detailed recommendations and I can’t discuss them all here, but here are a couple of tasters:

The E-ELT is the European Extremely Large Telescope, in case you didn’t know.

Another one that caught my eye is this:

I’ve never really understood why gravitational-wave research came under ‘Particle Astrophysics’ anyway, but given their recent discovery by Advanced LIGO there is a clear case for further investment in future developments, especially because the UK community is currently rather small.

Anyway, do read the document and, should you be minded to do so, please feel free to comment on it below through the comments box.

## April 25, 2017

### Christian P. Robert - xi'an's og

marginal likelihoods from MCMC

A new arXiv entry on ways to approximate marginal likelihoods based on MCMC output, by astronomers (apparently). With an application to the 2015 Planck satellite analysis of cosmic microwave background radiation data, which reminded me of our joint work with the cosmologists of the Paris Institut d’Astrophysique ten years ago. In the literature review, the authors miss several surveys on the approximation of those marginals, including our San Antonio chapter, on Bayes factors approximations, but mention our ABC survey somewhat inappropriately since it is not advocating the use of ABC for such a purpose. (They mention as well variational Bayes approximations, INLA, powered likelihoods, if not nested sampling.)

The proposal of this paper is to identify the marginal m [actually denoted a there] as the normalising constant of an unnormalised posterior density. And to do so the authors estimate the posterior by a non-parametric approach, namely a k-nearest-neighbour estimate. With the additional twist of producing a sort of Bayesian posterior on the constant m. [And the unusual notion of number density, used for the unnormalised posterior.] The Bayesian estimation of m relies on a Poisson sampling assumption on the k-nearest neighbour distribution. (Sort of, since k is actually fixed, not random.)

If the above sounds confusing and imprecise, it is because I am myself rather mystified by the whole approach and find it difficult to see the point of this alternative. The Bayesian numerics do not seem to serve any purpose other than producing a MAP estimate. And using a non-parametric density estimate opens a Pandora’s box of difficulties, the most obvious one being the curse of dimensionality. This reminded me of the commented paper of Delyon and Portier, where they achieve super-efficient convergence when using a kernel estimator, but at a considerable cost and with a similar sensitivity to dimension.
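For readers who want the mechanics: the identity being exploited is that, for an unnormalised posterior q(θ) with normalising constant m, the posterior density satisfies p(θ) = q(θ)/m, so m can be estimated as q(θ)/p̂(θ) from any density estimate p̂ built on the MCMC output. Here is a minimal sketch of the k-nearest-neighbour variant; this illustrates only the generic estimator, not the paper’s Poisson/Bayesian construction, and the function names are my own:

```python
import numpy as np
from scipy.special import gammaln

def log_knn_density(samples, points, k=50):
    """Log of the k-NN density estimate at each row of `points`,
    built from MCMC draws in `samples` (shape (n, d))."""
    n, d = samples.shape
    # Euclidean distance from every evaluation point to every sample
    dist = np.linalg.norm(points[:, None, :] - samples[None, :, :], axis=2)
    r_k = np.sort(dist, axis=1)[:, k - 1]  # distance to the k-th neighbour
    # log volume of a d-dimensional ball of radius r_k
    log_vol = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1) + d * np.log(r_k)
    return np.log(k) - np.log(n) - log_vol  # density ~ k / (n * volume)

def log_marginal(log_q, samples, points, k=50):
    """Point estimate of log m via log m = log q(theta) - log p-hat(theta),
    averaged over a few evaluation points for stability."""
    return np.mean(log_q(points) - log_knn_density(samples, points, k))
```

As a sanity check, for a standard normal “posterior” with unnormalised density q(x) = exp(−x²/2), the true constant is √(2π), and the sketch recovers log √(2π) ≈ 0.919 up to k-NN noise; in higher dimensions the estimate degrades quickly, which is exactly the curse-of-dimensionality worry.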

Filed under: Books, pictures, Statistics, University life Tagged: ABC, arXiv, Bayesian Methods in Cosmology, curse of dimensionality, evidence, INLA, k-nearest neighbour, marginal likelihood, nested sampling, Planck experiment, San Antonio, satellite

### Symmetrybreaking - Fermilab/SLAC

Archaeology meets particle physics

Undergraduates search for hidden tombs in Turkey using cosmic-ray muons.

While the human eye is an amazing feat of evolution, it has its limitations. What we can see tells only a sliver of the whole story. Often, it is what is on the inside that counts.

To see a broken femur, we pass X-rays through a leg and create an image on a metal film. Archaeologists can use a similar technique to look for ancient cities buried in hillsides. Instead of using X-rays, they use muons, particles that are constantly raining down on us from the upper atmosphere.

Muons are heavy cousins of the electron and are produced when single-atom meteorites called cosmic rays collide with the Earth’s atmosphere. Hold your hand up and a few muons will pass through it every second.

Physics undergraduates at Texas Tech University, led by Professors Nural Akchurin and Shuichi Kunori, are currently developing detectors that will act like an X-ray film and record the patterns left behind by muons as they pass through hillsides in Turkey. Archaeologists will use these detectors to map the internal structure of hills and look for promising places to dig for buried archaeological sites.

Like X-rays, muons are readily absorbed by thick, dense materials but can pass through lighter ones. So they can be stopped by rock but move easily through the air in a buried cavern.

The detector under development at Texas Tech will measure the number of cosmic-ray muons that make it through the hill. An unexpected excess could mean that there’s a hollow subterranean structure facilitating the muons’ passage.
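That counting logic can be made concrete with a deliberately crude model. Real muon transmission depends on the cosmic-ray energy spectrum and the rock density rather than a single attenuation length, so the 50-metre figure below is purely illustrative:

```python
import math

ATTENUATION_LENGTH_M = 50.0  # toy attenuation length in rock -- illustrative only

def transmission(rock_path_m):
    # Toy exponential attenuation: fraction of muons surviving a given
    # thickness of rock. Real transmission curves are not exponential.
    return math.exp(-rock_path_m / ATTENUATION_LENGTH_M)

def expected_counts(n_incident, hill_path_m, void_len_m=0.0):
    # A void along the line of sight removes rock from the muons' path,
    # so more muons survive to reach the detector behind the hill.
    return n_incident * transmission(hill_path_m - void_len_m)

solid = expected_counts(1_000_000, hill_path_m=100.0)
cavity = expected_counts(1_000_000, hill_path_m=100.0, void_len_m=5.0)
# In this toy model a 5 m void boosts the count by a factor exp(5/50), about 10%.
```

Comparing the measured count along each line of sight with the expectation for solid rock is, in essence, how the excess reveals a candidate tomb.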

“We’re looking for a void, or a tomb, that the archaeologists can investigate to learn more about the history of the people that were buried there,” says Hunter Cymes, one of the students working on the project.

The technique of using cosmic muons to probe for subterranean structures was developed almost half a century ago. Luis Alvarez, a Nobel Laureate in Physics, first used this technique to look inside the Second Pyramid of Chephren, one of the three great pyramids of Egypt. Since then, it has been used for many different applications, including searching for hidden cavities in other pyramids and estimating the lava content of volcanoes.

According to Jason Peirce, another undergraduate student working on this project, those previous applications had resolutions of about 10 meters. “We’re trying to make that smaller, somewhere in the range of 2 to 5 meters, to find a smaller room than what’s previously been done.”

They hope to accomplish this by using an array of scintillators, a type of plastic that can be used to detect particles. “When a muon passes through it, it absorbs some of that energy and creates light,” says Cymes. That light can then be detected and measured, and the data stored for later analysis.

Unfortunately, muons with enough energy to travel through a hill and reach the detector are relatively rare, meaning that the students will need to develop robust detectors which can collect data over a long period of time. Just like it’s hard to see in dim light, it’s difficult to reconstruct the internal structure of a hill with only a handful of muons.

Aashish Gupta, another undergraduate working on this project, is currently developing a simulation of cosmic-ray muons, the hill, and the detector prototype. The group hopes to use the simulation to guide their design process by predicting how well different designs will work and how much data they will need to take.

As Peirce describes it, they are “getting some real, hands-on experience putting this together while also keeping in mind that we need to have some more of these results from the simulation to put together the final design.”

They hope to finish building the prototype detector within the next few months and are optimistic about having a final design by next fall.

### Peter Coles - In the Dark

One Hundred Years of Ella Fitzgerald

This morning Radio 3 reminded me that the great jazz singer Ella Fitzgerald was born exactly one hundred years ago today, on April 25th 1917. She passed away in 1996, but her legacy lives on through a vast array of wonderful recordings. I couldn’t resist marking the anniversary of her birth with this track, which I hope brings a smile to your face as it does to mine every time I listen to it. This track won her a Grammy Award for the best vocal performance that year, which is pretty remarkable because she forgot the lyrics to the song! Besides this, there’s a lot of other great stuff on the album Ella in Berlin (including more improvised lyrics and some sensational scat singing on How High The Moon), so if you’re looking to start an Ella Fitzgerald collection this is a great place to start.

Mack the Knife had been a huge hit for Louis Armstrong in 1956 and then again for Bobby Darin in 1959. By all accounts Ella was prevailed upon to add it to her repertoire for live concerts. She wasn’t that keen but  reluctantly agreed. Obviously however she wasn’t so  enthusiastic as to actually learn the words! On the other hand, when you have a wonderful voice and an amazing musical imagination, who needs the words? Ella not only made up some lyrics herself on the fly, but also threw in a rather wonderful Louis Armstrong impersonation for good measure. Enjoy!

### Emily Lakdawalla - The Planetary Society Blog

Curiosity update, sols 1600-1674: The second Bagnold Dunes campaign
The four-stop dune science campaign offered the engineers some time to continue troubleshooting the drill without any pressure to use it for science. They scooped sand at a site called Ogunquit Beach but couldn't complete the planned sample activity because of new developments in the drill inquiry. The rover has now headed onward toward Vera Rubin Ridge.

### Peter Coles - In the Dark

Newcastle Up!

I had a very full first day back at work after my holiday yesterday, which carried on after I had my dinner. So engrossed was I in a research problem that I completely forgot that there was an important football match in the Championship last night. It was only when I finally downed tools – ‘tools’ in this case being pencil and paper – at about 11.30pm that I remembered that I should check the football results.

Last night’s game between Newcastle United and Preston North End (of the Midlands) finished 4-1 in favour of the home side (the one from the North). That result, combined with defeats on Saturday for Huddersfield and Reading in other games of the antepenultimate round of Championship matches, means that Newcastle have now secured promotion to the Premiership next season.

After last night’s match the top of the Championship table looks like this:

You will see that the maximum points total Reading can now reach is 85, Sheffield Wednesday 84, and Huddersfield Town (who have a game in hand) can only get to 87, so Newcastle are guaranteed to finish no lower than second place.

At one point it looked like Newcastle United were going to take the Championship title by some margin, but they faltered in the last few games while Brighton & Hove Albion kept up the pressure. It even looked at one point as if Newcastle might fall into the playoff pack, but fortunately none of the chasing teams put together a strong enough run of games to catch them.

It’s anyone’s guess who will get the third promotion spot through the playoffs. Fulham and Sheffield Wednesday are both on good runs, but picking a winner out of those two, Huddersfield, Reading (and possibly Leeds) is very difficult.

Brighton look like being Champions now. They lost 2-0 on Friday away at Norwich City, when a win would have secured top spot, but they still only need 3 points to finish Champions. Mathematically, Newcastle could catch them but I’d say it is rather unlikely.

I do have worries about how well Newcastle might fare in the Premiership next season. Their home form has not been as good as one would have hoped this season, despite the fact that they regularly attract crowds in excess of 50,000 to St James’s Park.  Sometimes it seems that this increases the level of anxiety rather than spurring the team on. Moreover, I don’t think the squad has the quality needed to prosper in the top flight. The demands of the Championship are quite different from those of the Premier League. Manager Rafael Benitez knows this very well,  so I hope he is given the resources he needs to meet the new challenge. We’ll see.

Coincidentally, Newcastle United are on their travels on Friday for a match against Cardiff City…..

## April 24, 2017

### Christian P. Robert - xi'an's og

Paris-Dauphine in Nature

Since this is an event unlikely to occur very frequently, let me point out that Université Paris-Dauphine got a nominal mention in Nature two weeks ago, through an article covering Yves Meyer’s recent Abel Prize and his work on wavelets, carried out at a collection of French institutions, including Paris-Dauphine, where he was a professor in the maths department (CEREMADE) from 1985 till 1996. (Except for including a somewhat distantly related picture of an oscilloscope and a mention of the Higgs boson, the Nature article is quite nice!)

Filed under: Statistics Tagged: Abel Prize, bois de Boulogne, France, Higgs boson, La Défense, Nature, Paris, Université Paris Dauphine, wavelets, Yves Meyer

### Symmetrybreaking - Fermilab/SLAC

A tiny droplet of the early universe?

Particles seen by the ALICE experiment hint at the formation of quark-gluon plasma during proton-proton collisions.

About 13.8 billion years ago, the universe was a hot, thick soup of quarks and gluons—the fundamental components that eventually combined into protons, neutrons and other hadrons.

Scientists can produce this primitive particle soup, called quark-gluon plasma, in collisions between heavy ions. But for the first time, physicists on an experiment at the Large Hadron Collider have observed evidence of its creation in collisions between protons as well.

The LHC collides protons during the majority of its run time. This new result, published in Nature Physics by the ALICE collaboration, challenges long-held notions about the nature of those proton-proton collisions and about possible phenomena that were previously missed.

“Many people think that protons are too light to produce this extremely hot and dense plasma,” says Livio Bianchi, a postdoc at the University of Houston who worked on this analysis. “But these new results are making us question this assumption.”

Scientists at the LHC and at the US Department of Energy’s Brookhaven National Laboratory’s Relativistic Heavy Ion Collider, or RHIC, have previously created quark-gluon plasma in gold-gold and lead-lead collisions.

In the quark-gluon plasma, mid-sized quarks—such as strange quarks—freely roam and eventually bond into bigger, composite particles (similar to the way quartz crystals grow within molten granite as it slowly cools). These hadrons are ejected as the plasma fizzles out and serve as a telltale signature of their soupy origin. ALICE researchers noticed numerous proton-proton collisions emitting strange hadrons at an elevated rate.

“In proton collisions that produced many particles, we saw more hadrons containing strange quarks than predicted,” says Rene Bellwied, a professor at the University of Houston. “And interestingly, we saw an even bigger gap between the predicted number and our experimental results when we examined particles containing two or three strange quarks.”

From a theoretical perspective, a proliferation of strange hadrons is not enough to definitively confirm the existence of quark-gluon plasma. Rather, it could be the result of some other unknown processes occurring at the subatomic scale.

“This measurement is of great interest to quark-gluon-plasma researchers who wonder how a possible QGP signature can arise in proton-proton collisions,” says Urs Wiedemann, a theorist at CERN. “But it is also of great interest for high energy physicists who have never encountered such a phenomenon in proton-proton collisions.”

Earlier research at the LHC found that the spatial orientation of particles produced during some proton-proton collisions mirrored the patterns created during heavy-ion collisions, suggesting that these two types of collisions may have more in common than originally predicted. Scientists working on the ALICE experiment will need to explore multiple characteristics of these strange proton-proton collisions before they can confirm whether they are really seeing a minuscule droplet of the early universe.

“Quark-gluon plasma is a liquid, so we also need to look at the hydrodynamic features,” Bianchi says. “The composition of the escaping particles is not enough on its own.”

This finding comes from data collected during the first run of the LHC, between 2009 and 2013. More research over the next few years will help scientists determine whether the LHC can really make quark-gluon plasma in proton-proton collisions.

“We are very excited about this discovery,” says Federico Antinori, spokesperson of the ALICE collaboration. “We are again learning a lot about this extreme state of matter. Being able to isolate the quark-gluon-plasma-like phenomena in a smaller and simpler system, such as the collision between two protons, opens up an entirely new dimension for the study of the properties of the primordial state that our universe emerged from.”

Other experiments, such as those using RHIC, will provide more information about the observable traits and experimental characteristics of quark-gluon plasmas at lower energies, enabling researchers to gain a more complete picture of the characteristics of this primordial particle soup.

“The field makes far more progress by sharing techniques and comparing results than we would be able to with one facility alone,” says James Dunlop, a researcher at RHIC. “We look forward to seeing further discoveries from our colleagues in ALICE.”

### CERN Bulletin

The Staff Association (SA) at the Enlarged Directorate (ED) meeting!

On 3 April, the Vice-President and the President of the Staff Association presented the Staff Association’s plan of activities for 2017 at the Enlarged Directorate meeting (Directors and Heads of Departments and Units) and shared the SA’s concerns.

Five topics were addressed, starting with the implementation of the decisions taken in the framework of the 2015 five-yearly review.

# Five-yearly review – follow-up (see Echo No. 257)

## 2016 – Main implementations

Many changes were already put in place in 2016:

• Revision of the Staff Rules and Regulations in January 2016, for the diversity aspects, and in September 2016, for the new career structure: a salary grid with the introduction of grades;
• Revision of Administrative Circular No. 26 (Rev. 11) on the “Recognition of merit”;
• Placement of staff members in grades and provisional placement in benchmark jobs;
• Definition of the guidelines for the 2017 MERIT exercise.

The Staff Association was closely involved in these revisions and in their implementation. The concertation process generally worked well in this context: agreements were reached that preserve the interests of both the staff and the Organization.

## 2017 – First year of the MERIT exercise (see Echo No. 259)

The Staff Association emphasised the following points:

### Correction of placement in a benchmark job (see Echo No. 261)

By the end of February 2017, many correction requests had already been submitted to the HR Department. These requests came from:

• staff members (144): mostly requests to move to a benchmark job in a higher grade range (e.g. from technician in 3-4-5 to technical engineer in 4-5-6) and, to a lesser extent, requests for a change of grade;
• the hierarchy (242): mostly changes of benchmark job title within the same grade range.

For the Staff Association, the agreement remains that requests for a change of grade (promotion) must be examined within the framework of the promotion procedure.

On the other hand, we insisted that corrections following placement in the wrong benchmark job, with or without a change of grade range, be examined and processed as soon as possible. These corrections must take effect before 1 July 2017, the date of official confirmation of placement in a benchmark job.

### Personal positions of staff members

Slide presented at the public meeting of 22 September 2016 (see Echo No. 254)

The implementation of the new salary grid resulted in many staff members being placed in “personal positions”, i.e. salary positions outside the salary grid, either below the minimum of their grade or, more often, above the maximum of their grade.

The Staff Association told the ED that it is aware that our colleagues in a personal position, with a salary above the maximum salary of their grade, will not all be able to benefit from a promotion this year; the SA is even aware that, for some of them, there will be no promotion at all.

Nevertheless, we insisted that the case of each colleague in a personal position be considered and given an individual answer.

### 2017 MERIT guidelines

The Staff Association recalled:

• that a promotion is a change of grade;
• that a change of benchmark job reflects a change of functions;
• that these two concepts differ in their use and therefore follow different procedures;
• that these procedures apply in the same way across the whole of CERN (CERN-wide);
• that no numerical guideline is applicable, as decided by the Management and accepted by the Staff Association.

Consequently, the Staff Association expects a maximum number of promotions in 2017, while taking into account the need to control the long-term growth of the budget.

### Benchmark jobs over three grades, not two plus one

On the basis of the Promotion Guide (see Echo No. 263), advancement to the third grade of a benchmark job is analysed and assessed in the same way as advancement from the first to the second grade, on the basis of criteria taking into account the level of the functions performed, the experience and expertise acquired, etc.

Moreover, recruitment normally takes place at the first or second grade of a benchmark job, depending on the candidate’s experience and expertise; however, hiring at the third grade, although exceptional, remains possible. The recruitment grade(s) must always be specified in the vacancy notice.

In conclusion, any display of grades showing parentheses “1-2-(3)” or a greyed-out third grade is not at all necessary given the HR processes and can only be demotivating. We therefore urged that grades be displayed as three grades “1-2-3” with no greyed-out part.

### Warnings

The Staff Association reported information it had received concerning non-compliance with agreed rules, in particular on the following two points:

• non-eligibility for promotion of staff members whose salary position is below 110% of the median salary of their grade, which amounts to limiting promotion proposals to staff members whose salary position is at or above 110% of their grade. This is unacceptable and contrary to the rules laid down by the Management, in agreement with the Staff Association, and applicable across the whole of CERN;
• refusal of a change of benchmark job for reasons of personal convenience. It should be recalled that the benchmark job assigned to a person must reflect the person’s actual functions and not the diplomas obtained or an academic title. Indeed, benchmark jobs must provide an accurate view of the functions performed at CERN (type and number of posts) and thus help to establish resource planning (“capacity planning”). Finally, a person whose functions do not correspond to the assigned benchmark job will be assessed, in promotion exercises, on the functions associated with the benchmark job and not on those actually performed, which will undoubtedly have an impact on that person’s career.

The Staff Association strongly recommended that every person at CERN have the right benchmark job, even if it no longer corresponds to the person’s initial diploma.

### Three more themes to address

To complete the implementation of the five-yearly review, three themes remain to be dealt with in 2017:

• internal mobility,
• recognition of acquired experience (Validation des Acquis de l’Expérience, VAE),
• career development interviews.

Three working groups have been launched by the HR Department, with the participation of Staff Association representatives. For the Staff Association, these elements will energise careers and partly compensate for the losses in advancement agreed during the five-yearly review.

## Concertation

The Staff Association recalled that concertation is a process whereby the Director-General and the Staff Association consult each other in order to find, as far as possible, a common position. Concertation requires a positive attitude, far from any mistrust, and mutual confidence. The Staff Association is firmly committed to this, but it notes that concertation is not going as well as we would like. In answer to a question from the Director-General, the example was given of the delayed communication of the minutes and documents of the Standing Concertation Committee, which keeps the Association at arm’s length without any objective reason.

## Internal investigations and justice

Work on the internal investigation and justice processes is necessary and urgent. This observation is shared by various services and at various levels.

The Staff Association recalled that CERN, as an international organisation, has the duties of a State towards its personnel and must put in place exemplary processes in matters of investigations and internal justice.

The Staff Association therefore asks that a working group be set up as quickly as possible, under the aegis of the HR Department and with SA participation in the group.

## Health and Safety

In its annual report, the CERN Medical Service reported problems linked to psychosocial well-being: the number of days of long-term sick leave linked to psychosocial problems has increased significantly.

A working group has been launched by HR to get a good grasp of this issue, identify the causes and establish an action plan. The Staff Association is taking part in this study, alongside HR, the Medical Service, HSE and the hierarchy in general. The SA’s message to the ED was that there is no reason to panic, but CERN cannot ignore the signals being perceived, which reflect suffering at work as well as disorganisation and an economic loss for the services.

## VICO and elections

### VICO (VIsite COlleagues) (see Echo No. 264)

A campaign of short visits to CERN personnel by staff delegates was launched in mid-March and will continue until mid-June.

The aim of this campaign is to meet our colleagues, to open a dialogue on subjects of mutual interest and to answer their questions as far as possible. It is also an opportunity to encourage our colleagues to join the Association and to suggest to some of them that they stand in the Staff Council elections planned for November 2017.

### Electoral colleges

Following the restructuring of the Organization in January 2016 and the replacement of career paths by grades, the Staff Association must review the electoral colleges, taking into account the different professional categories, the different sectors/departments/units, the distribution of the number of staff members per department/unit, etc.

We recalled that five seats on the Staff Council are reserved for delegates representing fellows and associated members of the personnel. In answer to a question, the SA indicated that the number of these seats will be increased as soon as interest in the Association among fellows and MPAs grows, in terms of both members and election candidates; at present only two of these five seats are filled.

We stressed to the Directors and Heads of Departments and Units the need for good representation of all professional categories and all sectors and departments on the Staff Council, and we asked them to help ensure this representativeness.

The presentation ended with a series of questions and answers. The Director-General thanked the Vice-President and the President of the Staff Association for the topics raised in this presentation and for the frank answers to the questions, and invited the Association to come back before the Enlarged Directorate at a later date to continue this constructive dialogue.

Which we will not fail to do, of course!

The English version of this article will be published in the next Echo.

### CERN Bulletin

GAC-EPA

The GAC organises sessions with individual interviews, held on the last Tuesday of each month, except in June, July and December.

The next session will take place on:
Tuesday 30 May from 1.30 pm to 4.00 pm
Staff Association meeting room

The following sessions will take place on Tuesdays 29 August, 26 September, 31 October and 28 November 2017.

The sessions of the Pensioners’ Group are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/.
Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

### Peter Coles - In the Dark

PhD Opportunities in Data-intensive Physics & Astrophysics!

I’m back from my little holiday having accumulated a very long to-do list, near the top of which are a number of things related to our new STFC-funded Centre for Doctoral Training involving the Universities of Cardiff, Bristol and Swansea. This will be coordinated by the Data Innovation Institute at Cardiff University, and it covers a wide range of data-intensive research in particle physics, astrophysics and cosmology carried out at the three member institutions. ‘Data-intensive’ here means involving very big data sets, very sophisticated analysis methods or high-performance computing, or any combination of these.

The Centre will commence in September 2017. Applications have been open for a couple of weeks and we will be starting to make selections very soon, so if you’re interested in this opportunity you will have to get your skates on! In fact, to secure a PhD place at this STFC CDT administered by the DII you’d better apply PDQ!

By the way, for this special programme, STFC have relaxed the rules relating to nationality, so full funding is potentially available for non-UK citizens under this scheme – that isn’t normally the case for PhD studentships funded by the UK research councils.

If you’re looking to do a PhD in data-intensive physics or astrophysics, get your application in now!

### CERN Bulletin

Cine Club

# Kagemusha

Directed by Akira Kurosawa
Japan, 1980, 162 minutes

When a powerful warlord in medieval Japan dies, a poor thief recruited to impersonate him finds difficulty living up to his role and clashes with the spirit of the warlord during turbulent times in the kingdom.

Original version Japanese; English subtitles.

### CERN Bulletin

Cine Club - Special Event

# Special event

## on Thursday 4 May 2017 at 18:30, CERN Council Chamber

In collaboration with the CERN Running Club and the Women In Technology initiative, the CERN CineClub is happy to announce the screening of the film

# Free to Run

Directed by Pierre Morath
Switzerland, 2016, 99 minutes

Today, all anybody needs to run is the determination and a pair of the right shoes. But just fifty years ago, running was viewed almost exclusively as the domain of elite male athletes who competed on tracks. With insight and propulsive energy, director Pierre Morath traces running's rise back to the 1960s, examining how the liberation movements and newfound sense of personal freedom that defined the era took the sport out of the stadiums and onto the streets, and how legends like Steve Prefontaine, Fred Lebow, and Kathrine Switzer redefined running as a populist phenomenon.

Original version French; English subtitles.

http://freetorun.ch/

Come along to watch the film and learn more about the history of popular races and amateur running, and how women had to fight for their right to be free to run! Join us after the screening for drinks in Restaurant 1, so that we can share impressions and discuss the film.

### CERN Bulletin

Exhibition

# La couleur des jours

## oriSio

From 2 to 12 May 2017
CERN Meyrin, Main Building

oriSio - Motus

Following a strong interest in China and a curiosity about a very ancient medium: lacquer!

I reinterpret this art in an abstract style.

Here I present lacquers on aluminium, worked with plasma and then coloured, mainly with pigments.

I want my works to be raw, torn, evanescent, warped, even pierced, but with a fine approach to depth of colour.

For more information: staff.association@cern.ch | Tel: 022 766 37 38

### John Baez - Azimuth

Complexity Theory and Evolution in Economics

This book looks interesting:

• David S. Wilson and Alan Kirman, editors, Complexity and Evolution: Toward a New Synthesis for Economics, MIT Press, Cambridge Mass., 2016.

You can get some chapters for free here. I’ve only looked carefully at this one:

• Joshua M. Epstein and Julia Chelen, Advancing Agent_Zero.

Agent_Zero is a simple toy model of an agent that’s not the idealized rational actor often studied in economics: rather, it has emotional, deliberative, and social modules which interact with each other to make decisions. Epstein and Chelen simulate collections of such agents and see what they do:

Abstract. Agent_Zero is a mathematical and computational individual that can generate important, but insufficiently understood, social dynamics from the bottom up. First published by Epstein (2013), this new theoretical entity possesses emotional, deliberative, and social modules, each grounded in contemporary neuroscience. Agent_Zero’s observable behavior results from the interaction of these internal modules. When multiple Agent_Zeros interact with one another, a wide range of important, even disturbing, collective dynamics emerge. These dynamics are not straightforwardly generated using the canonical rational actor which has dominated mathematical social science since the 1940s. Following a concise exposition of the Agent_Zero model, this chapter offers a range of fertile research directions, including the use of realistic geographies and population levels, the exploration of new internal modules and new interactions among them, the development of formal axioms for modular agents, empirical testing, the replication of historical episodes, and practical applications. These may all serve to advance the Agent_Zero research program.
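Epstein’s model is specified precisely in his 2013 book; purely as an illustration of the modular idea (the class, coefficients and update rules below are invented for this sketch and are not Agent_Zero’s actual equations), one can caricature it in a few lines of Python:

```python
class ToyAgent:
    """Caricature of a modular agent: NOT Epstein's Agent_Zero.

    An emotional state (fear), a deliberative state (an estimate of
    event frequency) and a social term (neighbors' fear) are summed
    into a disposition; the agent acts when that passes a threshold.
    """

    def __init__(self, threshold=0.1):
        self.fear = 0.0       # emotional module
        self.estimate = 0.0   # deliberative module
        self.threshold = threshold

    def observe(self, adverse_event):
        # Fear spikes on adverse events and decays geometrically.
        self.fear = 0.8 * self.fear + (0.5 if adverse_event else 0.0)
        # Deliberation: a smoothed estimate of how often events occur.
        self.estimate = 0.9 * self.estimate + (0.1 if adverse_event else 0.0)

    def disposition(self, neighbors):
        social = sum(n.fear for n in neighbors) / max(len(neighbors), 1)
        return self.fear + self.estimate + 0.3 * social

    def acts(self, neighbors):
        return self.disposition(neighbors) > self.threshold


agents = [ToyAgent() for _ in range(3)]
# Only agent 0 directly witnesses any adverse events.
history = [[True, True, False], [False] * 3, [False] * 3]
for t in range(3):
    for agent, seen in zip(agents, history):
        agent.observe(seen[t])

others = [[a for a in agents if a is not b] for b in agents]
acting = [a.acts(n) for a, n in zip(agents, others)]
print(acting)  # all three act: agents 1 and 2 purely by social contagion
```

Even this crude version shows the kind of dynamic the chapter studies with the real model: the socially coupled agents end up acting on fear they never personally acquired.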

It sounds like a fun and productive project as long as one keeps one's wits about one. It’s hard to draw conclusions about human behavior from such simplified agents. One can argue about this, and of course economists will. But regardless, one can draw conclusions about which kinds of simplified agents will engage in which kinds of collective behavior under which conditions.

Basically, one can start mapping out a small simple corner of the huge ‘phase space’ of possible societies. And that’s bound to lead to interesting new ideas that one wouldn’t get from either 1) empirical research on human and animal societies or 2) pure theoretical pondering without the help of simulations.

Here’s an article whose title, at least, takes a vastly more sanguine attitude toward the benefits of such work:

• Kate Douglas, Orthodox economics is broken: how evolution, ecology, and collective behavior can help us avoid catastrophe, Evonomics, 22 July 2016.

I’ll quote just a bit:

For simplicity’s sake, orthodox economics assumes that Homo economicus, when making a fundamental decision such as whether to buy or sell something, has access to all relevant information. And because our made-up economic cousins are so rational and self-interested, when the price of an asset is too high, say, they wouldn’t buy—so the price falls. This leads to the notion that economies self-organise into an equilibrium state, where supply and demand are equal.

Real humans—be they Wall Street traders or customers in Walmart—don’t always have accurate information to hand, nor do they act rationally. And they certainly don’t act in isolation. We learn from each other, and what we value, buy and invest in is strongly influenced by our beliefs and cultural norms, which themselves change over time and space.

“Many preferences are dynamic, especially as individuals move between groups, and completely new preferences may arise through the mixing of peoples as they create new identities,” says anthropologist Adrian Bell at the University of Utah in Salt Lake City. “Economists need to take cultural evolution more seriously,” he says, because it would help them understand who or what drives shifts in behaviour.

Using a mathematical model of price fluctuations, for example, Bell has shown that prestige bias—our tendency to copy successful or prestigious individuals—influences pricing and investor behaviour in a way that creates or exacerbates market bubbles.

We also adapt our decisions according to the situation, which in turn changes the situations faced by others, and so on. The stability or otherwise of financial markets, for instance, depends to a great extent on traders, whose strategies vary according to what they expect to be most profitable at any one time. “The economy should be considered as a complex adaptive system in which the agents constantly react to, influence and are influenced by the other individuals in the economy,” says Kirman.

This is where biologists might help. Some researchers are used to exploring the nature and functions of complex interactions between networks of individuals as part of their attempts to understand swarms of locusts, termite colonies or entire ecosystems. Their work has provided insights into how information spreads within groups and how that influences consensus decision-making, says Iain Couzin from the Max Planck Institute for Ornithology in Konstanz, Germany—insights that could potentially improve our understanding of financial markets.

Take the popular notion of the “wisdom of the crowd”—the belief that large groups of people can make smart decisions even when poorly informed, because individual errors of judgement based on imperfect information tend to cancel out. In orthodox economics, the wisdom of the crowd helps to determine the prices of assets and ensure that markets function efficiently. “This is often misplaced,” says Couzin, who studies collective behaviour in animals from locusts to fish and baboons.

By creating a computer model based on how these animals make consensus decisions, Couzin and his colleagues showed last year that the wisdom of the crowd works only under certain conditions—and that contrary to popular belief, small groups with access to many sources of information tend to make the best decisions.

That’s because the individual decisions that make up the consensus are based on two types of environmental cue: those to which the entire group are exposed—known as high-correlation cues—and those that only some individuals see, or low-correlation cues. Couzin found that in larger groups, the information known by all members drowns out that which only a few individuals noticed. So if the widely known information is unreliable, larger groups make poor decisions. Smaller groups, on the other hand, still make good decisions because they rely on a greater diversity of information.
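Couzin’s published models are consensus models of collective animal movement; the statistical point in these two paragraphs can nevertheless be illustrated with a deliberately crude voting sketch (all probabilities and rules here are mine, chosen only to make the effect visible): one unreliable high-correlation cue is seen by everyone, while each agent also holds an independent, mildly reliable private cue.

```python
import random

def group_decision(n, rng, p_shared_correct=0.4, p_private_correct=0.6,
                   p_follow_shared=0.6):
    """One majority decision by a group of n agents; truth is option 1.

    Everyone sees the same high-correlation cue (right 40% of the
    time); each agent also holds an independent low-correlation
    private cue (right 60% of the time), and votes its private cue
    only when it happens to ignore the shared one.
    """
    shared = 1 if rng.random() < p_shared_correct else 0
    votes = 0
    for _ in range(n):
        private = 1 if rng.random() < p_private_correct else 0
        votes += shared if rng.random() < p_follow_shared else private
    return 1 if 2 * votes > n else 0  # majority vote

rng = random.Random(42)
trials = 20000
small = sum(group_decision(5, rng) for _ in range(trials)) / trials
large = sum(group_decision(101, rng) for _ in range(trials)) / trials
print(f"accuracy, groups of 5: {small:.2f}; groups of 101: {large:.2f}")
```

With these made-up numbers the large group’s accuracy collapses toward the 40% reliability of the shared cue, while the small group still benefits from the diversity of its private information, mirroring the reported finding.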

So when it comes to organising large businesses or financial institutions, “we need to think about leaders, hierarchies and who has what information”, says Couzin. Decision-making structures based on groups of between eight and 12 individuals, rather than larger boards of directors, might prevent over-reliance on highly correlated information, which can compromise collective intelligence. Operating in a series of smaller groups may help prevent decision-makers from indulging their natural tendency to follow the pack, says Kirman.

Taking into account such effects requires economists to abandon one-size-fits-all mathematical formulae in favour of “agent-based” modelling—computer programs that give virtual economic agents differing characteristics that in turn determine interactions. That’s easier said than done: just like economists, biologists usually model relatively simple agents with simple rules of interaction. How do you model a human?

It’s a nut we’re beginning to crack. One attendee at the forum was Joshua Epstein, director of the Center for Advanced Modelling at Johns Hopkins University in Baltimore, Maryland. He and his colleagues have come up with Agent_Zero, an open-source software template for a more human-like actor influenced by emotion, reason and social pressures. Collections of Agent_Zeros think, feel and deliberate. They have more human-like relationships with other agents and groups, and their interactions lead to social conflict, violence and financial panic. Agent_Zero offers economists a way to explore a range of scenarios and see which best matches what is going on in the real world. This kind of sophistication means they could potentially create scenarios approaching the complexity of real life.

Orthodox economics likes to portray economies as stately ships proceeding forwards on an even keel, occasionally buffeted by unforeseen storms. Kirman prefers a different metaphor, one borrowed from biology: economies are like slime moulds, collections of single-celled organisms that move as a single body, constantly reorganising themselves to slide in directions that are neither understood nor necessarily desired by their component parts.

For Kirman, viewing economies as complex adaptive systems might help us understand how they evolve over time—and perhaps even suggest ways to make them more robust and adaptable. He’s not alone. Drawing analogies between financial and biological networks, the Bank of England’s research chief Andrew Haldane and University of Oxford ecologist Robert May have together argued that we should be less concerned with the robustness of individual banks than the contagious effects of one bank’s problems on others to which it is connected. Approaches like this might help markets to avoid failures that come from within the system itself, Kirman says.

To put this view of macroeconomics into practice, however, might mean making it more like weather forecasting, which has improved its accuracy by feeding enormous amounts of real-time data into computer simulation models that are tested against each other. That’s not going to be easy.

## April 23, 2017

### Peter Coles - In the Dark

Gay Sex, Politics, Religion and the Law

It seems that Liberal Democrat leader Tim Farron is under fire again for refusing to say whether he thinks gay sex is a sin.

I’m not a particular fan of Mr Farron, and won’t be voting for his party, but I think the flak being directed at him on this issue is unjustified. Much of it is pure humbug, manufactured to cause political damage.

Mr Farron (who is heterosexual) describes himself as a ‘committed Christian’. He no doubt feels that if he spells out in public what he believes in private then it will alienate many potential voters, even though he has voted progressively on this issue in the past. He’s probably right. On the other hand, by not spelling it out, he appears weak and shifty. The media are out to exploit his difficulty.

As someone who is neither heterosexual nor Christian I can help him. It seems to me very clear that the Bible does teach that homosexuality is a sin and that if you’re a Christian you have to believe this at some level.

I say ‘at some level’ because another thing that is clear is that the Bible does not consider homosexuality a very important issue. Had it been a hot topic then perhaps Jesus might have been prepared to go on record about it, but there’s no reference in the New Testament to him personally saying anything about gay sex. ‘Thou shalt not have sex with someone of the same gender’ isn’t among the Ten Commandments, either.

I do find it strange that so many people who describe themselves as Christian obsess about same-sex relationships while clearly failing to observe some of the more important biblical instructions, notably the one about loving thy neighbour…

But I digress.

I don’t care at all what Tim Farron’s (or anyone else’s) religious beliefs say about homosexuality, as long as they accept that such beliefs give nobody the right to dictate what others should do.

If you believe gay sex is sinful, fine. Don’t do it. If you don’t approve of same-sex marriage, that’s fine too. Don’t marry someone of the same sex. Just don’t try to deny other people rights and freedoms on the basis of your own personal religious beliefs.

And no, refusing you the right to impose your beliefs on others is not a form of discrimination. That goes whether you are a Christian, Muslim, Buddhist, Atheist or merely confused. You are free to live by the rules you adopt. I don’t have to.

I’d go further, actually. I don’t think religious beliefs should have any place in the laws of the land. It seems to me that’s the only way to guarantee freedom from religious prejudice. That’s why I’m a member of the National Secular Society. This does not exist to campaign against religion, but against religious privilege.

In fact the UK courts agree with me on this point. This is Lord Justice Laws, on behalf of the Court of Appeal relating to the case described here:

We do not live in a society where all the people share uniform religious beliefs. The precepts of any one religion, any belief system, cannot, by force of their religious origins, sound any louder in the general law than the precepts of any other. If they did, those out in the cold would be less than citizens and our constitution would be on the way to a theocracy, which is of necessity autocratic. The law of a theocracy is dictated without option to the people, not made by their judges and governments. The individual conscience is free to accept such dictated law, but the State, if its people are to be free, has the burdensome duty of thinking for itself.

To come back to Tim Farron, I say judge him and his party by what you see in the Liberal Democrat manifesto and on his track-record as a politician, not by what you think his interpretation might be of a few bits of scripture.

### The n-Category Cafe

On Clubs and Data-Type Constructors

Guest post by Pierre Cagne

The Kan Extension Seminar II continues with a third consecutive paper by Kelly, entitled On clubs and data-type constructors. It deals with the notion of club, first introduced by Kelly as an attempt to encode theories of categories with structure involving some kind of coherence issue. Astonishingly enough, there is no mention of operads whatsoever in this article. (To be fair, there is a mention of “those Lawvere theories with only associativity axioms”…) Is it because the notion of club was developed in several stages at various time periods, making operads less identifiable in this work? Or does Kelly judge the link between the two notions irrelevant? I am not sure, but in any case I think it is quite interesting to read this article in the light of what we now know about operads.

Before starting with the mathematical content, I would like to thank Alexander, Brendan and Emily for organizing this online seminar. It is a great opportunity to take a deeper look at seminal papers that would have been hard to explore all by oneself. On that note, I am also very grateful for the rich discussions we have with my fellow participants.

Let us take a look at the simplest kind of operads: non-symmetric $\mathsf{Set}$-operads. These are, informally, collections of operations with given arities, closed under composition. The usual way to define them is to endow the category $[\mathbf{N},\mathsf{Set}]$ of $\mathbf{N}$-indexed families of sets with the substitution monoidal product (see Simon’s post): for two such families $R$ and $S$,
$$(R \circ S)_n = \sum_{k_1+\dots+k_m = n} R_m \times S_{k_1} \times \dots \times S_{k_m} \quad \forall n \in \mathbf{N}$$
This monoidal product is better understood when elements of $R_n$ and $S_n$ are thought of as branchings with $n$ inputs and one output: $R\circ S$ is then obtained by plugging the outputs of elements of $S$ into the inputs of elements of $R$. A non-symmetric operad is defined to be a monoid for this monoidal product, a typical example being the family $(\mathsf{Set}(X^n,X))_{n\in\mathbf{N}}$ for a set $X$.

We can now take advantage of the equivalence $[\mathbf{N},\mathsf{Set}] \overset{\sim}{\to} \mathsf{Set}/\mathbf{N}$ to equip the category $\mathsf{Set}/\mathbf{N}$ with a monoidal product. This equivalence maps a family $S$ to the coproduct $\sum_n S_n$ with its canonical map to $\mathbf{N}$, while the inverse equivalence maps a function $a: A \to \mathbf{N}$ to the family of fibers $(a^{-1}(n))_{n\in\mathbf{N}}$. This means that an $\mathbf{N}$-indexed family can be thought of either as a set of operations of arity $n$ for each $n$, or as a bunch of operations, each labeled by an integer giving its arity. Let us transport the monoidal product of $[\mathbf{N},\mathsf{Set}]$ to $\mathsf{Set}/\mathbf{N}$: given two maps $a: A \to \mathbf{N}$ and $b: B \to \mathbf{N}$, we compute the $\circ$-product of the families of fibers, and then take the coproduct to get
$$A\circ B = \{ (x,y_1,\dots,y_m) : x \in A,\ y_i \in B,\ a(x) = m \}$$
with the map $A\circ B \to \mathbf{N}$ sending $(x,y_1,\dots,y_m)\mapsto \sum_i b(y_i)$. That is, the monoidal product is achieved by computing the following pullback:

where $L$ is the free monoid monad (or list monad) on $\mathsf{Set}$. Hence a non-symmetric operad is equivalently a monoid in $\mathsf{Set}/\mathbf{N}$ for this monoidal product. In Burroni’s terminology, it would be called an $L$-category with one object.
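To make this concrete, here is a small illustrative sketch (the encoding and names are my own, not from the post): an object $a\colon A \to \mathbf{N}$ of $\mathsf{Set}/\mathbf{N}$ becomes a dictionary assigning each operation its arity, and $A\circ B$ is computed directly from the description above.

```python
from itertools import product

def substitute(a, b):
    """Substitution product A∘B of two objects of Set/N.

    a, b: dicts sending each formal operation to its arity.  An
    element of A∘B is a tuple (x, y_1, ..., y_m) with m = a[x];
    its arity is the sum of the arities b[y_i].
    """
    return {
        (x,) + ys: sum(b[y] for y in ys)
        for x, m in a.items()
        for ys in product(b, repeat=m)
    }

# Two formal operations: a binary 'mul' and a unary 'neg'.
ops = {"mul": 2, "neg": 1}
composite = substitute(ops, ops)

# Plugging 'neg' and 'mul' into the two inputs of 'mul' gives an
# operation with 1 + 2 = 3 inputs in total.
print(composite[("mul", "neg", "mul")])  # 3
```

A monoid for this product (an associative, unital way of composing such labeled operations) is then precisely a non-symmetric operad, matching the monoid-in-$\mathsf{Set}/\mathbf{N}$ description above.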

In my opinion, Kelly’s clubs are a way to generalize this point of view to other kinds of operads, replacing $\mathbf{N}$ by the groupoid $\mathbf{P}$ of bijections (to get symmetric operads) or the category $\mathsf{Fin}$ of finite sets (to get Lawvere theories). Obviously, $\mathsf{Set}/\mathbf{P}$ or $\mathsf{Set}/\mathsf{Fin}$ does not make much sense, but the coproduct functor from earlier can easily be understood as a Grothendieck construction that adapts neatly to this context, providing functors:
$$[\mathbf{P},\mathsf{Set}] \to \mathsf{Cat}/\mathbf{P},\qquad [\mathsf{Fin},\mathsf{Set}] \to \mathsf{Cat}/\mathsf{Fin}$$
Of course, these functors are no longer equivalences, but that does not prevent us from looking for monoidal products on $\mathsf{Cat}/\mathbf{P}$ and $\mathsf{Cat}/\mathsf{Fin}$ that restrict to the substitution product on the essential images of these functors (i.e. the discrete opfibrations). Before going on to the abstract definitions, you might keep in mind the following goal: we are seeking those small categories $\mathcal{C}$ such that $\mathsf{Cat}/\mathcal{C}$ admits a monoidal product reflecting, through the Grothendieck construction, the substitution product on $[\mathcal{C},\mathsf{Set}]$.

### Abstract clubs

Recall that in a monoidal category $\mathcal{E}$ with product $\otimes$ and unit $I$, any monoid $M$ with multiplication $m: M\otimes M \to M$ and unit $u: I \to M$ induces a monoidal structure on $\mathcal{E}/M$ as follows: the unit is $u: I \to M$ and the product of $f: X \to M$ by $g: Y \to M$ is the composite
$$X\otimes Y \overset{f\otimes g}{\to} M \otimes M \overset{m}{\to} M$$
Be aware that this monoidal structure depends heavily on the monoid $M$. For example, even if $\mathcal{E}$ is finitely complete and $\otimes$ is the cartesian product, the induced structure on $\mathcal{E}/M$ is almost never the cartesian one. A notable fact about this structure on $\mathcal{E}/M$ is that its monoids are exactly the morphisms of monoids with codomain $M$.
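As a quick sanity check (mine, not spelled out in the paper), associativity of this product on $\mathcal{E}/M$ reduces to associativity of the monoid $M$: writing $f \cdot g = m\,(f\otimes g)$ for $f: X \to M$ and $g: Y \to M$, and suppressing associators,

```latex
(f \cdot g)\cdot h = m\,(m \otimes 1_M)\,(f \otimes g \otimes h),
\qquad
f \cdot (g \cdot h) = m\,(1_M \otimes m)\,(f \otimes g \otimes h),
```

and these agree precisely because $m\,(m \otimes 1_M) = m\,(1_M \otimes m)$ is the associativity axiom for $M$; the unit axioms for $u$ similarly make $u: I \to M$ a two-sided unit for the slice product.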

We will use this property in the monoidal category $[\mathcal{A},\mathcal{A}]$ of endofunctors on a category $\mathcal{A}$. I will not say a lot about size issues here, but of course we assume that there exist enough universes to make sense of $[\mathcal{A},\mathcal{A}]$ as a category even when $\mathcal{A}$ is not small but only locally small: that is, if smallness is relative to a universe $\mathbb{U}$, then we posit a universe $\mathbb{V} \ni \mathbb{U}$ big enough to contain the set of objects of $\mathcal{A}$, making $\mathcal{A}$ a $\mathbb{V}$-small category and hence $[\mathcal{A},\mathcal{A}]$ a locally $\mathbb{V}$-small category. The monoidal product on $[\mathcal{A},\mathcal{A}]$ is just composition of endofunctors, and the unit is the identity functor $\mathrm{Id}$. The monoids in this category are precisely the monads on $\mathcal{A}$, and for any such monad $S: \mathcal{A} \to \mathcal{A}$ with multiplication $n: SS \to S$ and unit $j: \mathrm{Id} \to S$, the slice category $[\mathcal{A},\mathcal{A}]/S$ inherits a monoidal structure with unit $j$ and product $\alpha \circ^S \beta$ the composite
$$TR \overset{\alpha\beta}{\to} SS \overset{n}{\to} S$$
for any $\alpha: T \to S$ and $\beta: R \to S$.

Now a natural transformation $\gamma$ between two functors $F,G: \mathcal{A} \to \mathcal{A}$ is said to be cartesian whenever the naturality squares

are pullback diagrams. If $\mathcal{A}$ is finitely complete, as it will be for the rest of the post, it admits in particular a terminal object $1$, and the pasting lemma ensures that we only have to check the pullback property for the naturality squares of the form

to know whether $\gamma$ is cartesian. Let us denote by $\mathcal{M}$ the (possibly large) set of morphisms in $[\mathcal{A},\mathcal{A}]$ that are cartesian in this sense, and by $\mathcal{M}/S$ the full subcategory of $[\mathcal{A},\mathcal{A}]/S$ whose objects are in $\mathcal{M}$.

Definition. A club in $\mathcal{A}$ is a monad $S$ such that $\mathcal{M}/S$ is closed under the monoidal product $\circ^S$.

By “closed under $\circ^S$”, it is understood that the unit $j$ of $S$ is in $\mathcal{M}$ and that the product $\alpha \circ^S \beta$ of two elements of $\mathcal{M}$ with codomain $S$ is still in $\mathcal{M}$. A useful alternative characterization is the following:

Lemma. A monad $(S,n,j)$ is a club if and only if $n,j \in \mathcal{M}$ and $S\mathcal{M} \subseteq \mathcal{M}$.

It is clear from the definition of $\circ^S$ that the condition is sufficient, as $\alpha \circ^S \beta$ can be written as $n\cdot(S\beta)\cdot(\alpha R)$ via the exchange rule. Now suppose $S$ is a club: $j \in \mathcal{M}$, as it is the monoidal unit; $n \in \mathcal{M}$ comes from $\mathrm{id}_S \circ^S \mathrm{id}_S \in \mathcal{M}$; finally, for any $\alpha: T \to S$ in $\mathcal{M}$, we should have $\mathrm{id}_S \circ^S \alpha = n\cdot(S\alpha) \in \mathcal{M}$, and since $n \in \mathcal{M}$ already, this yields $S\alpha \in \mathcal{M}$ by the pasting lemma.
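For readers who want the exchange step spelled out (my notation, with $TR$ denoting the composite functor $T\circ R$): the horizontal composite of $\alpha: T \to S$ and $\beta: R \to S$ factors through either whiskering,

```latex
\alpha\beta = (S\beta)\cdot(\alpha R) = (\alpha S)\cdot(T\beta)
  \;:\; TR \to SS,
\qquad\text{hence}\qquad
\alpha \circ^S \beta = n\cdot(S\beta)\cdot(\alpha R).
```

Since $\alpha R$ is cartesian whenever $\alpha$ is, the lemma’s conditions $n, j \in \mathcal{M}$ and $S\mathcal{M} \subseteq \mathcal{M}$ make each factor of this composite cartesian, which is exactly the sufficiency argument.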

In particular, this lemma shows that the monoids in $\mathcal{M}/S$, which coincide with the monad maps $T \to S$ lying in $\mathcal{M}$ for some monad $T$, are clubs too. We shall denote the category of these by $\mathbf{Club}(\mathcal{A})/S$.

The lemma also implies that any cartesian monad, by which is meant a pullback-preserving monad with cartesian unit and multiplication, is automatically a club.

Now note that evaluation at $1$ provides an equivalence $\mathcal{M}/S \overset{\sim}{\to} \mathcal{A}/S1$ whose pseudo-inverse is given, for a map $f: K \to S1$, by the natural transformation defined pointwise as the pullback

The previous monoidal product on $\mathcal{M}/S$ can be transported to $\mathcal{A}/S1$, where it bears a fairly simple description: given $f: K \to S1$ and $g: H \to S1$, the product, still denoted $f \circ^S g$, is the evaluation at $1$ of the composite $TR \to SS \to S$, where $T \to S$ corresponds to $f$ and $R \to S$ corresponds to $g$. Hence the explicit equivalence given above allows us to write this as

Definition. By abuse of terminology, a monoid in $\mathcal{A}/S1$ is said to be a club over $S1$.

### Examples of clubs

On $\mathsf{Set}$, the free monoid monad $L$ is cartesian, hence a club on $\mathsf{Set}$ in the above sense. Of course, we retrieve as $\circ^L$ the monoidal product of the introduction on $\mathsf{Set}/\mathbf{N}$. Hence, clubs over $\mathbf{N}$ in $\mathsf{Set}$ are exactly the non-symmetric $\mathsf{Set}$-operads.

Considering $\mathsf{Cat}$ as a $1$-category, the free finite-coproduct category monad $F$ on $\mathsf{Cat}$ is a club in the above sense. This can be shown directly through the characterization we stated earlier: its unit and multiplication are cartesian and it maps cartesian transformations to cartesian transformations. Moreover, the obvious monad map $P \to F$ is cartesian, where $P$ is the free strict symmetric monoidal category monad on $\mathsf{Cat}$; hence we get for free that $P$ is also a club on $\mathsf{Cat}$. Note that the groupoid $\mathbf{P}$ of bijections is $P1$ and the category $\mathsf{Fin}$ of finite sets is $F1$. So it is now a matter of careful bookkeeping to establish that the functors (given by the Grothendieck construction)
$$[\mathbf{P},\mathsf{Set}] \to \mathsf{Cat}/\mathbf{P}, \qquad [\mathsf{Fin},\mathsf{Set}] \to \mathsf{Cat}/\mathsf{Fin}$$
are strong monoidal, where the domain categories carry Kelly’s substitution product. In other words, this exhibits symmetric $\mathsf{Set}$-operads and non-enriched Lawvere theories as special clubs over $\mathbf{P}$ and $\mathsf{Fin}$.

We could say that we are done: we have a polished abstract notion of clubs that encompasses the different notions of operads on $\mathsf{Set}$ that we are used to. But what about operads on other categories? Also, the above monads $P$ and $F$ are actually $2$-monads on $\mathsf{Cat}$ when seen as a $2$-category. Can we extend the notion to this enrichment?

### Enriched clubs

We shall fix a cosmos $\mathcal{V}$ to enrich over (and denote as usual the underlying ordinary notions by a $0$-index), but we want it to have good properties, so that finite completeness makes sense in this enriched framework. Hence we ask that $\mathcal{V}$ is locally finitely presentable as a closed category (see David's post). Taking a look at what we did in the ordinary case, we see that it heavily relies on the possibility of defining slice categories, which is not possible in full generality. Hence we ask for $\mathcal{V}$ to be semicartesian, meaning that the monoidal unit of $\mathcal{V}$ is its terminal object: then for a $\mathcal{V}$-category $\mathcal{B}$, the slice category $\mathcal{B}/B$ is defined to have elements $1 \to \mathcal{B}(X,B)$ as objects, and the space of morphisms between such $f: 1 \to \mathcal{B}(X,B)$ and $f': 1 \to \mathcal{B}(X',B)$ is given by the following pullback in $\mathcal{V}_0$:

If we also want to be able to talk about the category of enriched clubs over something, we should be able to make a $\mathcal{V}$-category out of the monoids in a monoidal $\mathcal{V}$-category. Again, this is a priori not possible: the space of monoid maps between $(M,m,i)$ and $(N,n,j)$ is supposed to interpret "the subspace of those $f: M \to N$ such that $fi=j$ and $fm(x,y)=n(fx,fy)$ for all $x,y$", where the latter equation has two occurrences of $f$ on the right. Hence we ask that $\mathcal{V}$ is actually a cartesian cosmos, so that the interpretation of such a subspace is the joint equalizer of

Moreover, these hypotheses also resolve the set-theoretical issues: because of all the hypotheses on $\mathcal{V}$, the underlying $\mathcal{V}_0$ identifies with the category $\mathrm{Lex}[\mathcal{T}_0,\mathsf{Set}]$ of $\mathsf{Set}$-valued left exact functors from the finitely presentable objects of $\mathcal{V}_0$. Hence, for a $\mathcal{V}$-category $\mathcal{A}$, the category of $\mathcal{V}$-endofunctors $[\mathcal{A},\mathcal{A}]$ is naturally a $\mathcal{V}'$-category for the cartesian cosmos $\mathcal{V}'=\mathrm{Lex}[\mathcal{T}_0,\mathsf{Set}']$, where $\mathsf{Set}'$ is the category of $\mathbb{V}$-small sets for a universe $\mathbb{V}$ big enough to contain the set of objects of $\mathcal{A}$. Hence we do not care so much about size issues and consider everything to be a $\mathcal{V}$-category; the careful reader will replace $\mathcal{V}$ by $\mathcal{V}'$ when necessary.

In the context of categories enriched over a locally finitely presentable cartesian closed cosmos $\mathcal{V}$, all we did in the ordinary case is directly enrichable. We call a $\mathcal{V}$-natural transformation $\alpha: T \to S$ cartesian just when it is so as a natural transformation $T_0 \to S_0$, and denote the set of these by $\mathcal{M}$. For a $\mathcal{V}$-monad $S$ on $\mathcal{A}$, the category $\mathcal{M}/S$ is the full subcategory of the slice $[\mathcal{A},\mathcal{A}]/S$ spanned by the objects in $\mathcal{M}$.

Definition. A $\mathcal{V}$-club on $\mathcal{A}$ is a $\mathcal{V}$-monad $S$ such that $\mathcal{M}/S$ is closed under the induced $\mathcal{V}$-monoidal product of $[\mathcal{A},\mathcal{A}]/S$.

Now comes the fundamental proposition about enriched clubs:

Proposition. A $\mathcal{V}$-monad $S$ is a $\mathcal{V}$-club if and only if $S_0$ is an ordinary club.

In that case, the category of monoids in $\mathcal{M}/S$ is composed of the clubs $T$ together with a $\mathcal{V}$-monad map $1 \to [\mathcal{A},\mathcal{A}](T,S)$ in $\mathcal{M}$. We will still denote it $\mathbf{Club}(\mathcal{A})/S$ and its underlying ordinary category is $\mathbf{Club}(\mathcal{A}_0)/S_0$. We can once again take advantage of the $\mathcal{V}$-equivalence $\mathcal{M}/S \simeq \mathcal{A}/S1$ to equip the latter with a $\mathcal{V}$-monoidal product, and abuse terminology to call its monoids $\mathcal{V}$-clubs over $S1$. Proving all that carefully requires notions of enriched factorization systems that are of no use for this post.

So basically, the slogan is: as long as $\mathcal{V}$ is a cartesian cosmos which is locally presentable as a closed category, everything works the same way as in the ordinary case, and $(-)_0$ preserves and reflects clubs.

### Examples of enriched clubs

As we said earlier, $F$ and $P$ are $2$-monads on $\mathsf{Cat}$, and the underlying $F_0$ and $P_0$ (earlier just denoted $F$ and $P$) are ordinary clubs. So $F$ and $P$ are $\mathsf{Cat}$-clubs, maybe better called $2$-clubs. Moreover, the map $P_0 \to F_0$ mentioned earlier is easily promoted to a $2$-natural transformation making $P$ a $2$-club over $\mathsf{Fin}$.

The free monoid monad on a cartesian cosmos $\mathcal{V}$ is a $\mathcal{V}$-club and the clubs over $L1$ are precisely the non-symmetric $\mathcal{V}$-operads.

Last but not least, a quite surprising example at first sight. Any small ordinary category $\mathcal{A}_0$ is naturally enriched in its category of presheaves $\mathrm{Psh}(\mathcal{A}_0)$, as the full subcategory of the cartesian cosmos $\mathcal{V}=\mathrm{Psh}(\mathcal{A}_0)$ spanned by the representables. Concretely, the space of morphisms between $A$ and $B$ is given by the presheaf $$\mathcal{A}(A,B): C \mapsto \mathcal{A}_0(A \times C, B).$$ Hence a $\mathcal{V}$-endofunctor $S$ on $\mathcal{A}$ is the data of a map $A \mapsto SA$ on objects, together with, for any $A,B$, a $\mathcal{V}$-natural transformation $\sigma_{A,B}: \mathcal{A}(A,B) \to \mathcal{A}(SA,SB)$ satisfying some axioms. Now fixing $A,C \in \mathcal{A}$, the collection of $$(\sigma_{A,B})_C : \mathcal{A}_0(A\times C,B) \to \mathcal{A}_0(SA \times C, SB)$$ is equivalently, via Yoneda, a collection of maps $$\tilde{\sigma}_{A,C} : SA\times C \to S(A \times C).$$ The axioms that $\sigma$ satisfies as a $\mathcal{V}$-enriched natural transformation make $\tilde{\sigma}$ a strength for the endofunctor $S_0$.
Along this translation, a strong monad on $\mathcal{A}$ is then just a $\mathrm{Psh}(\mathcal{A}_0)$-monad. And it is very common, when modelling side effects by monads in computer science, to end up with strong cartesian monads. As cartesian monads, they are in particular ordinary clubs on $\mathcal{A}_0$. Hence, those are $\mathrm{Psh}(\mathcal{A}_0)$-monads whose underlying ordinary monad is a club: that is, they are $\mathrm{Psh}(\mathcal{A}_0)$-clubs on $\mathcal{A}$.
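For the free monoid (list) monad, for instance, the strength $\tilde{\sigma}$ extracted above takes a completely explicit form; a minimal Python sketch (the function names are mine):

```python
def unit(x):
    """Unit of the list monad: eta_A : A -> S A."""
    return [x]

def strength(a, sb):
    """Strength t_{A,B} : A x S B -> S (A x B) for the list monad:
    pair the fixed element a with every entry of the list sb."""
    return [(a, b) for b in sb]

# One of the strength axioms, t . (id x eta) = eta, checked pointwise:
assert strength("a", unit(0)) == unit(("a", 0))
```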

In conclusion, let me point out that there is much more in Kelly's article than presented here, especially on local factorisation systems and their link to (replete) reflective subcategories with a left exact reflection. It is by the way quite surprising that he does not stay in full generality longer, as one could define an abstract club in just that framework. Maybe there is just no interesting example to come up with at that level of generality…

Also, a great deal of the examples of clubs comes from never-published work of Robin Cockett (or at least, I was not able to find it), so these motivations are quite difficult to follow.

Going a little further in the generalization, the cautious reader will have noticed that we did not say anything about coloured operads. For those we would not look at slice categories of the form $\mathcal{A}/S1$, but at categories of spans with one leg pointing to $SC$ (morally mapping an operation to its coloured arity) and the other to $C$ (morally picking the output colour), where $C$ is the object of colours. Those spans actually appear implicitly above whenever a map of the form $!: X \to 1$ is involved (morally, this is the map picking the "only output colour" in a non-coloured operad). This should be contained somewhere in Garner's work on double clubs or in Shulman and Cruttwell's unified framework for generalized multicategories. I am looking forward to learning more about that in the comments!

## April 22, 2017

### Lubos Motl - string vacua and pheno

Physicists, smart folks use same symbols for Lie groups, algebras for good reasons
I have always been amazed by the sheer stupidity and tastelessness of the people who aren't ashamed of the likes of Peter Woit. He is obviously a mediocre man with no talents, no achievements, no ethics, and no charisma but because of the existence of many people who have no taste and who want to have a leader in their jihad against modern physics, he was allowed to talk about physics as if his opinions mattered.

Woit is a typical failing-grade student who simply isn't and has never been the right material for college. His inability to learn string theory is a well-known aspect of this fact. But most people in the world – and maybe even most of the physics students – misunderstand string theory. But his low math-related intelligence is often manifested in things that are comprehensible to all average or better students of physics.

Two years ago, Woit argued that
the West Coast metric is the wrong one.
Now, unless you are a complete idiot, you must understand that the choice of the metric tensor – either $$({+}{-}{-}{-})$$ or $$({-}{+}{+}{+})$$ – is a pure convention. The metric tensor $$g^E_{\mu\nu}$$ of the first culture is simply equal to minus the metric tensor of the second culture $$g^W_{\mu\nu}$$, i.e. $$g^E_{\mu\nu} = - g^W_{\mu\nu}$$, and every statement or formula written with one set of conventions may obviously be translated to a statement written in the other, and vice versa. The equations or statements basically differ just by some signs. The translation from one convention to another is always possible and is no more mysterious than the translation from British to U.S. English or vice versa.
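The translation is mechanical enough to check numerically; a toy sketch (numpy, my own labels for the two conventions):

```python
import numpy as np

g_E = np.diag([1.0, -1.0, -1.0, -1.0])   # East coast, (+ - - -)
g_W = -g_E                               # West coast, (- + + +)

dx = np.array([2.0, 1.0, 0.0, 0.0])      # some displacement four-vector
s2_E = dx @ g_E @ dx
s2_W = dx @ g_W @ dx

# Every invariant maps to minus itself, so statements translate one-to-one:
assert s2_E == -s2_W
# "Timelike" just reads s^2 > 0 in one dialect and s^2 < 0 in the other:
assert s2_E > 0 and s2_W < 0
```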

How stupid do you have to be to misunderstand this point, that there can't be any "wrong" convention for the sign? And how many people are willing to believe that someone's inability to get this simple point is compatible with the credibility of his comments about string theory?

Well, this individual has brought us a new ludicrous triviality of the same type,
Two Pet Peeves
We're told that we mustn't use the same notation for a Lie group and a Lie algebra. Why? Because Tony Zee, Pierre Ramond, and partially Howard Georgi were using the unified notation and Woit "remember[s] being very confused about this when I first started studying the subject". Well, Mr Woit, you were confused simply because you have never been college material. But it's easier to look for flaws in Lie groups and Lie algebras than in your own worthless existence, right?

Many physicists use the same symbols for Lie groups and the corresponding Lie algebras for a simple reason: they – or at least their behaviors near the identity (or any other point on the group manifold) – are completely equivalent. Except for some global behavior, the information about the Lie group is completely equivalent to the information about the corresponding Lie algebra. They're just two languages to talk about the same thing.

Just to be sure, in my and Dr Zahradník's textbook on linear algebra, we used the separate symbols and I love the fraktur fonts. In Czechia and maybe elsewhere, most people who are familiar with similar fonts at all call them "Schwabacher" but strictly speaking, Textura, Rotunda, Schwabacher, and Fraktur are four different typefaces. Schwabacher is older and was replaced by Fraktur in the 16th century. In 1941, Hitler decided that there were too many typos in the newspapers and that foreigners couldn't decode Fraktur, which diminished the importance of Germany abroad, so he banned Fraktur and replaced it with Antiqua.

When we published our textbook, I was bragging about the extensive index that was automatically created by a $${\rm \LaTeX}$$ macro. I told somebody: Tell me any word and you will see that we can find it in the index. In front of several witnesses, the first person wanted to humiliate me so he said: "A broken bone." So I abruptly responded: "The index doesn't include a 'broken bone' literally but there's a fracture in it!" ;-) Yes, I did include a comment about the font in the index. You know, the composition of the index was as simple as placing a command like \placeInTheIndex{fraktura} in a given place of the source. After several compilations, the correct index was automatically created. I remember that in 1993 when I began to type it, one compilation of the book took 15 minutes on the PCs in the computer lab of our hostel! When we received new 90 MHz PCs, the speed was almost doubled. ;-)

OK, I don't want to review elementary things because some readers know them and wouldn't learn anything new, while others don't know these things and a brief introduction wouldn't help them. But there is a simple relationship between a Lie algebra and a Lie group. You may obtain the elements of the group by a simple exponentiation of an element of a Lie algebra. For this reason, all the "structure coefficients" $$f_{ij}{}^k$$ that remember the structure of commutators $$[T_i,T_j] = f_{ij}{}^k T_k$$ contain the same information as all the curvature information about the group manifold near the identity. The Lie algebra simply is the tangent space of the group manifold around the identity (or any element) and all the commutators in the Lie algebra are equivalent to the information about the distortions that a projection of the neighborhood of the identity in the group manifold to a flat space causes.
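As a concrete instance of exponentiation, here is a small numerical sketch with $\mathfrak{so}(3)$ (numpy; the truncated-series `expm` is mine, just to keep the block self-contained):

```python
import numpy as np

# Generators of so(3), the tangent space of SO(3) at the identity;
# their commutators encode the structure constants f_{ij}^k = eps_{ijk}.
Lx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
Ly = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
Lz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
assert np.allclose(Lx @ Ly - Ly @ Lx, Lz)   # [L_x, L_y] = L_z

def expm(M, terms=30):
    """Matrix exponential via its power series (adequate for this 3x3 toy)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

# Exponentiating an algebra element yields a group element: a rotation.
theta = 0.3
R = expm(theta * Lz)
expected = np.array([[np.cos(theta), -np.sin(theta), 0.],
                     [np.sin(theta),  np.cos(theta), 0.],
                     [0., 0., 1.]])
assert np.allclose(R, expected)
```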

We often use the same symbols because it's harder to write the gothic fonts. More importantly,
whenever a theory, a solution, or a situation is connected with a particular Lie group, it's also connected with the corresponding Lie algebra, and vice versa!
That's the real reason why it doesn't matter whether you talk about a Lie group or a Lie algebra. We use their labels for "identification purposes" and the identification is the same whether you have a Lie group or a Lie algebra in mind. A very simple example:
There exist two rank-8, dimension-496 heterotic string theories whose gauge groups in the 10-dimensional spacetime are $$SO(32)$$ and $$E_8\times E_8$$, respectively.

There exist two rank-8, dimension-496 heterotic string theories whose gauge groups in the 10-dimensional spacetime are (or have the Lie algebras) $${\mathfrak so}(32)$$ and $${\mathfrak e}_8\oplus {\mathfrak e}_8$$, respectively.
I wrote the sentence in two ways. The first one sort of talks about the group manifolds while the second talks about Lie algebras. The information is obviously almost completely equivalent.

Well, except for subtleties – the global choices and identifications in the group manifold that don't affect the behavior of the group manifold in the vicinity of the identity element. If you want to be careful about these subtleties, you need to talk about the group manifolds, not just Lie algebras, because the Lie algebras "forget" the information about these global issues.

So you might want to be accurate and talk about the Lie groups in 10 dimensions – and say that the allowed heterotic gauge groups are $$E_8\times E_8$$ and $$SO(32)$$. However, this effort of yours would actually make things worse because when you use a language that has the ambition of being correct about the global issues, it's your responsibility to be correct about them, indeed, and chances are that your first guess will be wrong!

In particular, the "$$SO(32)$$" heterotic string also contains spinors. So a somewhat smart person could say that the gauge group of that heterotic string is actually $$Spin(32)$$, not $$SO(32)$$. However, that would be about as wrong as $$SO(32)$$ itself – almost no improvement – because the actual perturbative gauge group of this heterotic theory is isomorphic to $$Spin(32) / \ZZ_2$$ where the $$\ZZ_2$$ is chosen in such a way that the group is not isomorphic to $$SO(32)$$. It's another $$\ZZ_2$$ from the center isomorphic to $$\ZZ_2\times \ZZ_2$$ that allows left-handed spinors but not the right-handed ones! By the way, funnily, the S-dual theory is type I superstring theory whose gauge group – arising from Chan-Paton factors of the open strings – seems to be $$O(32)$$. However, the global form of the gauge group gets modified by D-particles, the other half of $$O(32)$$ beyond $$SO(32)$$ is broken, and spinors of $$Spin(32)$$ are allowed by the D-particles, so non-perturbatively, the gauge group of type I superstring theory agrees with that of the heterotic S-dual theory including the global subtleties.

(Peter Woit also ludicrously claims that physicists only need three groups, $$U(1),SU(2), SO(3)$$. That may have been almost correct in the 1920s but it's surely not true in the 21st century particle physics. If you're an undergraduate with plans to do particle physics and someone offers you to quickly learn about symplectic or exceptional groups, and perhaps a few others, you shouldn't refuse it.)

You don't need to talk about string theory to encounter similar subtleties. Ask a simple question. What is the gauge group of the Standard Model? Well, people will normally answer $$SU(3)\times SU(2)\times U(1)$$. But what they actually mean is just the statement that the Lie algebra of the gauge group is $${\mathfrak su}(3) \oplus {\mathfrak su}(2) \oplus {\mathfrak u}(1).$$ Note that the simple, Cartesian $$\times$$ product of Lie groups gets translated to the direct $$\oplus$$ sum of the Lie algebras – the latter are linear vector spaces. OK, so the statement that the Lie algebra of the gauge group of the Standard Model is the displayed expression above is correct.

But if you have the ambition to talk about the precise group manifolds, those know about all the "global subtleties" and it turns out that $$SU(3)\times SU(2)\times U(1)$$ is not isomorphic to the Standard Model gauge group. Instead, the Standard Model gauge group is $$[SU(3)\times SU(2)\times U(1)] / \ZZ_6.$$ The quotient by $$\ZZ_6$$ must be present because all the fields of the Standard Model have a correlation between the hypercharge $$Y$$ modulo $$1/6$$ and the spin under the $$SU(2)$$ as well as the representation under the $$SU(3)$$. It is therefore impossible to construct states that wouldn't be invariant under this $$\ZZ_6$$ even a priori, which means that this $$\ZZ_6$$ acts trivially even on the original Hilbert space and "it's not there".
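The $$\ZZ_6$$ correlation can be verified field by field. A small Python sketch, assuming the conventional hypercharge assignments and my own encoding of the color triality $$n_3$$ and $$SU(2)$$ doublet-ness $$n_2$$: the generator acts on a field with the phase $$\exp(2\pi i (n_3/3 + n_2/2 + Y))$$, so triviality on every field is the statement that $$n_3/3 + n_2/2 + Y$$ is an integer.

```python
from fractions import Fraction as F

fields = {                      # (n3, n2, Y) per Standard Model field
    "Q_L":   (1, 1, F(1, 6)),
    "u_R":   (1, 0, F(2, 3)),
    "d_R":   (1, 0, F(-1, 3)),
    "L_L":   (0, 1, F(-1, 2)),
    "e_R":   (0, 0, F(-1)),
    "Higgs": (0, 1, F(1, 2)),
}

for name, (n3, n2, Y) in fields.items():
    phase_exponent = F(n3, 3) + F(n2, 2) + Y
    # Integer exponent <=> the Z_6 generator acts trivially on this field:
    assert phase_exponent.denominator == 1, name
```

A hypothetical field breaking the correlation (say a color triplet with $$Y=1/6$$ and no $$SU(2)$$ charge) would fail the check, which is exactly the sense in which the $$\ZZ_6$$ "is not there".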

The $$\ZZ_6$$ must be divided by for the same reasons why we usually say that the Standard Model gauge group doesn't contain an $$E_8$$ factor. You could say that there's also an $$E_8$$ factor except that all fields transform as singlets. ;-) We don't do it – when we say that there is a symmetry or a gauge group, we want at least something to transform nontrivially.

OK, you see that the analysis of the correlations of the discrete charges modulo $$1/6$$ may be subtle. We usually don't care about these details when we want to determine much more important things – how many gauge bosons there are and what their couplings are. These important things are given purely by the Lie algebra which is why our statements about the identity of the gauge group should mostly be understood as statements about Lie algebras.

At some level, you may want to be picky and discuss the global properties of the gauge group and correlations. But you usually don't need to know these answers for anything else. The knowledge of these facts is usually only good for its own sake. You can't calculate any couplings from it, and so on. That's why our sentences should be assumed not to talk about these details at all – and/or be sloppy about these details.

(Just to be sure, the global subtleties, centers of the group, differences between $$SO(N)$$ and $$O(N)$$ and $$Spin(N)$$, differences for even and odd $$N$$, or dependence on $$N$$ modulo 8, may still lead to interesting physical consequences and consistency checks and several papers of mine, especially about the heterotic matrix models, were obsessed with these details, too. But this kind of concerns only represents a minority of physicists' interests, especially in the case of beginners.)

By the way, the second "pet peeve" by Woit is that one should distinguish real and complexified versions of the same Lie algebras (and groups). Well, I agree you should distinguish them. But at some general analytic or algebraic level, all algebras and other structures should always be understood as the complexified ones – and only afterwards, we may impose some reality conditions on fields (and therefore on the allowed symmetries, too). So I would say that to a large extent, even this complaint of Woit reflects his misunderstanding of something important – the fact that the most important information about a Lie group is hiding in the structure constants of the corresponding Lie algebra, and those are identical for all Lie groups with the same Lie algebra, and they're also identical for the real and complex versions of the groups.

(By the way, he pretends to be very careful about the complexification, but he writes the condition for matrix elements of an $$SU(2)$$ matrix as $$\alpha^2+\beta^2=1$$ instead of $$|\alpha|^2+|\beta|^2 = 1$$. Too bad. You just shouldn't insist on people's distinguishing non-essential things about the complexification if you can't even write the essential ones correctly yourself.)

In the futile conversations about the foundations of quantum mechanics, I often hear or read comments like:
Please, don't use the confusing word "observation" which makes it look like quantum mechanics depends on what is an observation and what isn't etc. and it's scary.
Well, the reason why my – and Heisenberg's – statements look like we are saying that quantum mechanics depends on observations is that quantum mechanics depends on observations, indeed. So the dissatisfied laymen or beginners really ask the physicists to use the language that would strengthen the listeners' belief that classical physics is still basically right. Except that it's not! We mostly use this language – including the word "observation" – because it really is essential in the new framework of physics.

In the same way, failing-grade students such as Peter Woit may be constantly asking whether a physicist talks about a Lie group or the corresponding Lie algebra. They are basically complaining:
Georgi, Ramond, Zee, don't use this notation that looks like it suggests that the Lie group and the Lie algebra are basically the same thing even though they are something completely different.
The problem is, of course, that the failing-grade students such as Peter Woit are wrong. Georgi, Ramond, Zee, and others often use the same symbols for the Lie groups and the Lie algebras because they really are basically the same thing. And it's just too bad if you don't understand this tight relationship – basically an equivalence.

I think that there exist many lousy teachers of mathematics and physics that are similar to Peter Woit. Those don't understand the substance – what is really important, what is true. So they focus on what they understand – arbitrarily invented rules what the students are obliged to parrot for the teacher to feel more important. So the poor students who have such teachers are often being punished for using a different metric tensor convention once or for using a wrong font for a Lie algebra. These teachers don't understand the power and beauty of mathematics and physics and they're working hard to make sure that their students won't understand them, either.

### ZapperZ - Physics and Physicists

Earth Day 2017 - March For Science Day
Today is the March for Science day to coincide with Earth Day 2017.

Unfortunately, I will not be participating in it, because I'm flying off to start my vacation. However, I have the March for Science t-shirt, and will be wearing it all day. So I may not be with all of you who will be participating in it today, but I'll be there in spirit.

And yes, I have written to my elected officials in Washington DC to let them know how devastating the Trump budget proposal is to science and the economic future of this country. Unfortunately, I may be preaching to the choir, because all 3 of them (2 Senators and 1 Representative of my district) are Democrats whom I expect to oppose the Trump budget as it is anyway.

Anyhow, to those of you who will be marching, YOU GO, BOYS AND GIRLS!

Zz.

## April 21, 2017

### Clifford V. Johnson - Asymptotia

Silicon Valley

I’ll be at Silicon Valley Comic Con this weekend, talking on two panels about science and its intersection with film on the one hand (tonight at 7pm if my flight is not too delayed), and non-fiction comics (see my book to come) on the other (Saturday at 12:30 or so). … Click to continue reading this post

The post Silicon Valley appeared first on Asymptotia.

### ZapperZ - Physics and Physicists

"Physics For Poets" And "Poetry For Physicists"?
Chad Orzel has a very interesting and thought-provoking article that you should read.

What he is arguing is that scientists should learn the mindset of the arts and literature, while those in the humanities and the arts should learn the mindset of science. College courses should not be tailored in such a way that the mindset of the home department is lost, with a course in math, let's say, devolving into something merely palatable to an arts major.

I especially like his summary at the end:

One of the few good reasons is that a mindset that embraces ambiguity is something useful for scientists to see and explore a bit. By the same token, though, the more rigorous and abstract scientific mindset is something that is equally worthy of being experienced and explored by the more literarily inclined. A world in which physics majors are more comfortable embracing divergent perspectives, and English majors are more comfortable with systematic problem solving would be a better world for everyone.

I think we need to differentiate between changing the mindset and tailoring a course for a specific need. I've taught a physics class for mainly life-science majors. The topics that we covered were almost identical to those offered to engineering/physics majors, with the exception that they did not involve any calculus. But other than that, it had the same rigor and coverage. The thing that made it specific to this group of students is that many of the examples I used came out of biology and medicine. These were what I used to keep the students' interest, and to show them the relevance of what they were studying to their major area. But the systematic and analytical approach to the subject was still there. In fact, I consciously emphasized the techniques and skills of analyzing and solving a problem, and made them as important as the material itself. In other words, this is the "mindset" that Chad Orzel was referring to, which we should not lose when the subject is taught to non-STEM majors.

Zz.

### Clifford V. Johnson - Asymptotia

Advising on Genius: Helping Bring a Real Scientist to Screen

Well, I've been meaning to tell you about this for some time, but I've been distracted by many other things. Last year I had the pleasure of working closely with the writers and producers on the forthcoming series on National Geographic entitled "Genius". (Promotional photo above borrowed from the show's website.) The first season, starting on Tuesday, is about Einstein - his life and work. It is a ten episode arc. I'm going to venture that this is a rather new kind of TV show that I really hope does well, because it could open the door to longer, more careful treatments of subjects that usually are considered too "difficult" for general audiences, or just get badly handled in the short duration of a two-hour movie.

Since reviews are already coming out, let me urge you to keep an open mind, and bear in mind that the reviewers (at the time of writing) have only seen the two or three episodes that have been sent to them for review. A review based on two or three episodes of a series like this (which is more like a ten hour movie - you know how these newer forms of "long form TV" work) is akin to a review based on watching the first 25-35 minutes of a two hour film. You can get a sense of tone and so forth from such a short sample, but not much can be gleaned about content to come. So remember that when the various opinion pieces appear in the next few weeks.

So... content. That's what I spent a lot of time helping them with. I do this sort of thing for movies and TV a lot, as you know, but this was a far [...] Click to continue reading this post

The post Advising on Genius: Helping Bring a Real Scientist to Screen appeared first on Asymptotia.

## April 19, 2017

### ZapperZ - Physics and Physicists

The Mystery Of The Proton Spin
If you are not familiar with the issues surrounding the origin of the proton's spin quantum number, then this article might help.

It explains the reason why we don't believe that the proton spin is due just to the 3 quarks that make up the proton, and in the process, you get an idea of how complicated things can be inside a proton.

There are three good reasons that these three components might not add up so simply.
1. The quarks aren't free, but are bound together inside a small structure: the proton. Confining an object can shift its spin, and all three quarks are very much confined.
2. There are gluons inside, and gluons spin, too. The gluon spin can effectively "screen" the quark spin over the span of the proton, reducing its effects.
3. And finally, there are quantum effects that delocalize the quarks, preventing them from being in exactly one place like particles and requiring a more wave-like analysis. These effects can also reduce or alter the proton's overall spin.
Expect the same with a neutron.

Zz.

### The n-Category Cafe

Functional Equations, Entropy and Diversity: A Seminar Course

I’ve just finished teaching a seminar course officially called “Functional Equations”, but really more about the concepts of entropy and diversity.

I’m grateful to the participants — from many parts of mathematics, biology and physics, at levels from undergraduate to professor — who kept coming and contributing, week after week. It was lots of fun, and I learned a great deal.

This post collects together all the material in one place. First, the notes:

Now, the posts I wrote every week:

### The n-Category Cafe

The Diversity of a Metacommunity

The eleventh and final installment of the functional equations course can be described in two ways:

• From one perspective, I talked about conditional entropy, mutual information, and a very appealing analogy between these concepts and the most basic primary-school Venn diagrams.

• From another, it was about diversity across a metacommunity, that is, an ecological community divided into smaller communities (e.g. geographical sites).

The notes begin on page 44 here.

### Emily Lakdawalla - The Planetary Society Blog

This weekend, it's the beginning of the end for Cassini
NASA's long-lived Cassini spacecraft is about to buzz Titan for the final time, putting it on course for a spectacular mission finale that concludes in September.

### Lubos Motl - string vacua and pheno

All of string theory's power, beauty depends on quantum mechanics
Wednesday papers: Arkani-Hamed et al. show that the amplituhedron is all about sign flips. Maldacena et al. study the double-trace deformations that make a wormhole traversable. Among other things, they argue that the cloning is avoided because the extraction (by "Bob") eliminates the interior copy of the quantum information.
String/M-theory is the most beautiful, powerful, and predictive theory we know – and, most likely, the #1 with these adjectives among those that are mathematically possible – but the degree of one's appreciation for its exceptional credentials depends on one's general knowledge of physics, especially quantum mechanics.

Click to see an animation (info).

Quantum mechanics was basically discovered at one point in the mid 1920s and forced physics to make a one-time quantum jump. On the other hand, it also defines a trend because the novelties of quantum mechanics may be taken more or less seriously, exploited more or less cleverly and completely, and as physics was evolving towards more advanced, stringy theories and explanations of things, the role of the quantum mechanical thinking was undoubtedly increasing.

When we say "classical string theory", it is a slightly ambiguous term. We can take various classical limits of various theories that emerge from string theory, e.g. the classical field theory limit of some effective field theories in the spacetime. But the most typical representation of "classical string theory" is given by the dull yellow animation above. A classical string is literally a curve in a pre-existing spacetime that oscillates according to a wave equation of a sort.

OK, on that picture, you see a vibrating rope. It is not better or more exceptional than an oscillating membrane, a Chladni pattern, a little green man with Parkinson's disease, or anything else that moves and jiggles. The power of string theory only emerges once you consider the real, adult theory where all the observables such as the positions of points along the string are given by non-commuting operators.

Just to be sure, the rule that "observable = measurable quantities are associated with non-commuting operators" is what I mean by quantum mechanics.

What does quantum mechanics do for a humble string like the yellow string above?

First, it makes the spectrum of vibrations discrete.

Classically, you may change the initial state of the vibrating string arbitrarily and continuously, and the energy carried by the string is therefore continuous, too. That's not the case in quantum mechanics. Quantum mechanics got its name from the quantized, discrete eigenvalues of the energy. A vibrating string is basically equivalent to a collection of infinitely many harmonic oscillators. Each quantum mechanical harmonic oscillator only carries an integer number of excitations, not a continuous amount of energy.

The discreteness of the spectrum – which depends on quantum mechanics for understandable reasons – is obviously needed for strings in string theory to coincide with a finite number of particle species we know in particle physics – or a countable one that we may know in the future. Without the quantization, the number of species would be uncountably infinite. The species would form a continuum. There would be not just an electron and a muon but also elemuon and all other things in between, in an infinite-dimensional space.
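The discreteness argument can be illustrated in a few lines. This toy sketch (my own illustration, not part of the original post; units with $$\hbar = 1$$) shows the discrete ladder of energies carried by one quantized oscillator mode; a quantized string is a collection of such modes, one per harmonic.

```python
# Toy sketch (hbar = 1 units): a single harmonic-oscillator mode of
# frequency omega carries only the discrete energies E_n = (n + 1/2) * omega.
def mode_energies(omega, n_max):
    """Allowed energies of one quantized oscillator mode, n = 0 .. n_max."""
    return [(n + 0.5) * omega for n in range(n_max + 1)]

# A quantized string is a collection of such modes (one per harmonic), so
# its total energy is a sum of discrete contributions -- unlike a classical
# vibrating string, whose energy can be tuned continuously.
print(mode_energies(2.0, 3))  # [1.0, 3.0, 5.0, 7.0]
```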

Quantum mechanics is needed for some vibrating strings to act as gravitons and other exceptional particles.

String theory predicts gravity. It makes Einstein's general relativity – and the curved spacetime and gravitational waves that result from it – unavoidable. Why is it so? It's because some of the low-energy vibrating strings, when they're added into the spacetime, have exactly the same effect as a deformation of the underlying geometry – or other low-energy fields defining the background.

Why is it so? It's ultimately because of the state-operator correspondence. The internal dynamics of a string depends on the underlying spacetime geometry. And the spacetime geometry may be changed. But the infinitesimal change of the action etc. for a string is equivalent to the interaction of the string with another, "tiny" string that is equivalent to the geometry change.

We may determine the right vibration of the "tiny" string that makes the previous sentence work because for every operator on the world sheet (2D history of a fundamental string), there exists a state of the string in the Hilbert space of the stringy vibrations. And this state-operator correspondence totally depends on quantum mechanics, too.

In classical physics, the number of observables – any function $$f(x_i,p_i)$$ on a phase space – is vastly greater than the number of states. The states are just points given by the coordinates $$(x_i,p_i)$$ themselves. It's not hard to see that the first set is much greater – an infinite-dimensional vector space – than the second. However, quantum mechanics increases the number of states (by allowing all the superpositions) and reduces the number of observables (by making them quantized, or respectful towards the quantization of the phase space) and the two numbers become equivalent up to a simple tensoring with the functions of the parameter $$\sigma$$ along the string.

I don't want to explain the state-operator correspondence, other blog posts have tried it and it is a rather technical issue in conformal field theory that you should study once you are really serious about learning string theory. But here, I want to emphasize that it wouldn't be possible in any classical world.

Let me point out that the world of the "interpreters" of quantum mechanics who imagine that the wave function is on par with a classical wave is a classical world, so it is exactly as impotent as any other world.

T-duality depends on quantum mechanics

A nice elementary symmetry that you discover in string theory compactified on tori is the so-called T-duality. The compactified string theory on a circle of radius $$R$$ is the same as the theory on a circle of radius $$\alpha' / R$$, where $$T = 1/(2\pi \alpha')$$ is the string tension (energy or mass per unit length of the string). Well, this property depends on quantum mechanics as well because the T-duality map exchanges the momentum $$n$$ with the winding $$w$$, which are two integers.

But in a classical string theory, the winding number $$w\in \mathbb{Z}$$ would still be an integer (it counts how many times a closed string is wrapped around the circle) while the momentum would be continuous, $$n\in\mathbb{R}$$. So they couldn't be related by a permutation symmetry. The T-duality couldn't exist.
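The invariance of the momentum-plus-winding part of the closed-string spectrum under the exchange is easy to check mechanically. This sketch (my own illustration; oscillator contributions are omitted) uses the standard mass-formula terms $$M^2 \supset (n/R)^2 + (wR/\alpha')^2$$ and verifies that $$R \to \alpha'/R$$ together with $$n \leftrightarrow w$$ leaves them unchanged.

```python
# Sketch: momentum/winding part of the closed-string mass formula on a
# circle of radius R, with string slope alpha' (oscillator terms omitted):
#   M^2 ⊃ (n/R)^2 + (w R / alpha')^2
def mass_squared(n, w, R, alpha=1.0):
    return (n / R) ** 2 + (w * R / alpha) ** 2

R, alpha = 0.7, 1.0
for n in range(-2, 3):
    for w in range(-2, 3):
        # T-duality: R -> alpha'/R combined with n <-> w
        assert abs(mass_squared(n, w, R, alpha)
                   - mass_squared(w, n, alpha / R, alpha)) < 1e-12
print("spectrum invariant under (R, n, w) -> (alpha'/R, w, n)")
```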

Enhanced gauge symmetry on a self-dual radius depends on quantum mechanics

The fancier features of string theory you look at, the more obviously unavoidable quantum mechanics becomes. One of the funny things of bosonic string theory compactified on a circle is that the generic gauge group $$U(1)\times U(1)$$ gets enhanced to $$SU(2)\times SU(2)$$ on the self-dual radius. Even though you start with a theory where everything is "Abelian" or "linear" in some simple sense – a string propagating on a circle – you discover that the non-Abelian $$SU(2)$$ automatically arises if the radius obeys $$R = \alpha' / R$$, if it is self-dual.

I have discussed the enhanced symmetries in string theory some years ago but let's shorten the story. Why does the group get enhanced?

First, one must understand that for a generic radius, the unbroken gauge group is $$U(1)\times U(1)$$. One gets two $$U(1)$$ gauge groups because the gauge fields are basically $$g_{\mu,25}$$ and $$B_{\mu,25}$$. They arise as "last columns" of a symmetric tensor, the metric tensor, and an antisymmetric tensor, the $$B$$-field. The first (metric tensor-based) $$U(1)$$ group is the standard Kaluza-Klein gauge group and it is $$U(1)$$ because $$U(1)$$ is the isometry group of the compactification manifold. There is another gauge group arising from the gauge field that you get from a pre-existing 2-index gauge field $$B_{\mu\nu}$$, a two-form, if you set the second index equal to the compactified direction.

These two gauge fields are permuted by the T-duality symmetry (just like the momentum and winding are permuted, because the momentum and winding are really the charges under these two symmetries).

OK, how do you get the $$SU(2)$$? The funny thing is that the $$U(1)$$ gauge bosons are associated, via the operator-state correspondence mentioned above, with the operators on the world sheet $$(\partial_z X^{25}, \quad \partial_{\bar z} X^{25}).$$ One of them is holomorphic, the other one is anti-holomorphic, we say. T-duality maps these operators to $$(\partial_z X^{25}, \quad -\partial_{\bar z} X^{25}),$$ so it may be understood as a mirror reflection of the $$X^{25}$$ coordinate of the spacetime, except that it only acts on the anti-holomorphic (or right-moving) oscillations propagating along the string. That's great. You have something like a discrete T-duality which is just some sign flip or, equivalently, the exchange of the momentum and winding. How do you get a continuous $$SU(2)$$, I ask again?

The funny thing is that at the self-dual radius, there are not just two operators like that but six. The holomorphic one, $$\partial_z X^{25}$$, becomes just one component of a three-dimensional vector $$(\partial_z X_L^{25}, \quad :\exp(+i X_L^{25}):, \quad :\exp(-i X_L^{25}):).$$ Classically, the first operator looks nothing like the last two. If you have a holomorphic function $$X_L^{25}(z)$$ of some coordinate $$z$$, its $$z$$-derivative seems to be something completely different than its exponential, right? But quantum mechanically, they are almost the same thing! Why is it so?

If you want to describe all physically meaningful properties of three operators like that, the algebra of all their commutators encodes all the information. Just like string theory has the state-operator correspondence that allows you to translate between states and operators, it also has the OPEs – operator-product expansions – that allow you to extract the commutators of operators from the singularities in a decomposition of their products etc.

And it just happens that the singularities in the OPEs of any such operators are compatible with the statement that these three operators are components of a triplet that transforms under an $$SU(2)$$ symmetry. So you get one $$SU(2)$$ from the left-moving, $$z$$-dependent part $$X_L^{25}$$, and one $$SU(2)$$ from the $$\bar z$$-dependent $$X_R^{25}$$.
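The actual argument uses world-sheet OPEs, which goes beyond a blog snippet, but the algebra the three currents are claimed to close into is just $$su(2)$$, and that algebra can be verified concretely in its spin-1/2 matrix representation. This is an illustrative check of the target algebra only, not the OPE computation itself.

```python
# Sketch: verify the su(2) relations [T_a, T_b] = i eps_abc T_c in the
# spin-1/2 representation T_a = sigma_a / 2 (Pauli matrices), using plain
# 2x2 complex matrices so no external library is needed.
def mat(rows):
    return [[complex(x) for x in r] for r in rows]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

T = [scale(0.5, mat(M)) for M in (
    [[0, 1], [1, 0]],      # sigma_x
    [[0, -1j], [1j, 0]],   # sigma_y
    [[1, 0], [0, -1]],     # sigma_z
)]

# Cyclic structure constants: [T_0,T_1] = i T_2 and cyclic permutations.
eps = {(0, 1): 2, (1, 2): 0, (2, 0): 1}
for (a, b), c in eps.items():
    comm = sub(mul(T[a], T[b]), mul(T[b], T[a]))
    assert comm == scale(1j, T[c])
print("[T_a, T_b] = i eps_abc T_c verified")
```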

All other non-Abelian and sporadic or otherwise cool groups that you get from perturbative string theory arise similarly, and are therefore similarly dependent on quantum mechanics. For example, the monster group in the string theory model explaining the monstrous moonshine only exists because of a similar "equivalence" that is only true at the quantum level.

Spacetime dimension and sizes of groups are only predictable in quantum mechanics

String theory is so predictive that it forces you to choose a preferred dimension of the spacetime. The simple bosonic string theory has $$D=26$$ and superstring theory, the more realistic and fancy one, similarly demands $$D=10$$. This contrasts with the relatively unconstrained, "anything goes" theories of the pre-stringy era.

Polchinski's book contains "seven" ways to calculate the critical dimension, according to the counting by the author. But here, what is important is that all of them depend on a cancellation of some quantum anomalies.

In the covariant quantization, $$D=26$$ basically arises as the number of bosonic fields $$X^\mu$$ whose conformal anomaly cancels that of the $$bc$$ ghost system. The latter has $$c=1-3k^2=-26$$ because some constant is $$k=3$$: the central charge describes a coefficient in front of a standard term in the conformal anomaly. Well, you need to add $$c=+26$$ – from 26 bosons – to get zero. And you need to get zero for the conformal symmetry to hold, even in the quantum theory. And the conformal symmetry is needed for the state-operator correspondence and other things – it is a basic axiom of covariant perturbative string theory.
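The bookkeeping in the paragraph above is simple enough to write down explicitly; this is a trivial arithmetic check using only the numbers quoted there.

```python
# Sketch: conformal-anomaly bookkeeping behind D = 26.
# Each free boson X^mu contributes central charge +1; the bc ghost
# system contributes c = 1 - 3k^2 with k = 3, i.e. -26.
def bc_ghost_central_charge(k=3):
    return 1 - 3 * k ** 2

D = 26  # number of spacetime bosons X^mu
total = D * 1 + bc_ghost_central_charge()
print(total)  # 0 -- the anomaly cancels exactly in 26 dimensions
```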

Alternatively, you may define string theory in the light-cone gauge. The full Lorentz symmetry won't be obvious anymore. You will find out that some commutators $$[j^{i-},j^{j-}] = \dots$$ in the light-cone coordinates behave almost correctly. Except that when you substitute the "bilinear in stringy oscillators" expressions for the generators $$j^{i-}$$, the calculation of the commutator will contain not only the "single contraction" terms – this part of the calculation is basically copying a classical calculation – but also the "double contraction" terms. And those don't trivially cancel. You will find out that they only cancel for 24 transverse coordinates. Needless to say, the "double contraction" is something invisible at the level of the Poisson brackets. You really need to talk about the full commutators – and therefore full quantum mechanics, not just some Poisson-bracket-like approximation – to get these terms at all.

Again, the correct spacetime dimension $$D=26$$ or $$D=10$$ arises from the cancellation of some quantum anomaly – some new quantum mechanical effects that have the potential of spoiling some symmetries that "trivially" hold in the classical limit that may have inspired you. The prediction couldn't be there if you ignored quantum mechanics.

The field equations in the spacetime result from an anomaly cancellation, too.

If you order perturbative strings to propagate on a curved spacetime background, you may derive Einstein's equations (plus stringy short-distance corrections), which in the vacuum simply demand Ricci-flatness, $$R_{\mu\nu} = 0.$$ A century ago, Einstein had to discover that this is what the geometry has to obey in the vacuum. It's an elegant equation and, among similarly simple ones, it's basically the unique one that is diffeomorphism-symmetric. And you may derive it from the extremization of the Einstein-Hilbert action, too.

However, string theory is capable of doing all this guesswork for you. In other words, string theory is capable of replacing Einstein's 10 years of work. You may derive the Ricci-flatness from the cancellation of the conformal anomaly, too. You need the world sheet theory to stay invariant under the scaling of the world sheet coordinates, even at the quantum level.

But the world sheet theory depends on the functions $$g_{\mu\nu} (X^\lambda(\sigma,\tau))$$ and for every point in the spacetime given by the numbers $$\{X^\lambda\}$$, you have a whole symmetric tensor $$g_{\mu\nu}$$ of parameters that behave like "coupling constants" in the theory. But in a quantum field theory, and the world sheet theory is a quantum field theory, every coupling constant generically "runs". Its value depends on the chosen energy scale $$E$$. And the derivative with respect to the scale, $$\frac{dg_{\mu\nu}(X^\lambda)}{d (\ln E)} = \beta_{\mu\nu}(X^\lambda),$$ is known as the beta-function. Here you have as many beta-functions as you have numbers determining the metric tensor at each spacetime point. The beta-functions have to vanish for the theory to remain scale-invariant on the world sheet – and you need that. And you will find out that $$\beta_{\mu\nu}(X^\lambda) = R_{\mu\nu} (X^\lambda).$$ The beta-function is nothing else than the Ricci tensor. Well, it could be the Einstein tensor, and there could be extra constants and corrections. But I want to please you with the cool stuff; I hope that you don't doubt that if you want to work with these things, you have to take care of many details that make the exact answers deviate from the most elegant, naive Ansatz with the given amount of beauty.

So Einstein's equations result from the cancellation of the conformal anomaly as well. The very requirement that the theory remains consistent at the quantum level – and the preservation of gauge symmetries is indeed needed for the consistency – is enough to derive the equations for the metric tensor in the spacetime.

Needless to say, this rule generalizes to all the fields that you may get from particular vibrating strings in the spacetime. The Dirac, Weyl, Maxwell, Yang-Mills, Proca, Higgs, and other equations of motion for the fields in the spacetime (including all their desirable interactions) may be derived from the scale-invariance of the world sheet theory, too.

In this sense, the logical consistency of the quantum mechanical theory dictates not only the right spacetime dimension and other numbers of degrees of freedom, sizes of groups such as $$E_8\times E_8$$ or $$SO(32)$$ for the heterotic string (the rank must be $$16$$ and the dimension has to be $$496$$, among other conditions), but the consistency also determines all the dynamical equations of motion.
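The numerology quoted for the heterotic gauge groups is easy to verify in a couple of lines; the only outside input is the standard Lie-theory fact that a single $$E_8$$ has dimension 248, and the rest is arithmetic.

```python
# Sketch: the two heterotic gauge groups share rank 16 and dimension 496.
def so_dim(n):
    """dim SO(n) = n(n-1)/2, the number of independent antisymmetric generators."""
    return n * (n - 1) // 2

dim_e8 = 248                 # dimension of one E8 (standard Lie-theory fact)
assert 2 * dim_e8 == 496     # E8 x E8
assert so_dim(32) == 496     # SO(32)
print("both heterotic groups: rank 16, dimension 496")
```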

S-duality, T-duality, mirror symmetry, AdS/CFT and holography, ER-EPR, and so on

And I could continue. S-duality – the symmetry of the theories under the $$g\to 1/g$$ maps of the coupling constant – also depends on quantum mechanics. It's absolutely obvious that no S-duality could ever work in a classical world, not even in quantum field theory. Among other things, S-dualities exchange the elementary electrically charged particles such as electrons with the magnetically charged ones, the magnetic monopoles. But classically, those are very different: electrons are point-like objects with an "intrinsic" charge while the magnetic monopoles are solitonic solutions where the charge is spread over the solution and quantized because of topological considerations.

However, quantum mechanically, they may be related by a permutation symmetry.

Mirror symmetry is an application of T-duality in the Calabi-Yau context, so everything I said about the quantum mechanical dependence of T-duality obviously holds for mirror symmetry, too.

Holography in quantum gravity – as seen in AdS/CFT and elsewhere – obviously depends on quantum mechanics, too. The extra holographic dimension morally arises from the "energy scale" in the boundary theory. But the AdS space has an isometry relating all these dimensions. Classically, "energy scale" cannot be indistinguishable from a "spacetime coordinate". Classically, the energy and momentum live in a spacetime, they have different roles.

Quantum mechanically, there may be such symmetries between energy/momentum and position/timing. The harmonic oscillator is a basic template for such a symmetry: $$x$$ and $$p$$ may be rotated to each other.

ER-EPR talks about the quantum entanglement so it's obvious that it would be impossible in a classical world.

I could make the same point about basically anything that is attractive about string theory – and even about comparable but less intriguing features of quantum field theories. All these things depend on quantum mechanics. They would be impossible in a classical world.

Summary: quantum mechanics erases qualitative differences, creates new symmetries, merges concepts, magnifies new degrees of freedom to make singularities harmless.

Quantum mechanics does a lot of things. You have seen many examples – and there are many others – that quantum mechanics generally allows you to find symmetries between objects that look classically totally different. Like the momentum and winding of a string. Or the derivative of $$X$$ with the exponential of $$X$$ – at the self-dual radius. Or the states and operators. Or elementary particles and composite objects such as magnetic monopoles. And so on, and so on.

Sometimes, the spectrum of a quantity becomes discrete in order for the map or symmetry to be possible.

Sometimes, just the qualitative differences are erased. Sometimes, all the differences are erased and quantum mechanics enables the emergence of exact new symmetries that would be totally crazy within classical physics. Sometimes, these symmetries are combined with some naive ones that already exist classically. $$U(1)\times U(1)$$ may be extended to $$SU(2)\times SU(2)$$ quantum mechanically. Similarly, $$SO(16)\times SO(16)$$ in the fermionic definition or $$U(1)^{16}$$ in the bosonic formulation of the heterotic string gets extended to $$E_8\times E_8$$. A much smaller, classically visible discrete group gets extended to the monster group in the full quantum string theory explaining the monstrous moonshine.

Whenever a classical theory would be getting dangerously singular, quantum mechanics changes the situation so that either the dangerous states disappear or they're supplemented with new degrees of freedom or another cure. In many typical cases, the "potentially dangerous regime" of a theory – where you could be afraid of an inconsistency – is protected and consistent because quantum mechanics makes all the modifications and additions needed for that regime to be exactly equivalent to another theory that you have known – or whose classical limit you have encountered. Quantum mechanics is what allows all the dualities and the continuous connection of all seemingly inequivalent vacua of string/M-theory into one master theory.

All the constraints – on the number of dimensions, sizes of gauge groups, and even equations of motion for the fields in spacetime – arise from the quantum mechanical consistency, e.g. from the anomaly cancellation conditions.

When you become familiar with all these amazing effects of string theory and others, you are forced to start to think quantum mechanically. You will understand that the interesting theory – with the uniqueness, predictive power, consistency, symmetries, unification of concepts – is unavoidably just the quantum mechanical one. There is really no cool classical theory. The classical theories that you encounter anywhere in string theory are the classical limits of the full theory.

You will unavoidably get rid of the bad habit of thinking of a classical theory as the "primary one", while the quantum mechanical theory is often considered "derived" from it by the beginners (including permanent beginners). Within string/M-theory, it's spectacularly clear that the right relationship is going in the opposite direction. The quantum mechanical theory – with its quantum rules, objects, statements, and relationships – is the primary one while classical theories are just approximations and caricatures that lack the full glory of the quantum mechanical theory.

### John Baez - Azimuth

Stanford Complexity Group

Aaron Goodman of the Stanford Complexity Group invited me to give a talk there on Thursday April 20th. If you’re nearby—like in Silicon Valley—please drop by! It will be in Clark S361 at 4:20 pm.

Here’s the idea. Everyone likes to say that biology is all about information. There’s something true about this—just think about DNA. But what does this insight actually do for us, quantitatively speaking? To figure this out, we need to do some work.

Biology is also about things that make copies of themselves. So it makes sense to figure out how information theory is connected to the replicator equation—a simple model of population dynamics for self-replicating entities.

To see the connection, we need to use ‘relative information’: the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Then everything pops into sharp focus.
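As a concrete illustration of the relative information mentioned above, here is the standard Kullback–Leibler divergence for discrete distributions (the function name is my own choice):

```python
# Sketch: relative information (Kullback-Leibler divergence) between two
# discrete probability distributions p and q:
#   D(p || q) = sum_i p_i * log(p_i / q_i)   (in nats)
import math

def relative_information(p, q):
    """KL divergence; assumes p and q are positive and each sums to 1."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

p = [0.5, 0.5]
q = [0.9, 0.1]
d = relative_information(p, q)
print(round(d, 4))  # 0.5108 -- equals ln(5/3); nonnegative, zero only if p == q
```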

It turns out that free energy—energy in forms that can actually be used, not just waste heat—is a special case of relative information. Since the decrease of free energy is what drives chemical reactions, biochemistry is founded on relative information.

But there’s a lot more to it than this! Using relative information we can also see evolution as a learning process, fix the problems with Fisher’s fundamental theorem of natural selection, and more.

So this is what I’ll talk about! You can see my slides here:

• John Baez, Biology as information dynamics.

but my talk will be videotaped, and it’ll eventually be put here:

You can already see lots of cool talks at this location!

## April 18, 2017

### Symmetrybreaking - Fermilab/SLAC

A new search to watch from LHCb

A new result from the LHCb experiment could be an early indicator of an inconsistency in the Standard Model.

The subatomic universe is an intricate mosaic of particles and forces. The Standard Model of particle physics is a time-tested instruction manual that precisely predicts how particles and forces behave. But it’s incomplete, ignoring phenomena such as gravity and dark matter.

Today the LHCb experiment at the European research center CERN released a result that could be an early indication of new, undiscovered physics beyond the Standard Model.

However, more data is needed before LHCb scientists can definitively claim they’ve found a crack in the world’s most robust roadmap to the subatomic universe.

“In particle physics, you can’t just snap your fingers and claim a discovery,” says Marie-Hélène Schune, a researcher on the LHCb experiment from Le Centre National de la Recherche Scientifique in Orsay, France. “It’s not magic. It’s long, hard work and you must be obstinate when facing problems. We always question everything and never take anything for granted.”

The LHCb experiment records and analyzes the decay patterns of rare hadrons—particles made of quarks—that are produced in the Large Hadron Collider’s energetic proton-proton collisions. By comparing the experimental results to the Standard Model’s predictions, scientists can search for discrepancies. Significant deviations between the theory and experimental results could be an early indication of an undiscovered particle or force at play.

This new result looks at hadrons containing a bottom quark as they transform into hadrons containing a strange quark. This rare decay pattern can generate either two electrons or two muons as byproducts. Electrons and muons are different types or “flavors” of particles called leptons. The Standard Model predicts that the production of electrons and muons should be equally favorable—essentially a subatomic coin toss every time this transformation occurs.

“As far as the Standard Model is concerned, electrons, muons and tau leptons are completely interchangeable,” Schune says. “It’s completely blind to lepton flavors; only the large mass difference of the tau lepton plays a role in certain processes. This 50-50 prediction for muons and electrons is very precise.”

But instead of finding a 50-50 ratio between muons and electrons, the latest results from the LHCb experiment show that it’s more like 40 muons generated for every 60 electrons.

“If this initial result becomes stronger with more data, it could mean that there are other, invisible particles involved in this process that see flavor,” Schune says. “We’ll leave it up to the theorists’ imaginations to figure out what’s going on.”

However, just like any coin-toss, it’s difficult to know if this discrepancy is the result of an unknown favoritism or the consequence of chance. To delineate between these two possibilities, scientists wait until they hit a certain statistical threshold before claiming a discovery, often 5 sigma.

“Five sigma is a measurement of statistical deviation and means there is only a 1-in-3.5-million chance that the Standard Model is correct and our result is just an unlucky statistical fluke,” Schune says. “That’s a pretty good indication that it’s not chance, but rather the first sightings of a new subatomic process.”

Currently, this new result is at approximately 2.5 standard deviations, which means there is about a 1-in-125 possibility that there’s no new physics at play and the experimenters are just the unfortunate victims of statistical fluctuation.
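The sigma-to-probability conversion behind these statements is a Gaussian tail integral. This sketch uses the one-sided convention, which reproduces the "1 in 3.5 million" figure for 5 sigma; conventions (one- vs two-sided) differ between write-ups, which is one reason quoted odds can vary slightly.

```python
# Sketch: convert a significance of z "sigmas" into a one-sided Gaussian
# tail probability, p = erfc(z / sqrt(2)) / 2.
import math

def one_sided_p_value(z):
    return math.erfc(z / math.sqrt(2)) / 2

p5 = one_sided_p_value(5.0)
print(f"5 sigma   -> p = {p5:.3g}, about 1 in {1 / p5:,.0f}")  # ~1 in 3.5 million
p25 = one_sided_p_value(2.5)
print(f"2.5 sigma -> p = {p25:.3g}")  # a far less impressive tail probability
```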

This isn’t the first time that the LHCb experiment has seen unexpected behavior in related processes. Hassan Jawahery from the University of Maryland also works on the LHCb experiment and is studying another particle decay involving bottom quarks transforming into charm quarks. He and his colleagues are measuring the ratio of muons to tau leptons generated during this decay.

“Correcting for the large mass differences between muons and tau leptons, we’d expect to see about 25 taus produced for every 100 muons,” Jawahery says. “We measured a ratio of 34 taus for every 100 muons.”

On its own, this measurement is below the line of statistical significance needed to raise an eyebrow. However, two other experiments—the BaBar experiment at SLAC and the Belle experiment in Japan—also measured this process and saw something similar.

“We might be seeing the first hints of a new particle or force throwing its weight around during two independent subatomic processes,” Jawahery says. “It’s tantalizing, but as experimentalists we are still waiting for all these individual results to grow in significance before we get too excited.”

More data and improved experimental techniques will help the LHCb experiment and its counterparts narrow in on these processes and confirm if there really is something funny happening behind the scenes in the subatomic universe.

“Conceptually, these measurements are very simple,” Schune says. “But practically, they are very challenging to perform. These first results are all from data collected between 2011 and 2012 during Run 1 of the LHC. It will be intriguing to see if data from Run 2 shows the same thing.”

### ZapperZ - Physics and Physicists

Testing For The Unruh Effect
A new paper that is to appear in Phys. Rev. Lett. is already getting quite a bit of advance publicity. In it, the authors proposed a rather simple way to test for the existence of the long-proposed Unruh effect.

Things get even weirder if one observer accelerates. Any observer traveling at a constant speed will measure the temperature of empty space as absolute zero. But an accelerated observer will find the vacuum hotter. At least that's what William Unruh, a theorist at the University of British Columbia in Vancouver, Canada, argued in 1976. To a nonaccelerating observer, the vacuum is devoid of particles—so that if he holds a particle detector it will register no clicks. In contrast, Unruh argued, an accelerated observer will detect a fog of photons and other particles, as the number of quantum particles flitting about depends on an observer's motion. The greater the acceleration, the higher the temperature of that fog or "bath."
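To get a sense of scale (my illustration, not from the paper or the quoted article), the Unruh temperature is T = ħa/(2πck_B), which is absurdly small for everyday accelerations:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
K_B = 1.380649e-23      # Boltzmann constant, J/K

def unruh_temperature(a):
    """Unruh temperature T = hbar*a / (2*pi*c*k_B) for proper acceleration a in m/s^2."""
    return HBAR * a / (2 * math.pi * C * K_B)

# Even at Earth-gravity acceleration the vacuum "bath" is unmeasurably cold:
print(unruh_temperature(9.8))   # ~4e-20 K
```

Reaching even a 1-kelvin bath requires accelerations of order 10^20 m/s², which is why proposals to see the effect rely on extreme setups such as intense laser fields or particle accelerators.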

So obviously, this is a very difficult effect to detect, which explains why we have had no experimental evidence for it since it was first proposed in 1976. That is why this new paper is causing heads to turn: the authors propose a test using existing technology. You may read the two links above to see what they are proposing to do with our current particle accelerators.

But what is a bit amusing is that there are already skeptics about this testing methodology, and each camp is skeptical for different reasons.

Skeptics say the experiment won’t work, but they disagree on why. If the situation is properly analyzed, there is no fog of photons in the accelerated frame, says Detlev Buchholz, a theorist at the University of Göttingen in Germany. "The Unruh gas does not exist!" he says. Nevertheless, Buchholz says, the vacuum will appear hot to an accelerated observer, but because of a kind of friction that arises through the interplay of quantum uncertainty and acceleration. So, the experiment might show the desired effect, but that wouldn't reveal the supposed fog of photons in the accelerating frame.

In contrast, Robert O'Connell, a theorist at Louisiana State University in Baton Rouge, insists that in the accelerated frame there is a fog of photons. However, he contends, it is not possible to draw energy out of that fog to produce extra radiation in the lab frame. O'Connell cites a basic bit of physics called the fluctuation-dissipation theorem, which states that a particle interacting with a heat bath will pump as much energy into the bath as it pulls out. Thus, he argues, Unruh's fog of photons exists, but the experiment should not produce the supposed signal anyway.

If there's one thing that experimenters like, it is proving theorists wrong! :) So whichever way an experiment on this turns out, it is bound to disprove one group of theorists or another. It's a win-win situation! :)

Zz.

### Emily Lakdawalla - The Planetary Society Blog

Spring 2017 issue of The Planetary Report now available
The Spring 2017 issue of The Planetary Report is in the mail and available online now to our members!

### Tommaso Dorigo - Scientificblogging

LHCb Measures Unity, Finds 0.6
With slightly anti-climactic timing, considering the just-ended orgy of new results presented at winter conferences in particle physics (which I touched on here), the LHCb collaboration today released the results of a measurement of unity, drawing attention to the fact that unity was found to be not equal to 1.0.

### Symmetrybreaking - Fermilab/SLAC

How blue-sky research shapes the future

Though driven by curiosity, fundamental investigations are the crucial first step toward innovation.

When scientists announced their discovery of gravitational waves in 2016, it made headlines all over the world. The existence of these invisible ripples in space-time had finally been confirmed.

It was a momentous feat in basic research, the curiosity-driven search for fundamental knowledge about the universe and the elements within it. Basic (or “blue-sky”) research is distinct from applied research, which is targeted toward developing or advancing technologies to solve a specific problem or to create a new product.

But the two are deeply connected.

“Applied research is exploring the continents you know, whereas basic research is setting off in a ship and seeing where you get,” says Frank Wilczek, a theoretical physicist at MIT. “You might just have to return, or sink at sea, or you might discover a whole new continent. So it’s much more long-term, it’s riskier and it doesn’t always pay dividends.”

When it does, he says, it opens up entirely new possibilities available only to those who set sail into uncharted waters.

Most of physics—especially particle physics—falls under the umbrella of basic research. In particle physics “we’re asking some of the deepest questions that are accessible by observations about the nature of matter and energy—and ultimately about space and time also, because all of these things are tied together,” says Jim Gates, a theoretical physicist at the University of Maryland.

Physicists seek answers to questions about the early universe, the nature of dark energy, and theoretical phenomena, such as supersymmetry, string theory and extra dimensions.

Perhaps one of the most well-known basic researchers was the physicist who predicted the existence of gravitational waves: Albert Einstein.

Einstein devoted his life to elucidating elementary concepts such as the nature of gravity and the relationship between space and time. According to Wilczek, “it was clear that what drove what he did was not the desire to produce a product, or anything so worldly, but to resolve puzzles and perceived imperfections in our understanding.”

In addition to advancing our understanding of the world, Einstein’s work led to important technological developments. The Global Positioning System, for instance, would not have been possible without the theories of special and general relativity. A GPS receiver, like the one in your smart phone, determines its location based on timed signals it receives from the nearest four of a collection of GPS satellites orbiting Earth. Because the satellites are moving so quickly while also orbiting at a great distance from the gravitational pull of Earth, they experience time differently from the receiver on Earth’s surface. Thanks to Einstein’s theories, engineers can calculate and correct for this difference.
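The size of the GPS effect is worth making concrete. As a back-of-the-envelope sketch (my numbers, using standard textbook values for the GPS constellation, not figures from the article): special relativity slows a satellite clock by about 7 microseconds per day, general relativity speeds it up by about 45, for a net drift near +38 microseconds per day.

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
C = 2.99792458e8      # speed of light, m/s
R_EARTH = 6.371e6     # mean Earth radius, m
R_ORBIT = 2.6560e7    # approximate GPS orbital radius, m
DAY = 86400.0         # seconds per day

v = math.sqrt(GM / R_ORBIT)                  # orbital speed, ~3.9 km/s

# Special relativity: a moving clock runs slow relative to the ground.
sr_shift = -(v**2) / (2 * C**2) * DAY        # ~ -7 microseconds/day

# General relativity: a clock higher in Earth's gravity well runs fast.
gr_shift = (GM / C**2) * (1/R_EARTH - 1/R_ORBIT) * DAY   # ~ +45 microseconds/day

net = sr_shift + gr_shift                    # ~ +38 microseconds/day
print(net * 1e6, "microseconds per day")
```

Left uncorrected, a clock error of tens of microseconds per day translates into positioning errors that grow by kilometers per day, which is why the relativistic corrections are built into the system.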

Illustration by Corinne Mucha

There’s a long history of serendipitous output from basic research. For example, in 1989 at CERN, the European research center, computer scientist Tim Berners-Lee was looking for a way to facilitate information-sharing between researchers. He invented the World Wide Web.

While investigating the properties of nuclei within a magnetic field at Columbia University in the 1930s, physicist Isidor Isaac Rabi discovered the basic principles of nuclear magnetic resonance. These principles eventually formed the basis of magnetic resonance imaging (MRI).

It would be another 50 years before MRI machines were widely used—again with the help of basic research. MRI machines require big, superconducting magnets to function. Luckily, around the same time that Rabi’s discovery was being investigated for medical imaging, scientists and engineers at the US Department of Energy’s Fermi National Accelerator Laboratory began building the Tevatron particle accelerator to enable research into the fundamental nature of particles, a task that called for huge amounts of superconducting wire.

“We were the first large, demanding customer for superconducting cable,” says Chris Quigg, a theoretical physicist at Fermilab. “We were spending a lot of money to get the performance that we needed.” The Tevatron created a commercial market for superconducting wire, making it practical for companies to build MRI machines on a large scale for places like hospitals.

Doctors now use MRI to produce detailed images of the insides of the human body, helpful tools in diagnosing and treating a variety of medical complications, including cancer, heart problems, and diseases in organs such as the liver, pancreas and bowels.

Another tool of particle physics, the particle detector, has also been adopted for uses in various industries. In the 1980s, for example, particle physicists developed technology precise enough to detect a single photon. Today doctors use this same technology to detect tumors, heart disease and central nervous system disorders. They do this by conducting positron emission tomography scans, or PET scans. Before undergoing a PET scan, the patient is given a dye containing radioactive tracers, by injection, ingestion or inhalation. The tracers emit antimatter particles, which annihilate with matter particles and release photons; the PET scanner picks up these photons to create a picture detailed enough to reveal problems at the cellular level.
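The photon energy a PET scanner looks for follows directly from E = mc² (a standard textbook number, not from the article): each electron-positron annihilation produces a pair of 511 keV photons, one per particle's rest mass.

```python
M_E = 9.1093837015e-31   # electron (and positron) rest mass, kg
C = 2.99792458e8         # speed of light, m/s
EV = 1.602176634e-19     # joules per electronvolt

# Rest-mass energy released per particle in an annihilation:
photon_energy_kev = M_E * C**2 / EV / 1e3
print(photon_energy_kev)   # ~511 keV
```

Detecting coincident back-to-back 511 keV photons is what lets the scanner reconstruct where in the body the annihilation happened.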

As Gates says, “a lot of the devices and concepts that you see in science fiction stories will never come into existence unless we pursue the concept of basic research. You’re not going to be able to construct starships unless you do the research now in order to build these in the future.”

It’s unclear what applications could come of humanity’s new knowledge of the existence of gravitational waves.

It could be enough that we have learned something new about how our universe works. But if history gives us any indication, continued exploration will also provide additional benefits along the way.

### Lubos Motl - string vacua and pheno

LHCb insists on tension with lepton universality in $$1$$-$$6\GeV^2$$
The number of references to B-mesons on this blog significantly exceeds my degree of excitement about these bound states of quarks and antiquarks but what can I do? They are among the leaders of the revolt against the Standard Model.

Various physicists have mentioned a new announcement by the LHCb collaboration which is smaller than ATLAS and CMS but at least equally assertive.

Another physicist has embedded the key graph where you should notice that the black crosses sit well below the dotted line where they're predicted to sit

and we were told about the LHCb PowerPoint presentation where this graph was taken from.

To make the story short, some ratio describing the decays of B-mesons that should be one according to the Standard Model if the electron, muon, and tau are equally behaved – except for their differing masses which are rather irrelevant here – ends up being $\Large {\mathcal R}_{K^{*0}} = 0.69^{+0.12}_{-0.08}$ especially in the interval of momentum transfer $$q^2 \in (1,6)\GeV^2$$.

There are some similar deviations at higher values of $$q^2$$, it's always about 2.2-2.5 standard deviations below the Standard Model. Sadly, it seems that neither BaBar nor Belle saw these deficits: their mean values are slightly greater than one although their error margin was greater than that of the LHCb collaboration. On the other hand, the deficit seems rather compatible with the LHCb's recent announcements based on a (hopefully) disjoint set of decays.

An obvious reaction is that the deviation in this low-energy range isn't too exciting, anyway, because

Well, unless it's some new physics (new even for Jester) that affects this energy range. ;-)

I find this deviation rather small and our survival of the 4-sigma excess at $$750\GeV$$ should have made us a little bit more demanding when it comes to the significance level that is needed to make us aroused. But those who are interested in the existing or potentially emerging experimental anomalies should be aware of this deviation because the competition in this field is very limited.

## April 17, 2017

### ZapperZ - Physics and Physicists

Hot Atoms Interferometer
This work will not catch media attention because it isn't "sexy", but damn, it is astonishing nevertheless.

Quantum behavior is rarely seen clearly at the macroscopic level because of the difficulty of maintaining coherence over substantial length and time scales. One way to extend such scales is to cool things down to extremely low temperatures, so that decoherence due to thermal scattering is minimized.

So it was with great interest that I read this new paper on an atom interferometer realized with a "warm" atomic vapor [1]! You also have access to the actual paper from that link.

While the sensitivity of this technique is, unsurprisingly, significantly lower than that of cold atoms, it has two major advantages:

However, sensitivity is not the only parameter of relevance for applications, and the new scheme offers two important advantages over cold schemes. The first is that it can acquire data at a rate of 10 kHz, in contrast to the typical 1-Hz rate of cold-atom LPAIs. The second advantage is the broader range of accelerations that can be measured with the same setup. This vapor-cell sensor remains operational over an acceleration range of 88g, several times larger than the typical range of cold LPAIs.

The large bandwidth and dynamic range of the instrument built by Biedermann and co-workers may enable applications like inertial navigation in highly vibrating environments, such as spacecraft or airplanes. What’s more, the new scheme, like all LPAIs, has an important advantage over devices like laser or electromechanical gyroscopes: it delivers acceleration measurements that are absolute, without requiring a reference signal. This opens new possibilities for drift-free inertial navigation devices that work even when signals provided by global satellite positioning systems are not available, such as in underwater navigation.

And again, let me highlight the direct and clear application of something that started out as a purely academic, knowledge-driven curiosity. This really is an application of the principle of superposition in quantum mechanics, i.e. of Schrödinger's cat.

This is an amazing experimental accomplishment.

Zz.

[1] G. W. Biedermann et al., Phys. Rev. Lett. 118, 163601 (2017).

### Emily Lakdawalla - The Planetary Society Blog

Our asteroid hunters are trying to save the world. Here’s what they’ve been up to
Here are some recent reports from our NEO Shoemaker Grant program asteroid observers, who are quite literally trying to save the world.

## April 15, 2017

### The n-Category Cafe

Value

What is the value of the whole in terms of the values of the parts?

More specifically, given a finite set whose elements have assigned “values” $v_1, \ldots, v_n$ and assigned “sizes” $p_1, \ldots, p_n$ (normalized to sum to $1$), how can we assign a value $\sigma(\mathbf{p}, \mathbf{v})$ to the set in a coherent way?

This seems like a very general question. But in fact, just a few sensible requirements on the function $\sigma$ are enough to pin it down almost uniquely. And the answer turns out to be closely connected to existing mathematical concepts that you probably already know.

Let’s write

$$\Delta_n = \Bigl\{ (p_1, \ldots, p_n) \in \mathbb{R}^n : p_i \geq 0, \sum p_i = 1 \Bigr\}$$

for the set of probability distributions on $\{1, \ldots, n\}$. Assuming that our “values” are positive real numbers, we’re interested in sequences of functions

$$\Bigl( \sigma \colon \Delta_n \times (0, \infty)^n \to (0, \infty) \Bigr)_{n \geq 1}$$

that aggregate the values of the elements to give a value to the whole set. So, if the elements of the set have relative sizes $\mathbf{p} = (p_1, \ldots, p_n)$ and values $\mathbf{v} = (v_1, \ldots, v_n)$, then the value assigned to the whole set is $\sigma(\mathbf{p}, \mathbf{v})$.

Here are some properties that it would be reasonable for $\sigma$ to satisfy.

Homogeneity  The idea is that whatever “value” means, the value of the set and the value of the elements should be measured in the same units. For instance, if the elements are valued in kilograms then the set should be valued in kilograms too. A switch from kilograms to grams would then multiply both values by 1000. So, in general, we ask that

$$\sigma(\mathbf{p}, c\mathbf{v}) = c\,\sigma(\mathbf{p}, \mathbf{v})$$

for all $\mathbf{p} \in \Delta_n$, $\mathbf{v} \in (0, \infty)^n$ and $c \in (0, \infty)$.

Monotonicity  The values of the elements are supposed to make a positive contribution to the value of the whole, so we ask that if $v_i \leq v'_i$ for all $i$ then

$$\sigma(\mathbf{p}, \mathbf{v}) \leq \sigma(\mathbf{p}, \mathbf{v}')$$

for all $\mathbf{p} \in \Delta_n$.

Replication  Suppose that our $n$ elements have the same size and the same value, $v$. Then the value of the whole set should be $n v$. This property says, among other things, that $\sigma$ isn’t an average: putting in more elements of value $v$ increases the value of the whole set!

If $\sigma$ is homogeneous, we might as well assume that $v = 1$, in which case the requirement is that

$$\sigma\bigl( (1/n, \ldots, 1/n), (1, \ldots, 1) \bigr) = n.$$

Modularity  This one’s a basic logical axiom, best illustrated by an example.

Imagine that we’re very ambitious and wish to evaluate the entire planet — or at least, the part that’s land. And suppose we already know the values and relative sizes of every country.

We could, of course, simply put this data into $\sigma$ and get an answer immediately. But we could instead begin by evaluating each continent, and then compute the value of the planet using the values and sizes of the continents. If $\sigma$ is sensible, this should give the same answer.

The notation needed to express this formally is a bit heavy. Let $\mathbf{w} \in \Delta_n$; in our example, $n = 7$ (or however many continents there are) and $\mathbf{w} = (w_1, \ldots, w_7)$ encodes their relative sizes. For each $i = 1, \ldots, n$, let $\mathbf{p}^i \in \Delta_{k_i}$; in our example, $\mathbf{p}^i$ encodes the relative sizes of the countries on the $i$th continent. Then we get a probability distribution

$$\mathbf{w} \circ (\mathbf{p}^1, \ldots, \mathbf{p}^n) = (w_1 p^1_1, \ldots, w_1 p^1_{k_1}, \ \ldots, \ w_n p^n_1, \ldots, w_n p^n_{k_n}) \in \Delta_{k_1 + \cdots + k_n},$$

which in our example encodes the relative sizes of all the countries on the planet. (Incidentally, this composition makes $(\Delta_n)$ into an operad, a fact that we’ve discussed many times before on this blog.) Also let

$$\mathbf{v}^1 = (v^1_1, \ldots, v^1_{k_1}) \in (0, \infty)^{k_1}, \quad \ldots, \quad \mathbf{v}^n = (v^n_1, \ldots, v^n_{k_n}) \in (0, \infty)^{k_n}.$$

In the example, $v^i_j$ is the value of the $j$th country on the $i$th continent. Then the value of the $i$th continent is $\sigma(\mathbf{p}^i, \mathbf{v}^i)$, so the axiom is that

$$\sigma\bigl( \mathbf{w} \circ (\mathbf{p}^1, \ldots, \mathbf{p}^n), (v^1_1, \ldots, v^1_{k_1}, \ldots, v^n_1, \ldots, v^n_{k_n}) \bigr) = \sigma\Bigl( \mathbf{w}, \bigl( \sigma(\mathbf{p}^1, \mathbf{v}^1), \ldots, \sigma(\mathbf{p}^n, \mathbf{v}^n) \bigr) \Bigr).$$

The left-hand side is the value of the planet calculated in a single step, and the right-hand side is its value when calculated in two steps, with continents as the intermediate stage.

Symmetry  It shouldn’t matter what order we list the elements in. So it’s natural to ask that

$$\sigma(\mathbf{p}, \mathbf{v}) = \sigma(\mathbf{p}\tau, \mathbf{v}\tau)$$

for any $\tau$ in the symmetric group $S_n$, where the right-hand side refers to the obvious $S_n$-actions.

Absent elements should count for nothing! In other words, if $p_1 = 0$ then we should have

$$\sigma\bigl( (p_1, \ldots, p_n), (v_1, \ldots, v_n) \bigr) = \sigma\bigl( (p_2, \ldots, p_n), (v_2, \ldots, v_n) \bigr).$$

This isn’t quite trivial. I haven’t yet given you any examples of the kind of function that $\sigma$ might be, but perhaps you already have in mind a simple one like this:

$$\sigma(\mathbf{p}, \mathbf{v}) = v_1 + \cdots + v_n.$$

In words, the value of the whole is simply the sum of the values of the parts, regardless of their sizes. But if $\sigma$ is to have the “absent elements” property, this won’t do. (Intuitively, if $p_i = 0$ then we shouldn’t count $v_i$ in the sum, because the $i$th element isn’t actually there.) So we’d better modify this example slightly, instead taking

$$\sigma(\mathbf{p}, \mathbf{v}) = \sum_{i \,:\, p_i > 0} v_i.$$

This function (or rather, sequence of functions) does have the “absent elements” property.

Continuity in positive probabilities  Finally, we ask that for each $\mathbf{v} \in (0, \infty)^n$, the function $\sigma(-, \mathbf{v})$ is continuous on the interior of the simplex $\Delta_n$, that is, continuous over those probability distributions $\mathbf{p}$ such that $p_1, \ldots, p_n > 0$.

Why only over the interior of the simplex? Basically because of natural examples of $\sigma$ like the one just given, which is continuous on the interior of the simplex but not the boundary. Generally, it’s sometimes useful to make a sharp, discontinuous distinction between the cases $p_i > 0$ (presence) and $p_i = 0$ (absence).

Arrow’s famous theorem states that a few apparently mild conditions on a voting system are, in fact, mutually contradictory. The mild conditions above are not mutually contradictory. In fact, there’s a one-parameter family $\sigma_q$ of functions each of which satisfies these conditions. For real $q \neq 1$, the definition is

$$\sigma_q(\mathbf{p}, \mathbf{v}) = \Bigl( \sum_{i \,:\, p_i > 0} p_i^q v_i^{1-q} \Bigr)^{1/(1-q)}.$$

For instance, $\sigma_0$ is the example of $\sigma$ given above.

The formula for $\sigma_q$ is obviously invalid at $q = 1$, but it converges to a limit as $q \to 1$, and we define $\sigma_1(\mathbf{p}, \mathbf{v})$ to be that limit. Explicitly, this gives

$$\sigma_1(\mathbf{p}, \mathbf{v}) = \prod_{i \,:\, p_i > 0} (v_i/p_i)^{p_i}.$$

In the same way, we can define $\sigma_{-\infty}$ and $\sigma_\infty$ as the appropriate limits:

$$\sigma_{-\infty}(\mathbf{p}, \mathbf{v}) = \max_{i \,:\, p_i > 0} v_i/p_i, \qquad \sigma_\infty(\mathbf{p}, \mathbf{v}) = \min_{i \,:\, p_i > 0} v_i/p_i.$$

And it’s easy to check that for each $q \in [-\infty, \infty]$, the function $\sigma_q$ satisfies all the natural conditions listed above.
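To make the family concrete, here is a small Python sketch of $\sigma_q$, including the limiting cases (my own illustration, not from the post). Note how the replication axiom holds for every $q$: $n$ equal-sized elements of value $1$ aggregate to $n$.

```python
import math

def sigma_q(p, v, q):
    """Aggregate value sigma_q(p, v); elements with p_i = 0 are ignored
    ("absent elements should count for nothing")."""
    pairs = [(pi, vi) for pi, vi in zip(p, v) if pi > 0]
    if q == 1:
        # Limit q -> 1: product of (v_i / p_i)^{p_i}
        return math.prod((vi / pi) ** pi for pi, vi in pairs)
    if q == math.inf:
        return min(vi / pi for pi, vi in pairs)
    if q == -math.inf:
        return max(vi / pi for pi, vi in pairs)
    return sum(pi**q * vi**(1 - q) for pi, vi in pairs) ** (1 / (1 - q))

# Replication: four equal elements of value 1 aggregate to 4, for any q.
print(sigma_q([0.25] * 4, [1.0] * 4, 2))   # 4.0
print(sigma_q([0.25] * 4, [1.0] * 4, 1))   # 4.0
# sigma_0 sums the values of the present elements:
print(sigma_q([0.5, 0.5, 0.0], [2.0, 3.0, 7.0], 0))   # 5.0
```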

These functions $\sigma_q$ might be unfamiliar to you, but they have some special cases that are quite well-explored. In particular:

• Suppose you’re in a situation where the elements don’t have “sizes”. Then it would be natural to take $\mathbf{p}$ to be the uniform distribution $\mathbf{u}_n = (1/n, \ldots, 1/n)$. In that case, $\sigma_q(\mathbf{u}_n, \mathbf{v}) = \mathrm{const} \cdot \bigl( \sum v_i^{1-q} \bigr)^{1/(1-q)}$, where the constant is a certain power of $n$. When $q \leq 0$, this is exactly a constant times $\|\mathbf{v}\|_{1-q}$, the $(1-q)$-norm of the vector $\mathbf{v}$.

• Suppose you’re in a situation where the elements don’t have “values”. Then it would be natural to take $\mathbf{v}$ to be $\mathbf{1} = (1, \ldots, 1)$. In that case, $\sigma_q(\mathbf{p}, \mathbf{1}) = \bigl( \sum p_i^q \bigr)^{1/(1-q)}$. This is the quantity that ecologists know as the Hill number of order $q$ and use as a measure of biological diversity. Information theorists know it as the exponential of the Rényi entropy of order $q$, the special case $q = 1$ being Shannon entropy. And actually, the general formula for $\sigma_q$ is very closely related to Rényi relative entropy (which Wikipedia calls Rényi divergence).

Anyway, the big — and as far as I know, new — result is:

Theorem  The functions $\sigma_q$ are the only functions $\sigma$ with the seven properties above.

So although the properties above don’t seem that demanding, they actually force our notion of “aggregate value” to be given by one of the functions in the family $(\sigma_q)_{q \in [-\infty, \infty]}$. And although I didn’t even mention the notions of diversity or entropy in my justification of the axioms, they come out anyway as special cases.

I covered all this yesterday in the tenth and penultimate installment of the functional equations course that I’m giving. It’s written up on pages 38–42 of the notes so far. There you can also read how this relates to more realistic measures of biodiversity than the Hill numbers. Plus, you can see an outline of the (quite substantial) proof of the theorem above.

## April 14, 2017

### Tommaso Dorigo - Scientificblogging

Waiting For Jupiter
This evening I am blogging from a residence in Sesto val Pusteria, a beautiful mountain village in the Italian Alps. I came here for a few days of rest after a crazy work schedule, which is why my blogging has been intermittent. Sesto is surrounded by glorious mountains, and hiking around here is marvelous. But right now, as I sip a non-alcoholic beer (pretty good), chilling out after a day outdoors, my thoughts are focused 500,000,000 kilometers away.

### Marco Frasca - The Gauge Connection

Well below 1%

When a theory is too hard to solve, people try to consider lower dimensional cases. This also happened for Yang-Mills theory. The four dimensional case is notoriously difficult to manage due to the large coupling, and the three dimensional case has been treated both theoretically and by lattice computations. In the latter case, the ground state energy of the theory is known very precisely (see here). So, a sound theoretical approach from first principles should be able to reproduce that number at the same level of precision. We know that this is the situation for the Standard Model with respect to some experimental results, but a pure Yang-Mills theory has never been observed in nature, so we have to content ourselves with computer data. The reason is that a Yang-Mills theory is realized in nature only in interaction with other kinds of fields, be they scalar, fermionic or vector-like.

In these days, I have received the news that my paper on three-dimensional Yang-Mills theory has been accepted for publication in the European Physical Journal C. Here is the table of ground-state energies for SU(N) at different values of N, compared to lattice data:

| N | Lattice    | Theoretical | Error |
|---|------------|-------------|-------|
| 2 | 4.7367(55) | 4.744262871 | 0.16% |
| 3 | 4.3683(73) | 4.357883714 | 0.2%  |
| 4 | 4.242(9)   | 4.243397712 | 0.03% |
|   | 4.116(6)   | 4.108652166 | 0.18% |

These results are strikingly good: the agreement is well below 1%. This in turn implies that the underlying theoretical derivation is sound. Besides, the approach also proves successful in four dimensions (see here). My hope is that this marks the beginning of an era of high-precision theoretical computations in strong interactions.
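The error column is just the relative difference between theory and lattice; a quick check (my own sketch, with the numbers transcribed from the table above):

```python
# Relative errors |theory - lattice| / lattice for the table above.
pairs = [
    (4.7367, 4.744262871),   # N = 2
    (4.3683, 4.357883714),   # N = 3
    (4.242,  4.243397712),   # N = 4
    (4.116,  4.108652166),   # last row of the table
]
errors = [abs(th - lat) / lat for lat, th in pairs]
for (lat, th), e in zip(pairs, errors):
    # Note: the third entry prints 0.24%, which the table rounds to 0.2%.
    print(f"lattice {lat}  theory {th:.6f}  error {100*e:.2f}%")
```

Every deviation indeed comes out below 0.25%, i.e. well below 1%.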

Andreas Athenodorou, & Michael Teper (2017). SU(N) gauge theories in 2+1 dimensions: glueball spectra and k-string tensions J. High Energ. Phys. (2017) 2017: 15 arXiv: 1609.03873v1

Marco Frasca (2016). Confinement in a three-dimensional Yang-Mills theory arXiv arXiv: 1611.08182v2

Marco Frasca (2015). Quantum Yang-Mills field theory Eur. Phys. J. Plus (2017) 132: 38 arXiv: 1509.05292v2

Filed under: Particle Physics, Physics, QCD Tagged: Ground state, Lattice Gauge Theories, Mass Gap, Millenium prize, Yang-Mills theory

## April 13, 2017

### Clifford V. Johnson - Asymptotia

Quick Oceanside Art…

So an unexpected but very welcome message from my publisher a while back was a query to see if I'd be interested in doing the cover for my forthcoming book. Of course, the answer was a very definite yes! (I knew that publishers often want to control that aspect of a book themselves, and while some time ago I made a deliberately vague suggestion about what I thought the cover might be like, I was careful not to try to insert myself into that aspect of production, so this was a genuine surprise.) I'm focusing on physics research during this part of my sabbatical, so this would have to be primarily an "after hours" sort of operation, but it should not take long since I had a clear idea of what to do. I worked up two or three versions of an idea and sent them along to see whether they liked where I was going, and once they picked one (happily, the one I liked most) I set it aside as a thing to work on once I finished a paper (see last post) and the (prep for, as well as the actual) trip East to give a physics colloquium (see the post I never got around to writing about that trip).

Then I had terrible delays on the journey home that cost me the better part of an extra day. So I worked up some of the nearly final art and layout [...] Click to continue reading this post

The post Quick Oceanside Art… appeared first on Asymptotia.

## April 11, 2017

### Symmetrybreaking - Fermilab/SLAC

What’s left to learn about antimatter?

Experiments at CERN investigate antiparticles.

What do shrimp, tennis balls and pulsars all have in common? They are all made from matter.

Admittedly, that answer is a cop-out, but it highlights a big, persistent quandary for scientists: Why is everything made from matter when there is a perfectly good substitute—antimatter?

The European laboratory CERN hosts several experiments to ascertain the properties of antimatter particles, which almost never survive in our matter-dominated world.

Particles (such as the proton and electron) have oppositely charged antimatter doppelgangers (such as the antiproton and antielectron). Because they are opposite but equal, a matter particle and its antimatter partner annihilate when they meet.

Antimatter wasn’t always rare. Theoretical and experimental research suggests that there was an equal amount of matter and antimatter right after the birth of our universe. But 13.8 billion years later, only matter-made structures remain in the visible universe.

Scientists have found small differences between the behavior of matter and antimatter particles, but not enough to explain the imbalance that led antimatter to disappear while matter perseveres. Experiments at CERN are working to solve that riddle using three different strategies.

Illustration by Sandbox Studio, Chicago

### Antimatter under the microscope

It’s well known that CERN is home to the Large Hadron Collider, the world’s highest-energy particle accelerator. Less known is that CERN also hosts the world’s most powerful particle decelerator—a machine that slows down antiparticles to a near standstill.

The antiproton decelerator is fed by CERN’s accelerator complex. A beam of energetic protons is diverted from CERN’s Proton Synchrotron and into a metal wall, spawning a multitude of new particles, including some antiprotons. The antiprotons are focused into a particle beam and slowed by electric fields inside the antiproton decelerator. From here they are fed into various antimatter experiments, which trap the antiprotons inside powerful magnetic fields.

“All these experiments are trying to find differences between matter and antimatter that are not predicted by theory,” says Will Bertsche, a researcher at University of Manchester, who works in CERN’s antimatter factory. “We’re all trying to address the big question: Why is the universe made up of matter these days and not antimatter?”

By cooling and trapping antimatter, scientists can intimately examine its properties without worrying that their particles will spontaneously encounter a matter companion and disappear. Some of the traps can preserve antiprotons for more than a year. Scientists can also combine antiprotons with positrons (antielectrons) to make antihydrogen.

“Antihydrogen is fascinating because it lets us see how antimatter interacts with itself,” Bertsche says. “We’re getting a glimpse at how a mirror antimatter universe would behave.”

Scientists in CERN’s antimatter factory have measured the mass, charge, light spectrum, and magnetic properties of antiprotons and antihydrogen to high precision. They also look at how antihydrogen atoms are affected by gravity; that is, do the anti-atoms fall up or down? One experiment is even trying to make an assortment of matter-antimatter hybrids, such as a helium atom in which one of the electrons is replaced with an orbiting antiproton.

So far, all their measurements of trapped antimatter match the theory: Except for the opposite charge and spin, antimatter appears completely identical to matter. But these affirmative results don’t deter Bertsche from looking for antimatter surprises. There must be unpredicted disparities between these particle twins that can explain why matter won its battle with antimatter in the early universe.

“There’s something missing in this model,” Bertsche says. “And nobody is sure what that is.”

### Antimatter in motion

The LHCb experiment wants to answer this same question, but they are looking at antimatter particles that are not trapped. Instead, LHCb scientists study how free-range antimatter particles behave as they travel and transform inside the detector.

“We’re recording how unstable matter and antimatter particles decay into showers of particles and the patterns they leave behind when they do,” says Sheldon Stone, a professor at Syracuse University working on the LHCb Experiment. “We can’t make these measurements if the particles aren’t moving.”

The particles-in-motion experiments have already observed some small differences between matter and antimatter particles. In 1964, scientists at Brookhaven National Laboratory noticed that neutral kaons (mesons containing a strange quark and a down antiquark, or their antiparticles) decay into matter and antimatter particles at slightly different rates, an observation that won them the Nobel Prize in 1980.

The LHCb experiment continues this legacy, looking for even more discrepancies between the metamorphoses of matter and antimatter particles. They recently observed that the daughter particles of certain antimatter baryons (particles containing three quarks) have a slightly different spatial orientation than their matter contemporaries.

But even with the success of uncovering these discrepancies, scientists are still very far from understanding why antimatter all but disappeared.

“Theory tells us that we’re still off by nine orders of magnitude,” Stone says, “so we’re left asking, where is it? What is antimatter’s Achilles heel that precipitated its disappearance?”

Illustration by Sandbox Studio, Chicago

### Antimatter in space

Most antimatter experiments based at CERN produce antiparticles by accelerating and colliding protons. But one experiment is looking for feral antimatter freely roaming through outer space.

The Alpha Magnetic Spectrometer is an international experiment supported by the US Department of Energy and NASA. This particle detector was assembled at CERN and is now installed on the International Space Station, where it orbits Earth 400 kilometers above the surface. It records the momentum and trajectory of roughly a billion vagabond particles every month, including a million antimatter particles.

Nomadic antimatter nuclei could be lonely relics from the Big Bang or the rambling residue of nuclear fusion in antimatter stars.

But AMS searches for phenomena not explained by our current models of the cosmos. One of its missions is to look for antimatter that is so complex and robust, there is no way it could have been produced through normal particle collisions in space.

“Most scientists accept that antimatter disappeared from our universe because it is somehow less resilient than matter,” says Mike Capell, a researcher at MIT and a deputy spokesperson of the AMS experiment. “But we’re asking, what if all the antimatter never disappeared? What if it’s still out there?”

If an antimatter kingdom exists, astronomers expect that they would observe mass particle-annihilation fizzing and shimmering at its boundary with our matter-dominated space—which they don’t. Not yet, at least. Because our universe is so immense (and still expanding), researchers on AMS hypothesize that maybe these intersections are too dim or distant for our telescopes.

“We already have trouble seeing deep into our universe,” Capell says. “Because we’ve never seen a domain where matter meets antimatter, we don’t know what it would look like.”

AMS has been collecting data for six years. From about 100 billion cosmic rays, they’ve identified a few strange events with characteristics of antihelium. Because the sample is so tiny, it’s impossible to say whether these anomalous events are the first messengers from an antimatter galaxy or simply part of the chaotic background.

“It’s an exciting result,” Capell says. “However, we remain skeptical. We need data from many more cosmic rays before we can determine the identities of these anomalous particles.”

## April 10, 2017

### Axel Maas - Looking Inside the Standard Model

Last time I wrote about our research on neutron stars. In that case we were concerned with the properties of neutron stars - their masses and sizes. But these are determined by the particles inside the star, the quarks and gluons, and by how they influence each other through the strong force.

However, a neutron star is much more than just quarks and gluons bound by gravity and the strong force.

Neutron stars are also affected by the weak force. This happens in a quite subtle way. The weak force can transform a neutron into a proton, an electron and an (anti)neutrino, and back. In a neutron star, this happens all the time. Still, the neutrons are neutrons most of the time, hence the name neutron star. Looking at this process more microscopically, the protons and neutrons consist of quarks: the proton of two up quarks and a down quark, and the neutron of one up quark and two down quarks. Thus, what really happens is that a down quark changes into an up quark plus an electron and an (anti)neutrino, and back.

As noted, this does not happen too often. But that is only true for a neutron star just hanging around. When a neutron star is created in a supernova, it happens very often. In particular, the star which becomes a supernova consists mostly of protons, which have to be converted into neutrons for the neutron star. Another case is when two neutron stars collide. Then this process becomes much more important, and more rapid. The latter is quite exciting, as the consequences may be observable in astronomy in the next few years.

So, how can the process be described? Usually the weak force is weak, as the name says, so it is usually possible to treat it as a small effect. Such small effects are well described by perturbation theory. This is fine if the neutron star just hangs around. But for collisions, or during formation, the effect is no longer small, and then other methods are necessary. For the same reasons as in the case of inert neutron stars we cannot use simulations here. But our third option, the so-called equations of motion, works.

Therefore Walid Mian, a PhD student of mine, and I used these equations to study how quarks behave if we offer them a background of electrons and (anti)neutrinos. We have published a paper about our results, and I would like to outline what we found.

Unfortunately, we still cannot do the calculations exactly. In particular, we cannot independently vary the amount of electrons and (anti)neutrinos and the strength of their coupling to the quarks. Thus, we can only estimate what a more intense combination of both together means. Since this is qualitatively what we expect to happen during the collision of two neutron stars, this should be a reasonable approximation.

For a very small intensity we do not see anything beyond what we expect from perturbation theory. But the first surprise came already when we cranked up the intensity: new effects showed up much earlier than expected. In fact, they appeared at intensities a factor of 10 to 1000 smaller than expected. Thus, the weak interaction could play a much larger role in such environments than usually assumed. That was the first insight.

The second was that the type of quark - whether it is an up or a down quark - is more relevant than expected. In particular, whether the two have different masses, as in nature, or the same mass makes a big difference. If the masses differ, qualitatively new effects arise, which was not expected in this form.

The observed effects themselves are actually quite interesting: they make the quarks, depending on their type, either more or less sensitive to the weak force. This is important. When neutron stars are created or collide, they become very hot. The main way they cool is by dumping (anti)neutrinos into space. This becomes more efficient if the quarks react less to the weak force. Thus, our findings could have consequences for how quickly neutron stars cool.

We also saw that these effects only start to play a role if the quark can move inside the neutron star over a sufficiently large distance, where sufficiently large means here about the size of a neutron. Thus, the environment of the neutron star shows itself as soon as the quarks start to feel that they do not live in a single neutron, but rather in a neutron star, where the neutrons touch each other. All of the qualitatively new effects then started to appear.

Unfortunately, to estimate how important these new effects really are for the neutron star, we first have to understand what they mean for the neutrons. Essentially, we have to somehow lift our results to a larger scale - what does this mean for the whole neutron? - before we can redo our investigation of the full neutron star with these effects included. Not to mention the impact for a collision, which is even more complicated.

Thus, our current next step is to understand what the weak interaction implies for hadrons, i.e. states of multiple quarks like the neutron. The first step is to understand how a hadron can decay and re-form through the weak force, as I described earlier. The decay itself can already be described quite well using perturbation theory. But decay and re-formation, or even an endless chain of these processes, cannot yet. Becoming able to do so is where we are headed next.

### Symmetrybreaking - Fermilab/SLAC

Urban Sketchers visit Fermilab

The group brought their on-site drawing practice to the particle physics laboratory.

In March, about 30 participants in the Chicago chapter of the artist network Urban Sketchers visited Fermi National Accelerator Laboratory, located in west Chicagoland, and sketched their hearts out. They drew buildings, interiors and scenes of nature from the laboratory environment, capturing the laboratory's most iconic building, Wilson Hall, along with restored prairie land and the popular bison herd on site.

Urban Sketchers holds monthly “sketch crawls,” as they’re called. Their mission is to “show the world, one drawing at a time.”

Sketcher Harold Goldfus drew scenes of art and architecture.

“I regard myself as primarily a figurative artist. At the Urban Sketchers Chicago outing, I expected to sketch figures at Fermilab with hints of the environment in the background,” Goldfus said. “Instead, I found myself taken with the architecture and aesthetics of the interior of Wilson Hall, and decided on a more unconventional approach.”

The sketch crawl was organized by Peggy Condon and Wes Douglas from Urban Sketchers Chicago along with Fermilab Art Gallery curator Georgia Schwender.

“I was very inspired by Fermilab’s strong commitment to the arts. I didn’t expect this for a world-renowned scientific research institution,” said sketcher Lynne Fairchild. “I really appreciated that they found so many ways to honor the arts and culture: the art gallery, lecture series, the awe-inspiring sculptures on the campus, and the design of Wilson Hall, especially the beauty of the atrium.”

## April 06, 2017

### John Baez - Azimuth

Periodic Patterns in Peptide Masses

Gheorghe Craciun is a mathematician at the University of Wisconsin who recently proved the Global Attractor Conjecture, which had been the most famous conjecture in mathematical chemistry since 1974. This week he visited U. C. Riverside and gave a talk on this subject. But he also told me about something else—something quite remarkable.

### The mystery

A peptide is basically a small protein: a chain made of fewer than 50 amino acids. If you plot the number of peptides of different masses found in various organisms, you see peculiar oscillations:

These oscillations have a frequency of about 14 daltons, where a ‘dalton’ is roughly the mass of a hydrogen atom—or more precisely, 1/12 the mass of a carbon atom.

Biologists had noticed these oscillations in databases of peptide masses. But they didn’t understand them.

Can you figure out what causes these oscillations?

It’s a math puzzle, actually.

Next I’ll give you the answer, so stop looking if you want to think about it first.

### The solution

Almost all peptides are made of 20 different amino acids, which have different masses, which are almost integers. So, to a reasonably good approximation, the puzzle amounts to this: if you have 20 natural numbers $m_1, ... , m_{20},$ how many ways can you write any natural number $N$ as a finite ordered sum of these numbers? Call it $F(N)$ and graph it. It oscillates! Why?

(We count ordered sums because the amino acids are stuck together in a linear way to form a protein.)

There’s a well-known way to write down a formula for $F(N)$. It obeys a linear recurrence:

$F(N) = F(N - m_1) + \cdots + F(N - m_{20})$

and we can solve this using the ansatz

$F(N) = x^N$

Then the recurrence relation will hold if

$x^N = x^{N - m_1} + x^{N - m_2} + \dots + x^{N - m_{20}}$

for all $N.$ But this is fairly easy to achieve! If $m_{20}$ is the biggest mass, we just need this polynomial equation to hold:

$x^{m_{20}} = x^{m_{20} - m_1} + x^{m_{20} - m_2} + \dots + 1$

There will be a bunch of solutions, about $m_{20}$ of them. (If there are repeated roots things get a bit more subtle, but let’s not worry about that.) To get the actual formula for $F(N)$ we need to find the right linear combination of functions $x^N$ where $x$ ranges over all the roots. That takes some work. Craciun and his collaborator Shane Hubler did that work.

But we can get a pretty good understanding with a lot less work. In particular, the root $x$ with the largest magnitude will make $x^N$ grow the fastest.

If you haven’t thought about this sort of recurrence relation it’s good to look at the simplest case, where we just have two masses $m_1 = 1, m_2 = 2.$ Then the numbers $F(N)$ are the Fibonacci numbers. I hope you know this: the $N$th Fibonacci number is the number of ways to write $N$ as the sum of an ordered list of 1’s and 2’s!

1

1+1,   2

1+1+1,   1+2,   2+1

1+1+1+1,   1+1+2,   1+2+1,   2+1+1,   2+2

If I drew edges between these sums in the right way, forming a ‘family tree’, you’d see the connection to Fibonacci’s original rabbit puzzle.
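This counting is easy to check numerically. A minimal sketch (my own illustration, not from the post) that evaluates the recurrence $F(N) = F(N - m_1) + \cdots + F(N - m_k)$ directly, with $F(0) = 1$ for the empty sum:

```python
def count_compositions(N, masses):
    """Number of ways to write each n <= N as an ordered sum of the
    given parts, via the recurrence F(n) = sum_i F(n - m_i)."""
    F = [0] * (N + 1)
    F[0] = 1  # the empty sum
    for n in range(1, N + 1):
        F[n] = sum(F[n - m] for m in masses if m <= n)
    return F

# With parts {1, 2} this reproduces the Fibonacci numbers:
print(count_compositions(8, [1, 2]))  # [1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The counts 1, 2, 3, 5 for $N = 1, 2, 3, 4$ match the lists of sums written out above.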

In this example the recurrence gives the polynomial equation

$x^2 = x + 1$

and the root with largest magnitude is the golden ratio:

$\Phi = 1.6180339...$

The other root is

$1 - \Phi = -0.6180339...$

With a little more work you get an explicit formula for the Fibonacci numbers in terms of the golden ratio:

$\displaystyle{ F(N) = \frac{1}{\sqrt{5}} \left( \Phi^{N+1} - (1-\Phi)^{N+1} \right) }$
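As a numerical sanity check (again my own sketch), this closed form reproduces the composition counts from the recurrence:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2  # golden ratio, the root of x^2 = x + 1

def fib_closed(N):
    """F(N) = (phi^(N+1) - (1-phi)^(N+1)) / sqrt(5), the formula above,
    rounded to kill floating-point noise."""
    return round((phi**(N + 1) - (1 - phi)**(N + 1)) / sqrt(5))

# Number of ordered sums of 1's and 2's adding to N, for N = 1..8:
print([fib_closed(N) for N in range(1, 9)])  # [1, 2, 3, 5, 8, 13, 21, 34]
```

Because $|1 - \Phi| < 1$, the second term dies off exponentially and $F(N)$ is asymptotically proportional to $\Phi^{N+1}/\sqrt{5}$.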

But right now I’m more interested in the qualitative aspects! In this example both roots are real. The example from biology is different.

Puzzle 1. For which lists of natural numbers $m_1 < \cdots < m_k$ are all the roots of

$x^{m_k} = x^{m_k - m_1} + x^{m_k - m_2} + \cdots + 1$

real?

I don’t know the answer. But apparently this kind of polynomial equation always has one root with the largest possible magnitude, which is real and has multiplicity one. I think it turns out that $F(N)$ is asymptotically proportional to $x^N$ where $x$ is this root.

But in the case that’s relevant to biology, there’s also a pair of roots with the second largest magnitude, which are not real: they’re complex conjugates of each other. And these give rise to the oscillations!

For the masses of the 20 amino acids most common in life, the roots look like this:

The aqua root at right has the largest magnitude and gives the dominant contribution to the exponential growth of $F(N).$ The red roots have the second largest magnitude. These give the main oscillations in $F(N),$ which have period 14.28.
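The root computation can be sketched in a few lines (my own illustration, using `numpy.roots` and the integer residue masses listed later in the post; the paper reports the resulting period as 14.28):

```python
import numpy as np

# Integer residue masses of the 20 common amino acids, as listed later
# in the post; note 113 and 128 each occur twice.
masses = [57, 71, 87, 97, 99, 101, 103, 113, 113, 114, 115,
          128, 128, 129, 131, 137, 147, 156, 163, 186]

# Coefficients of x^186 - sum_i x^(186 - m_i), highest degree first:
# the term x^(186 - m) contributes -1 at index m (duplicates add up).
deg = max(masses)
coeffs = np.zeros(deg + 1)
coeffs[0] = 1.0
for m in masses:
    coeffs[m] -= 1.0

roots = np.roots(coeffs)
order = np.argsort(-np.abs(roots))
dominant = roots[order[0]]    # real, just above 1: the growth rate of F(N)
second = roots[order[1]]      # one of the complex pair of second-largest magnitude
period = 2 * np.pi / abs(np.angle(second))  # oscillation period in daltons
print(abs(dominant), period)
```

The dominant root sets the exponential growth of $F(N)$, and the argument of the runner-up complex pair sets the oscillation period.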

For the full story, read this:

• Shane Hubler and Gheorghe Craciun, Periodic patterns in distributions of peptide masses, BioSystems 109 (2012), 179–185.

Most of the pictures here are from this paper.

My main question is this:

Puzzle 2. Suppose we take many lists of natural numbers $m_1 < \cdots < m_k$ and draw all the roots of the equations

$x^{m_k} = x^{m_k - m_1} + x^{m_k - m_2} + \cdots + 1$

What pattern do we get in the complex plane?

I suspect that this picture is an approximation to the answer you’d get to Puzzle 2:

If you stare carefully at this picture, you’ll see some patterns, and I’m guessing those are hints of something very beautiful.

Earlier on this blog we looked at roots of polynomials whose coefficients are all 1 or -1:

The pattern is very nice, and it repays deep mathematical study. Here it is, drawn by Sam Derbyshire:

But now we’re looking at polynomials where the leading coefficient is 1 and all the rest are -1 or 0. How does that change things? A lot, it seems!

By the way, the 20 amino acids we commonly see in biology have masses ranging between 57 and 186. It’s not really true that all their masses are different. Here are their masses:

57, 71, 87, 97, 99, 101, 103, 113, 113, 114, 115, 128, 128, 129, 131, 137, 147, 156, 163, 186

I pretended that none of the masses $m_i$ are equal in Puzzle 2, and I left out the fact that only about 1/9th of the coefficients of our polynomial are nonzero. This may affect the picture you get!

### The n-Category Cafe

Applied Category Theory

The American Mathematical Society is having a meeting here at U. C. Riverside during the weekend of November 4th and 5th, 2017. I’m organizing a session on Applied Category Theory, and I’m looking for people to give talks.

The goal is to start a conversation about applications of category theory, not within pure math or fundamental physics, but to other branches of science and engineering — especially those where the use of category theory is not already well-established! For example, my students and I have been applying category theory to chemistry, electrical engineering, control theory and Markov processes.

Alas, we have no funds for travel and lodging. If you’re interested in giving a talk, please submit an abstract here:

General information about abstracts, American Mathematical Society.

More precisely, please read the information there and then click on the link on that page to submit an abstract. It should then magically fly through cyberspace to me! Abstracts are due September 12th, but the sooner you submit one, the greater the chance that we’ll have space.

For the program of the whole conference, go here:

Fall Western Sectional Meeting, U. C. Riverside, Riverside, California, 4–5 November 2017.

We’ll be having some interesting plenary talks:

• Paul Balmer, UCLA, An invitation to tensor-triangular geometry.

• Pavel Etingof, MIT, Double affine Hecke algebras and their applications.

• Monica Vazirani, U.C. Davis, Combinatorics, categorification, and crystals.

(The same announcement also appeared on John Baez’s Azimuth blog.)

## April 04, 2017

### Tommaso Dorigo - Scientificblogging

Winter 2017 LHC Results: The Higgs Is Still There, But...
Snow is melting in the Alps, and particle physicists, who have flocked to La Thuile for exciting ski conferences in the past weeks, are now back to their usual occupations. The pressure of the deadline is over: results have been finalized and approved, preliminary conference notes have been submitted, talks have been given. The period starting now, the one immediately following presentation of new results, when the next deadline (summer conferences!) is still far away, is more productive in terms of real thought and new ideas. Hopefully we'll come up with some new way to probe the standard model or to squeeze more information from those proton-proton collisions, lest we start to look like accountants!

### Symmetrybreaking - Fermilab/SLAC

WIMPs in the dark matter wind

We know which way the dark matter wind should blow. Now we just have to find it.

Picture yourself in a car, your hand surfing the breeze through the open window. Hold your palm perpendicular to the wind and you can feel its force. Now picture the car slowing, rolling up to a stop sign, and feel the force of the wind lessen until it—and the car—stop.

This wind isn’t due to the weather. It arises because of your motion relative to air molecules. Simple enough to understand and known to kids, dogs and road-trippers the world over.

This wind has an analogue in the rarefied world of particle astrophysics called the “dark matter wind,” and scientists are hoping it will someday become a valuable tool in their investigations into that elusive stuff that apparently makes up about 85 percent of the mass in the universe.

In the analogy above, the air molecules are dark matter particles called WIMPs, or weakly interacting massive particles. Our sun is the car, racing around the Milky Way at about 220 kilometers per second, with the Earth riding shotgun. Together, we move through a halo of dark matter that encompasses our galaxy. But our planet is a rowdy passenger; it moves from one side of the sun to the other in its orbit.

When you add the Earth’s velocity of 30 kilometers per second to the sun’s, as happens when both are traveling in the same direction (toward the constellation Cygnus), the dark matter wind feels stronger. More WIMPs are moving through the planet than if it were at rest, resulting in a greater number of detections by experiments. Subtract that velocity when the Earth is on the other side of its orbit, and the wind feels weaker, resulting in fewer detections.
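The size of the effect follows from simple velocity addition. A rough sketch (my own illustration, using the 220 and 30 km/s figures above and idealizing Earth's orbital velocity as fully aligned with the sun's motion):

```python
import math

V_SUN = 220.0    # km/s: sun's speed through the galactic dark matter halo
V_EARTH = 30.0   # km/s: Earth's orbital speed around the sun

def wind_speed(t_years, tilt_deg=0.0):
    """Dark matter wind speed felt at Earth, t_years after the moment
    Earth moves fastest toward Cygnus. tilt_deg = 0 idealizes Earth's
    orbital velocity as fully aligned with the sun's galactic motion."""
    v_along = V_EARTH * math.cos(math.radians(tilt_deg))
    return V_SUN + v_along * math.cos(2 * math.pi * t_years)

v_max = wind_speed(0.0)   # velocities add: 250 km/s
v_min = wind_speed(0.5)   # half a year later they subtract: 190 km/s
amplitude = (v_max - v_min) / (2 * V_SUN)
print(f"idealized annual modulation: +/-{amplitude:.1%}")  # +/-13.6%
```

In reality Earth's orbit is tilted by roughly 60 degrees with respect to the sun's galactic motion, which shrinks the aligned component of the 30 km/s and brings the annual variation down closer to the 10 percent figure Spergel describes.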

Astrophysicists have been thinking about the dark matter wind for decades. Among the first, way back in 1986, were theorist David Spergel of Princeton University and colleagues Katherine Freese of the University of Michigan and Andrzej K. Drukier (now in private industry, but still looking for WIMPs).

“We looked at how the Earth’s motion around the sun should cause the number of dark matter particles detected to vary on a regular basis by about 10 percent a year,” Spergel says.

At least that’s what should happen—if our galaxy really is embedded in a circular, basically homogeneous halo of dark matter, and if dark matter is really made up of WIMPs.

Illustration by Corinne Mucha

The Italian experiment DAMA/NaI and its upgrade DAMA/Libra claim to have been seeing this seasonal modulation for decades, a claim that has yet to be conclusively supported by any other experiment. CoGeNT, an experiment in the Soudan Underground Laboratory in Minnesota, seemed to back them up for a time, but those signals are now thought to be caused by other sources, such as high-energy gamma rays hitting a layer of material just outside the germanium of the detector, producing a signal that looks much like a WIMP.

Actually confirming the existence of the dark matter wind is important for one simple reason: the pattern of modulation can’t be explained by anything but the presence of dark matter. It’s what’s called a “model-independent” phenomenon. No natural backgrounds—no cosmic rays, no solar neutrinos, no radioactive decays—would show a similar modulation. The dark matter wind could provide a way to continue exploring dark matter, even if the particles are light enough that experiments cannot distinguish them from almost massless particles called neutrinos, which are constantly streaming from the sun and other sources.

“It’s a big, big prize to go after,” says Jocelyn Monroe, a physics professor at Royal Holloway University of London, who currently works on two dark matter detection experiments, DEAP-3600 at SNOLAB, in Canada, and DMTPC. “If you could correlate detections with the direction in which the planet is moving you would have unambiguous proof” of dark matter.

At the same time Spergel and his colleagues were exploring the wind’s seasonal modulation, he also realized that this correlation could extend far beyond a twice-per-year variation in detection levels. The location of the Earth in its orbit would affect the direction in which nucleons, the particles that make up the nucleus of an atom, recoil when struck by WIMPs. A sensitive-enough detector should see not only the twice-yearly variations, but even daily variations, since the detector constantly changes its orientation to the dark matter wind as the Earth rotates.

“I had initially thought that it wasn’t worth writing up the paper because no experiment had the sensitivity to detect the recoil direction,” he says. “However, I realized that if I pointed out the effect, clever experimentalists would eventually figure out a way to detect it.”

Monroe, as the leader of the DMTPC collaboration, is a member of the clever experimentalist set. The DMTPC, or Dark Matter Time-Projection Chamber, is one of a small number of direct detection experiments that are designed to track the actual movements of recoiling atoms.

Instead of semiconductor crystals or liquefied noble gases, these experiments use low-pressure gases as their target material. DMTPC, for example, uses carbon tetrafluoride. If a WIMP hits a molecule of carbon tetrafluoride, the low pressure in the chamber means that molecule has room to move—up to about 2 millimeters.

“Making the detector is super hard,” Monroe says. “It has to map a 2-millimeter track in 3D.” Not to mention that reducing the number of molecules in a detector chamber reduces the chances for a dark matter particle to hit one. According to Monroe, DMTPC will deal with that issue by fabricating an array of 1-cubic-meter modules. The first module has already been constructed, and a worldwide collaboration of scientists from five different directional dark matter experiments (including DMTPC) is working on the next step together: a much larger directional dark matter array called the CYGNUS (for CosmoloGY with NUclear recoilS) experiment.

When and if such directional dark matter detectors raise their metaphorical fingers to test the direction of the dark matter wind, Monroe says they’ll be able to see far more than just seasonal variations in detections. Scientists will be able to see variations in atomic recoils not on a seasonal basis, but on a daily basis. Monroe envisions a sort of dark matter telescope with which to study the structure of the halo in our little corner of the Milky Way.

Or not.

There’s always a chance that this next generation of dark matter detectors, or the generation after, still won’t see anything.

Even that, Monroe says, is progress.

“If we’re still looking in 10 years we might be able to say it’s not WIMPs but something even more exotic. As far as we can tell right now, dark matter has got to be something new out there.”

### Lubos Motl - string vacua and pheno

ATLAS: locally 3.3-sigma $$ZH$$ evidence for a new $$3\TeV$$ boson
About two dozen new ATLAS and CMS papers seem absolutely well-behaved. It's hard to find even a glimpse of an emerging deviation from the Standard Model. A week ago, I mentioned an outstanding B-meson anomaly which is 4.9 sigma strong.

Here I want to mention the upper-left panel of Figure 3 on Page 12 of ATLAS'
Search for Heavy Resonances Decaying to a $$W$$ or $$Z$$ Boson and a Higgs Boson in the $$q\bar q^{(\prime)} b\bar b$$ Final State in $$pp$$ Collisions at $$\sqrt s = 13\TeV$$ with the ATLAS Detector
You may also look at Page 14 of the paper, Figure 4, where the Brazilian bands show a wide 3-sigmaish excess near $$m_{Z'}\sim 3\TeV$$.

The local significance is 3.3 sigma, the global significance is quantified as 2.2 sigma. So it's nowhere near a discovery but it's still among the strongest deviations from the Standard Model that you may find in any new LHC paper published in 2017 so far.

As the picture embedded at the top shows, about 3 events were predicted in the interval of masses $$3,000-3,050\GeV$$ but 10 events were observed. Not bad. Correct me if you can read the numbers more accurately.
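Taking those counts at face value (an approximate reading of the figure, as the author says), the Poisson tail probability gives a quick cross-check of the quoted local significance:

```python
# Chance of observing 10 or more events when about 3 are expected,
# using the approximate counts read off the ATLAS figure.
import math

def poisson_tail(n_obs, mu):
    """P(N >= n_obs) for a Poisson-distributed count with mean mu."""
    p_below = sum(math.exp(-mu) * mu**k / math.factorial(k)
                  for k in range(n_obs))
    return 1.0 - p_below

p_excess = poisson_tail(10, 3.0)  # ~1.1e-3, roughly a 3-sigma fluctuation
```

That back-of-the-envelope number is consistent with the 3.3-sigma local significance quoted by ATLAS, which comes from their full fit.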

Clearly, if this apparent fluke were a real signal, there could be a new $$Z'$$-boson whose mass would be close to $$3\TeV$$. That's amusing especially because in September 2015, CMS announced an electron-positron pair whose invariant mass was $$m\sim 2.9\TeV$$ – it was the energy record-breaker at that moment, and higher than what was expected – and that one could have been a new $$Z'$$-boson, too.

I think that the probability is some 98% that this ATLAS excess is a fluke but if you want to be intrigued by some existing – not yet outdated – deviations from the Standard Model, this could be one of your choices.

## April 03, 2017

### Symmetrybreaking - Fermilab/SLAC

Art intimates physics

Artist Chris Henschke’s latest piece inspired by particle physics mixes constancy with unpredictability, the natural with the synthetic.

Artist Chris Henschke has spent more than a decade exploring the intersection of art and physics. His pieces bring invisible properties and theoretical concepts to light through still images, sound and video.

His latest piece, called “Song of the Phenomena,” gives new life to a retired piece of equipment once used by a long-time collaborator of Henschke, University of Melbourne and Australian Synchrotron physicist Mark Boland.

### Crossing paths

The story of “Song of the Phenomena” begins in the 1990s. In 1991, Henschke enrolled in the University of Melbourne to study science, but he turned to sound design instead. Boland entered the same university to study physics.

Personal computers were just entering the market. Sound designers and animators began coding basic programs, and Henschke joined in. “I was always interested in making sounds and music, interested in light and art and physics and nature and how it all combines—either in our heads or the devices that mediate between us and nature,” he says.

Boland completed his thesis in physics at the Australian Radiation Laboratory (now called the Australian Radiation Protection and Nuclear Safety Agency). He was testing a new type of electron detector in a linear accelerator, or linac. The linac used radio waves to guide electrons through a series of accelerator cavities, which imparted more and more energy to the particles as they moved through.

That particular linac spent more than 20 years with the Australian Radiation Protection and Nuclear Safety Agency, where medical physics professionals used it to accelerate electrons to different energies to create calibration standards for radiation oncology treatments. Once they no longer needed it, Boland’s former advisor contacted him to ask if he’d like the accelerator or any of its still-working parts. He said yes, though he was unsure what he would do with it.

### An artist’s view

In 2007 Henschke came to the Australian Synchrotron as part of an artist-in-residence program. Boland was familiar with his artwork; he had seen Henschke’s first piece exploring particle physics in the pages of Symmetry. Boland grew up with an appreciation for art; he says his parents made sure of that by “dragging” him through many galleries in his youth.

When Henschke and Boland met, they got into an hours-long conversation about physics. “We hit it off, we resonated,” Boland says, “and we’ve been working together ever since.”

Since that first residency program, Henschke has spent significant time at the Australian Synchrotron facility and at the European research center CERN, and he has taken shorter trips to the German national research center DESY.

His process of creating artwork echoes the scientific process and the setup of an experiment, Boland says. Henschke thinks through the role that each piece of the artwork plays. Everything is where it is for a reason.

“He’s a perfectionist, he doesn't settle for second best,” Boland says. “He has the same level of professionalism and tenacity as an artist as a physicist does. It’s as if there’s a five-sigma quality test on his work as well.”

## Song of the Phenomena

Video of Song of the Phenomena

### Once accelerator, now art

Boland mentioned the linac he had to Henschke during a conversation in early 2016. “Chris ran with it,” Boland says. “He took it and made it into his installation.”

Henschke discovered the machine hums at 220 hertz—the musical note of A—as it produces its resonant waves. “In a sense, particle accelerators are gigantic, high-energy synthesizers because they are creating high-energy waves at very specific frequencies and amplitudes,” Henschke says.

Henschke explored different aspects of the machine, still unsure how each part would come together as a final piece of art. “I have to let it speak to me, I have to let it speak for itself,” he says.

Finally it dawned on him: the art could be an echo of the accelerator’s past.

The accelerator no longer accelerates electrons. Instead Henschke feeds it a steady supply of electrons and their antimatter partners, positrons. He does this by placing it beside a pile of bananas, which release the particles as their potassium decays. (Using decaying fruit was a nod to Dutch still-life vanitas paintings, Henschke says.)

Observers cannot see the electrons and positrons in the piece, but they can hear them. Henschke ensured this by adding a Geiger counter, which emits a chirp each time it detects a particle.

Visitors can also hear the accelerator itself. Henschke attached speakers and pumped up the sound of the machine’s natural hum with a stereo amp (a bit too much at first; they blew up an oscilloscope they were using to measure the frequency). He used an AM radio coil to amplify the sound of the accelerator’s electromagnetic field.

“Song of the Phenomena” plays upon resonance, amplification and decay, Henschke says. “It creates this tension between the constant hum of the device versus the unpredictability of the subatomic emission.”

The idea of playing with the analogy between the linac’s resonance and sound resonance is one that Australian Synchrotron Director Andrew Peele appreciates. “A lot of science communication is about how you find analogies that people can engage with, and this is a great example,” Peele says.

Henschke displayed “Song of the Phenomena” at the Royal Melbourne Institute of Technology Gallery from November 17, 2016, to February 18, 2017. Since then, the apparatus has returned to the Australian Synchrotron, where it sits in a vast, open room where some of the facility’s synchrotron beamline stations used to stand. Scientists meet nearby for a weekly social coffee break.

Henschke is currently writing his thesis for his PhD in experimental art (with Boland as his advisor). In his next project, he hopes to tackle the subject of quantum entanglement.

## March 30, 2017

### John Baez - Azimuth

Jobs at U.C. Riverside

The Mathematics Department of the University of California at Riverside is trying to hire some visiting assistant professors. We plan to make decisions quite soon!

The positions are open to applicants from all research areas in mathematics who have a PhD or will have one by the beginning of the term. The teaching load is six courses per year (i.e. 2 per quarter). In addition to teaching, the applicants will be responsible for attending advanced seminars and working on research projects.

This is initially a one-year appointment; with a successful annual teaching review, it is renewable for up to a third year.

For more details, including how to apply, go here:

https://www.mathjobs.org/jobs/jobs/10162

### Axel Maas - Looking Inside the Standard Model

I have written previously about how we investigate QCD to learn about neutron stars. Neutron stars are the extremely dense and small objects left over after a medium-sized star has exploded as a supernova.

For that, we decided to take a detour: we slightly modified the strong interactions. The reason for this modification was to make numerical simulations possible. In the original version of the theory they are not yet feasible, mainly because nobody has been able to develop an algorithm fast enough to deliver a result within our lifetime. With the small changes we made to the theory, this changes. And therefore, we now have a (rough) idea of how this theory behaves at densities relevant for neutron stars.

Now Ouraman Hajizadeh, a PhD student of mine, and I went all the way: we used these results to construct a neutron star. What we found is written up in a paper, and I will describe here what we learned.

The first insight was that we needed a baseline. Of course, we could compare to what astrophysics tells us about neutron stars. But we do not yet know much about their internal structure. This may change with the newly established gravitational-wave astronomy, but that will take a few years. Thus, we decided to use neutrons which do not interact with each other as the baseline. A neutron star made of such particles is held together only by the gravitational pull and the so-called Pauli principle. This principle forbids certain types of particles, so-called fermions, from occupying the same state. Neutrons are such fermions. Any difference from such a neutron star therefore has to be attributed to interactions.

The observed neutron stars show the existence of interactions. This is exemplified by their mass. A neutron star made of non-interacting neutrons can only have masses somewhat below the mass of our sun. The heaviest neutron stars we have observed so far are more than twice the mass of our sun. The heaviest possible neutron stars could be a little heavier than three times our sun. Anything heavier would collapse further, either to a different object unknown to us, or to a black hole.

Now, the theory we investigated differs from the true strong interactions in several ways. One is that we had only one type of quark, rather than the real number, and our quark was heavier than the lightest quark in nature. We also had more colors, and therefore more gluons, than in nature. Thus, our neutron has a somewhat different structure than the real one. But since we used this modified version of the neutron to create our baseline, we can still see the effect of interactions.

Then we cranked the machinery. This machinery is a little bit of general relativity plus thermodynamics. The former is not modified, but our theory determines the latter. What we got was a quite interesting result. First, our heaviest neutron star was much heavier than our baseline: roughly 20 to 50 percent heavier than our sun, depending on details and uncertainties. Also, a typical neutron star of this mass had much less variation in its size than the baseline. For non-interacting neutrons, changing the maximum mass by ten percent changes the radius by a kilometer or so. In our case, the radius barely changed at all, so our heaviest neutron stars are much more reluctant to change. Interactions therefore change the structure of a neutron star considerably.
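The "machinery" of general relativity plus thermodynamics is the Tolman-Oppenheimer-Volkoff (TOV) equation integrated together with an equation of state. A minimal sketch in geometrized units, using a toy polytropic equation of state as a stand-in — the values of K, Gamma and the central pressure are illustrative placeholders, not the equation of state derived in the paper:

```python
# Sketch of a Tolman-Oppenheimer-Volkoff (TOV) integration in
# geometrized units (G = c = 1). The toy polytrope P = K * eps**Gamma
# stands in for the equation of state the modified theory would
# actually supply; K, Gamma and p_c are illustrative only.
import math

def tov_solve(p_c, K=100.0, Gamma=2.0, dr=1e-3):
    """Integrate outward from the center at central pressure p_c;
    return (mass, radius) of the star in code units."""
    def eps_of_p(p):
        # invert the polytropic equation of state P = K * eps**Gamma
        return (p / K) ** (1.0 / Gamma)

    r, m, p = dr, 0.0, p_c
    while p > 1e-12 * p_c:
        eps = eps_of_p(p)
        # TOV structure equations, advanced with a simple Euler step
        dm = 4.0 * math.pi * r**2 * eps
        dp = -(eps + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))
        m += dm * dr
        p += dp * dr
        r += dr
    return m, r  # surface reached: the pressure has essentially vanished

mass, radius = tov_solve(p_c=1e-4)
```

Scanning the central pressure over a range of values traces out a mass-radius curve, which is how statements like "the maximum mass" and "how much the radius varies" in the paragraph above are obtained.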

Another long-standing question is what the internal structure of a neutron star is: especially whether it is a more or less monolithic block, except for a very thin layer close to the surface, or whether it is composed of many different layers, like our Earth. In our case, we do find a layered structure. There is an outer layer, a kilometer or so thick, and then a different state of matter down to the core. However, the transition appears to be quite soft, with no hard boundary. Still, our results signal that there are light neutron stars which consist only of the 'surface' material, and that only heavier neutron stars have such a core of different stuff. Thus, there could be two classes of neutron stars, with different properties. However, the single-layer class is lighter than those which have been observed so far. Such light neutron stars, while apparently stable, seem rarely, if ever, to be formed in the supernovae giving birth to neutron stars.

Of course, the question is to what extent such qualitative features carry over to the real case. We can learn more about this by doing the same in other theories. If features turn out to be generic, this points at something which may also happen in the real case. But even our case, which in a certain sense is the simplest possibility, was not trivial. It may take some time to repeat the exercise for other theories.

## March 29, 2017

### Tommaso Dorigo - Scientificblogging

The Way I See It
Where by "It" I really mean the Future of mankind. The human race is facing huge new challenges in the 21st century, and we are only starting to get equipped to face them.

The biggest drama of the past century was arguably caused by the two world wars and the subsequent transition to nuclear warfare: humanity had to learn to coexist with the impending threat of global annihilation by thermonuclear war. But today, in addition to that dreadful scenario, there are others we have to cope with.

## March 28, 2017

### Symmetrybreaking - Fermilab/SLAC

How to make a discovery

Particle physics is a dance between theory and experiment.

Meenakshi Narain, a professor of physics at Brown University, remembers working on the DZero experiment at Fermi National Accelerator Laboratory near Chicago in the winter of 1994. She would bring blankets up to her fifth-floor office to keep warm as she sat at her computer going through data in search of the then-undiscovered top quark.

For weeks, her group had been working on deciphering some extra background that originally had not been accounted for. Their conclusions contradicted the collaboration’s original assumptions.

Narain, who was a postdoctoral researcher at the time, talked to her advisor about sharing the group’s result. Her advisor told her that if she had followed the scientific method and was confident in her result, she should talk about it.

“I had a whole sequence of logic and explanation prepared,” Narain says. “When I presented it, I remember everybody was very supportive. I had expected some pushback or some criticism and nothing like that happened.”

This, she says, is the scientific process: A multitude of steps designed to help us explore the world we live in.

“In the end the process wins. It’s not about you or me, because we’re all going after the same thing. We want to discover that particle or phenomenon or whatever else is out there collaboratively. That’s the goal.”

Narain’s group’s analysis was essential to the collaboration’s understanding of a signal that turned out to be the elusive top quark.

Artwork by Sandbox Studio, Chicago

### The modern hypothesis

“The scientific method was not invented overnight,” says Joseph Incandela, vice chancellor for research at the University of California, Santa Barbara. “People used to think completely differently. They thought if it was beautiful it had to be true. It took many centuries for people to realize that this is how you must approach the acquisition of true knowledge that you can verify.”

For particle physicists, says Robert Cahn, a senior scientist at Lawrence Berkeley National Laboratory, the scientific method isn’t so much going from hypothesis to conclusion, but rather “an exploration in which we measure with as much precision as possible a variety of quantities that we hope will reveal something new.

“We build a big accelerator and we might have some ideas of what we might discover, but it’s not as if we say, ‘Here’s the hypothesis and we’re going to prove or disprove it.’ If there’s a scientific method, it’s something much broader than that.”

Scientific inquiry is more of a continuing conversation between theorists and experimentalists, says Chris Quigg, a distinguished scientist emeritus at Fermilab.

“Theorists in particular spend a lot of time telling stories, making up ideas or elaborating ideas about how something might happen,” he says. “There’s an evolution of our ideas as we engage in dialogue with experiments.”

An important part of the process, he adds, is that the scientists are trained never to believe their own stories until they have experimental support.

“We are often reluctant to take our ideas too seriously because we’re schooled to think about ideas as tentative,” Quigg says. “It’s a very good thing to be tentative and to have doubt. Otherwise you think you know all the answers, and you should be doing something else.”

It’s also good to be tentative because “sometimes we see something that looks tantalizingly like a great discovery, and then it turns out not to be,” Cahn says.

At the end of 2015, hints appeared in the data of the two general-purpose experiments at the Large Hadron Collider that scientists had stumbled upon a particle with a mass of about 750 GeV, roughly 800 times that of a proton. The hints prompted more than 500 scientific papers, each trying to tell the story behind the bump in the data.

“It’s true that if you simply want to minimize wasting your time, you will ignore all such hints until they [reach the traditional uncertainty threshold of] 5 sigma,” Quigg said. “But it’s also true that as long as they’re not totally flaky, as long as it looks possibly true, then it can be a mind-expanding exercise.”

In the case of the 750-GeV bump, Quigg says, you could tell a story in which such a thing might exist and wouldn’t contradict other things that we knew.

“It helps to take it from just an unconnected observation to something that’s linked to everything else,” Quigg says. “That’s really one of the beauties of scientific theories, and specifically the current state of particle physics. Every new observation is linked to everything else we know, including all the old observations. It’s important that we have enough of a network of observation and interpretation that any new thing has to make sense in the context of other things.”

After collecting more data, physicists eventually ruled out the hints, and the theorists moved on to other ideas.

### The importance of uncertainty

But sometimes an idea makes it further than that. Much of the work scientists put into publishing a scientific result involves figuring out how well they know it: What’s the uncertainty and how do we quantify it?

“If there’s any hallmark to the scientific method in particle physics and in closely related fields like cosmology, it’s that our results always come with an error bar,” Cahn says. “A result that doesn’t have an uncertainty attached to it has no value.”

In a particle physics experiment, some uncertainty comes from background, like the data Narain’s group found that mimicked the kind of signal they were looking for from the top quark.

This is called systematic uncertainty, which is typically introduced by aspects of the experiment that cannot be completely known.

“When you build a detector, you must make sure that for whatever signal you’re going to see, there is not much possibility to confuse it with the background,” says Helio Takai, a physicist at Brookhaven National Laboratory. “All the elements and sensors and electronics are designed having that in mind. You have to use your previous knowledge from all the experiments that came before.”

Careful study of your systematic uncertainties is the best way to eliminate bias and get reliable results.

“If you underestimate your systematic uncertainty, then you can overestimate the significance of the signal,” Narain says. “But if you overestimate the systematic uncertainty, then you can kill your signal. So, you really are walking this fine line in understanding where the issues may be. There are various ways the data can fool you. Trying to be aware of those ways is an art in itself and it really defines the thinking process.”

Physicists also must think about statistical uncertainty, which, unlike systematic uncertainty, is simply the consequence of having a limited amount of data.

“For every measurement we do, there’s a possibility that the measurement is a wrong measurement just because of all the events that happen at random while we are doing the experiment,” Takai says. “In particle physics, you’re producing many particles, so a lot of these particles may conspire and make it appear like the event you’re looking for.”

You can think of it as putting your hand inside a bag of M&Ms, Takai says. If the first few M&Ms you picked were brown and you didn’t know there were other colors, you would think the entire bag was brown. It wouldn’t be until you finally pulled out a blue M&M that you realized that the bag had more than one color.

Particle physicists generally want their results to have a statistical significance corresponding to at least 5 sigma, a measure that means that there is only a 0.00003 percent chance of a statistical fluctuation giving an excess as big or bigger than the one observed.
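That threshold is easy to check directly: the one-sided tail of a Gaussian at 5 sigma is about 2.9 × 10⁻⁷, i.e. the 0.00003 percent quoted above.

```python
# One-sided Gaussian tail probability: the chance that a pure
# statistical fluctuation gives an excess at least z sigma large.
import math

def one_sided_p(z):
    """P(X >= z) for a standard normal variable X."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p_5sigma = one_sided_p(5.0)   # ~2.87e-7
percent = 100.0 * p_5sigma    # ~0.0000287, the quoted ~0.00003 percent
```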

Artwork by Sandbox Studio, Chicago

### The scientific method at work

One of the most stunning recent examples of the scientific method at work – careful consideration of statistical and systematic uncertainties coming together – came in 2012, at the moment the spokespersons for the ATLAS and CMS experiments at the LHC revealed the discovery of the Higgs boson.

More than half a century of theory and experimentation led up to that moment. Experiments from the 1950s on had accumulated a wealth of information on particle interactions, but the interactions were only partially understood and seemed to come from disconnected sources.

“But brilliant theoretical physicists found a way to make a single model that gave them a good description of all the known phenomena,” says Incandela, who was spokesperson for the CMS experiment during the Higgs discovery. “It wasn’t guaranteed that the Higgs field existed. It was only guaranteed that this model works for everything we do and have already seen, and we needed to see if there really was a boson that we could find that could tell us in fact that that field is there.”

This led to a generation-long effort to build an accelerator that would reach the extremely high energies needed to produce the Higgs boson, a particle born of the Higgs field, and then two gigantic detectors that could detect the Higgs boson if it appeared.

Building two different detectors would allow scientists to double-check their work. If an identical signal appeared in two separate experiments run by two separate groups of physicists, chances were quite good that it was the real thing.

“So there you saw a really beautiful application of the scientific method where we confirmed something that was incredibly difficult to confirm, but we did it incredibly well with a lot of fail-safes and a lot of outstanding experimental approaches,” Incandela says. “The scientific method was already deeply engrained in everything we did to the greatest extreme. And so we knew when we saw these things that they were real, and we had to take them seriously.”

The scientific method is so engrained that scientists don’t often talk about it by name anymore, but implementing it “is what separates the great scientists from the average scientists from the poor scientists,” Incandela says. “It takes a lot of scrutiny and a deep understanding of what you’re doing.”

### Lubos Motl - string vacua and pheno

$$B$$-meson $$b$$-$$s$$-$$\mu$$-$$\mu$$ anomaly remains at 4.9 sigma after Moriond
There was no obvious announcement of new physics at Moriond 2017, one that would have settled supersymmetry or other bets in a groundbreaking direction, but that doesn't mean that the Standard Model is absolutely consistent with all observations.

In recent years, the LHCb collaboration has claimed various deviations of their observations of mostly $$B$$-meson decays from the Standard Model predictions. A new paper was released yesterday, summarizing the situation after Moriond 2017:
Status of the $$B\to K^*\mu^+\mu^-$$ anomaly after Moriond 2017
Wolfgang Altmannshofer, Christoph Niehoff, Peter Stangl, David M. Straub (the German language is so effective with these one-syllable surnames, isn't it?) and Matthias Rindfleischetikettierungsüberwachungsaufgabenübertragungsgesetz have looked at the tension with the newest data.

The Good-lookers, Matterhorn (1975): In the morning, they started their journey at CERN (or in Bern). I've made the would-be witty replacement of Bern with CERN so many times that I am not capable of singing this verse reliably correctly anymore!

The new data include the angular distribution of the decay mentioned in the title, as measured by the major (ATLAS and CMS) detectors.

Microscopically, at the level of quarks and leptons, these decays of the $$B$$-mesons correspond to the $$b\to s + \mu^+ + \mu^-$$ transformation of the bottom quark.

There seems to be a deviation from the Standard Model. But they see that the deviation doesn't seem to visibly depend on $$q^2$$ and it's independent of the helicities, too. The first fact encourages them to explain the "extra processes" by an extra four-fermion interaction including the fermions $$b,s,\mu,\mu$$. There are various tensor structures that allow you to contract the four spinors in the four-fermion interactions and once they look carefully, the deviation from the Standard Model seems to be maximally hiding in the new physics (NP) term in the Hamiltonian:

$\begin{aligned} \mathcal{H}_{\rm eff} &= -\frac{4 G_F}{\sqrt{2}} V_{tb} V^*_{ts} \frac{e^2}{16\pi^2} \cdot C_9 O_9 + {\rm h.c.},\\ O_9 &= (\bar s \gamma_\mu P_L b) (\bar \ell \gamma^\mu \ell). \end{aligned}$

There are numerous other possible terms a priori, up to $$O_{10}$$. Also, analogous operators may have primes, and the prime indicates the replacement of $$P_L$$ with $$P_R$$.

If you memorize this song about quarks, you should understand all the four-fermion interactions unless you will conclude that the song is about cheese, as one of the singers did. The ladies from the girl band – those on the first photograph ever posted on the web – are planning a comeback and look for donations.

At any rate, only the evidence in favor of a nonzero coefficient $$C_9$$ from new physics seems strong enough to deserve the paper – and the TRF blog post – and the best fit value of $$C_9$$ seems to be negative,

$C_9 = -1.21 \pm 0.22,$

which means that the experimental data indicate that $$C_9$$ is nonzero (it should be zero in the Standard Model) at the 4.9-sigma level. Not bad. Well, there is also a similar but weaker anomaly for $$C_{10}$$, which multiplies a similar operator with an extra $$\gamma_5$$ and whose best fit is

$\begin{aligned} O_{10} &= (\bar s \gamma_\mu P_L b) (\bar \ell \gamma^\mu \gamma_5 \ell),\\ C_{10} &= +0.69 \pm 0.25, \end{aligned}$

which differs from the Standard Model's zero by 2.9 sigma. The numbers make it clear that the hypothesis that $$C_{9}=-C_{10}$$ is rather compatible with the data, too, within one sigma, and the best fit for this $$C_{9}=-C_{10}$$ is $$-0.62\pm 0.14$$ or so, a 4.2-sigma deviation from zero (I believe that $$-0.62\pm 0.14$$ should really be multiplied by $$\sqrt{2}$$ but let me not make this confusion too visible).
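A naive cross-check of those numbers (my arithmetic, not the paper's likelihood scan, which is where the quoted sigmas actually come from): dividing each best-fit value by its quoted error gives Gaussian pulls in the same ballpark.

```python
# Naive Gaussian pulls, best-fit value over quoted error. The paper's
# significances come from a full likelihood fit, so these only roughly
# track the quoted 4.9, 2.9 and 4.2 sigma.
pull_C9 = abs(-1.21) / 0.22               # ~5.5
pull_C10 = abs(0.69) / 0.25               # ~2.8
pull_C9_eq_minus_C10 = abs(-0.62) / 0.14  # ~4.4
```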

The German/Ohio authors translate this effect to various other parameterizations of the LFUs (lepton flavor universality parameters) and if I understand the ultimate claim well, they basically say that similar anomalies from ATLAS+CMS, LHCb, and Belle seem to be consistent with each other and with the extra new physics term that was proposed above.

Some skeptics could say that these anomalies could be due to some difficult QCD effects. But the bottom quark is pretty heavy and therefore "ignores" the gluey, sticky environment around itself, so I tend to think that the deviation from the Standard Model is rather exciting.

I've made fun of the German language so I want to make sure that the U.S. readers don't think that they're untouchable. ;-)

If the effect exists, the authors say, the clear deviations from the Standard Model could be made very strong by the experiments very soon.

Theoretically, I would try to explain this four-fermion interaction by the exchange of a new gauge boson or a scalar particle but I am not capable of giving you a more refined let alone stringy inspired detailed story about this new effect at this moment.

## March 24, 2017

### Symmetrybreaking - Fermilab/SLAC

A new gem inside the CMS detector

This month scientists embedded sophisticated new instruments in the heart of a Large Hadron Collider experiment.

Sometimes big questions require big tools. That’s why a global community of scientists designed and built gigantic detectors to monitor the high-energy particle collisions generated by CERN’s Large Hadron Collider in Geneva, Switzerland. From these collisions, scientists can retrace the footsteps of the Big Bang and search for new properties of nature.

The CMS experiment is one such detector. In 2012, it co-discovered the elusive Higgs boson with its sister experiment, ATLAS. Now, scientists want CMS to push beyond the known laws of physics and search for new phenomena that could help answer fundamental questions about our universe. But to do this, the CMS detector needed an upgrade.

“Just like any other electronic device, over time parts of our detector wear down,” says Steve Nahn, a researcher at the US Department of Energy’s Fermi National Accelerator Laboratory and the US project manager for the CMS detector upgrades. “We’ve been planning and designing this upgrade since shortly after our experiment first started collecting data in 2010.”

The CMS detector is built like a giant onion. It contains layers of instruments that track the trajectory, energy and momentum of particles produced in the LHC’s collisions. The vast majority of the sensors in the massive detector are packed into its center, within what is called the pixel detector. The CMS pixel detector uses sensors like those inside digital cameras, but with a lightning-fast shutter speed: it takes 40 million three-dimensional pictures every second.

For the last several years, scientists and engineers at Fermilab and 21 US universities have been assembling and testing a new pixel detector to replace the current one as part of the CMS upgrade, with funding provided by the Department of Energy Office of Science and National Science Foundation.

The pixel detector consists of three sections: the innermost barrel section and two end caps called the forward pixel detectors. The tiered and can-like structure gives scientists a near-complete sphere of coverage around the collision point. Because the three pixel detectors fit on the beam pipe like three bulky bracelets, engineers designed each component as two half-moons, which latch together to form a ring around the beam pipe during the insertion process.

Over time, scientists have increased the rate of particle collisions at the LHC. In 2016 alone, the LHC produced about as many collisions as it had during the three years of its first run combined. To be able to differentiate between dozens of simultaneous collisions, CMS needed a brand-new pixel detector.

The upgrade packs even more sensors into the heart of the CMS detector. It’s as if CMS graduated from a 66-megapixel camera to a 124-megapixel camera.

Each of the two forward pixel detectors is a mosaic of 672 silicon sensors, robust electronics and bundles of cables and optical fibers that feed electricity and instructions in and carry raw data out, according to Marco Verzocchi, a Fermilab researcher on the CMS experiment.

The multipart, 6.5-meter-long pixel detector is as delicate as raw spaghetti. Installing the new components into a gap the size of a manhole required more than just finesse. It required months of planning and extreme coordination.

“We practiced this installation on mock-ups of our detector many times,” says Greg Derylo, an engineer at Fermilab. “By the time we got to the actual installation, we knew exactly how we needed to slide this new component into the heart of CMS.”

The most difficult part was maneuvering the delicate components around the pre-existing structures inside the CMS experiment.

“In total, the full three-part pixel detector consists of six separate segments, which fit together like a three-dimensional cylindrical puzzle around the beam pipe,” says Stephanie Timpone, a Fermilab engineer. “Inserting the pieces in the right positions and right order without touching any of the pre-existing supports and protections was a well-choreographed dance.”

For engineers like Timpone and Derylo, installing the pixel detector was the last step of a six-year process. But for the scientists working on the CMS experiment, it was just the beginning.

“Now we have to make it work,” says Stefanos Leontsinis, a postdoctoral researcher at the University of Colorado, Boulder. “We’ll spend the next several weeks testing the components and preparing for the LHC restart.”

## March 21, 2017

### Symmetrybreaking - Fermilab/SLAC

High-energy visionary

Meet Hernán Quintana Godoy, a scientist who helped make Chile central to international astronomy.

Professor Hernán Quintana Godoy has a way of taking the long view, peering back into the past through distant stars while looking ahead to the future of astronomy in his home, Chile.

For three decades, Quintana has helped shape the landscape of astronomy in Chile, host to some of the largest ground-based observatories in the world.

In January he became the first recipient of the Education Prize of the American Astronomical Society from a country other than the United States or Canada.

“Training the next generation of astronomers should not be limited to just a few countries,” says Keely Finkelstein, former chair of the AAS Education Prize Committee. “[Quintana] has been a tireless advocate for establishing excellent education and research programs in Chile.”

Quintana earned his doctorate from the University of Cambridge in the United Kingdom in 1973. The same year, a military junta headed by General Augusto Pinochet took power in a coup d’état.

Quintana came home and secured a teaching position at the University of Chile. At the time, Chilean researchers mainly focused on the fundamentals of astronomy—measuring the radiation from stars and calculating the coordinates of celestial objects. By contrast, Quintana’s dissertation on high-energy phenomena seemed downright radical.

A year and a half after taking his new job, Quintana was granted a leave of absence to complete a post-doc abroad. Writing from the United States, Quintana published an article encouraging Chile to take better advantage of its existing international observatories. He urged the government to provide more funding and to create an environment that would encourage foreign-educated astronomers to return home to Chile after their postgraduate studies. The article did not go over well with the administration at his university.

“I wrote it for a magazine that was clearly against Pinochet,” Quintana says. “The magazine cover was a black page with a big ‘NO’ in red” related to an upcoming referendum.

UCh dissolved Quintana’s teaching position.

Quintana became a wandering postdoc and research associate in Europe, the US and Canada. It wasn’t until 1981 that Quintana returned to teach at the Physics Institute at Pontifical Catholic University of Chile.

He continued to push the envelope at PUC. He created elective courses on general astronomy, extragalactic astrophysics and cluster dynamics. He revived and directed a small astronomy group. He encouraged students to expand their horizons by hiring both Chilean and foreign teachers and sending students to study abroad.

“Because of him I took advantage of most of the big observatories in Chile and had an international perspective of research from the very beginning of my career,” says Amelia Ramirez, who studied with Quintana in 1983. A specialist in interacting elliptical galaxies, she is now head of Research and Development at the University of La Serena.

In the mid-1980s Quintana became the scriptwriter for a set of distance learning astronomy classes produced by the educational division of his university’s public TV channel, TELEDUC. He challenged his viewers to take on advanced topics—and they responded.

Illustration by Corinne Mucha

“I even introduced two episodes on relativity theory,” Quintana says. “This shocked them. The reception was so good that I wrote a whole book on the subject.”

The station partnered with universities and institutions across Chile to provide viewers the opportunity to earn a diploma by taking a written test based on the televised material. More than 5000 people enrolled during the four-year broadcasting period.

“What stands out [about Quintana] is his strategic vision and his creativity to materialize projects,” says Alejandro Clocchiatti, a professor at PUC who worked with Quintana for 20 years. “All he does is with dedication and enthusiasm, even if things don’t go according to plan. He’s got an unbeatable optimism.”

Over the years, Quintana has had a hand in planning the locations of multiple new telescopes in Chile. In 1994 he guided an expedition to identify the location of the Atacama Large Millimeter Array, a collection of 66 high-precision antennae.

In 1998, PUC finally responded to decades of advocating by Quintana and his colleagues and opened a new major in astronomy. Gradually more universities followed suit.

Quintana retired three years ago. He is optimistic about the future of Chilean astronomy. It has grown from a collection of 25 professors and their students in the late ’90s to a community of more than 800 students, teachers and researchers.

He says he is looking forward to the discoveries that forthcoming instruments will bring. The European Extremely Large Telescope, under construction on Cerro Armazones in the Atacama Desert of northern Chile, is expected to produce images 16 times sharper than Hubble’s. The southern facilities of the Cherenkov Telescope Array, a planned collection of 99 telescopes in Chile, will complement a northern array to complete the world’s most sensitive high-energy gamma-ray observatory. Both facilities will peer into super-massive black holes, the atmospheres of extra-solar planets, and the origin of relativistic cosmic particles.

“Everything in our universe is constantly changing,” Quintana says. “We are all heirs of that structural evolution.”

### Clifford V. Johnson - Asymptotia

News from the Front, XIII: Holographic Heat Engines for Fun and Profit

I put a set of new results out on to the arxiv recently. They were fun to work out. They represent some of my continued fascination with holographic heat engines, those things I came up with back in 2014 that I think I've written about here before (here and here). For various reasons (that I've explained in various papers) I like to think of them as an answer waiting for the right question, and I've been refining my understanding of them in various projects, trying to get clues to what the question or questions might be.

As I've said elsewhere, I seem to have got into the habit of using 21st Century techniques to tackle problems of a 19th Century flavour! The title of the paper is "Approaching the Carnot limit at finite power: An exact solution". As you may know, the Carnot engine, whose efficiency is the best a heat engine can do (for specified temperatures of exchange with the hot and cold reservoirs), is itself not a useful practical engine. It is a perfectly reversible engine and as such takes infinite time to run a cycle. A zero power engine is not much practical use. So you might wonder how close a real engine can come to the Carnot efficiency... the answer should be that it can come arbitrarily close, but most engines don't, and so people who care about this sort of thing spend a lot of time thinking about how to design special engines that can come close. And there are various arguments you can make for how to do it in various special systems and so forth. It's all very interesting and there's been some important work done.
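The Carnot bound mentioned above is elementary to state: the efficiency depends only on the two reservoir temperatures. A minimal sketch, with illustrative temperatures that are not numbers from the paper:

```python
# Carnot efficiency depends only on the reservoir temperatures:
#   eta_C = 1 - T_cold / T_hot   (temperatures in kelvin)
# Any real engine operating between the same two reservoirs has
# eta <= eta_C, with equality only in the reversible (zero-power) limit.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum efficiency of a heat engine between two reservoirs."""
    if not 0 < t_cold_k < t_hot_k:
        raise ValueError("need 0 < T_cold < T_hot")
    return 1.0 - t_cold_k / t_hot_k

# Illustrative numbers: a 600 K hot source and a 300 K cold sink.
print(carnot_efficiency(600.0, 300.0))  # 0.5
```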

What I realized recently is that my old friends the holographic heat engines are a very good tool for tackling this problem. Part of the reason is that the underlying working substance that I've been using is a black hole (or, if you prefer, is defined by a black hole), and such things are often captured as exact [...] Click to continue reading this post

The post News from the Front, XIII: Holographic Heat Engines for Fun and Profit appeared first on Asymptotia.

## March 19, 2017

### Jacques Distler - Musings

Responsibility

Many years ago, when I was an assistant professor at Princeton, there was a cocktail party at Curt Callan’s house to mark the beginning of the semester. There, I found myself in the kitchen, chatting with Sacha Polyakov. I asked him what he was going to be teaching that semester, and he replied that he was very nervous because — for the first time in his life — he would be teaching an undergraduate course. After my initial surprise that he had gotten this far in life without ever having taught an undergraduate course, I asked which course it was. He said it was the advanced undergraduate Mechanics course (chaos, etc.) and we agreed that would be a fun subject to teach. We chatted some more, and then he said that, on reflection, he probably shouldn’t be quite so worried. After all, it wasn’t as if he was going to teach Quantum Field Theory, “That’s a subject I’d feel responsible for.”

This remark stuck with me, but it never seemed quite so poignant until this semester, when I find myself teaching the undergraduate particle physics course.

The textbooks (and I mean all of them) start off by “explaining” that relativistic quantum mechanics (e.g. replacing the Schrödinger equation with Klein-Gordon) makes no sense (negative probabilities and all that …). And they then proceed to use it anyway (supplemented by some Feynman rules pulled out of thin air).

This drives me up the #@%^ing wall. It is precisely wrong.

There is a perfectly consistent quantum mechanical theory of free particles. The problem arises when you want to introduce interactions. In Special Relativity, there is no interaction-at-a-distance; all forces are necessarily mediated by fields. Those fields fluctuate and, when you want to study the quantum theory, you end up having to quantize them.

But the free particle is just fine. Of course it has to be: free field theory is just the theory of an (indefinite number of) free particles. So it better be true that the quantum theory of a single relativistic free particle makes sense.

So what is that theory?

1. It has a Hilbert space, $\mathcal{H}$, of states. To make the action of Lorentz transformations as simple as possible, it behoves us to use a Lorentz-invariant inner product on that Hilbert space. This is most easily done in the momentum representation $$\langle\chi|\phi\rangle = \int \frac{d^3\vec{k}}{(2\pi)^3\, 2\sqrt{\vec{k}^2+m^2}}\, \chi(\vec{k})^*\, \phi(\vec{k})$$
2. As usual, the time-evolution is given by a Schrödinger equation
(1) $$i\partial_t |\psi\rangle = H_0 |\psi\rangle$$

where $H_0 = \sqrt{\vec{p}^2+m^2}$. Now, you might object that it is hard to make sense of a pseudo-differential operator like $H_0$. Perhaps. But it’s not any harder than making sense of $U(t) = e^{-i\vec{p}^2 t/2m}$, which we routinely pretend to do in elementary quantum. In both cases, we use the fact that, in the momentum representation, the operator $\vec{p}$ is represented as multiplication by $\vec{k}$.
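One way to see that the inner product in item 1 is indeed Lorentz-invariant (a step the post leaves implicit) is to rewrite the measure as a four-dimensional integral restricted to the positive-energy mass shell:

```latex
\int \frac{d^3\vec{k}}{(2\pi)^3\, 2\sqrt{\vec{k}^2+m^2}}
  \;=\; \int \frac{d^4 k}{(2\pi)^3}\,
        \delta\!\left(k^2 - m^2\right)\theta(k^0)
```

Here $d^4k$ is invariant because $\det\Lambda = 1$, and both $\delta(k^2-m^2)$ and, on the mass shell, $\theta(k^0)$ are invariant under orthochronous Lorentz transformations; performing the $k^0$ integral against the delta function produces exactly the $1/2\sqrt{\vec{k}^2+m^2}$ factor.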

I could go on, but let me leave the rest of the development of the theory as a series of questions.

1. The self-adjoint operator, $\vec{x}$, satisfies $$[x^i, p_j] = i\delta^i_j$$ Thus it can be written in the form $$x^i = i\left(\frac{\partial}{\partial k_i} + f_i(\vec{k})\right)$$ for some real function $f_i$. What is $f_i(\vec{k})$?
2. Define $J^0(\vec{r})$ to be the probability density. That is, when the particle is in state $|\phi\rangle$, the probability for finding it in some Borel subset $S\subset\mathbb{R}^3$ is given by $$\text{Prob}(S) = \int_S d^3\vec{r}\, J^0(\vec{r})$$ Obviously, $J^0(\vec{r})$ must take the form $$J^0(\vec{r}) = \int \frac{d^3\vec{k}\, d^3\vec{k}'}{(2\pi)^6\, 4\sqrt{\vec{k}^2+m^2}\sqrt{\vec{k}'^2+m^2}}\; g(\vec{k},\vec{k}')\, e^{i(\vec{k}-\vec{k}')\cdot\vec{r}}\, \phi(\vec{k})\, \phi(\vec{k}')^*$$ Find $g(\vec{k},\vec{k}')$. (Hint: you need to diagonalize the operator $\vec{x}$ that you found in problem 1.)
3. The conservation of probability says $$0 = \partial_t J^0 + \partial_i J^i$$ Use the Schrödinger equation (1) to find $J^i(\vec{r})$.
4. Under Lorentz transformations, $H_0$ and $\vec{p}$ transform as the components of a 4-vector. For a boost in the $z$-direction, of rapidity $\lambda$, we should have $$\begin{split} U_\lambda \sqrt{\vec{p}^2+m^2}\, U_\lambda^{-1} &= \cosh(\lambda)\sqrt{\vec{p}^2+m^2} + \sinh(\lambda)\, p_3\\ U_\lambda p_1 U_\lambda^{-1} &= p_1\\ U_\lambda p_2 U_\lambda^{-1} &= p_2\\ U_\lambda p_3 U_\lambda^{-1} &= \sinh(\lambda)\sqrt{\vec{p}^2+m^2} + \cosh(\lambda)\, p_3 \end{split}$$ and we should be able to write $U_\lambda = e^{i\lambda B}$ for some self-adjoint operator, $B$. What is $B$? (N.B.: by contrast the $x^i$, introduced above, do not transform in a simple way under Lorentz transformations.)

The Hilbert space of a free scalar field is now $\bigoplus_{n=0}^{\infty}\text{Sym}^n\mathcal{H}$. That’s perhaps not the easiest way to get there. But it is a way …

#### Update:

Yike! Well, that went south pretty fast. For the first time (ever, I think) I’m closing comments on this one, and calling it a day. To summarize, for those who still care,

1. There is a decomposition of the Hilbert space of a Free Scalar field as $$\mathcal{H}_\phi = \bigoplus_{n=0}^{\infty}\mathcal{H}_n$$ where $\mathcal{H}_n = \text{Sym}^n\mathcal{H}$ and $\mathcal{H}$ is the 1-particle Hilbert space described above (also known as the spin-$0$, mass-$m$, irreducible unitary representation of Poincaré).
2. The Hamiltonian of the Free Scalar field is the direct sum of the induced Hamiltonians on $\mathcal{H}_n$, induced from the Hamiltonian, $H = \sqrt{\vec{p}^2+m^2}$, on $\mathcal{H}$. In particular, it (along with the other Poincaré generators) is block-diagonal with respect to this decomposition.
3. There are other interesting observables which are also block-diagonal with respect to this decomposition (i.e., they don’t change the particle number), and hence we can discuss their restriction to $\mathcal{H}_n$.

Gotta keep reminding myself why I decided to foreswear blogging…

## March 18, 2017

### Clifford V. Johnson - Asymptotia

BBC CrowdScience SXSW Panel!

They recorded one of the panels I was on at SXSW as a 30-minute episode of the BBC World Service programme CrowdScience! The subject was science and the movies, and it was a lot of fun, with some illuminating exchanges. I had some fantastic co-panellists: Dr. Mae Jemison (the astronaut, doctor, and chemical engineer), Professor Polina Anikeeva (who researches materials science and engineering at MIT), and Rick Loverd (director of the Science and Entertainment Exchange), and we had an excellent host, Marnie Chesterton. It has aired now, but in case you missed it, here is a link to the site where you can listen to our discussion.

The post BBC CrowdScience SXSW Panel! appeared first on Asymptotia.

## March 17, 2017

### Symmetrybreaking - Fermilab/SLAC

Q&A: Dark matter next door?

Astrophysicists Eric Charles and Mattia Di Mauro discuss the surprising glow of our neighbor galaxy.

Astronomers recently discovered a stronger-than-expected glow of gamma rays at the center of the Andromeda galaxy, the nearest major galaxy to the Milky Way. The signal has fueled hopes that scientists are zeroing in on a sign of dark matter, which is five times more prevalent than normal matter but has never been detected directly.

Researchers believe that gamma rays—a very energetic form of light—could be produced when hypothetical dark matter particles decay or collide and destroy each other. However, dark matter isn’t the only possible source of the gamma rays. A number of other cosmic processes are known to produce them.

So what do Andromeda’s gamma rays really tell us about dark matter? To find out, Symmetry’s Manuel Gnida talked with Eric Charles and Mattia Di Mauro, two members of the Fermi-LAT collaboration—an international team of researchers that found the Andromeda gamma-ray signal using the Large Area Telescope, a sensitive “eye” for gamma rays on NASA’s Fermi Gamma-ray Space Telescope.

Both researchers are based at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory. The LAT was conceived of and assembled at SLAC, which also hosts its operations center.

KIPAC researchers Eric Charles and Mattia Di Mauro

Dawn Harmer, SLAC National Accelerator Laboratory

### Have you discovered dark matter?

MD:

No, we haven’t. In the study, the LAT team looked at the gamma-ray emissions of the Andromeda galaxy and found something unexpected, something we don’t fully understand yet. But there are other potential astrophysical explanations than dark matter.

It’s also not the first time that the LAT collaboration has studied Andromeda with Fermi, but in the old data the galaxy only looked like a big blob. With more data and improved data processing, we have now obtained a much clearer picture of the galaxy’s gamma-ray glow and how it’s distributed.

### What’s so unusual about the results?

EC:

As a spiral galaxy, Andromeda is similar to the Milky Way. Therefore, we expected the emissions of both galaxies to look similar. What we discovered is that they are, in fact, quite different.

In our galaxy, gamma rays come from all kinds of locations—from the center and the spiral arms in the outer regions. For Andromeda, on the other hand, the signal is concentrated at the center.

### Why do galaxies glow in gamma rays?

EC:

The answer depends on the type of galaxy. There are active galaxies called blazars. They emit gamma rays when matter in close orbit around supermassive black holes generates jets of plasma. And then there are “normal” galaxies like Andromeda and the Milky Way that produce gamma rays in other ways.

When we look at the emissions of the Milky Way, the galaxy appears like a bright disk, with the somewhat brighter galactic center at the center of the disk. Most of this glow is diffuse and comes from the gas between the stars that lights up when it’s hit by cosmic rays—energetic particles spit out by star explosions or supernovae.

Other gamma-ray sources are the remnants of such supernovae and pulsars—extremely dense, magnetized, rapidly rotating neutron stars. These sources show up as bright dots in the gamma-ray map of the Milky Way, except at the center where the density of gamma-ray sources is high and the diffuse glow of the Milky Way is brightest, which prevents the LAT from detecting individual sources.

Andromeda is too far away for us to see individual gamma-ray sources, so it only has a diffuse glow in our images. But we expected most of the emissions to come from the disk as well. Their absence suggests that there is less interaction between gas and cosmic rays in our neighbor galaxy. Since this interaction is tied to the formation of stars, this also suggests that Andromeda had a different history of star formation than the Milky Way.

The sky in gamma rays with energies greater than 1 gigaelectronvolt, based on eight years of data from the LAT on NASA’s Fermi Gamma-ray Space Telescope.

NASA/DOE/Fermi LAT Collaboration

### What does all this have to do with dark matter?

MD:

When we carefully analyze the gamma-ray emissions of the Milky Way and model all the gas and point-like sources to the best of our knowledge, then we’re left with an excess of gamma rays at the galactic center. Some people have argued this excess could be a telltale sign of dark matter particles.

We know that the concentration of dark matter is largest at the galactic center, so if there were a dark matter signal, we would expect it to come from there. The localization of gamma-ray emissions at Andromeda’s center seems to have renewed the interest in the dark matter interpretation in the media.

### Is dark matter the most likely interpretation?

EC:

No, there are other explanations. There are so many gamma-ray sources at the galactic center that we can’t really see them individually. This means that their light merges into an extended, diffuse glow.

In fact, two recent studies from the US and the Netherlands have suggested that this glow in the Milky Way could be due to unresolved point sources such as pulsars. The same interpretation could also be true for Andromeda’s signal.

### What would it take to know for certain?

MD:

To identify a dark matter signal, we would need to exclude all other possibilities. This is very difficult for a complex region like the galactic center, for which we don’t even know all the astrophysical processes. Of course, this also means that, for the same reason, we can’t completely rule out the dark matter interpretation.

But what’s really important is that we would want to see the same signal in a few different places. However, we haven’t detected any gamma-ray excesses in other galaxies that are consistent with the ones in the Milky Way and Andromeda.

This is particularly striking for dwarf galaxies, small companion galaxies of the Milky Way that have only a few stars. These objects are held together only because they are dominated by dark matter. If the gamma-ray excess at the galactic center were due to dark matter, then we should have already seen similar signatures in the dwarf galaxies. But we don’t.

## March 14, 2017

### Symmetrybreaking - Fermilab/SLAC

The life of an accelerator

As it evolves, the SLAC linear accelerator illustrates some important technologies from the history of accelerator science.

Tens of thousands of accelerators exist around the world, producing powerful particle beams for the benefit of medical diagnostics, cancer therapy, industrial manufacturing, material analysis, national security, and nuclear as well as fundamental particle physics. Particle beams can also be used to produce powerful beams of X-rays.

Many of these particle accelerators rely on artfully crafted components called cavities.

The world’s longest linear accelerator (also known as a linac) sits at the Department of Energy’s SLAC National Accelerator Laboratory. It stretches two miles and accelerates bunches of electrons to very high energies.

The SLAC linac has undergone changes in its 50 years of operation that illustrate the evolution of the science of accelerator cavities. That evolution continues and will determine what the linac does next.

Illustration by Corinne Mucha

### Robust copper

An accelerator cavity is a mostly closed, hollow chamber with an opening on each side for particles to pass through. As a particle moves through the cavity, it picks up energy from an electromagnetic field stored inside. Many cavities can be lined up like beads on a string to generate higher and higher particle energies.

When SLAC’s linac first started operations, each of its cavities was made exclusively from copper. Each tube-like cavity consisted of a 1-inch-long, 4-inch-wide cylinder with disks on either side. Technicians brazed together more than 80,000 cavities to form a straight particle racetrack.

Scientists generate radiofrequency waves in an apparatus called a klystron that distributes them to the cavities. Each SLAC klystron serves a 10-foot section of the beam line. The arrival of the electron bunch inside the cavity is timed to match the peak in the accelerating electric field. When a particle arrives inside the cavity at the same time as the peak in the electric field, then that bunch is optimally accelerated.

“Particles only gain energy if the variable electric field precisely matches the particle motion along the length of the accelerator,” says Sami Tantawi, an accelerator physicist at Stanford University and SLAC. “The copper must be very clean and the shape and size of each cavity must be machined very carefully for this to happen.”
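The timing requirement Tantawi describes can be made concrete: for a particle crossing a cavity with phase offset φ from the crest of the RF wave, the energy gain scales as cos φ. A sketch with an assumed peak cavity voltage (an illustrative number, not a SLAC specification):

```python
import math

# Energy gain per cavity for a bunch arriving with phase offset phi from
# the crest of the RF wave: dE = q * V0 * cos(phi). For a singly charged
# particle, the gain in electronvolts equals the peak voltage in volts
# times cos(phi), so arriving exactly on crest maximizes acceleration.

def energy_gain_ev(v0_volts, phase_offset_rad):
    """Energy gain in electronvolts for a singly charged particle."""
    return v0_volts * math.cos(phase_offset_rad)

v0 = 2.0e6  # assumed 2 MV peak accelerating voltage per cavity
print(energy_gain_ev(v0, 0.0))          # on crest: the full gain
print(energy_gain_ev(v0, math.pi / 6))  # 30 degrees off crest: ~87% of it
```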

In its original form, SLAC’s linac boosted electrons and their antimatter siblings, positrons, to an energy of 50 billion electronvolts. Researchers used these beams of accelerated particles to study the inner structure of the proton, which led to the discovery of fundamental particles known as quarks.

Today almost all accelerators in the world—including smaller systems for medical and industrial applications—are made of copper. Copper is a good electric conductor, which is important because the radiofrequency waves build up an accelerating field by creating electric currents in the cavity walls. Copper can be machined very smoothly and is cheaper than other options, such as silver.

“Copper accelerators are very robust systems that produce high acceleration gradients of tens of millions of electronvolts per meter, which makes them very attractive for many applications,” says SLAC accelerator scientist Chris Adolphsen.

Today, one-third of SLAC’s original copper linac is used to accelerate electrons for the Linac Coherent Light Source, a facility that turns energy from the electron beam into what is currently the world’s brightest X-ray laser light.

Researchers continue to push the technology to higher and higher gradients—that is, larger and larger amounts of acceleration over a given distance.

“Using sophisticated computer programs on powerful supercomputers, we were able to develop new cavity geometries that support almost 10 times larger gradients,” Tantawi says. “Mixing small amounts of silver into the copper further pushes the technology toward its natural limits.” Cooling the copper to very low temperatures helps as well. Tests at 45 Kelvin—about minus 379 degrees Fahrenheit—have shown a 20-fold increase in acceleration gradients compared to SLAC’s old linac.
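
One way to see why gradient matters so much: the active length an accelerator needs scales inversely with its gradient. A rough sketch, using the 50-billion-electronvolt figure from above and an assumed average gradient of about 17 MV/m (a round number chosen because it roughly reproduces the linac's two-mile length):

```python
def linac_length_m(final_energy_ev, gradient_ev_per_m):
    """Idealized active length needed to reach a target energy,
    assuming the particle sees the full gradient the whole way."""
    return final_energy_ev / gradient_ev_per_m

# SLAC's original linac: 50 GeV at an assumed ~17 MeV/m average gradient
# gives a machine on the order of the real ~3200 m length.
original = linac_length_m(50e9, 17e6)

# A 20-fold gradient improvement shrinks the same-energy machine proportionally.
improved = linac_length_m(50e9, 20 * 17e6)

print(round(original), round(improved))
```

The same logic is what makes the higher-gradient technologies later in this article so attractive: every factor gained in gradient is a factor saved in real estate.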

Copper accelerators have their limitations, though. SLAC’s historic linac produces 120 bunches of particles per second, and recent developments have led to copper structures capable of firing 80 times faster. But for applications that need much higher rates, Adolphsen says, “copper cavities don’t work because they would melt.”

Illustration by Corinne Mucha

### Chill niobium

For this reason, crews at SLAC are in the process of replacing one-third of the original copper linac with cavities made of niobium.

Niobium can support very large bunch rates, as long as it is cooled. At very low temperatures, it is what’s known as a superconductor.

“Below the critical temperature of 9.2 Kelvin, the cavity walls conduct electricity without losses, and electromagnetic waves can travel up and down the cavity many, many times, like a pendulum that goes on swinging for a very long time,” says Anna Grassellino, an accelerator scientist at Fermi National Accelerator Laboratory. “That’s why niobium cavities can store electromagnetic energy very efficiently and can operate continuously.”
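
Grassellino's pendulum analogy can be quantified with the cavity quality factor Q: stored energy rings down on a timescale of roughly Q divided by the angular frequency. The Q values and frequency below are typical orders of magnitude assumed for illustration, not figures from the article:

```python
import math

def ringdown_time_s(quality_factor, frequency_hz):
    """1/e energy-decay time of a resonant cavity: tau = Q / omega.
    A larger Q means the stored wave 'swings' for much longer."""
    return quality_factor / (2 * math.pi * frequency_hz)

freq = 1.3e9  # Hz; an assumed, typical superconducting-linac frequency

tau_copper = ringdown_time_s(1e4, freq)    # normal-conducting: ~microseconds
tau_niobium = ringdown_time_s(1e10, freq)  # superconducting: ~seconds

print(tau_copper, tau_niobium)
```

The six-orders-of-magnitude gap in Q is why a superconducting cavity can store energy efficiently enough to run continuously while a copper one must be pulsed.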

You can find superconducting niobium cavities in modern particle accelerators such as the Large Hadron Collider at CERN and the CEBAF accelerator at Thomas Jefferson National Accelerator Facility. The European X-ray Free-Electron Laser in Germany, the European Spallation Source in Sweden, and the Facility for Rare Isotope Beams at Michigan State University are all being built using niobium technology. Niobium cavities also appear in designs for the next-generation International Linear Collider.

At SLAC, the niobium cavities will support LCLS-II, an X-ray laser that will produce up to a million ultrabright light flashes per second. The accelerator will have 280 cavities, each about three feet long with a 3-inch opening for the electron beam to fly through. Sets of eight cavities will be strung together into cryomodules that keep the cavities at a chilly 2 Kelvin, which is colder than interstellar space.

Each niobium cavity is made by fusing together two halves stamped from a sheet of pure metal. The cavities are then cleaned very thoroughly because even the tiniest impurities would degrade their performance.

The shape of the cavities is reminiscent of a stack of shiny donuts. This is to maximize the cavity volume for energy storage and to minimize its surface area to cut down on energy dissipation. The exact size and shape also depends on the type of accelerated particle.

“We’ve come a long way since the first development of superconducting cavities decades ago,” Grassellino says. “Today’s niobium cavities produce acceleration gradients of up to about 50 million electronvolts per meter, and R&D work at Fermilab and elsewhere is further pushing the limits.”

Illustration by Corinne Mucha

### Hot plasma

Over the past few years, SLAC accelerator scientists have been working on a way to push the limits of particle acceleration even further: accelerating particles using bubbles of ionized gas called plasma.

Plasma wakefield acceleration is capable of creating acceleration gradients that are up to 1000 times larger than those of copper and niobium cavities, promising to drastically shrink the size of particle accelerators and make them much more powerful.

“These plasma bubbles have certain properties that are very similar to conventional metal cavities,” says SLAC accelerator physicist Mark Hogan. “But because they don’t have a solid surface, they can support extremely high acceleration gradients without breaking down.”

Hogan’s team at SLAC and collaborators from the University of California, Los Angeles, have been developing their plasma acceleration method at the Facility for Advanced Accelerator Experimental Tests, using an oven of hot lithium gas for the plasma and an electron beam from SLAC’s copper linac.

Researchers create bubbles by sending either intense laser light or a high-energy beam of charged particles through plasma. They then send beams of particles through the bubbles to be accelerated.

When, for example, an electron bunch enters a plasma, its negative charge expels plasma electrons from its flight path, creating a football-shaped cavity filled with positively charged lithium ions. The expelled electrons form a negatively charged sheath around the cavity.

This plasma bubble, which is only a few hundred microns in size, travels at nearly the speed of light and is very short-lived. On the inside, it has an extremely strong electric field. A second electron bunch enters that field and experiences a tremendous energy gain. Recent data show possible energy boosts of billions of electronvolts in a plasma column of just a little over a meter.
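
Comparing average gradients makes the scale of that energy gain concrete. A sketch using illustrative numbers in the spirit of those quoted (a few GeV over just more than a meter of plasma, versus 50 GeV over SLAC's roughly two-mile copper linac):

```python
def gradient_ev_per_m(energy_gain_ev, length_m):
    """Average accelerating gradient: energy gained per meter."""
    return energy_gain_ev / length_m

# Illustrative values only, chosen to match the scales described in the text.
copper = gradient_ev_per_m(50e9, 3200.0)  # 50 GeV over SLAC's ~3200 m linac
plasma = gradient_ev_per_m(4e9, 1.1)      # a few GeV in just over a meter

print(round(plasma / copper))  # plasma wins by a factor of a few hundred
```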

“In addition to much higher acceleration gradients, the plasma technique has another advantage,” says UCLA researcher Chris Clayton. “Copper and niobium cavities don’t keep particle beams tightly bundled and require the use of focusing magnets along the accelerator. Plasma cavities, on the other hand, also focus the beam.”

Much more R&D work is needed before plasma wakefield accelerator technology can be turned into real applications. But it could represent the future of particle acceleration at SLAC and of accelerator science as a whole.

## March 10, 2017

### Symmetrybreaking - Fermilab/SLAC

A strength test for the strong force

New research could tell us about particle interactions in the early universe and even hint at new physics.

Much of the matter in the universe is made up of tiny particles called quarks. Normally it’s impossible to see a quark on its own because they are always bound tightly together in groups. Quarks only separate in extreme conditions, such as immediately after the Big Bang or in the center of stars or during high-energy particle collisions generated in particle colliders.

Scientists at Louisiana Tech University are working on a study of quarks and the force that binds them by analyzing data from the ATLAS experiment at the LHC. Their measurements could tell us more about the conditions of the early universe and could even hint at new, undiscovered principles of physics.

The particles that stick quarks together are aptly named “gluons.” Gluons carry the strong force, one of four fundamental forces in the universe that govern how particles interact and behave. The strong force binds quarks into particles such as protons, neutrons and atomic nuclei.

As its name suggests, the strong force is the strongest—it’s 100 times stronger than the electromagnetic force (which binds electrons into atoms), 10,000 times stronger than the weak force (which governs radioactive decay), and a hundred million million million million million million (10³⁹) times stronger than gravity (which attracts you to the Earth and the Earth to the sun).

But this ratio shifts when the particles are pumped full of energy. Just as real glue loses its stickiness when overheated, the strong force carried by gluons becomes weaker at higher energies.

“Particles play by an evolving set of rules,” says Markus Wobisch from Louisiana Tech University. “The strength of the forces and their influence within the subatomic world changes as the particles’ energies increase. This is a fundamental parameter in our understanding of matter, yet has not been fully investigated by scientists at high energies.”
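
That “evolving set of rules” has a precise one-loop form in QCD: the strong coupling measured at the Z-boson mass can be evolved to other energy scales with the standard textbook running formula. A sketch (assuming five active quark flavors throughout, which is an approximation):

```python
import math

def alpha_s(q_gev, alpha_mz=0.118, m_z=91.19, n_flavors=5):
    """One-loop running of the QCD coupling, evolved from its measured
    value at the Z mass: alpha(Q) = alpha(MZ) / (1 + alpha(MZ)*b0*ln(Q^2/MZ^2)),
    with b0 = (33 - 2*nf) / (12*pi). Standard textbook formula."""
    b0 = (33 - 2 * n_flavors) / (12 * math.pi)
    return alpha_mz / (1 + alpha_mz * b0 * math.log(q_gev**2 / m_z**2))

low = alpha_s(10.0)      # the coupling is stronger at low energies...
high = alpha_s(1500.0)   # ...and weaker at the 1.5 TeV scale probed at the LHC

print(low, high)
assert high < alpha_s(91.19) < low  # asymptotic freedom in action
```

This weakening with energy—asymptotic freedom—is the precise sense in which the "glue" loses its stickiness when heated.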

Characterizing the cohesiveness of the strong force is one of the key ingredients to understanding the formation of particles after the Big Bang and could even provide hints of new physics, such as hidden extra dimensions.

“Extra dimensions could help explain why the fundamental forces vary dramatically in strength,” says Lee Sawyer, a professor at Louisiana Tech University. “For instance, some of the fundamental forces could only appear weak because they live in hidden extra dimensions and we can’t measure their full strength. If the strong force is weaker or stronger than expected at high energies, this tells us that there’s something missing from our basic model of the universe.”

By studying the high-energy collisions produced by the LHC, the research team at Louisiana Tech University is characterizing how the strong force pulls energetic quarks into encumbered particles. The challenge they face is that quarks are rambunctious and caper around inside the particle detectors. This subatomic soirée involves hundreds of particles, often arising from about 20 proton-proton collisions happening simultaneously. It leaves a messy signal, which scientists must then reconstruct and categorize.

Wobisch and his colleagues developed a new method to study these rowdy groups of quarks, called jets. By measuring the angles and orientations of the jets, he and his colleagues are learning important new information about what transpired during the collisions—more than what they can deduce by simply counting the jets.

The average number of jets produced by proton-proton collisions directly corresponds to the strength of the strong force in the LHC’s energetic environment.

“If the strong force is stronger than predicted, then we should see an increase in the number of proton-proton collisions that generate three jets. But if the strong force is actually weaker than predicted, then we’d expect to see relatively more collisions that produce only two jets. The ratio between these two possible outcomes is the key to understanding the strong force.”
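
The logic behind that ratio can be caricatured in a few lines: at leading order, producing each additional jet costs one power of the strong coupling, so the three-jet-to-two-jet ratio tracks its strength. The event counts and proportionality constant below are hypothetical, purely to show the shape of the argument—a real analysis computes the constant from full QCD predictions:

```python
def alpha_s_from_jet_counts(n_three_jet, n_two_jet, k=1.0):
    """Toy extraction: at leading order each extra jet costs one power of
    the strong coupling, so R32 = N3 / N2 = k * alpha_s. The constant k
    bundles all the theory input and is set to 1 here purely for
    illustration."""
    r32 = n_three_jet / n_two_jet
    return r32 / k

# Hypothetical event counts, not real ATLAS data:
estimate = alpha_s_from_jet_counts(n_three_jet=11_000, n_two_jet=100_000)
print(estimate)  # a stronger coupling would push this ratio up
```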

With the LHC running, scientists have doubled their energy reach and determined the strength of the strong force up to 1.5 trillion electronvolts—roughly the average energy of particles in the universe just after the Big Bang. Wobisch and his team hope to double this number again with more data.

“So far, all our measurements confirm our predictions,” Wobisch says. “More data will help us look at the strong force at even higher energies, giving us a glimpse as to how the first particles formed and the microscopic structure of space-time.”

## March 09, 2017

### Marco Frasca - The Gauge Connection

Quote of the day

“Bad men need nothing more to compass their ends, than that good men should look on and do nothing.”

John Stuart Mill

Filed under: Quote

## March 07, 2017

### Symmetrybreaking - Fermilab/SLAC

Researchers face engineering puzzle

How do you transport 70,000 tons of liquid argon nearly a mile underground?

Nearly a mile below the surface of Lead, South Dakota, scientists are preparing for a physics experiment that will probe one of the deepest questions of the universe: Why is there more matter than antimatter?

To search for that answer, the Deep Underground Neutrino Experiment, or DUNE, will look at minuscule particles called neutrinos. A beam of neutrinos will travel 800 miles through the Earth from Fermi National Accelerator Laboratory to the Sanford Underground Research Facility, headed for massive underground detectors that can record traces of the elusive particles.

Because neutrinos interact with matter so rarely and so weakly, DUNE scientists need a lot of material to create a big enough target for the particles to run into. The most widely available (and cost effective) inert substance that can do the job is argon, a colorless, odorless element that makes up about 1 percent of the atmosphere.

The researchers also need to place the detector full of argon far below Earth’s surface, where it will be protected from cosmic rays and other interference.

“We have to transfer almost 70,000 tons of liquid argon underground,” says David Montanari, a Fermilab engineer in charge of the experiment’s cryogenics. “And at this point we have two options: We can either transfer it as a liquid or we can transfer it as a gas.”

Either way, this move will be easier said than done.

### Liquid or gas?

The argon will arrive at the lab in liquid form, carried inside of 20-ton tanker trucks. Montanari says the collaboration initially assumed that it would be easier to transport the argon down in its liquid form—until they ran into several speed bumps.

Transporting liquid vertically is very different from transporting it horizontally for one important reason: pressure. The bottom of a mile-tall pipe full of liquid argon would have a pressure of about 3000 pounds per square inch—equivalent to 200 times the pressure at sea level. According to Montanari, to keep these dangerous pressures from occurring, multiple de-pressurizing stations would have to be installed throughout the pipe.
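
The quoted figure follows from the ordinary hydrostatic formula P = ρgh. A quick check with round numbers (the liquid argon density of about 1400 kg/m³ is an assumed textbook value, not from the article):

```python
def hydrostatic_pressure_psi(density_kg_m3, depth_m, g=9.81):
    """Gauge pressure at the bottom of a liquid column: P = rho * g * h,
    converted from pascals to pounds per square inch."""
    pascals = density_kg_m3 * g * depth_m
    return pascals / 6894.76  # pascals per psi

# Liquid argon is roughly 1400 kg/m^3; a mile is about 1609 m.
p_psi = hydrostatic_pressure_psi(1400, 1609)
p_atm = p_psi / 14.7  # in sea-level atmospheres

print(round(p_psi), round(p_atm))  # on the order of 3000 psi, ~200 atm
```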

Even with these depressurizing stations, safety would still be a concern. While argon is non-toxic, if released into the air, it could displace the oxygen. In the event of a leak, pressurized liquid argon would spill out and could potentially break its vacuum-sealed pipe, expanding rapidly to fill the mine as a gas. One liter of liquid argon would become about 800 liters of argon gas, or four bathtubs’ worth.
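
That roughly 800-fold expansion is simply the ratio of the liquid's density to the gas's. A quick sketch with assumed round densities:

```python
def expansion_ratio(liquid_density, gas_density):
    """Volume multiplication when a boiling liquid becomes gas:
    the ratio of the two densities."""
    return liquid_density / gas_density

# Approximate densities in kg/m^3 (liquid at its boiling point, gas at
# room temperature and pressure) -- assumed round values, not from the article.
ratio = expansion_ratio(1400, 1.7)
liters_of_gas = 1 * ratio  # from one liter of spilled liquid

print(round(ratio))  # in the ballpark of the ~800x quoted above
```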

Even without a leak, perhaps the most important challenge in transporting liquid argon is preventing it from evaporating into a gas along the way, according to Montanari.

To remain a liquid, argon is kept below a brisk temperature of minus 180 degrees Celsius (minus 300 degrees Fahrenheit).

“You need a vacuum-insulated pipe that is a mile long inside a mine shaft,” Montanari says. “Not exactly the most comfortable place to install a vacuum-insulated pipe.”

To avoid these problems, the cryogenics team made the decision to send the argon down as gas instead.

Routing the pipes containing liquid argon through a large bath of water will warm the argon enough to turn it into gas, which can then travel down a standard pipe. Re-condensers located underground, acting as massive air conditioners, will then cool the gas until it becomes a liquid again.

“The big advantage is we no longer have vacuum-insulated pipe,” Montanari says. “It is just a straight piece of pipe.”

Argon gas poses much less of a safety hazard because it is about 1000 times less dense than liquid argon. High pressures would be unlikely to build up and necessitate depressurizing stations, and if a leak occurred, the gas would not expand as much, making the same kind of oxygen deficiency far less likely.

The process of filling the detectors with argon will take place in four stages that will take almost two years, Montanari says. This is due to the amount of available cooling power for re-condensing the argon underground. There is also a limit to the amount of argon produced in the US every year, of which only so much can be acquired by the collaboration and transported to the site at a time.

Illustration by Ana Kova

Once filled, the liquid argon detectors will pick up light and electrons produced by neutrino interactions.

Part of what makes neutrinos so fascinating to physicists is their habit of oscillating from one flavor—electron, muon or tau—to another. The parameters that govern this “flavor change” are tied directly to some of the most fundamental questions in physics, including why there is more matter than antimatter. With careful observation of neutrino oscillations, scientists in the DUNE collaboration hope to unravel these mysteries in the coming years.

“At the time of the Big Bang, in theory, there should have been equal amounts of matter and antimatter in the universe,” says Eric James, DUNE’s technical coordinator. That matter and antimatter should have annihilated, leaving behind an empty universe. “But we became a matter-dominated universe.”

James and other DUNE scientists will be looking to neutrinos for the mechanism behind this matter favoritism. Although the fruits of this labor won’t appear for several years, scientists are looking forward to being able to make use of the massive detectors, which are hundreds of times larger than current detectors that hold only a few hundred tons of liquid argon.

Currently, DUNE scientists and engineers are working at CERN to construct Proto-DUNE, a miniature replica of the DUNE detector filled with only 300 tons of liquid argon that can be used to test the design and components.

“Size is really important here,” James says. “A lot of what we’re doing now is figuring out how to take those original technologies which have already been developed... and taking them to this next level with bigger and bigger detectors.”

## March 02, 2017

### Symmetrybreaking - Fermilab/SLAC

Hey Fermilab, it’s a Monkee

Micky Dolenz, best known as a vocalist and drummer in 1960s pop band The Monkees, turns out to be one of Fermi National Accelerator Laboratory’s original fans.

“Dear Ms. Higgins,” began the email to an employee of Fermi National Accelerator Laboratory. “My name is Micky Dolenz. I am in the entertainment business and probably best known for starring in a ’60s TV show called The Monkees. I have also been a big fan of particle physics for many decades.”

The message, which laboratory archivist Valerie Higgins received in November 2016, was legit. And it turns out Dolenz wasn’t kidding about his love of physics. Dolenz visited Fermilab on February 10 and impressed and amazed the scientists he met with his knowledge of (and genuine affection for) the science of quarks, leptons and bosons. Dolenz was, by all accounts, just as excited to meet with Fermilab scientists as they were to meet with him.

“He was so enthusiastic about the lab,” Higgins says. “It was such a treat to see someone of his stature and popularity be so interested and knowledgeable about our kind of physics.”

Previously unbeknownst to most of the lab’s employees, Dolenz’s association with Fermilab actually stretches back more than 40 years. The last time Dolenz visited Fermilab, the year was 1970. The Monkees TV show had wound down, and Dolenz, then 25, was starring in a play called Remains to Be Seen at the Pheasant Run Playhouse in nearby St. Charles, Illinois. Fermilab wasn’t even called Fermilab yet—it still went by the name National Accelerator Laboratory.

Dolenz says he remembers his first visit well. At the time, the lab consisted of a few trailers and bungalows—Fermilab’s now-iconic high-rise building, Wilson Hall, would not be completed until 1973. Dolenz had lunch with several of the scientists then toured the construction site for the Main Ring, the future home of Fermilab’s first superconducting accelerator, the Tevatron.

Dolenz captured some of his visit on 16mm film, footage he says he still has in storage. Dolenz called his previous tour of Fermilab “wonderful” and “a dream come true.”

Dolenz credits a junior high science teacher with sparking his interest in physics. He spent much of his childhood in Los Angeles building oscilloscopes and transceivers for ham radios and other gadgets. “I was always curious, always building stuff,” he says. “While the other kids were reading Superman comics, I was reading Science News. I loved it all, particularly particle physics and quantum physics.”

Dolenz was in training to be an architect, but at age 20, the Monkees audition offered him the opportunity to catapult to worldwide fame as a TV star and musician instead. (“I’m not an idiot,” he says of accepting the role.) Still, he maintained his interest in science—his first email address, created in the 1990s, was “Higgs137,” referencing both the then-undiscovered Higgs boson and 137, the approximate inverse of the fine-structure constant.

Fermilab Director Nigel Lockyer, left, and Deputy Director Joe Lykken, right, talk with Monkee Micky Dolenz during his tour.

Photo by Reidar Hahn, Fermilab

That interest in science has remained strong, Fermilab physicists noted during the February tour. Dolenz toured the underground cavern that houses detectors for the MINOS, NOvA and MINERvA neutrino experiments, the Muon g-2 experiment hall (where scientists played the theme from The Monkees when he walked in), and the DZero detector in the long-since completed Main Ring. He also spent time in three control rooms.

In every location, he impressed the scientists he met with his understanding of physics and his full-on joy at seeing science in action.

“Who knew he is a life-long physics aficionado?” says scientist Adam Lyon, who gave Dolenz his Tevatron tour. “I had a great time talking with him.”

Dolenz says he sees plenty of connection between his twin interests of physics and music, noting that Einstein played the violin; Richard Feynman played bongos; and Queen guitarist Brian May is an astrophysicist on several experimental collaborations.

“According to theory the universe is constantly vibrating, down to even the smallest particles,” Dolenz says. “We talked a lot about vibrations in the ’60s, and Eastern philosophy has been talking about the vibration of the universe for thousands of years. Music is vibration and meter and frequency. There’s a lot of overlap.”

Dolenz enjoyed his time at Fermilab so much that he hung out at the lab’s on-site pub until late in the evening, chatting with scientists. And according to Higgins, who spent the most time with him, he’s hoping to return very soon.

“He’s still looking for the footage he shot in 1970, and plans to donate that to the archive,” she says. “But I told him he’s welcome here anytime.”

Monkee Micky Dolenz stands by a model particle accelerator with Fermilab physicist Herman White and Fermilab Director of Communication Katie Yurkewicz.

Photo by Reidar Hahn, Fermilab

## February 28, 2017

### Symmetrybreaking - Fermilab/SLAC

How to build a universe

Our universe should be a formless fog of energy. Why isn’t it?

According to the known laws of physics, the universe we see today should be dark, empty and quiet. There should be no stars, no planets, no galaxies and no life—just energy and simple particles diffusing further and further into an expanding universe.

And yet, here we are.

Cosmologists calculate that roughly 13.8 billion years ago, our universe was a hunk of thick, hot energy with no boundaries and its own rules. But then, in less than a microsecond, it matured, and the fundamental laws and properties of matter arose from the pandemonium. How did our elegant and intricate universe emerge?

Illustration by Corinne Mucha

### The three conditions

The question “How is it here?” alludes to a conundrum that arose during the development of quantum mechanics.

In 1928 Paul Dirac combined quantum theory and special relativity to predict the energy of an electron moving near the speed of light. But his equations produced two equally favorable answers: one positive and one negative. Because energy itself cannot be negative, Dirac mused that perhaps the two answers represented the particle’s two possible electric charges. The idea of oppositely charged matter-antimatter pairs was born.
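
The two answers trace back to the relativistic energy-momentum relation, which is quadratic in the energy and therefore has both a positive and a negative root:

```latex
% Energy of a free relativistic particle of mass m and momentum p:
E^2 = p^2 c^2 + m^2 c^4
\quad\Longrightarrow\quad
E = \pm\sqrt{\,p^2 c^2 + m^2 c^4\,}
```

Ordinarily the negative root would simply be discarded as unphysical; Dirac's equation forced it to be taken seriously.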

Meanwhile, about six minutes away from Dirac’s office in Cambridge, physicist Patrick Blackett was studying the patterns etched in cloud chambers by cosmic rays. In 1933 he detected 14 tracks that showed a single particle of light colliding with an air molecule and bursting into two new particles. The spiral tracks of these new particles were mirror images of each other, indicating that they were oppositely charged. This was one of the first observations of what Dirac had predicted five years earlier—the birth of an electron-positron pair.

Today it’s well known that matter and antimatter are the ultimate wonder twins. They’re spontaneously born from raw energy as a team of two and vanish in a silent poof of energy when they merge and annihilate. This appearing-disappearing act spawned one of the most fundamental mysteries in the universe: What is engraved in the laws of nature that saved us from the broth of appearing and annihilating particles of matter and antimatter?

“We know this cosmic asymmetry must exist because here we are,” says Jessie Shelton, a theorist at the University of Illinois. “It’s a puzzling imbalance because theory requires three conditions—which all have to be true at once—to create this cosmic preference for matter.”

In the 1960s physicist Andrei Sakharov proposed this set of three conditions that could explain the appearance of our matter-dominated universe. Scientists continue to look for evidence of these conditions today.

Illustration by Corinne Mucha

### 1. Breaking the tether

The first problem is that matter and antimatter always seem to be born together. Just as Blackett observed in the cloud chambers, uncharged energy transforms into evenly balanced matter-antimatter pairs. Charge is always conserved through any transition. For there to be an imbalance in the amounts of matter and antimatter, there needs to be a process that creates more of one than the other.

“Sakharov’s first criterion essentially says that there must be some new process that converts antimatter into matter, or vice versa,” says Andrew Long, a postdoctoral researcher in cosmology at the University of Chicago. “This is one of the things experimentalists are looking for in the lab.”

In the 1980s, scientists searched for evidence of Sakharov’s first condition by looking for signs of a proton decaying into a positron and two photons. They have yet to find evidence of this modern alchemy, but they continue to search.

“We think that the early universe could have contained a heavy neutral particle that sometimes decayed into matter and sometimes decayed into antimatter, but not necessarily into both at the same time,” Long says.

Illustration by Corinne Mucha

### 2. Picking a favorite

Matter and antimatter cannot coexist; they always annihilate when they come into contact. But the creation of just a little more matter than antimatter after the Big Bang—about one part in 10 billion—would leave behind the ingredients needed to build the entire visible universe.
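
The one-part-in-10-billion bookkeeping works out like this (an idealized picture in which each matter-antimatter pair annihilates into two photons):

```python
# The quoted asymmetry: roughly one extra matter particle
# for every 10 billion matter-antimatter pairs.
antimatter = 10_000_000_000
matter = antimatter + 1          # the one-part-in-10-billion surplus

survivors = matter - antimatter            # everything else annihilates
photons_per_survivor = 2 * antimatter      # idealized: two photons per pair

print(survivors, photons_per_survivor)
```

That single survivor per ten billion annihilations is, in this picture, the raw material of every star, planet and person.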

How could this come about? Sakharov’s second criterion dictates that the matter-only process outlined in his first criterion must be more efficient than the opposing antimatter process. And specifically, “we need to see a favoritism for the right kinds of matter to agree with astronomical observations,” Shelton says.

Observations of light left over from the early universe and measurements of the first lightweight elements produced after the Big Bang show that the discrepancy must exist in a class of particles called baryons: protons, antiprotons and other particles constructed from quarks.

“These are snapshots of the early universe,” Shelton says. “From these snapshots, we can derive the density and temperature of the early universe and calculate the slight difference between the number of baryons and antibaryons.”

But this slight difference presents a problem. While there are some tiny discrepancies between the behavior of particles and their antiparticle counterparts, these idiosyncrasies are still consistent with the Standard Model and are not enough to explain the origin of the cosmic imbalance nor the universe’s tenderness towards matter.

Illustration by Corinne Mucha

### 3. Taking a one-way street

In particle physics, any process that runs forward can just as easily run in reverse. A pair of photons can merge and morph into a particle and antiparticle pair. And just as easily, the particle and antiparticle pair can recombine into a pair of photons. This process happens all around us, continually. But because it is cyclical, there is no net gain or loss for a type of matter.

If this were always true, our young universe could have been locked in an infinite loop of creation and destruction. Without something slamming the brakes on these cycles at least for a moment, matter could not have evolved into the complex structures we see today.

“For every stitch that’s knit, there’s a simultaneous tug on the thread,” Long says. “We need a way to force the reaction to move forward and not simultaneously run in reverse at the same rate.”

Many cosmologists suspect that the gradual expansion and cooling of the universe was enough to lock matter into being, like a supersaturated sweet tea whose sugar crystals drop to the bottom of the glass as it cools (or in the “freezing” interpretation, like a sweet tea that instantly freezes into ice, locking sugar crystals in place without giving them a chance to dissolve).

Other cosmologists think that the plasma of the early universe may have contained bubbles that helped separate matter and antimatter (and then served as incubators for particles to acquire mass).

Several experiments at CERN are looking for evidence that the universe meets Sakharov’s three conditions. For instance, several precision experiments at CERN’s Antimatter Factory are looking for minuscule differences between the intrinsic characteristics of protons and antiprotons. The LHCb experiment at the Large Hadron Collider is examining the decay patterns of unstable matter and antimatter particles.

Shelton and Long both hope that more research from experiments at the LHC will be the key to building a more complete picture of our early universe.

LHC experiments could discover that the Higgs field served as the lock that halted the early universe’s perpetually evolving and devolving particle soup—especially if the field contained bubbles that froze faster than others, providing cosmic petri dishes in which matter and antimatter could evolve differently, Long says. “More measurements of the Higgs boson and the fundamental properties of matter and antimatter will help us develop better theories and a better understanding of what and where we come from.”

What exactly transpired during the birth of our universe may always remain a bit of an enigma, but we continue to seek new pieces of this formidable puzzle.