Particle Physics Planet

October 17, 2018

Robert Helling - atdotde

Bavarian electoral system
Last Sunday, we had the election for the federal state of Bavaria. Since the electoral system is kind of odd (though not as odd as first past the post), I would like to analyse how some variations of the rules (assuming the actual distribution of votes) would have worked out. So first, here is how the seats are actually distributed: each voter gets two ballots. On the first ballot, each party lists one candidate from the local constituency and you can select one. On the second ballot, you vote for a party list (it's even more complicated, because there too you can select individual candidates to determine their position on the list, but let's ignore that for today).

Then, in each constituency, the votes on ballot one are counted. The candidate with the most votes (as in first past the post) is elected to parliament directly (and is called a "direct candidate"). Then, overall, the votes for each party on both ballots (this is where the system differs from federal elections) are summed up. All votes for parties with less than 5% of the grand total of all votes are discarded (actually including their direct candidates, but this is of no practical concern here). Let's call the rest the "reduced total". The seats are then distributed according to each party's fraction of this reduced total.
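The threshold step can be sketched in a few lines of Python (a hypothetical helper, not the author's Perl script): parties below 5% of the grand total are dropped, and the survivors form the reduced total.

```python
def reduced_total(votes, threshold=0.05):
    """Drop parties below the threshold share of the grand total.

    votes: dict mapping party name -> combined first+second ballot votes.
    Returns the surviving parties, whose votes form the "reduced total".
    """
    grand_total = sum(votes.values())
    return {p: v for p, v in votes.items() if v / grand_total >= threshold}
```

With made-up numbers, a party at 4% of the grand total would be removed while the others keep their raw vote counts.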

Of course, the first problem is that you can only distribute seats in integer multiples of one. This is solved using the Hare-Niemeyer method: you first distribute the integer parts, which clearly leaves fewer open seats than there are parties. Those you then give to the parties whose rounding error to the integer below was greatest. Check out the Wikipedia page explaining how this can lead to a party losing seats when the total number of seats available is increased.
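The Hare-Niemeyer (largest remainder) method itself is easy to sketch; the following Python function (illustrative, not the author's Perl code) distributes the integer parts first and then hands the leftover seats to the largest fractional remainders.

```python
def hare_niemeyer(votes, seats):
    """Allocate `seats` among parties by the Hare-Niemeyer (largest remainder) method.

    votes: dict mapping party -> vote count; returns dict mapping party -> seats.
    """
    total = sum(votes.values())
    # Ideal (fractional) share of seats for each party.
    quotas = {p: seats * v / total for p, v in votes.items()}
    # Step 1: every party gets the integer part of its quota.
    alloc = {p: int(q) for p, q in quotas.items()}
    # Step 2: the remaining seats go to the largest fractional remainders.
    leftover = seats - sum(alloc.values())
    by_remainder = sorted(votes, key=lambda p: quotas[p] - alloc[p], reverse=True)
    for p in by_remainder[:leftover]:
        alloc[p] += 1
    return alloc
```

For instance, with vote shares 47/33/20 and 10 seats, the quotas are 4.7, 3.3 and 2.0, so the single leftover seat goes to the party with remainder 0.7.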

Because this is what happens in the next step: remember that we already allocated a number of seats to constituency winners in the first round. Those count towards the number of seats that each party is supposed to get in step two according to its fraction of votes. Now it can happen that a party has won more direct candidates than the seats allocated to it in step two. If that happens, seats are added to the total number of seats and distributed according to the rules of step two until each party has been allocated at least as many seats as it has direct candidates. This happens in particular when one party is stronger than all the others, leading to that party winning almost all direct candidates (in Bavaria this happened to the CSU, which won all direct candidates except five in Munich and one in Würzburg, which were won by the Greens).
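The seat-expansion rule can be sketched as a loop that grows the house size until the proportional allocation covers every party's direct mandates. This is a hypothetical Python sketch with made-up party names, not the actual Bavarian implementation:

```python
def largest_remainder(votes, seats):
    # Hare-Niemeyer: integer parts first, then largest fractional remainders.
    total = sum(votes.values())
    quotas = {p: seats * v / total for p, v in votes.items()}
    alloc = {p: int(q) for p, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    for p in sorted(votes, key=lambda q: quotas[q] - alloc[q], reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

def allocate_with_overhang(votes, direct_seats, base_seats):
    """Grow the house until each party's proportional share covers its direct mandates."""
    size = base_seats
    while True:
        alloc = largest_remainder(votes, size)
        if all(alloc[p] >= direct_seats.get(p, 0) for p in votes):
            return size, alloc
        size += 1
```

With 40% of the votes but 5 direct mandates out of a notional 10 seats, a party forces the house to grow until its proportional allocation reaches 5, which also hands extra seats to the other parties along the way.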

A final complication is that Bavaria is split into seven electoral districts, and the above procedure is applied in each district separately. So the rounding and seat-adding procedure happens seven times.

Sunday's election resulted in the following distribution of seats: after the whole procedure, there are 205 seats, distributed as follows:

• CSU 85 (41.5% of seats)
• SPD 22 (10.7% of seats)
• FW 27 (13.2% of seats)
• GREENS 38 (18.5% of seats)
• FDP 11 (5.4% of seats)
• AFD 22 (10.7% of seats)

Now, for example, one can calculate the distribution without districts, throwing everything into a single super-district. Then there are 208 seats, distributed as follows:

• CSU 85 (40.8%)
• SPD 22 (10.6%)
• FW 26 (12.5%)
• GREENS 40 (19.2%)
• FDP 12 (5.8%)
• AFD 23 (11.1%)

You can see that the CSU in particular, the party with the biggest number of votes, profits from doing the rounding seven times rather than just once, while the last three parties would benefit from giving up districts.

But then there is actually an issue of negative vote weight: the Greens are particularly strong in Munich, where they managed to win five direct seats. If those seats had instead gone to the CSU (as elsewhere), the number of seats for Oberbayern, the district Munich belongs to, would have had to be increased to accommodate those additional direct candidates for the CSU. This would have increased the weight of Oberbayern compared to the other districts, which would in turn have benefited the Greens, as they are particularly strong in Oberbayern. So if I give all the direct candidates to the CSU (without modifying the total numbers of votes), I get the following distribution:
221 seats

• CSU 91 (41.2%)
• SPD 24 (10.9%)
• FW 28 (12.6%)
• GREENS 42 (19.0%)
• FDP 12 (5.4%)
• AFD 24 (10.9%)

That is, the Greens would have gotten a higher fraction of seats if they had won fewer constituencies. Voting for Green candidates in Munich actually hurt the party as a whole!

The effect is not big enough to actually change majorities (CSU and FW are likely to form a coalition), but still, the constitutional court does not like (predictable) negative vote weight. Let's see if somebody challenges this election and what that would lead to.

The Perl script I used to do this analysis is here.

Postscript:
The above analysis in the last point is not entirely fair: not winning a constituency means getting fewer votes, which are then missing from the grand total. Taking this into account makes the effect smaller. In fact, subtracting from the Greens the votes by which they led in the constituencies they won leads to an almost zero effect:

Seats: 220

• CSU 91 (41.4%)
• SPD 24 (10.9%)
• FW 28 (12.7%)
• GREENS 41 (18.6%)
• FDP 12 (5.4%)
• AFD 24 (10.9%)

Letting the Greens win München Mitte (a newly created constituency that was supposed to act like a bad bank for the CSU, taking up all the more left-leaning voters of central Munich; do I hear somebody say "gerrymandering"?) yields

Seats: 217

• CSU 90 (41.5%)
• SPD 23 (10.6%)
• FW 28 (12.9%)
• GREENS 41 (18.9%)
• FDP 12 (5.5%)
• AFD 23 (10.6%)

Or letting them win all constituencies but Moosach and Würzburg-Stadt, where their leads were smallest:

Seats: 210

• CSU 87 (41.4%)
• SPD 22 (10.5%)
• FW 27 (12.9%)
• GREENS 40 (19.0%)
• FDP 11 (5.2%)
• AFD 23 (11.0%)

Peter Coles - In the Dark

R.I.P. Michael J. Thompson

I just returned from giving a lecture to find an email in my inbox delivering the awful news that Professor Michael J. Thompson passed away unexpectedly on 15th October 2018.

Thompson’s scientific research was primarily in the fields of helioseismology, asteroseismology, solar physics, and inverse problems. He worked extensively on developing and applying inverse techniques in helioseismology and, in particular, on measuring the stratification, rotation, and large-scale flows of the solar interior.

Along with being the Deputy Director and Chief Operating Officer of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, Michael Thompson was also an NCAR senior scientist. From 2010 to 2014 he directed the High Altitude Observatory and was an associate director of NCAR. Prior to joining NCAR, Mike Thompson was Head of the School of Mathematics and Statistics at the University of Sheffield, and was formerly a Professor of Physics at Imperial College London. I knew him very well personally from the time before that, as we worked together for many years in the Astronomy Unit in the School of Mathematical Sciences at Queen Mary, University of London. Although we didn’t work on the same topics, we had many interesting discussions about inverse problems in our respective fields, as well as many other topics (especially cricket). I recall that he taught an MSc course on cosmology, which he volunteered to do as he was interested in learning more about the subject.

More recently we kept in touch regularly via dinners at the RAS Club, of which he was a member, until he moved to the USA. It is through the Club mailing list that I learnt the sad news of his death.

I know I can speak for everyone who knew him in saying that he was not only a first-rate scientist but also a greatly valued colleague and an extremely nice man. Mike will be greatly missed by everyone who knew him, and I extend my deepest condolences to his family at what must be a very difficult time.

Peter Coles - In the Dark

The Blue of the Night: Giant Steps from Ondine

Time for a quick lunchtime post before I settle down to an afternoon of marking coursework.

On Monday evening after finishing preparing my lectures and things for Tuesday, I decided to tune in for a while to The Blue of the Night on RTÉ Lyric FM which is presented by Bernard Clarke. This is a programme that I listen to quite often in the evenings as I enjoy its eclectic mix of music.

Anyway, the Blue of Monday Night included a recording of the movement Ondine from the piano suite Gaspard de la Nuit by Maurice Ravel. As I listened to it, I started to think of an entirely different piece, the jazz classic Giant Steps, by John Coltrane (which I’ve actually posted on this blog here). Not really expecting anything to come of it, I sent a message on Twitter to Bernard Clarke mentioning the fact that the Ravel piece reminded me of Giant Steps. A few minutes later I was astonished to hear Giant Steps playing. Bernard had not only replied to me on Twitter, but had slipped the Coltrane track into the programme. Which was nice.

That confirmed the similarity in my mind and I did some frantic Googling to see if anyone else had noticed the similarity. Of course they have. In a rather dense article about music theory (most of which I don’t understand, having never really studied this properly) I found this:

I didn’t know at first what the up and down arrows annotating the two pieces were, but they represent the harmonic progression in a very interesting way that I had never thought about before. The assertion is that, in some sense, the (sub-dominant) IV and (dominant) V chords, which are very common in popular music, are closely related. To see why, imagine you play C on a piano keyboard. If you go 7 semitones to the right you arrive at G, which is the root note of the relevant V chord. That’s up a perfect fifth. But if instead you go 7 semitones to the left you get to F, which is a fifth down, but is also a perfect fourth if looked at from the point of view of the C an octave below where you started. In this way an ‘up’ arrow represents a perfect fifth up (or a perfect fourth down), while the ‘down’ arrow is a perfect fifth down (or a perfect fourth up). This is deemed to be the basic (or ‘simple proper’) chord progression.
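The semitone arithmetic above can be checked with pitch classes taken modulo 12; this little Python sketch (the note table and function name are illustrative) shows that 7 semitones up from C gives G, while 7 semitones down lands on the pitch class F, the same as a perfect fourth up.

```python
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def transpose(note, semitones):
    # Pitch classes are taken modulo 12 (octave equivalence).
    return NOTES[(NOTES.index(note) + semitones) % 12]

fifth_up = transpose('C', 7)     # 7 semitones up: G
fifth_down = transpose('C', -7)  # 7 semitones down: pitch class F
fourth_up = transpose('C', 5)    # 5 semitones up: also F
```

So "down a fifth" and "up a fourth" are the same move on pitch classes, which is exactly why the two arrow directions suffice.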

Single or double arrows to left or right represent substitutions of various kinds (e.g. a minor third), but I won’t go further into the details. The key point is that, while the actual chords differ after the first few changes because of the different substitutions, the chord progression in these two pieces is remarkably similar judged by the sequence of arrows. The main exception is a different substitution in bar 3 of the Coltrane excerpt. Both pieces end up achieving the same thing: they complete an entire chromatic cycle through a sequence of basic progressions and substitutions.

I don’t know whether Coltrane was directly inspired by listening to Ravel or whether they both hit on the same idea independently, but I find this totally fascinating. So much so that I’ll probably end up trying to annotate some of the chord changes I’ve worked out from other recordings and see what they look like in the notation outlined above.

Jon Butterworth - Life and Physics

Dark Matter, U2 and NASA wee
A Science Shambles Blog Network podcast special, recorded a couple of weeks ago in a basement in Bloomsbury. Maybe all the maths is just a better metaphor than a sock.

October 16, 2018

Christian P. Robert - xi'an's og

back to the Bayesian Choice

Surprisingly (or not?!), I received two requests about some exercises from The Bayesian Choice: one from a group of students from McGill having difficulties solving the above, wondering about the properness of the posterior (but missing the integration over x), to whom I sent back this correction; and another one from the Czech Republic about a difficulty with the term “evaluation”, by which I meant (pardon my French!) estimation.

Peter Coles - In the Dark

Just finished today’s teaching so I thought I’d chill for a few minutes and pass on a few quick updates about the Open Journal of Astrophysics, which was formally (re)launched last week.

The first thing is that at the weekend I sent an online training video and guide around the members of the Editorial Board and introduced them all to the new platform’s messaging system, which is a very convenient way for us to keep in touch. I had lots of volunteers for the Editorial Board and I couldn’t select everyone but I tried to choose members with a good geographical distribution, spread of expertise, and gender balance. We may add more in due course, as we’re still quite cosmologist-heavy, but I think we have enough to get started: we have editors in Australia, France, Italy, United States of America and Mexico as well as the United Kingdom.

We have received some submissions already and are dealing with them through the new platform, which is requiring the Editors to engage in some ‘on-the-job’ training. Hopefully they’ll get the hang of it soon!

Another relevant piece of news is that we have updated the DOIs associated with the papers we published with the old platform to point to the new site so they are now fully incorporated. For the record these are:

10.21105/astro.1708.00605

10.21105/astro.1603.07299

10.21105/astro.1602.02113

10.21105/astro.1502.04020

I’ll also take this opportunity to remind you that the Open Journal of Astrophysics is open for new submissions, so please feel free to give it a try!

Finally, I’d like to point you to an article about Open Access Publishing in the latest Physics Today, which begins

Publishers of scientific journals are facing renewed threats to their business models from both sides of the Atlantic.

You better believe it!

CERN Bulletin

Interfon

Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.

Christian P. Robert - xi'an's og

severe testing or severe sabotage? [not a book review]

Last week, I received this new book by Deborah Mayo, which I was looking forward to reading and annotating! But, thrice alas, the book had been sabotaged: except for the preface and acknowledgements, the entire book is printed upside down [a minor issue, since it affects the entire book consistently] and with part of the text cut off on each side [a few letters each time, but enough to make reading a chore!]. I am thus waiting for a tested copy of the book to start reading it in earnest!

Emily Lakdawalla - The Planetary Society Blog

Heiligenschein Throughout the Solar System
When planetary scientist Brittney Cooper was scrolling through the downlinked images of Hayabusa2’s approach to asteroid Ryugu, a familiar sight caught her attention.

CERN Bulletin

GAC-EPA

The GAC holds drop-in sessions with individual meetings on the last Tuesday of every month.

The next session will take place on:

Tuesday 30 October from 1.30 pm to 4.00 pm

Staff Association meeting room

The sessions of the Pensioners’ Group are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement.

We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/

Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

CERN Bulletin

Information from the GAC-EPA Committee for CERN pensioners concerned by withholding tax at source in France.

Here, on the basis of the information available to us to date, are some answers to the questions our retired colleagues residing in France have been asking about the withholding at source that comes into force on 1 January 2019:

You will continue, as in the past, to file a declaration of your income between April and June of each year; this declaration allows the tax authorities to calculate your tax rate.

The total amount of your tax will not change.

If you receive a pension of French origin, the paying body will have received your tax rate from the tax authorities and will deduct the corresponding tax directly: since withholding at source on this French pension will thus have been carried out, you will receive less than you do today.

The CERN Pension Fund, being a body established outside France and therefore not subject to French legislation, will make no deduction: your CERN pension will be paid, unchanged, into your Swiss account as it is today. One twelfth of the tax corresponding to this CERN pension, resulting from your tax rate, will be debited each month by the tax authorities from the French bank account you have provided them: this is the monthly instalment.

If you had previously opted for monthly payment of your tax, the change will be minimal: instead of a payment from your French account spread over the first 10 months of the year, it will be spread over 12 months.

Any adjustments (tax credits and reductions, regularisations, etc.) will be made following your income declaration. The two key figures, the tax rate and the monthly instalment for 2019, appear on page 4 of your 2018 tax notice.

Also on page 4, it is indicated that you may opt for a quarterly deduction instead of the monthly one.

Finally, for those of you who must pay social contributions (CSG, CRDS, Casa) on your CERN pension, it appears that payment of these contributions is excluded from the monthly withholding at source, and that they must instead be paid in a single instalment upon receipt of the 2019 tax notice: so be careful…

These answers are indicative only.

For any official answer, please contact the personal income tax office (service des impôts des particuliers, SIP) of your place of residence.

October 15, 2018

Christian P. Robert - xi'an's og

unbiased estimation of log-normalising constants

Maxime Rischard, Pierre Jacob, and Natesh Pillai [warning: co-authors and friends of mine!] have just arXived a paper on the use of path sampling (a.k.a. thermodynamic integration) for unbiased approximation of log-constants and the resulting consequences for Bayesian model comparison by X validation. If the goal is the estimation of the log of a ratio of two constants, creating an artificial path between the corresponding distributions and looking at the derivative of the log-density at any point of this path produces an unbiased estimator. Meaning that random sampling along the path, corrected by the sampling distribution, still produces an unbiased estimator. From there the authors derive an unbiased estimator for any X validation objective function, CV(V,T)=-log p(V|T), taking m observations T in and leaving n-m observations T out… The marginal conditional log-density in the criterion is indeed estimated by an unbiased path sampler, using a powered conditional likelihood, together with unbiased MCMC schemes à la Jacob et al. for simulating unbiased MCMC realisations of the intermediary targets on the path, tuned towards an approximately constant cost for all powers.

So in all objectivity and fairness (!!!), I am quite excited by this new proposal within my favourite area! Or rather two areas, since it brings together the estimation of constants and an alternative to Bayes factors for Bayesian testing. (Although the paper does not broach the calibration of the X validation values.)

Emily Lakdawalla - The Planetary Society Blog

A Joyless 'First Man'
Space fans will enjoy the movie for its depictions of early spaceflight itself. But it avoids the richness and complexity of human experience, leaving behind awe and joy in favor of an emotional landscape as uninviting as the Moon.

Clifford V. Johnson - Asymptotia

Mindscape Interview!

And then two come along at once... Following on yesterday, another of the longer interviews I've done recently has appeared. This one was for Sean Carroll's excellent Mindscape podcast. This interview/chat is all about string theory, including some of the core ideas, its history, what that "quantum gravity" thing is anyway, and why it isn't actually a theory of (just) strings. Here's a direct link to the audio, and here's a link to the page about it on Sean's blog.

The whole Mindscape podcast has had some fantastic conversations, by the way, so do check it out on iTunes or your favourite podcast supplier!

I hope you enjoy it!!

The post Mindscape Interview! appeared first on Asymptotia.

The n-Category Cafe

Topoi of G-sets

I’m thinking about finite groups these days, from a Klein geometry perspective where we think of a group $G$ as a source of $G$-sets. Since the category of $G$-sets is a topos, this lets us translate concepts, facts and questions about groups into concepts, facts and questions about topoi. I’m not at all good at this, so here are a bunch of basic questions.

For any group $G$ the category of $G$-sets is a Boolean topos, which means basically that its internal logic obeys the principle of excluded middle.

• Which Boolean topoi are equivalent to the category of $G$-sets for some group $G$?

• Which are equivalent to the category of $G$-sets for a finite group $G$?

It might be easiest to start by characterizing the categories of $G$-sets where $G$ is a groupoid, and then add an extra condition to force $G$ to be a group.

The category $G\mathrm{Set}$ comes with a forgetful functor $U \colon G\mathrm{Set} \to \mathrm{Set}$.

• Is the group of natural automorphisms of $U$ just $G$?

This should be easy to check, I’m just feeling lazy. If some result like this is true, how come people talk so much about the Tannaka–Krein reconstruction theorem and not so much about this simpler thing? (Maybe it’s just too obvious.)

Whenever we have a homomorphism $f \colon H \to G$ we get an obvious functor

$f^\ast \colon G\mathrm{Set} \to H\mathrm{Set}$

This is part of an essential geometric morphism, which means that it has both a right and a left adjoint. By this means we can actually get a 2-functor from the 2-category of groups (yeah, it’s a 2-category, since groups can be seen as one-object categories) to the 2-category $\mathrm{Topos}_{\mathrm{ess}}$ consisting of topoi, essential geometric morphisms and natural transformations. If I’m reading the $n$Lab correctly, this makes the 2-category of groups into a full sub-2-category of $\mathrm{Topos}_{\mathrm{ess}}$. This makes it all the more interesting to know which topoi are equivalent to categories of $G$-sets.

• What properties characterize essential geometric morphisms of the form $i^\ast \colon G\mathrm{Set} \to H\mathrm{Set}$ when $i \colon H \to G$ is the inclusion of a subgroup?

Whenever we have this, we get a transitive $G$-set $G/H$, which is thus a special object in $G\mathrm{Set}$. These objects are just the atoms in $G\mathrm{Set}$: that is, the objects whose only subobjects are themselves and the initial object. Indeed $G\mathrm{Set}$ is an atomic topos, meaning that every object is a coproduct of atoms. That’s just a fancy way of saying that every $G$-set can be broken into orbits, which are transitive $G$-sets.
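The orbit decomposition is easy to compute in practice; here is a minimal Python sketch (the function name and toy action are my own, not from the post): given functions implementing the action of a generating set, it splits the underlying set into orbits.

```python
def orbits(generators, elements):
    """Decompose a G-set into orbits.

    generators: list of functions (elements -> elements) giving the action
    of a generating set of G; elements: iterable of the underlying set.
    """
    remaining = set(elements)
    result = []
    while remaining:
        # Grow the orbit of an arbitrary remaining element by breadth-first search.
        seed = next(iter(remaining))
        orbit, frontier = {seed}, [seed]
        while frontier:
            x = frontier.pop()
            for g in generators:
                y = g(x)
                if y not in orbit:
                    orbit.add(y)
                    frontier.append(y)
        result.append(orbit)
        remaining -= orbit
    return result

# Z/2 acting on {0,1,2,3} by swapping 0 <-> 1 and fixing 2 and 3:
swap = lambda x: {0: 1, 1: 0}.get(x, x)
print(orbits([swap], range(4)))  # orbits {0,1}, {2}, {3} (in some order)
```

Each returned orbit is a transitive piece of the action, matching the statement that every object of $G\mathrm{Set}$ is a coproduct of atoms.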

Next:

• What properties characterize essential geometric morphisms of the form $i^\ast \colon G\mathrm{Set} \to H\mathrm{Set}$ when $i \colon H \to G$ is the inclusion of a normal subgroup?

In this case $G/H$ is a group with a surjection $p \colon G \to G/H$, so we get another topos $(G/H)\mathrm{Set}$ and essential geometric morphisms

$\mathrm{Set} \longrightarrow (G/H)\mathrm{Set} \stackrel{p^\ast}{\longrightarrow} G\mathrm{Set} \stackrel{i^\ast}{\longrightarrow} H\mathrm{Set} \longrightarrow \mathrm{Set}$

• What properties characterize essential geometric morphisms of the form $p^\ast$ for $p$ a surjective homomorphism of groups?

• Is there a concept of ‘short exact sequence’ of essential geometric morphisms such that the above sequence is an example?

Well, my questions could go on all day, but this is enough for now!

Christian P. Robert - xi'an's og

ABC intro for Astrophysics

Today I received in the mail a copy of the short book published by edp sciences after the courses we gave last year at the astrophysics summer school, in Autrans. Which contains a quick introduction to ABC extracted from my notes (which I still hope to turn into a book!). As well as a longer coverage of Bayesian foundations and computations by David Stenning and David van Dyk.

CERN Bulletin

Interview with Ghislain Roy, President of the Staff Association

On the occasion of the 300th edition of Echo, Ghislain Roy, the current President of the Staff Association (SA) answered our questions…

'Ghislain, who was trained as a physicist, joined CERN in 1992 as a fellow. He was hired in 1993 as a staff member in the Accelerator Operations group, SL/OP, where he had the opportunity to work as engineer in charge and then as LEP operations coordinator. With the shutdown of LEP, Ghislain became Radiation Safety Officer (RSO) and then Departmental Safety Officer (DSO) in the accelerator department, AB then BE. He participated, inter alia, in the implementation of the safety and access systems of the LHC. Just before the “Long Shutdown 1” of the LHC, Ghislain briefly returned to the accelerator operations group and then joined the Accelerator and Beam Physics group (BE/ABP). He participated in various projects, such as the transformation of the heavy-ion accelerator LEIR into an injector for a biomedical facility to measure the effectiveness of different types of light ions in the treatment of cancers. This study, called BioLEIR, was eventually discontinued. Already a delegate and member of the Executive Committee of the SA between 2001 and 2004, Ghislain decided in 2015, considering the evolution of the five-yearly review in progress at that time, to return to the Association as a delegate, with the declared motivation of committing himself to serve and influence. At first a member of the Executive Committee and Secretary of the SA Bureau, he stood for election to the presidency of the SA in 2016 and was elected, together with Catherine Laverrière as Vice-President.'

ECHO: Could you remind us what the atmosphere at the SA was like when you became President?

GR: The decisions taken during the last five-yearly review, marked by the revision of the career structure and a clear slowdown in advancement, had divided the Association into two parts. It left deep disagreements within the SA, which was detrimental to its proper functioning. At that time, the SA governance was rather presidential. As soon as I arrived, I wanted to put an end to this idea. In my view, the SA should be democratic and the decision making power of the SA is with the Staff Council, while the President is elected by the Staff Council and is subject to the decisions of the latter.

ECHO: How is the SA functioning today?

GR: Finding people who agree to run for election as staff delegates is difficult. The situation today is still sensitive, although the last Staff Council election was a success. The Staff Council was substantially renewed in 2017. The people who have joined the Council have diverse profiles, which means that the Staff Council is more representative of all CERN professions, with a balance between engineers and technicians but also with the election of four fellows. The Staff Council is solid, mature and very diligent. There was an awakening in this last election. The Staff Council should normally represent the whole of CERN, all categories and all nationalities. In the future, it would also be good to attract Users and Associate members of the personnel, as members of the SA but also as staff delegates.

ECHO: The first edition of ECHO, published in 2006 was entitled “Rupture”. At that time, it was a break in the concertation process between SA and CERN Management, chaired by Robert Aymar. Could you give us your vision on the mechanism of concertation?

GR: The concertation mechanism is very rarely used in the realm of employer-employee relations in general; but at CERN, it is at the heart of relations between the CERN Management and the personnel, represented by the Staff Association. In order to function properly, concertation requires good faith and trust from each party. It should allow discussions without taboos with the sole aim to find a solution which is satisfactory for both parties, and in the best interest of the Organization. Concertation is not negotiation, Concertation is not co-management and Concertation is not consultation! The SA puts forward ideas in the discussions, in order to elaborate the best proposal for CERN and its staff. In general, in the concertation process, an alignment of interests takes place insofar as the Management and the SA share the common goal of ensuring the overall success of CERN. The final decision is always taken by the Director General, Finance Committee or CERN Council.

ECHO: What is your opinion about the status of personnel at CERN?

GR: From a very personal point of view, I am particularly attached to the status of international civil servants, where the most important interests are those of CERN’s mission, whether they are scientific, technological or educational. This goes beyond the interest of each State and it should be set from a long-term perspective. The working conditions at CERN are good. In the world of science, this is a model that should be copied and not weakened. Of course, we have to adapt to the evolution of the world, but we have to remain creative and keep in mind this idea of the greatest interest of our Organization. The temptation to reduce, to cut, which can be perceived as interesting from a short-term perspective, is often more destructive than constructive on the long-term.

An evolution, which I deeply care about, due to my previous experience within the accelerator operations team, would be to put more emphasis on the collective interest, and the interest and performance of the team which should take precedence over the individual interests and performance, which in turn lead people to focus too much on the evolution of their own careers. I regret that there is not enough emphasis on this aspect in the assessment of performance done through the current MERIT system.

ECHO: Thinking about the future, what is your vision about the future challenges that CERN will have to face?

GR: For several decades, CERN has been THE world center for research in high-energy physics. And this has had an important impact on the evolution of the global population of CERN, which has seen the number of users explode. Today, CERN works well overall: the personnel is motivated, the performance of our facilities is excellent and the scientific results are numerous. All this has been done with a number of staff members ('titulaires') that has remained almost constant over the past twenty years. On the other hand, the number of students, fellows and project associates has increased considerably, in proportion to the number of projects we are working on. However, this situation is now becoming hard to sustain.

With regard to the future, with large-scale projects that go beyond the current physical limits of CERN, we will have to face new challenges. However, this will not be the first time that CERN has had to face such issues. The construction of the Prévessin site in the 1970s, and subsequently the expansion of CERN to the various access points of LEP and LHC, already raised similar issues at the time. I am confident about CERN’s capacity to find solutions that will allow maintaining the unity of the Organization, of its site and its personnel, while keeping up personnel commitment and motivation, which are its main strength and success. Of course, the SA will be present to propose solutions in this direction.

ECHO: The first edition of ECHO in 2006 also marked a break in the communication of the SA, which had previously been done through the CERN Bulletin. What is your opinion on this subject?

GR: The split, which happened in 2006 between the Bulletin, published by the Management, and the part under the responsibility of the Staff Association, which became ECHO, was not a good idea. The SA is not a union and its way of interacting with the Management is not opposition but concertation. We all pull in the same direction, even though our views may occasionally differ. I would rather return to the previous configuration, where a dedicated space is allocated to the communication of the Staff Association in the Bulletin, rather than the current total separation. This would also give a good signal that the current Management has abandoned the views of the 2006 Management and Robert Aymar on the issue of Concertation.

ECHO: Lastly, what kind of message would you like to pass on to the members of personnel at CERN?

GR: I would like to make an appeal to spark the interest of all personnel present at CERN to engage in the social life of the Organization and, more generally, in its political life, in the Greek sense of the term, which means the life of the City in general. This can be through clubs, activities of general interest, being a guide, or serving in one of the many Joint committees (Reclassification, Discipline, etc.) or within the Staff Association.

For employed members of personnel, this Organization is not just an employer: it is your State, which provides you with social security and, in the end, your pension. For Associate members of personnel, it is not just a host laboratory but also a community of interest, which you can influence through your opinions and vision.

The French version was published in the 300th edition of ECHO.

Peter Coles - In the Dark

The Big Bang Exploded?

I suspect that I’m not the only physicist who receives unsolicited correspondence from people with wacky views on Life, the Universe and Everything. Being a cosmologist, I probably get more of this stuff than those working in less speculative branches of physics. Because I’ve written a few things that appeared in the public domain, I probably even get more than most cosmologists (except the really famous ones of course).

Many “alternative” cosmologists have now discovered email, and indeed the comments box on this blog, but there are still a lot who send their ideas through regular post. Whenever I get an envelope with an address on it that has been typed by an old-fashioned typewriter, it’s a dead giveaway that it’s going to be one of those. Sometimes they are just letters (typed or handwritten), but sometimes they are complete manuscripts, often with wonderfully batty illustrations. I remember one called Dark Matter, The Great Pyramid and the Theory of Crystal Healing. I used to have an entire filing cabinet filled with things like this, but I took the opportunity of moving from Cardiff some time ago to throw most of them out.

One particular correspondent started writing to me after the publication of my little book, Cosmology: A Very Short Introduction. This chap sent a terse letter to me pointing out that the Big Bang theory was obviously completely wrong. The reason was obvious to anyone who understood thermodynamics. He had spent a lifetime designing high-quality refrigeration equipment and therefore knew what he was talking about (or so he said). He even sent me this booklet about his ideas, which for some reason I have neglected to send for recycling:

His point was that, according to the Big Bang theory, the Universe cools as it expands. Its current temperature is about 3 Kelvin (-270 Celsius or thereabouts) but it is now expanding and cooling. Turning the clock back gives a Universe that was hotter when it was younger. He thought this was all wrong.

The argument is false, my correspondent asserted, because the Universe – by definition – hasn’t got any surroundings and therefore isn’t expanding into anything. Since it isn’t pushing against anything it can’t do any work. The internal energy of the gas must therefore remain constant and since the internal energy of an ideal gas is only a function of its temperature, the expansion of the Universe must therefore be at a constant temperature (i.e. isothermal, rather than adiabatic). He backed up his argument with bona fide experimental results on the free expansion of gases.

I didn’t reply and filed the letter away. Another came, and I did likewise. Increasingly overcome by some form of apoplexy, he sent letters that got ruder and ruder, eventually blaming me for the decline of the British education system and demanding that I be fired from my job. Finally, he wrote to the President of the Royal Society demanding that I be “struck off” and forbidden (on grounds of incompetence) ever to teach thermodynamics in a University. The copies of the letters he sent me are still with the pamphlet.

I don’t agree with him that the Big Bang is wrong, but I’ve never had the energy to reply to his rather belligerent letters. However, I think it might be fun to turn this into a little competition, so here’s a challenge for you: provide the clearest and most succinct explanation of why the temperature of the expanding Universe does fall with time, despite what my correspondent thought.

Peter Coles - In the Dark

Especially when the October Wind

Especially when the October wind
With frosty fingers punishes my hair,
Caught by the crabbing sun I walk on fire
And cast a shadow crab upon the land,
By the sea’s side, hearing the noise of birds,
Hearing the raven cough in winter sticks,
My busy heart who shudders as she talks
Sheds the syllabic blood and drains her words.

Shut, too, in a tower of words, I mark
On the horizon walking like the trees
The wordy shapes of women, and the rows
Of the star-gestured children in the park.
Some let me make you of the vowelled beeches,
Some of the oaken voices, from the roots
Of many a thorny shire tell you notes,
Some let me make you of the water’s speeches.

Behind a pot of ferns the wagging clock
Tells me the hour’s word, the neural meaning
Flies on the shafted disk, declaims the morning
And tells the windy weather in the cock.
Some let me make you of the meadow’s signs;
The signal grass that tells me all I know
Breaks with the wormy winter through the eye.
Some let me tell you of the raven’s sins.

Especially when the October wind
(Some let me make you of autumnal spells,
The spider-tongued, and the loud hill of Wales)
With fists of turnips punishes the land,
Some let me make you of the heartless words.
The heart is drained that, spelling in the scurry
Of chemic blood, warned of the coming fury.
By the sea’s side hear the dark-vowelled birds.

by Dylan Thomas (1914-1953)

Emily Lakdawalla - The Planetary Society Blog

Imaging the Earth from Lunar orbit
Radio amateurs around the world worked together to take an image of the Earth and the far side of the Moon.

October 14, 2018

Christian P. Robert - xi'an's og

the invasion of the American cheeses

Part of the new Nafta agreement between the USA and its neighbours, Canada and Mexico, is lifting restrictions on the export of American cheeses to these countries. Having tasted high quality cheeses from Québec on my last visit to Montréal, and having yet to find similar quality in a US cheese, I looked at the list of cheeses involved in the agreement, only to discover a collection of European cheeses that should be protected by AOC rules under EU regulations (and attributed only to cheeses produced in the original regions):

Brie [de Meaux or de Melun?]
Burrata [di Andria?]
Camembert [missing the de Normandie to be AOC]
Coulommiers [actually not AOC!]
Emmenthal [which should be AOC Emmentaler Switzerland!]
Pecorino [all five Italian varieties being PDO]
Provolone [both Italian versions being PDO]

Plus another imposition: that British Columbia wines no longer be segregated from US wines in British Columbia! Which sounds somewhat absurd if wines like those from (BC) Okanagan Valley or (Washington) Walla Walla are to be enjoyed with somewhat more subtlety than diet coke. Owning a winery apparently does not necessarily require such subtlety!

Clifford V. Johnson - Asymptotia

Futuristic Podcast Interview

For your listening pleasure: I've been asked to do a number of longer interviews recently. One of these was for the "Futuristic Podcast of Mark Gerlach", who interviews all sorts of people from the arts (normally) over to the sciences (well, he hopes to do more of that starting with me). Go and check out his show on iTunes. The particular episode with me can be found as episode 31. We talk about a lot of things, from how people get into science (including my take on the nature vs nurture discussion), through the changes in how people get information about science to the development of string theory, to black holes and quantum entanglement - and a host of things in between. We even talked about The Dialogues, you'll be happy to hear. I hope you enjoy listening!

(The picture? Not immediately relevant, except for the fact that I did cycle to the place the recording took place. I mostly put it there because I was fixing my bike not long ago and it is good to have a photo in a post. That is all.)

The post Futuristic Podcast Interview appeared first on Asymptotia.

October 13, 2018

John Baez - Azimuth

Category Theory Course

I’m teaching a course on category theory at U.C. Riverside, and since my website is still suffering from reduced functionality I’ll put the course notes here for now. I taught an introductory course on category theory in 2016, but this one is a bit more advanced.

The hand-written notes here are by Christian Williams. They are probably best seen as a reminder to myself as to what I’d like to include in a short book someday.

Lecture 1: What is pure mathematics all about? The importance of free structures.

Lecture 2: The natural numbers as a free structure. Adjoint functors.

Lecture 3: Adjoint functors in terms of unit and counit.

Lecture 5: 2-Categories and string diagrams. Composing adjunctions.

Lecture 6: The ‘main spine’ of mathematics. Getting a monad from an adjunction.

Emily Lakdawalla - The Planetary Society Blog

Book Announcement and Excerpt: Astronomy for Kids
For Astronomy Day, Bruce announces his new book Astronomy for Kids, provides excerpts, and gives some bonus planet observing info.

October 12, 2018

ZapperZ - Physics and Physicists

Time Crystals
Ignoring the theatrics, Don Lincoln's video is the simplest level of explanation that you can ask for of what a "time crystal" is, after you strip away the hyperbole.

Zz.

Emily Lakdawalla - The Planetary Society Blog

I’m thrilled to be anticipating the beginning of a new mission to Mercury. Here's a timeline for BepiColombo's planned launch on 20 October (19 October in the U.S.).

October 11, 2018

Jon Butterworth - Life and Physics

Playful Explorations
Originally posted on NearcticTraveller:
Atom Land: A Guided Tour Through the Strange (And Impossibly Small) World of Particle Physics by Jon Butterworth. The Experiment. New York. 2018. A Mind at Play: How Claude Shannon Invented the Information Age by Jimmy…

October 09, 2018

Jon Butterworth - Life and Physics

Boosting boost
Regular readers (hello!) will know that the topics of jet substructure, boosted objects and the annual Boost meeting often feature here, because I work on them and they are interesting and important for physics at the Large Hadron Collider (and … Continue reading

October 07, 2018

John Baez - Azimuth

Lebesgue Universal Covering Problem (Part 3)

Back in 2015, I reported some progress on this difficult problem in plane geometry. I’m happy to report some more.

First, remember the story. A subset of the plane has diameter 1 if the distance between any two points in this set is ≤ 1. A universal covering is a convex subset of the plane that can cover a translated, reflected and/or rotated version of every subset of the plane with diameter 1. In 1914, the famous mathematician Henri Lebesgue sent a letter to a fellow named Pál, challenging him to find the universal covering with the least area.

Pál worked on this problem, and 6 years later he published a paper on it. He found a very nice universal covering: a regular hexagon in which one can inscribe a circle of diameter 1. This has area

0.86602540…

But he also found a universal covering with less area, by removing two triangles from this hexagon—for example, the triangles C1C2C3 and E1E2E3 here:

The resulting universal covering has area

0.84529946…
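As a quick sanity check of these two numbers (a sketch of mine, not part of the original papers): the regular hexagon circumscribing a circle of diameter 1 has area √3/2, and Pál's reduced covering has the well-known closed form 2 − 2/√3.

```python
from math import sqrt

# Pál's hexagon: a regular hexagon circumscribed about a circle of
# diameter 1, i.e. with inradius r = 1/2.  Its area is 2*sqrt(3)*r**2.
r = 0.5
hexagon_area = 2 * sqrt(3) * r**2      # equals sqrt(3)/2
print(f"hexagon: {hexagon_area:.8f}")  # 0.86602540

# Removing the two corner triangles leaves a cover whose area has the
# closed form 2 - 2/sqrt(3).
reduced_area = 2 - 2 / sqrt(3)
print(f"reduced: {reduced_area:.8f}")  # 0.84529946
```

Both printed values agree with the digits quoted above.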

In 1936, Sprague went on to prove that more area could be removed from another corner of Pál’s original hexagon, giving a universal covering of area

0.8441377708435…

In 1992, Hansen took these reductions even further by removing two more pieces from Pál’s hexagon. Each piece is a thin sliver bounded by two straight lines and an arc. The first piece is tiny. The second is downright microscopic!

Hansen claimed the areas of these regions were 4 · 10⁻¹¹ and 6 · 10⁻¹⁸. This turned out to be wrong. The actual areas are 3.7507 · 10⁻¹¹ and 8.4460 · 10⁻²¹. The resulting universal covering had an area of

0.844137708416…

This tiny improvement over Sprague’s work led Klee and Wagon to write:

it does seem safe to guess that progress on [this problem], which has been painfully slow in the past, may be even more painfully slow in the future.

However, in 2015 Philip Gibbs found a way to remove about a million times more area than Hansen’s larger region: a whopping 2.233 · 10⁻⁵. This gave a universal covering with area

0.844115376859…

Karine Bagdasaryan and I helped Gibbs write up a rigorous proof of this result, and we published it here:

• John Baez, Karine Bagdasaryan and Philip Gibbs, The Lebesgue universal covering problem, Journal of Computational Geometry 6 (2015), 288–299.

Greg Egan played an instrumental role as well, catching various computational errors.

At the time Philip was sure he could remove even more area, at the expense of a more complicated proof. Since the proof was already quite complicated, we decided to stick with what we had.

But this week I met Philip at The philosophy and physics of Noether’s theorems, a wonderful workshop in London which deserves a full blog article of its own. It turns out that he has gone further: he claims to have found a vastly better universal covering, with area

0.8440935944…

This is an improvement of 2.178245 × 10⁻⁵ over our earlier work—roughly equal to our improvement over Hansen.
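The sizes of these successive gains can be checked by subtracting the consecutive bounds quoted in this post (a throwaway sketch; the labels are mine):

```python
# Upper bounds on the area of a universal covering, as quoted in the post.
bounds = [
    ("Sprague (1936)",      0.8441377708435),
    ("Hansen (1992)",       0.844137708416),
    ("Gibbs et al. (2015)", 0.844115376859),
    ("Gibbs (2018)",        0.8440935944),
]

# The improvement at each step is the drop in area from one bound to the next.
for (name_a, area_a), (name_b, area_b) in zip(bounds, bounds[1:]):
    print(f"{name_a} -> {name_b}: {area_a - area_b:.6e}")
```

The last two differences reproduce the 2.233 · 10⁻⁵ and 2.178245 × 10⁻⁵ improvements mentioned in the text.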

You can read his argument here:

• Philip Gibbs, An upper bound for Lebesgue’s universal covering problem, 22 January 2018.

I say ‘claims’ not because I doubt his result—he’s clearly a master at this kind of mathematics!—but because I haven’t checked it and it’s easy to make mistakes, for example mistakes in computing the areas of the shapes removed.

It seems we are closing in on the final result; however, Philip Gibbs believes there is still room for improvement, so I expect it will take at least a decade or two to solve this problem… unless, of course, some mathematicians start working on it full-time, which could speed things up considerably.

October 06, 2018

Jon Butterworth - Life and Physics

Book Review — Atom Land: A Guided Tour Through the Strange (and Impossibly Small) World of Particle Physics
Originally posted on Evilcyclist's Blog:
Atom Land: A Guided Tour Through the Strange (and Impossibly Small) World of Particle Physics by Jon Butterworth. Atom Land is a lecture in particle physics at a layman’s level. Butterworth is a physics professor…

John Baez - Azimuth

Riverside Math Workshop

We’re having a workshop with a bunch of cool math talks at U. C. Riverside, and you can register for it here:

Riverside Mathematics Workshop for Excellence and Diversity, Friday 19 October – Saturday 20 October, 2018. Organized by John Baez, Carl Mautner, José González and Chen Weitao.

This is the first of an annual series of workshops to showcase and celebrate excellence in research by women and other under-represented groups for the purpose of fostering and encouraging growth in the U.C. Riverside mathematical community.

After tea at 3:30 p.m. on Friday there will be two plenary talks, lasting until 5:00. Catherine Searle will talk on “Symmetries of spaces with lower curvature bounds”, and Edray Goins will give a talk called “Clocks, parking garages, and the solvability of the quintic: a friendly introduction to monodromy”. There will then be a banquet in the Alumni Center 6:30 – 8:30 p.m.

On Saturday there will be coffee and a poster session at 8:30 a.m., and then two parallel sessions on pure and applied mathematics, with talks at 9:30, 10:30, 11:30, 1:00 and 2:00. Check out the abstracts here!

(I’m especially interested in Christina Vasilakopoulou’s talk on Frobenius and Hopf monoids in enriched categories, but she’s my postdoc so I’m biased.)

October 05, 2018

ZapperZ - Physics and Physicists

RIP Leon Lederman
One of the most charismatic physicists that I've ever met, former Fermilab Director and Nobel Laureate Leon Lederman, has passed away at the age of 96. Most of the general public will probably not know his name, but will have heard the name "God Particle", which he coined in his book, and which he originally intended to call the "God-Damn Particle".

He had been in failing health and suffered from dementia. It forced his family to auction off his Nobel Prize medal to help with his medical costs. But his lasting legacy will be his effort to put "Physics First" in elementary and high schools. And of course, there's Fermilab.

He truly was, and still is, a giant in this field.

Zz.

October 04, 2018

Lubos Motl - string vacua and pheno

Leon Lederman: 1922-2018
Leon Lederman was a giant of 20th century experimental particle physics. Sadly, he died on Wednesday in a care center in Idaho, due to complications from dementia (not so shocking at the age of 96).

He was born to a Russian Jewish family in 1922. He was the key man in teams that discovered the neutral $$K$$-mesons (do you remember Feynman's discussion about the two-state Hilbert space of $$K^0$$ and $$\bar K^0$$ that may be mixed as the superpositions of long-lived and short-lived kaons?), the bottom quark $$b$$, and the muon (i.e. second) neutrino.

For the muon neutrino discovery, he was given the 1988 Nobel prize in physics, along with two other men.

Lederman was a charming guy who was always a neverending fountain of jokes. As a professor, he supervised 50 graduate students over the years – none of them went to jail, he bragged.

Also, Lederman was a crucial cheerleader for particle physics. He made the key promotion that allowed Ronald Reagan to plan and build the Tevatron (the room for superconducting magnets in an existing tunnel was reserved in 1981) – which discovered the top quark $$t$$ in 1995. We might say that among the 6+6 lepton+quark (elementary fermions') flavors, he was rather fundamental in the discovery of 4 (one-third), namely $$s,b,\nu_\mu,t$$.

Leon Lederman was a huge proponent of physics education – and also the main guy behind the Physics First movement, which demands that teenagers be exposed first to physics and only then to e.g. biology.

He was also a great popularizer. His book "The God Particle" described experimental particle physics and coined the laymen's most popular term for the Higgs boson. We often explain the name by saying that "it was the work of an editor" because Lederman originally wanted the title "The Goddamn Particle".

But it would be fake news – and some people promote this fake news – to say that Lederman found the references to religion unacceptable, as some others do. Instead, in one defense of his "The God Particle", he quoted quite a piece from Genesis, like here. These days, I find it obvious (and I already found it likely a decade ago) that the criticisms of the God Particle were driven by left-wing activists' efforts to make any references to religion etc. politically unacceptable within the Academia.

Some of his methods to promote physics were truly creative. A decade ago, he built a booth on the street and was answering physics questions posed by the pedestrians (literally) in Manhattan.

In 2015, Lederman became the second person to sell his Nobel Prize medal, for $765,002. He may have needed some money for the treatment of his dementia, which had just been diagnosed. However, even from a financial perspective, I think it was a good idea to sell the medal because its value is likely to drop in coming years.

It seems that two days ago, the physics Nobel prize was finally hijacked by the identity politics activists, and meritocratically oriented people will simply stop watching that award – I have stopped. You know, it was announced by tons of media in advance that the newest winners "must" include a woman, and it seems that "they" found a laser team where it was possible. She was a 24-year-old accidental member of a team that did the Nobel work. On top of that, out of her fewer than 9,000 citations, 2/3 are from the papers co-written with Mr Mourou (who has had 30,000+ extra citations elsewhere), which signals that he, and not she, was the engine in their team. Needless to say, she's presented as a full-blown, if not the superior, winner by the media – but that's not what the hard data say. Once you suspect that there may be political reasons behind some winners, the problem isn't even limited to the privileged groups such as women. Mr Mourou could have gotten his prize because of his proximity to a woman researcher, too. And there may be other reasons. Fortunately, one would need a lot of concentrated energy to roll in one's grave – that's the only reason why Alfred Nobel isn't doing it right now. The last disciplines in which his prize had some meaning are being ruined, too.

But back to Lederman. He lived in a different epoch, when brilliant people in the West could be driven by genuine love for science and, without becoming slaves of any political movement, could do great things. RIP Leon Lederman.

Jon Butterworth - Life and Physics

Why use a map to tell the story?
The paperback edition of A Map of the Invisible is out now, and to help promote it we made a few videos on some of the themes in the book. Here’s the second one:

October 03, 2018

The n-Category Cafe

Category Theory 2019

The major annual category theory conference will be held in Edinburgh next year:

Category Theory 2019
University of Edinburgh
7-13 July 2019

Organizing committee: Steve Awodey, Richard Garner, Chris Heunen, Tom Leinster, Christina Vasilakopoulou.

As John has just pointed out, this is followed two days later by the Applied Category Theory conference and school in Oxford, very conveniently for anyone wishing to go to both.

October 02, 2018

The n-Category Cafe

Applied Category Theory 2019

I’m helping organize ACT 2019, an applied category theory conference and school at Oxford, July 15-26, 2019. Here’s a ‘pre-announcement’. More details will come later, but here’s some good news: it’s right after the big annual worldwide category theory conference, which is in Edinburgh in 2019. So, conference-hopping category theorists can attend both!

Dear all,

As part of a new growing community in Applied Category Theory, now with a dedicated journal Compositionality, a traveling workshop series SYCO, a forthcoming Cambridge U. Press book series Reasoning with Categories, and several one-off events including at NIST, we launch an annual conference+school series named Applied Category Theory, the coming one being at Oxford, July 15-19 for the conference, and July 22-26 for the school. The dates are chosen such that CT 2019 (Edinburgh) and the ACT 2019 conference (Oxford) will be back-to-back, for those wishing to participate in both.

There already was a successful invitation-only pilot, ACT 2018, last year at the Lorentz Centre in Leiden, also in the format of school+workshop.

For the conference, for those who are familiar with the successful QPL conference series, we will follow a very similar format for the ACT conference.
This means that we will accept both new papers, which will then be published in a proceedings volume (most likely a Compositionality special Proceedings issue), as well as shorter abstracts of papers published elsewhere. There will be a thorough selection process, as is typical in computer science conferences. The idea is that all the best work in applied category theory will be presented at the conference, and that acceptance means something, just like in CS conferences. This is particularly important for young people, as it will help them with their careers.

Expect a call for submissions soon, and start preparing your papers now!

The school in ACT 2018 was unique in that small groups of students worked closely with an experienced researcher (these were John Baez, Aleks Kissinger, Martha Lewis and Pawel Sobociński), and each group ended up producing a paper. We will continue with this format or a closely related one, with Jules Hedges and Daniel Cicala as organisers this year. As there were 80 applications last year for 16 slots, we may want to try to find a way to involve more students.

We are fortunate to have a number of private sector companies closely associated in some way or another, who will also participate, with Cambridge Quantum Computing Inc. and StateBox having already made major financial/logistic contributions.

On behalf of the ACT Steering Committee,
John Baez, Bob Coecke, David Spivak, Christina Vasilakopoulou

John Baez - Azimuth

Applied Category Theory 2019

animation by Marius Buliga

I’m helping organize ACT 2019, an applied category theory conference and school at Oxford, July 15-26, 2019. More details will come later, but here’s the basic idea. If you’re a grad student interested in this subject, you should apply for the ‘school’. Not yet—we’ll let you know when.
Dear all,

As part of a new growing community in Applied Category Theory, now with a dedicated journal Compositionality, a traveling workshop series SYCO, a forthcoming Cambridge U. Press book series Reasoning with Categories, and several one-off events including at NIST, we launch an annual conference+school series named Applied Category Theory, the coming one being at Oxford, July 15-19 for the conference, and July 22-26 for the school. The dates are chosen such that CT 2019 (Edinburgh) and the ACT 2019 conference (Oxford) will be back-to-back, for those wishing to participate in both.

There already was a successful invitation-only pilot, ACT 2018, last year at the Lorentz Centre in Leiden, also in the format of school+workshop.

For the conference, for those who are familiar with the successful QPL conference series, we will follow a very similar format for the ACT conference. This means that we will accept both new papers, which will then be published in a proceedings volume (most likely a Compositionality special Proceedings issue), as well as shorter abstracts of papers published elsewhere. There will be a thorough selection process, as is typical in computer science conferences. The idea is that all the best work in applied category theory will be presented at the conference, and that acceptance means something, just like in CS conferences. This is particularly important for young people, as it will help them with their careers.

Expect a call for submissions soon, and start preparing your papers now!

The school in ACT 2018 was unique in that small groups of students worked closely with an experienced researcher (these were John Baez, Aleks Kissinger, Martha Lewis and Pawel Sobociński), and each group ended up producing a paper. We will continue with this format or a closely related one, with Jules Hedges and Daniel Cicala as organisers this year.
As there were 80 applications last year for 16 slots, we may want to try to find a way to involve more students.

We are fortunate to have a number of private sector companies closely associated in some way or another, who will also participate, with Cambridge Quantum Computing Inc. and StateBox having already made major financial/logistic contributions.

On behalf of the ACT Steering Committee,
John Baez, Bob Coecke, David Spivak, Christina Vasilakopoulou

ZapperZ - Physics and Physicists

2018 Nobel Prize in Physics ... FINALLY, after 55 years!
I seriously thought that I'd never see this in my lifetime, and I'm terribly happy that I was wrong! The 2018 Nobel Prize in Physics has just been announced, and for the first time in more than 50 years, one of the winners is a woman!

The Nobel Prize in Physics 2018 was awarded “for groundbreaking inventions in the field of laser physics” with one half to Arthur Ashkin “for the optical tweezers and their application to biological systems”, the other half jointly to Gérard Mourou and Donna Strickland “for their method of generating high-intensity, ultra-short optical pulses”.

Congratulations to all, and especially to Donna Strickland. I will admit that this wasn't something I expected. I didn't realize that the area of ultra-short laser pulses was on the Nobel Committee's nomination radar. But it is still very nice that this area of laser pulse-shaping technique is being recognized.

Zz.

October 01, 2018

Lubos Motl - string vacua and pheno

Physics was invented and built by men
Activists at CERN turned an excerpt from Sexmission into reality

CERN has updated the statement to say that all Strumia's CERN ties were suspended, at least during the ongoing Inquisition trial ("investigation of the conference"). I was hoping it wouldn't happen but I was prepared to see that it would happen. What do you want to investigate, idiots?
Strumia has made some elementary and some elaborate comments about women in physics and a bunch of brain-dead wannabe fascists and mental cripples found the truth inconvenient. That's it.

However, things are much better in Italy, where Alessandro is primarily employed. The rector of the University of Pisa, Paolo Mancarella (IT), after he got some complaints from the totalitarian cultural Marxists and after he looked at the 26 slides, refused to start ethical proceedings against Strumia. That looks better, although something will be "investigated" over there by an ethical committee, too. But maybe the page says something else and Mr Mancarella doesn't really speak English.

Poles are our Western Slavic cousins. They generally love us, Czechs, more than we love them. (We're their #1 favorite foreigners, but the converse doesn't hold.) They're great but I surely don't think that they're good e.g. in the sense of humor. (See my answer to What Poles do better than Czechs and vice versa.) You need to click a link and play the video outside TRF.

However, I became a great fan of a 1983 or 1984 Polish cult film, the sci-fi comedy named Sexmission. Max and Albert, two men from the 1980s, volunteer to (earn some bucks and) undergo a hibernation experiment (designed by Prof Wiktor Kuppelweiser). There's a war (whose special weapon selectively attacks men) and they are only woken up in a relatively distant future (well, 2042) in which no men are alive anymore. The rest of mankind – purely women – live in a totalitarian society underground, while their propaganda says that radioactivity makes it impossible to live on the surface. The ideology of the totalitarian society is feminism: "man is your enemy", all of the obedient girls and women shout all the time.

In 1983, feminism was not a sensitive political topic at all (I think that the number of feminists in Poland is close to zero even today) so people watched it as pure fiction.
If you want to make the filmmakers look courageous, the totalitarian feminism may be considered a hidden satire of the totalitarian communism in Poland – which would end 5 years later. However, I think that these "metaphors" are not so clear. The filmmakers could have also claimed that it was a satire directed against some trends in the Western society of the 1980s. Well, it wouldn't really be too accurate because the West was still alright in the 1980s. But it would be extremely apt to consider Sexmission a satire mocking Western Europe (and North America) of 2018. In fact, the writers of the film almost accurately predicted what the West would look like just 35 years later.

I embedded a 5-minute excerpt with English subtitles at the top. Max, the direct and more ordinary man, and Albert, the thinner, shy, and more intellectual guy, are asked to sign a document stating that they were born men against their will, that they want to declare all their previous male lives non-existent, and that they will undergo naturalization (castration). They laugh and refuse the offer. A big courtroom opens above their heads with the tribunal of ladies. The female "researchers" are split into two camps. One of them wants to castrate the guys, the other one wants to kill them – after some experiments are made on them.

Just to be sure, Her Excellency, the leader of the female civilization, turns out to be a man at the end, an impotent one, who could have pretended he was female (from his childhood) and became the alpha female. It's like in the joke "What is the smartest cell in a woman's body?" – "The sperm."

Max and Albert see that they're in trouble – just like Alessandro Strumia does right now. It must be some organization. Nevertheless, they start to explain to them that the whole history is the history of men. There would have been no progress without men. The women want examples. Albert mentions Copernicus and Einstein.
They respond: Copernicus was a woman, Einstein was a woman, and so on. Max gets upset and screams: "And so was Marie Curie, wasn't he?" Well, that wasn't the best example, Albert tells Max. But you can see that the feminist organization that was deliberately exaggerated back in 1983 – to make the movie more comical – actually reacted more calmly to Albert's comment that the whole of history and science is the history of men. The real-world feminists of 2018 – even those that have something to do with CERN, a global center of the hardest scientific discipline (a 13-TeV-hard one) that is expected to be very rational – react more insanely than the exaggerated fictitious feminists from a 1983 movie!

Three decades ago, every kid and every adult knew that history, science, and technology, among other things, were overwhelmingly created by men. Everyone agreed that only really stupid and uneducated people – uneducated at the level of retarded kids from the kindergarten – could disagree with this innocent proposition. Now, in 2018, the idiocy of not knowing this basic kindergarten fact has not only become tolerable in some environments. These environments actually love to harass everybody who dares to know what has been a matter of common sense for centuries and millennia. Strumia has said many things but the left-wing activist journalists love to pick the statement that "physics was invented and built by men". It's insane.

Even if you consider yourself moderate and even if you think that your humble correspondent is too involved in this business, you simply have to help to stop these radical loons that have begun to conquer every influential industry and structure within the Western society. If you fail to help all sensible people to stop this mad cow disease of cultural Marxism, you will pay dearly for your laziness, too. All of us will. The whole mankind will.
In particular, I urge the Italian government to threaten the Italian exit from CERN (Italy pays some CHF 117 million a year) unless CERN officially apologizes to Prof Alessandro Strumia and restores all his access to projects at CERN. It's a moral duty of the Italian government to defend the basic civic rights of its citizens against foreign and international organizations that don't respect the basic rules of the Western civilization.

P.S.: The Sexmission continues with a nice defense of the old world order by Albert – women were standing on the pedestal, poets were writing poems for them etc. One woman is particularly on their side, the blonde and wise Lamia (she stopped taking the pills against sexual desire so she's been horny, fell in love with Max, and also understood – from her grandma, too – that the old world was a better place). She agrees to try to leave the underground dystopia along with the men – who, like typical heroic men, prefer freedom, even if it meant just 2 weeks of freedom (given the radioactivity and oxygen reserves). On the surface of Earth, they of course find out that the scaremongering about the radioactivity is exactly as untrue as the global warming hysteria today. The catastrophically looking world on the surface was just a painting, too. There's no dangerous radioactivity there – they realize once they see the first stork. They find a 20th century house there ("it looks like a blockhouse," Lamia cutely said!) – it's the house where the leader (who is actually male) spends a part of his life. (He also played a key role in maintaining the lies about the radioactivity that is incompatible with life – it's easier to control the ladies if they stay underground. The similarity of the "purpose of this fearmongering" to that of the global warming hysteria in the real world is self-evident.) The movie ends happily.
They have fun in the bedroom with Lamia Reno and with a former apparatchik (Emma Dax) who came to catch them but who is disgusted by the propaganda tactics of the feminist regime once both women are declared dead on TV. The leader, after they unmask him, agrees to share the house. They won't report him to the women. Max and Albert think about how to save the whole civilization. They ultimately place their sperm into lots of the test tubes in the factory producing girls. The final screenshot of the movie shows the first newborn boy's penis – after it scares a worker in the factory. ;-)

You know, that movie could have been interpreted as a satire in various ways. The Polish communist authorities could have found reasons to ban it. However, it seems to me that the movie would be much more likely to get banned in the contemporary Western Europe and North America – there is probably less freedom in these ex-beacons of the free civilization than in the communist Poland of 1983. If you Google search for "sexmission watch full movie" without the quotes, you will find 3 parts of the movie with English subtitles at Daily Motion.

Clifford V. Johnson - Asymptotia

Diverse Futures
I was asked by the editors of the magazine Physics World's 30th anniversary edition to do a drawing that somehow captures changes in physics over the last 30 years, and looks forward to 30 years from now. This was an interesting challenge. There was not anything like the freedom to use space that I had in other works I've done, like my graphic book about science "The Dialogues", or my glimpse of the near future in my SF story "Resolution" in the Twelve Tomorrows anthology. I had over 230 pages for the former, and 20 pages for the latter. Here, I had one page. Well, actually a little over 2/3 of a page (once you take into account the introductory text, etc). So I thought about it a lot.
The editors wanted to show an active working environment, and so I thought about the interiors of labs for some time, looked up lots of physics breakthroughs over the years, and reflected on what might come. I eventually realized that the most important single change in the science that can be visually depicted (and arguably the single most important change of any kind) is the change that's happened to the scientists. Most importantly, we've become more diverse in various ways (not uniformly across all fields though), much more collaborative, and the means by which we communicate in order to do science have expanded greatly. All of this has benefited the science greatly, and I think that if you were to get a time machine and visit a lab 30 years ago, or 30 years from now, it would be the changes in the people that would most strike you, if you're paying attention. So I decided to focus on the break/discussion area of the lab, and imagined that someone stood in the same spot each year and took a snapshot. What we're seeing is those photos tacked to a noticeboard somewhere, and that's our time machine. Have a look, and keep an eye out for various details I put in to reflect the different periods. Enjoy! (Direct link here, and below I've embedded the image itself that's from the magazine. I recommend reading the whole issue, as it is a great survey of the last 30 years.) The post Diverse Futures appeared first on Asymptotia.

September 30, 2018

Lubos Motl - string vacua and pheno

Nasty SJWs persuade spineless CERN officials to start an inquisition trial against an Italian scientist
The victim "dared" to say that women aren't isomorphic to men when he was asked

Galileo Galilei, the Italian founder of the scientific method as we know it, was a target of the Roman Catholic Inquisition trials between 1610 and 1633 – mostly because of his heliocentric "heresies". Those Inquisition folks should have gone extinct, shouldn't they?
Sadly, four centuries later, the contamination of the intellectual institutions by this garbage that is violently opposed to academic freedoms and any kind of honest research that is inconvenient for the powerful has exceeded anything that could have been seen in the 17th century. On Friday, the 1st Workshop on High Energy Theory and Gender took place at CERN, the Center of Europe for the Research of Nuclei [sic]. Thankfully, an Italian scientist who has actually thought about the problem – as well as about the phenomenological particle physics where he has accumulated 30,939 citations according to INSPIRE so far (41,772 at Google Scholar), a real star (whom you may sometimes meet in the blogosphere, anyway) – was invited to give a talk, too:

Experimental test of a new global discrete symmetry
Scheduled title: Bibliometrics data about gender issues in fundamental theory

The aforementioned "symmetry" is the non-existent symmetry (or spontaneously broken symmetry, an alternative explanation the speaker considers) between men and women. The talk is full of graphs and evidence that the scientific institutions are heavily biased against men and have lost much of their meritocracy. I won't mention the name of the Italian professor. Why? Because I want to make it harder for additional members of that toxic movement to go after his or her neck, and about 70% of feminists and similar unfriendly mammals don't have a powerful enough brain to find the name of the speaker. I recommend going through the 26 slides because they're wonderfully on-topic, although they are often elementary and sometimes plagued by minor errors. They elaborate on lots of points and there are some calculations. At the same time, there are certain prerequisites and methods in the "review" part of the talk that every scientist should be obliged to understand.
It was a conference promoting "equal opportunities" of genders but for some reason (that all of us understand very well, of course), all 11 physics talks at the conference were delivered by women. In the insane contemporary social atmosphere, the reaction could have been predicted. You should figure out the name of the speaker and search for that surname on Twitter (try influential tweets as well as the recent ones). An amazingly hostile, brain-dead fascist mob of parasites within high-energy physics – who haven't contributed as much as the Italian scientist even if you combine their contributions – has gathered out of a stinky dumping ground and begun to plan methods to harm the speaker personally. The irony that the victim of the new Inquisition trial is an accomplished Italian scientist hasn't discouraged them, not even by epsilon. And you know, Galileo did his gravitational experiments in Pisa. Our victim of the postmodern Inquisition is also affiliated with the University of Pisa. I could probably go on...

Today, CERN has issued a totally shocking statement, CERN stands for diversity, in which some officials who didn't have the courage to sign their names seem to speak on behalf of almost all the European countries. Cheap filth, I assure you that over 95% of the citizens of the Czech Republic squarely stand on the side of the Italian scientist and against you. It's absolutely outrageous that you abused the name of my country to "sign" this disgusting piece. I know lots of names of the fascist bullies who plan to start terror against the Italian scientist. To preserve the very existence of science, you need to be totally eliminated from the institutional science, scumbags. Women in science can do much more than Galileo did in science – unless there's a catch, of course. Incidentally, you shouldn't be surprised that all traces of the Italian scientist have been erased from the conference website.
The Italian scientist has been retroactively disinvited from the conference after he had delivered the talk, the only talk that has made any sense over there! It's the classic The Commissar Vanishes all over again. The slides have been removed by the Stalinists to make sure that no one can find a hyperlink pointing to the content of the talk again and no one can download it or read it. It is not clear who wrote the despicable fatwa – which cripples the good name of CERN and makes it look like the twin sister of Daesh – but the Director General, Ms Fabiola Gianotti, simply has to be held responsible for the outrageous abuses of the CERN website against an individual respected member of the community that take place under her watch. If she personally allowed the CERN press releases to be abused in this way, I have a message for her: If you believe that you can't follow in the footsteps of those who were tried in Nuremberg, you are recklessly optimistic about your fate. CERN has to be cleaned of this culturally Marxist junk and if it turns out that it's impossible to do so, CERN has to be euthanized. I am ready to ask our PM – whom I don't like too much – to save some money by exiting CERN. Not that Czechs matter over there. But we could save some money and he will agree.

September 29, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

History of Physics at the IoP
This week saw a most enjoyable conference on the history of physics at the Institute of Physics in London. The IoP has had an active subgroup in the history of physics for many years, complete with its own newsletter, but this was the group’s first official workshop for a long while. It proved to be a most enjoyable and informative occasion; I hope it is the first of many to come.
The Institute of Physics at Portland Place in London (made famous by writer Ian McEwan in the novel ‘Solar’, as the scene of a dramatic clash between a brilliant physicist of questionable integrity and a Professor of Science Studies)

There were plenty of talks on what might be called ‘classical history’, such as Maxwell, Kelvin and the Inverse Square Law of Electrostatics (by Isobel Falconer of the University of St. Andrews) and Newton’s First Law – a History (by Paul Ranford of University College London), while the more socially-minded historian might have enjoyed talks such as Psychical and Optical Research: Between Lord Rayleigh’s Naturalism and Dualism (by Gregory Bridgman of the University of Cambridge) and The Paradigm Shift of the Physics–Religion–Unbelief Relationship from the Renaissance to the 21st Century (by Elisabetta Canetta of St Mary’s University).

Of particular interest to me were a number of excellent talks drawn from the history of 20th century physics, such as A Partial History of Cosmic Ray Research in the UK (by the leading cosmic ray physicist Alan Watson), The Origins and Development of Free-Electron Lasers in the UK (by Elaine Seddon of Daresbury Laboratory), When Condensed Matter Became King (by Joseph Martin of the University of Cambridge), and Symmetries: On Physical and Aesthetic Argument in the Development of Relativity (by Richard Staley of the University of Cambridge). The official conference programme can be viewed here.

My own talk, Interrogating the Legend of Einstein’s “Biggest Blunder”, was a brief synopsis of our recent paper on this topic, soon to appear in the journal Physics in Perspective. Essentially, our finding is that, despite recent doubts about the story, the evidence suggests that Einstein certainly did come to view his introduction of the cosmological constant term to the field equations as a serious blunder and almost certainly did declare the term his “biggest blunder” on at least one occasion.
Given his awareness of contemporaneous problems such as the age of the universe predicted by cosmologies without the term, this finding has some relevance to those of today’s cosmologists who seek to describe the recently-discovered acceleration in cosmic expansion without a cosmological constant. The slides for the talk can be found here.

I must admit I missed a trick at question time. Asked about other examples of ‘fudge factors’ that were introduced and later regretted, I forgot the obvious one. In 1900, Max Planck suggested that energy transfer between oscillators somehow occurs in small packets or ‘quanta’ of energy in order to successfully predict the spectrum of radiation from a hot body. However, he saw this as a mathematical device and was not at all supportive of the more general postulate of the ‘light quantum’ when it was proposed by a young Einstein in 1905. Indeed, Planck rejected the light quantum for many years.

All in all, a superb conference. It was also a pleasure to visit London once again. As always, I booked a cheap ’n’ cheerful hotel in the city centre, within walking distance of the conference. On my way to the meeting, I walked past Madame Tussauds and the Royal Academy of Music, and had breakfast at the tennis courts in Regent’s Park. What a city!

Walking past the Royal Academy on my way to the conference

Views of London over a quick dinner after the conference

ZapperZ - Physics and Physicists

Record 1200 Tesla, and then, BANG!
Hey, would you sacrifice your equipment just so you can break the record for the strongest magnetic field created in a lab? These people would. Speaking with IEEE Spectrum, lead researcher Shojiro Takeyama explained that his team was hoping to achieve a magnetic field that reached 700 Tesla (the unit of measurement for gauging the strength of a magnetic field). At that level, the generator would likely self-destruct, but when pushed to its limits the machine actually achieved a strength of 1,200 Tesla.
To put that in perspective, an MRI machine — which is the most intense indoor magnetic field most people would ever encounter — comes in at just three Tesla. Needless to say, the researchers’ machine didn’t survive the test, but it did land them in the record books.

Honestly, I don't think I can get away with doing that! Zz.

September 28, 2018

Lubos Motl - string vacua and pheno

Sleptons in Antarctica: 5-sigma evidence for stau-like high energy terrestrial rays
As Jitter pointed out, an extremely interesting astro-ph paper appeared yesterday:

The ANITA Anomalous Events as Signatures of a Beyond Standard Model Particle, and Supporting Observations from IceCube

The paper was promoted at Live Science and the Science Magazine:

Bizarre Particles Keep Flying Out of Antarctica's Ice, and They Might Shatter Modern Physics
Oddball particles tunneling through Earth could point to new physics

What's going on? The LHC collider hasn't found any evidence for supersymmetry before the deadlines that looked rather likely to the optimists – and not only optimists. Your humble correspondent has sent $100 to Adam Falkowski, with some logistic help by Tobias Sander. If SUSY had been found, the outcome of our bet would have been more exciting – $10,000 into my pocket. But the superpartners exist at some scale – everyone who is convinced that this statement is incorrect is a moron. Maybe an easier way to find evidence for SUSY is to ignore the $10 billion collider and buy an air ticket to the chilliest continent. That's how it looks according to the paper.

Derek Fox is the lead author. Steinn Sigurðsson is an important second author (in total, there are 7 authors). Do you understand why The Reference Frame is the only website that pays tribute to the beautiful (as in "Dirac") Icelandic character ð – reminiscent of the $$\partial$$ in the Dirac operator $$\gamma^\mu \partial_\mu$$ – in his name? ;-)

I've known Steinn over the Internet for many years. But according to this blog, his most famous achievement so far was that he proved that his Motl number was at most six. It means that there exists a chain of collaborators Motl-Dine-Farrar-Hogg-Blandford-Hernquist-Sigurdsson.
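A collaboration number like this is just a shortest-path length in the coauthorship graph, exactly like an Erdős number. A minimal sketch, with the graph hard-coded from the six links of the chain above (a real computation would build the graph from e.g. INSPIRE records):

```python
from collections import deque

# Toy coauthorship graph: just the edges of the chain quoted above.
edges = [
    ("Motl", "Dine"), ("Dine", "Farrar"), ("Farrar", "Hogg"),
    ("Hogg", "Blandford"), ("Blandford", "Hernquist"),
    ("Hernquist", "Sigurdsson"),
]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def collaboration_distance(start, goal):
    """Breadth-first search: number of coauthorship links from start to goal."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == goal:
            return dist
        for other in graph[person]:
            if other not in seen:
                seen.add(other)
                queue.append((other, dist + 1))
    return None  # not connected at all

print(collaboration_distance("Motl", "Sigurdsson"))  # → 6
```

With only the chain's edges present, the BFS distance is exactly 6; adding more coauthorship edges could only shorten it, which is why the chain proves the Motl number is *at most* six.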

Now, he's been very important in an actual scientific development that is said to provide us with some evidence for supersymmetry.

ANITA, some detector in Antarctica, has recorded something like two cosmic rays with EeV energies. Just to be sure, "eV" is the electronvolt and "E" stands for "exa" which is one million times "tera" (the thing in between is "peta"). So "exa" is $$10^{18}$$.
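For scale, an EeV is a macroscopic amount of energy carried by a single particle; converting it to joules is a one-liner (using the exact SI value of the electronvolt):

```python
# 1 eV in joules (exact since the 2019 SI redefinition of the elementary charge)
EV_IN_JOULES = 1.602176634e-19

# SI prefixes mentioned in the text: tera = 1e12, peta = 1e15, exa = 1e18
E_eV = 1e18                     # 1 EeV expressed in eV
E_joules = E_eV * EV_IN_JOULES

print(E_joules)                 # ≈ 0.16 J, for a single elementary particle
```

About 0.16 joules, roughly the kinetic energy of a slowly tossed pebble, concentrated in one particle.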

There have been other "exa-electronvolt" particles in the cosmic rays but dear Houston, we have a problem here. Cosmic rays should arrive from the Cosmos and, like Heaven and the sky, the Cosmos is above us. Instead, two events arrived from below, from the hell – they were going up.

Can cosmic rays penetrate through the Earth and land in the detector while going in the unusual direction "up"? Low-energy neutrinos surely can – they are almost invisible, like ghosts. But what about high-energy neutrinos, like EeV ones?

Mr Tau (right) and his silent, small but heavy superpartner.

Well, EeV is way above the electroweak scale, 240 GeV or so, and at these high energies, the electroweak symmetry is restored. One of the implications is that neutrinos start to behave like their siblings, the charged leptons – they interact comparably strongly. That really means "very strongly". High-energy neutrinos have virtually no chance to penetrate through thousands of kilometers of rock.

Fox et al. say that the probability of a Standard Model-based explanation for these two ANITA events – and a few seemingly analogous IceCube results – is below one millionth, i.e. the evidence for the Beyond the Standard Model physics is formally above 5 sigma.

And they identify a nice supersymmetric scenario that may explain the events smoothly. Instead of the conversion of a "tau neutrino to a tau" (which still may explain cosmic rays coming from empty space), they suggest that the particle flying through the Earth was a stau, the superpartner of the charged tau lepton.

Their stau $$\tilde \tau_R$$ – like in some regular GMSB (gauge-mediated supersymmetry breaking) models – is the NLSP (next-to-lightest superpartner); it is rather long-lived, and (when it hits a nucleon) it decays to the tau lepton $$\tau$$ (the same end product as if you had a tau neutrino from the heavens) plus the LSP, the truly invisible (lightest supersymmetric) particle that is a dark matter candidate – probably denoted not as $$\tilde \chi$$ but $$\tilde G$$ because it should be a gravitino in GMSB.
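To see why a long-lived NLSP could cross the planet at all, note that time dilation stretches the decay length enormously at EeV energies. A back-of-the-envelope sketch – the stau mass and proper lifetime below are purely illustrative placeholders, not numbers from the paper:

```python
# Relativistic decay length L = gamma * beta * c * tau  (beta ≈ 1 here)
C = 2.998e8            # speed of light, m/s

E_GeV = 1.0e9          # 1 EeV expressed in GeV
m_GeV = 500.0          # HYPOTHETICAL stau mass, GeV
tau_s = 1.0e-7         # HYPOTHETICAL proper lifetime, seconds

gamma = E_GeV / m_GeV  # Lorentz boost factor
decay_length_m = gamma * C * tau_s

print(gamma)                  # → 2000000.0
print(decay_length_m / 1e3)   # decay length in km, vs Earth's ~12,742 km diameter
```

Even with these made-up inputs the boosted decay length comes out of order tens of thousands of kilometers, i.e. longer than the chord through the Earth, which is the qualitative point: a boost factor of millions turns a microscopically short-lived particle into one that can traverse the planet.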

If you can offer an immediate explanation why the stau in these models interacts so weakly that it can get through the Earth, I will appreciate your crash course.

Right now, Fox is hungry for more data. I mean Derek Fox or the mammal. In the entertainment industry, Fox isn't hungry at all and it – acting on behalf of Disney – sold its stake in Sky to a hungry Comcast.

The n-Category Cafe

Exceptional Quantum Geometry and Particle Physics

It would be great if we could make sense of the Standard Model: the 3 generations of quarks and leptons, the 3 colors of quarks vs. colorless leptons, the way only the weak force notices the difference between left and right, the curious gauge group $\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1)$, the role of the Higgs boson, and so on. I can’t help but hope that all these facts are clues that we have not yet managed to interpret.

These papers may not be on the right track, but I feel a duty to explain them:

After all, the math is probably right. And they use the exceptional Jordan algebra, which I’ve already wasted a lot of time thinking about — so I’m in a better position than most to summarize what they’ve done.

Don’t get me wrong: I’m not claiming this paper is important for physics! I really have no idea. But it’s making progress on a quirky, quixotic line of thought that has fascinated me for years.

Here’s the main result. The exceptional Jordan algebra contains a lot of copies of 4-dimensional Minkowski spacetime. The symmetries of the exceptional Jordan algebra that preserve any one of these copies form a group…. which happens to be exactly the gauge group of the Standard Model!

Formally real Jordan algebras were invented by Jordan to serve as algebras of observables in quantum theory, but they also turn out to describe spacetimes equipped with a highly symmetrical causal structure. For example, $\mathfrak{h}_2(\mathbb{C})$, the Jordan algebra of $2 \times 2$ self-adjoint complex matrices, is the algebra of observables for a spin-$1/2$ particle — but it can also be identified with 4-dimensional Minkowski spacetime! This dual role of formally real Jordan algebras remains somewhat mysterious, though the connection is understood in this case.

When Jordan, Wigner and von Neumann classified formally real Jordan algebras, they found 4 infinite families and one exception: the exceptional Jordan algebra $\mathfrak{h}_3(\mathbb{O})$, consisting of $3 \times 3$ self-adjoint octonion matrices. Ever since then, physicists have wondered what this thing is good for.

Now Todorov and Dubois–Violette claim they’re getting the gauge group of the Standard Model from the symmetry group of the exceptional Jordan algebra by taking the subgroup that

1. preserves a copy of 10d Minkowski spacetime inside this Jordan algebra, and

2. also preserves a copy of the complex numbers inside the octonions — which is just what we need to pick out a copy of 4d Minkowski spacetime inside 10d Minkowski spacetime!

But let me explain this in more detail. First, some old stuff:

If you pick a unit imaginary octonion and call it $i$, you get a copy of the complex numbers inside the octonions $\mathbb{O}$. This lets us split $\mathbb{O}$ into $\mathbb{C} \oplus V$, where $V$ is a 3-dimensional complex Hilbert space. The subgroup of the automorphism group of the octonions that fixes $i$ is $\mathrm{SU}(3)$. This is the gauge group of the strong force. It acts on $\mathbb{C} \oplus V$ in exactly the way you’d need for a lepton and a quark.

The exceptional Jordan algebra $\mathfrak{h}_3(\mathbb{O})$ contains the Jordan algebra $\mathfrak{h}_2(\mathbb{O})$ of $2 \times 2$ self-adjoint octonion matrices in various ways. $\mathfrak{h}_2(\mathbb{O})$ can be identified with 10-dimensional Minkowski spacetime, with the determinant serving as the Minkowski metric. Picking a unit imaginary octonion $i$ then chooses a copy of $\mathfrak{h}_2(\mathbb{C})$ inside $\mathfrak{h}_2(\mathbb{O})$, and $\mathfrak{h}_2(\mathbb{C})$ can be identified with 4-dimensional Minkowski spacetime.
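To make the "determinant as Minkowski metric" statement concrete, write a general element of $\mathfrak{h}_2(\mathbb{O})$ with $t, x$ real and $y \in \mathbb{O}$, and compute its determinant:

```latex
a \;=\; \begin{pmatrix} t + x & y \\ \bar{y} & t - x \end{pmatrix},
\qquad
\det(a) \;=\; (t+x)(t-x) - y\bar{y} \;=\; t^2 - x^2 - |y|^2 .
```

So $\det$ is a quadratic form of signature $(1,9)$: one time direction $t$, and $1 + 8 = 9$ spatial directions coming from $x$ and the 8 real components of $y$. Restricting $y$ to the chosen copy of $\mathbb{C} \subset \mathbb{O}$ cuts this down to signature $(1,3)$, i.e. 4-dimensional Minkowski spacetime.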

All this is well-known to people who play these games. Now for the new part.

1) First, suppose we take the automorphism group of the exceptional Jordan algebra and look at the subgroup that preserves the splitting of $\mathbb{O}$ into $\mathbb{C} \oplus V$ for each entry of these octonion matrices. This subgroup is

$$\frac{\mathrm{SU}(3) \times \mathrm{SU}(3)}{\mathbb{Z}/3}$$

It’s not terribly hard to see why this might be true. We can take any element of $\mathfrak{h}_3(\mathbb{O})$ and split it into two parts using $\mathbb{O} = \mathbb{C} \oplus V$, getting a decomposition one can write as $\mathfrak{h}_3(\mathbb{O}) = \mathfrak{h}_3(\mathbb{C}) \oplus \mathfrak{h}_3(V)$. One copy of $\mathrm{SU}(3)$ acts by conjugation on $\mathfrak{h}_3(\mathbb{C})$ while another acts by conjugation on $\mathfrak{h}_3(V)$. These two actions commute. The center of $\mathrm{SU}(3)$ is $\mathbb{Z}/3$, consisting of diagonal matrices that are cube roots of the identity matrix. So, we get an inclusion of $\mathbb{Z}/3$ in the diagonal of $\mathrm{SU}(3) \times \mathrm{SU}(3)$, and this subgroup acts trivially on $\mathfrak{h}_3(\mathbb{O})$.

2) Next, take the subgroup of $(\mathrm{SU}(3) \times \mathrm{SU}(3))/(\mathbb{Z}/3)$ that also preserves a copy of $\mathfrak{h}_2(\mathbb{O})$ inside $\mathfrak{h}_3(\mathbb{O})$. This subgroup, Dubois-Violette and Todorov claim, is

$$\frac{\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1)}{\mathbb{Z}/6}$$

And this is the true gauge group of the Standard Model!

People often say the Standard Model has gauge group $\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1)$, which is okay, but this group has a $\mathbb{Z}/6$ subgroup that acts trivially on all particles—a fact that arises only because quarks have the exact charges they do! So, the ‘true’ gauge group of the Standard Model is the quotient $(\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1))/(\mathbb{Z}/6)$. And this is fundamental to the $\mathrm{SU}(5)$ grand unified theory—a well-known fact that John Huerta and I explained a while ago here. The point is that while $\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1)$ is not a subgroup of $\mathrm{SU}(5)$, its quotient by $\mathbb{Z}/6$ is.
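One can check this $\mathbb{Z}/6$ by brute force. The sketch below uses one common hypercharge convention ($Y(Q_L)=1/3$, $Y(u_R)=4/3$, $Y(d_R)=-2/3$, $Y(L_L)=-1$, $Y(e_R)=-2$; other normalizations rescale $Y$), and verifies that the order-6 element $(\omega 1_3, -1_2, -1)$ with $\omega = e^{2\pi i/3}$ acts trivially on every fermion representation of one generation:

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)   # center of SU(3): a cube root of unity
eps = -1.0                              # center of SU(2): -1
# U(1) element is e^{i*pi}; it acts on a hypercharge-Y field as e^{i*pi*Y}

# (phase from the SU(3) rep, phase from the SU(2) rep, hypercharge Y)
# A color triplet picks up omega, a color singlet picks up 1; likewise eps
# for weak doublets vs. 1 for weak singlets.
reps = {
    "Q_L (3,2, 1/3)": (omega, eps,  1/3),
    "u_R (3,1, 4/3)": (omega, 1.0,  4/3),
    "d_R (3,1,-2/3)": (omega, 1.0, -2/3),
    "L_L (1,2,-1)":   (1.0,   eps, -1.0),
    "e_R (1,1,-2)":   (1.0,   1.0, -2.0),
}

for name, (c_phase, w_phase, Y) in reps.items():
    total = c_phase * w_phase * cmath.exp(1j * cmath.pi * Y)
    assert abs(total - 1) < 1e-12, name   # trivial action on every rep
    print(f"{name}: total phase = {total.real:+.3f}")
```

The element has order lcm(3, 2, 2) = 6, and every representation sees the identity only because the hypercharges are exactly what they are; nudge any $Y$ and the asserts fail.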

I’ll admit, I don’t fully get how

$$\frac{\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1)}{\mathbb{Z}/6}$$

shows up inside

$$\frac{\mathrm{SU}(3) \times \mathrm{SU}(3)}{\mathbb{Z}/3}$$

as the subgroup that preserves an $\mathfrak{h}_2(\mathbb{O})$ inside $\mathfrak{h}_3(\mathbb{O})$.

I think it works like this. I described $\mathrm{SU}(3) \times \mathrm{SU}(3)$ one way, but there should be another essentially equivalent way to get two copies of $\mathrm{SU}(3)$ acting on $\mathfrak{h}_3(\mathbb{O})$. Namely, let the first copy act componentwise on each entry of your $3 \times 3$ octonionic matrix, and let the second act by conjugation on the whole matrix. In this alternative picture the $\mathbb{Z}/3$ subgroup lies wholly in the second copy of $\mathrm{SU}(3)$. Then, figure out those elements of $\mathrm{SU}(3) \times \mathrm{SU}(3)$ that preserve a copy of $\mathfrak{h}_2(\mathbb{O})$ inside $\mathfrak{h}_3(\mathbb{O})$: say, the matrices where the last row and last column vanish. All the elements of the first copy of $\mathrm{SU}(3)$ preserve this $\mathfrak{h}_2(\mathbb{O})$, because they act componentwise. But not all elements of the second copy do: only the block diagonal ones with a $2 \times 2$ block and a $1 \times 1$ block. The matrices in $\mathrm{SU}(3)$ with this block diagonal form look like

$\displaystyle{ \left( \begin{array}{cc} \alpha g & 0 \\ 0 & \alpha^{-2} \end{array} \right) }$

where $g \in \mathrm{SU}(2)$ and $\alpha \in \mathrm{U}(1)$. These form a group isomorphic to

$\displaystyle{ \frac{ \mathrm{SU}(2) \times \mathrm{U}(1) }{\mathbb{Z}/2} }$
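This block-diagonal description is easy to sanity-check numerically. Here is a sketch (the random-SU(2) parametrization and the sample point are my own choices, not anything from the post) verifying that the matrices above land in $\mathrm{SU}(3)$, and that $(g, \alpha)$ and $(-g, -\alpha)$ embed to the same matrix, which exhibits the $\mathbb{Z}/2$ being quotiented by:

```python
import numpy as np

# Numerical sanity check of the block-diagonal subgroup claim.
rng = np.random.default_rng(0)

def random_su2(rng):
    # [[a, -conj(b)], [b, conj(a)]] with |a|^2 + |b|^2 = 1 is a generic SU(2) element
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    a, b = v[0] + 1j * v[1], v[2] + 1j * v[3]
    return np.array([[a, -b.conjugate()], [b, a.conjugate()]])

def embed(g, alpha):
    # the block-diagonal matrix diag(alpha * g, alpha**-2) from the text
    m = np.zeros((3, 3), dtype=complex)
    m[:2, :2] = alpha * g
    m[2, 2] = alpha**-2
    return m

g = random_su2(rng)
alpha = np.exp(1j * rng.uniform(0, 2 * np.pi))
m = embed(g, alpha)

print("unitary:", np.allclose(m @ m.conj().T, np.eye(3)))
print("det = 1:", np.isclose(np.linalg.det(m), 1.0))   # so m lies in SU(3)
print("Z/2 identification:", np.allclose(embed(-g, -alpha), m))
```

The determinant works out because $\det(\alpha g)\cdot\alpha^{-2} = \alpha^2 \det(g)\cdot\alpha^{-2} = 1$, and the last check shows why the quotient is by $\mathbb{Z}/2$ and not something bigger.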

If all this works out, it’s very pretty: the 2 and the 1 in $\mathrm{SU}(2) \times \mathrm{U}(1)$ arise from the choice of a $2 \times 2$ block and $1 \times 1$ block in $\mathfrak{h}_3(\mathbb{O})$… which is also the choice that lets us find Minkowski spacetime inside $\mathfrak{h}_3(\mathbb{O})$.

But I need to check some things, like how we get the $\mathbb{Z}/6$.

September 27, 2018

Axel Maas - Looking Inside the Standard Model

Unexpected connections
The history of physics is full of stuff developed for one purpose ending up being useful for an entirely different purpose. Quite often it also failed its original purpose miserably, but became paramount for the new one. More recent examples are the first attempts to describe the weak interactions, which ended up describing the strong one. Also, string theory was originally invented for the strong interactions, and failed for this purpose. Now, well, it is the popular-science star, and a serious candidate for quantum gravity.

But failing is not required for having a second use. And we are just starting to discover a second use for our investigations of grand-unified theories. There our research used a toy model. We did this because we wanted to understand a mechanism, and because doing the full story would have been much too complicated before we knew whether the mechanism works. But it turns out this toy theory may be an interesting theory in its own right.

And it may be interesting for a very different topic: dark matter. This is a hypothetical type of matter of which we see a lot of indirect evidence in the universe. But we are still mystified by what it is (and whether it is matter at all). Of course, such mysteries draw our interest like a flame draws the moth. Hence, our group in Graz is starting to push in this direction as well, curious about what is going on. For now, we follow the most probable explanation: that there are additional particles making up dark matter. Then there are two questions: What are they? And do they interact with the rest of the world, and if so, how? Aside from gravity, of course.

Next week I will go to a workshop in which new ideas on dark matter will be explored, to get a better understanding of what is known. And in the course of preparing for this workshop I noticed this connection. I will actually present this idea at the workshop, as it forms a new class of possible explanations of dark matter. Perhaps not the right one, but at the current time one as plausible as many others.

And here is how it works. Theories of the grand-unified type were for a long time expected to have a lot of massless particles. This was not bad for their original purpose, as we know quite a few such particles, like the photon and the gluons. However, our results showed that, with an improved treatment and a shift in paradigm, this is not always true. At least some of these theories do not have massless particles.

But dark matter needs to be massive to influence stars and galaxies gravitationally. And, except for very special circumstances, there should not be additional massless dark particles, because otherwise the massive ones could decay into the massless ones. And then the mass is gone, and this does not work. That is the reason why such theories had been excluded. But with our new results, they become feasible. Even more, we have a lot of indirect evidence that dark matter is not just a single, massive particle. Rather, it needs to interact with itself, and there could indeed be many different dark matter particles. After all, if there is dark matter, it makes up four times more stuff in the universe than everything we can see. And what we see consists of many particles, so why should dark matter not do so as well? And this is also realized in our model.

And this is how it works. The scenario I will describe (you can download my talk already now, if you want to look for yourself, though it is somewhat technical) features two different types of stable dark matter. Furthermore, they interact. And the great thing about our approach is that we can calculate this quite precisely, giving us a chance to make predictions. Still, we need to do this calculation to make sure that everything works with what astrophysics tells us. Moreover, this setup gives us two additional particles, which we can couple to the Higgs through a so-called portal. Again, we can calculate this, and how everything comes together. This allows us to test the model not only by astronomical observations, but at CERN. This gives the basic idea. Now we need to do all the detailed calculations. I am quite excited to try this out :) - so stay tuned, whether it actually makes sense. Or whether the model will have to wait for another opportunity.

September 26, 2018

The n-Category Cafe

A Communal Proof of an Initiality Theorem

One of the main reasons I’m interested in type theory in general, and homotopy type theory (HoTT) in particular, is that it has categorical semantics. More precisely, there is a correspondence between (1) type theories and (2) classes of structured categories, such that any proof in a particular type theory can be interpreted into any category with the corresponding structure. I wrote a lot about type theory from this perspective in The Logic of Space. The basic idea is that we construct a particular structured category $\mathrm{Syn}$ out of the syntax of the type theory, and prove that it is the initial such category. Then we can interpret any syntactic object $A$ in a structured category $C$ by regarding $A$ as living in $\mathrm{Syn}$ and applying the unique structured functor $\mathrm{Syn}\to C$.

Unfortunately, we don’t currently have any very general definitions of what “a type theory” is, what the “corresponding class of structured categories” is, or a very general proof of this “initiality theorem”. The idea of such proofs is easy — just induct over the construction of syntax — but its realization in practice can be long and tedious. Thus, people are understandably reluctant to take the time and space to write out such a proof explicitly, when “everyone knows” how the proof should go and probably hardly anyone would really read such a proof in detail anyway. This is especially true for dependent type theory, which is qualitatively more complicated in various ways than non-dependent type theories; to my knowledge only one person (Thomas Streicher) has ever written out anything approaching a complete proof of initiality for a dependent type theory.

There is currently some disagreement in the HoTT community over how much of a problem this is. On one side, the late Vladimir Voevodsky argued that it is completely unacceptable, delayed the publication of his seminal model of type theory in simplicial sets because of his dissatisfaction with the situation, and spent the last years of his life working on the problem. (Others, less dogmatic in philosophy, are nevertheless also working on the problem — specifically, attempting to give a general definition of “type theory” and prove a general initiality theorem for all such “type theories”.) On the other side, plenty of people point out reasonably that functorial semantics has been well-understood for decades, and why should we worry so much about a particular instance of it all of a sudden now? Unfortunately, the existence of this disagreement is not good for the perception of our discipline among other mathematicians.

In my experience, arguments about the importance of initiality often tend to devolve into disagreements about questions like: Is Streicher’s proof hard to understand? Does it generalize “easily” to other type constructors? How “easy” is “easily”? What kinds of type constructors? Is it hard to deal with variable binding? Is the categorical structure really “exactly” the same as the type-theoretic structure? Where is the “hard part” of an initiality proof? Is there even a “hard part” at all? Plenty of people have opinions about these questions, but for most of us these opinions are not based on actual experience of trying to prove such a theorem.

Last month at the nForum, Richard Williamson suggested the (in hindsight obvious) solution: let’s get a bunch of people together and work together to write out a complete proof of an initiality theorem, in modern language, for a basic uncomplicated dependent type theory. If we have enough contributors to divide up the work, the verbosity and tedium shouldn’t be overwhelming. We can use the nLab wikilink features to organize the proof in a “drill-down” manner so that a reader can get a high level idea and then delve into as many or as few details as desired. Hopefully, this will increase overall public awareness of how such proofs work, so that they seem less “magic”. Moreover, all the contributors will get some actual experience “in the trenches” with an initiality proof, thereby hopefully leading us to more informed opinions.

I don’t view such a project as a replacement for proving one general theorem, but as a complementary effort, whose goals are primarily understanding and exposition. However, if it’s successful, the result will be a complete initiality theorem for at least one dependent type theory; and we can add as many bells and whistles to this theory as we have time and energy for, hopefully in a relatively modular way.

We had some preliminary discussion about this project at the nForum here, at which enough people expressed interest in participating that I think the project can get off the ground. But the more the merrier! If you’d like to help out, even just a little bit, just add your name to the list of participants on this nLab page and join the conversation when it begins. (Some other people have informally told me they’re interested, but I didn’t keep a record of their names, so I didn’t add them to the list; if you fall in that category, please add yourself!)

I’m not sure yet how we will do most of our communication and coordination. We’ll probably have one or more nForum threads for discussion. I think it might be nice to have some scheduled videoconference meetings for those who can make it, especially during the early stages when we’ll have to make various decisions that will affect the overall course of the project; but I’m not wedded to that if others aren’t interested. Most of the work will probably be individual people writing out proofs of inductive cases on nLab pages.

Some of the decisions we’ll have to make at the beginning include:

• What type theory should we aim for as a first target? We can always add more to it later, so it should be something fairly uncomplicated, but nontrivial enough to exhibit the interesting features. For instance, I think it should certainly have $\Pi$-types. What about universes?

• Exactly how should we present the syntax? In particular, should we represent variable binding with named variables, de Bruijn indices, or some other method? Should all terms be fully annotated?

• What categorical structure should we use as the substrate? Options include contextual categories (a.k.a. “C-systems”), categories with families, split comprehension categories, etc.

• How should we structure the proof? The questions here are hard to describe concisely, but for instance one of them was mentioned by Peter Lumsdaine at the nForum thread: Streicher phrases the induction using an “existential quantifier” for interpretation of contexts, but it is arguably easier to use a “universal quantifier” in the same place.

Feel free to use the comments of this post to express opinions about any of these, or about the project overall. My current plan is to wait a couple weeks to give folks a chance to sign up, then “officially” start making plans as a group.

ZapperZ - Physics and Physicists

How Fast Is The Photoelectric Effect?
Every student who studied modern physics in an undergraduate General Physics course would have encountered the photoelectric effect. It is a phenomenon that has a special place in the history of physics, and the theoretical description of this phenomenon gave Einstein his Nobel Prize.

So one would think that this is a done deal already, and we should know all there is to know about it. In some sense, we do. We know enough about it that we have expanded this phenomenon to be included in a more general phenomenon called photoemission. We use this phenomenon to study many things, including band structure of materials. So it is very well-known.

Yet, as with so many things in physics, the more we study it, the more we want to know its minute details. In this case, the current study is on how fast an electron is emitted from a material once light impinges upon it. In other words, from the moment a photon is absorbed, how quickly is the electron liberated from the material?

This is not that easy to answer because, well, one can already guess at how one would determine (i) the exact time when a photon is absorbed into a material, and (ii) the exact time when an electron is liberated due to that absorbed photon. On top of that, this may be a very fast process, so how does one measure a time scale that is almost instantaneous?

The authors of this latest paper[1] came up with a very ingenious method to determine this, and in the process, they have elucidated even more the various stages of what is involved in the photoelectric effect. But before we continue, let's get one thing very clear here.

The "photoelectric effect" that we know and love, and the one that Millikan studied, is the phenomenon whereby UV light is shone onto a metallic surface (cathode). We know now that this is an emission process of electrons coming from the metal's conduction band. This is important because, as this new study shows, this process is different from the emission from core levels (i.e. not from the continuous conduction band). Those of us who have done photoemission work using both UV and x-rays can attest to such differences.

The experiment in this report was done on a tungsten surface, or more specifically, W(110) surface. The hard UV light that was used allowed them to get photoemission from the conduction band and a core-level state.

What they found was that from the time that a photon is absorbed to the moment that an electron is emitted, the process takes ~45 as for a conduction electron, while for a core-level electron it takes ~100 as.

{as = attosecond = 1 x 10^(-18) second}

So the emission from core levels takes more than twice as long to occur. In their analysis, the authors stressed this conclusion:

These findings highlight that proper accounting for the initial creation, origin, transport and scattering of electrons is imperative for the proper description of the photoelectric effect.

Bill Spicer's 3-step model of the photoemission process certainly highlighted the fact that it isn't a simple process. This paper not only reinforces that, but also includes the effect of surface states on the emission time, and thus possibly on other properties of the emitted photoelectron.

There are many things in physics about which we know a lot. But these are also areas in which we continue to dig deeper to find out even more. There will never be a point where we know everything there is to know, even with established ideas and phenomena.

Zz.

[1] M. Ossiander et al., Nature 561, 374 (2018). https://www.nature.com/articles/s41586-018-0503-6
Summary of this work can be found here.

September 25, 2018

Sean Carroll - Preposterous Universe

Atiyah and the Fine-Structure Constant

Sir Michael Atiyah, one of the world’s greatest living mathematicians, has proposed a derivation of α, the fine-structure constant of quantum electrodynamics. A preprint is here. The math here is not my forte, but from the theoretical-physics point of view, this seems misguided to me.

(He’s also proposed a proof of the Riemann hypothesis; I have zero insight to give there.)

Caveat: Michael Atiyah is a smart cookie and has accomplished way more than I ever will. It’s certainly possible that, despite the considerations I mention here, he’s somehow onto something, and if so I’ll join in the general celebration. But I honestly think what I’m saying here is on the right track.

In quantum electrodynamics (QED), α tells us the strength of the electromagnetic interaction. Numerically it’s approximately 1/137. If it were larger, electromagnetism would be stronger, atoms would be smaller, etc; and inversely if it were smaller. It’s the number that tells us the overall strength of QED interactions between electrons and photons, as calculated by diagrams like these.
As Atiyah notes, in some sense α is a fundamental dimensionless numerical quantity like e or π. As such it is tempting to try to “derive” its value from some deeper principles. Arthur Eddington famously tried to derive exactly 1/137, but failed; Atiyah cites him approvingly.

But to a modern physicist, this seems like a misguided quest. First, because renormalization theory teaches us that α isn’t really a number at all; it’s a function. In particular, it’s a function of the total amount of momentum involved in the interaction you are considering. Essentially, the strength of electromagnetism is slightly different for processes happening at different energies. Atiyah isn’t even trying to derive a function, just a number.

This is basically the objection given by Sabine Hossenfelder. But to be as charitable as possible, I don’t think it’s absolutely a knock-down objection. There is a limit we can take as the momentum goes to zero, at which point α is a single number. Atiyah mentions nothing about this, which should give us skepticism that he’s on the right track, but it’s conceivable.

More important, I think, is the fact that α isn’t really fundamental at all. The Feynman diagrams we drew above are the simple ones, but for any given process there are also much more complicated ones, e.g.

And in fact, the total answer we get depends not only on the properties of electrons and photons, but on all of the other particles that could appear as virtual particles in these complicated diagrams. So what you and I measure as the fine-structure constant actually depends on things like the mass of the top quark and the coupling of the Higgs boson. Again, nowhere to be found in Atiyah’s paper.

Most important, in my mind, is that not only is α not fundamental, QED itself is not fundamental. It’s possible that the strong, weak, and electromagnetic forces are combined into some Grand Unified theory, but we honestly don’t know at this point. However, we do know, thanks to Weinberg and Salam, that the weak and electromagnetic forces are unified into the electroweak theory. In QED, α is related to the “elementary electric charge” e by the simple formula α = e²/4π. (I’ve set annoying things like Planck’s constant and the speed of light equal to one. And note that this e has nothing to do with the base of natural logarithms, e = 2.71828.) So if you’re “deriving” α, you’re really deriving e.

But e is absolutely not fundamental. In the electroweak theory, we have two coupling constants, g and g’ (for “weak isospin” and “weak hypercharge,” if you must know). There is also a “weak mixing angle” or “Weinberg angle” θ_W relating how the original gauge bosons get projected onto the photon and W/Z bosons after spontaneous symmetry breaking. In terms of these, we have a formula for the elementary electric charge: e = g sin θ_W. The elementary electric charge isn’t one of the basic ingredients of nature; it’s just something we observe fairly directly at low energies, after a bunch of complicated stuff happens at higher energies.
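One can run these two relations forward as a back-of-the-envelope check. The numerical inputs below are approximate values appropriate near the Z mass, quoted from memory, so treat them as assumptions; the point is that they reproduce a fine-structure constant near 1/128, the running value at that energy, rather than the low-energy 1/137:

```python
import math

# Approximate couplings near the Z mass (assumptions, from memory):
g = 0.652              # SU(2) weak-isospin coupling
sin2_theta_w = 0.2312  # sin^2 of the weak mixing angle

e = g * math.sqrt(sin2_theta_w)      # e = g sin(theta_W)
alpha = e**2 / (4 * math.pi)         # alpha = e^2 / (4 pi)
print(f"1/alpha ≈ {1 / alpha:.1f}")  # near 128 at this scale, not 137
```

That the answer lands near 128 rather than 137 is itself an illustration of the earlier point that α is a function of energy, not a single number.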

Not a whit of this appears in Atiyah’s paper. Indeed, as far as I can tell, there’s nothing in there about electromagnetism or QED; it just seems to be a way to calculate a number that is close enough to the measured value of α that he could plausibly claim it’s exactly right. (Though skepticism has been raised by people trying to reproduce his numerical result.) I couldn’t see any physical motivation for the fine-structure constant to have this particular value.

These are not arguments why Atiyah’s particular derivation is wrong; they’re arguments why no such derivation should ever be possible. α isn’t the kind of thing for which we should expect to be able to derive a fundamental formula, it’s a messy low-energy manifestation of a lot of complicated inputs. It would be like trying to derive a fundamental formula for the average temperature in Los Angeles.

Again, I could be wrong about this. It’s possible that, despite all the reasons why we should expect α to be a messy combination of many different inputs, some mathematically elegant formula is secretly behind it all. But knowing what we know now, I wouldn’t bet on it.

September 24, 2018

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

September 22, 2018

Clifford V. Johnson - Asymptotia

Jumpers, Sweaters, and So Forth…

If you've been following on instagram you'll know that I spent some time over the last weeks working on an illustration that was commissioned by a physics magazine. (Feels odd saying that, commissioned, but that's exactly what happened. Apparently I'm able to add professional illustrator to my CV now. Huh.) Anyway, the illustration will show the interior of a lab. I'll let you know more about it closer to publication. Much of the focus was on the people, and for reasons that will become clear, I did a bit of a throwback to the 80s, and so tried to reflect that period somewhat, old computers and ghastly sweaters and all. Here's a sequence of stages of a corner of the work (click on it for a larger view):

The post Jumpers, Sweaters, and So Forth… appeared first on Asymptotia.

September 20, 2018

John Baez - Azimuth

Patterns That Eventually Fail

Sometimes patterns can lead you astray. For example, it’s known that

$\displaystyle{ \mathrm{li}(x) = \int_0^x \frac{dt}{\ln t} }$

is a good approximation to $\pi(x),$ the number of primes less than or equal to $x.$ Numerical evidence suggests that $\mathrm{li}(x)$ is always greater than $\pi(x).$ For example,

$\mathrm{li}(10^{12}) - \pi(10^{12}) = 38,263$

and

$\mathrm{li}(10^{24}) - \pi(10^{24}) = 17,146,907,278$
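Differences like these are easy to probe at computationally cheap sizes. A quick numerical sketch (using mpmath's `li` and sympy's `primepi`; the first sign change lies far beyond anything reachable this way):

```python
import mpmath
from sympy import primepi

# Check that li(x) stays above pi(x) at easily computable sizes.
diffs = {}
for k in (6, 7, 8):
    x = 10**k
    diffs[k] = float(mpmath.li(x)) - int(primepi(x))
    print(f"li(10^{k}) - pi(10^{k}) ≈ {diffs[k]:.0f}")
```

Each difference comes out positive, consistent with the pattern above, even though Littlewood tells us the sign must eventually flip.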

But in 1914, Littlewood heroically showed that in fact, $\mathrm{li}(x) - \pi(x)$ changes sign infinitely many times!

This raised the question: when does $\pi(x)$ first exceed $\mathrm{li}(x)$? In 1933, Littlewood’s student Skewes showed, assuming the Riemann hypothesis, that it must do so for some $x$ less than or equal to

$\displaystyle{ 10^{10^{10^{34}}} }$

Later, in 1955, Skewes showed without the Riemann hypothesis that $\pi(x)$ must exceed $\mathrm{li}(x)$ for some $x$ smaller than

$\displaystyle{ 10^{10^{10^{964}}} }$

By now this bound has been improved enormously. We now know the two functions cross somewhere near $1.397 \times 10^{316},$ but we don’t know if this is the first crossing!

All this math is quite deep. Here is something less deep, but still fun.

You can show that

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, \frac{\sin \left(\frac{t}{301}\right)}{\frac{t}{301}} \, dt = \frac{\pi}{2} }$

and so on.

It’s a nice pattern. But this pattern doesn’t go on forever! It lasts a very, very long time… but not forever.

More precisely, the identity

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }$

holds when

$n < 9.8 \cdot 10^{42}$

but not for all $n.$ At some point it stops working and never works again. In fact, it definitely fails for all

$n > 7.4 \cdot 10^{43}$

The explanation

The integrals here are a variant of the Borwein integrals:

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, dx= \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\frac{\sin(x/3)}{x/3} \, dx = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\, \frac{\sin(x/3)}{x/3} \, \frac{\sin(x/5)}{x/5} \, dx = \frac{\pi}{2} }$

where the pattern continues until

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots\frac{\sin(x/13)}{x/13} \, dx = \frac{\pi}{2} }$

but then fails:

$\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots \frac{\sin(x/15)}{x/15} \, dx \approx \frac \pi 2 - 2.31\times 10^{-11} }$

I never understood this until I read Greg Egan’s explanation, based on the work of Hanspeter Schmid. It’s all about convolution, and Fourier transforms:

Suppose we have a rectangular pulse, centred on the origin, with a height of 1/2 and a half-width of 1.

Now, suppose we keep taking moving averages of this function, again and again, with the average computed in a window of half-width 1/3, then 1/5, then 1/7, 1/9, and so on.

There are a couple of features of the original pulse that will persist completely unchanged for the first few stages of this process, but then they will be abruptly lost at some point.

The first feature is that F(0) = 1/2. In the original pulse, the point (0,1/2) lies on a plateau, a perfectly constant segment with a half-width of 1. The process of repeatedly taking the moving average will nibble away at this plateau, shrinking its half-width by the half-width of the averaging window. So, once the sum of the windows’ half-widths exceeds 1, at 1/3+1/5+1/7+…+1/15, F(0) will suddenly fall below 1/2, but up until that step it will remain untouched.

In the animation below, the plateau where F(x)=1/2 is marked in red.

The second feature is that F(–1)=F(1)=1/4. In the original pulse, we have a step at –1 and 1, but if we define F here as the average of the left-hand and right-hand limits we get 1/4, and once we apply the first moving average we simply have 1/4 as the function’s value.

In this case, F(–1)=F(1)=1/4 will continue to hold so long as the points (–1,1/4) and (1,1/4) are surrounded by regions where the function has a suitable symmetry: it is equal to an odd function, offset and translated from the origin to these centres. So long as that’s true for a region wider than the averaging window being applied, the average at the centre will be unchanged.

The initial half-width of each of these symmetrical slopes is 2 (stretching from the opposite end of the plateau and an equal distance away along the x-axis), and as with the plateau, this is nibbled away each time we take another moving average. And in this case, the feature persists until 1/3+1/5+1/7+…+1/113, which is when the sum first exceeds 2.

In the animation, the yellow arrows mark the extent of the symmetrical slopes.
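Egan’s picture is easy to play with numerically. Here is a sketch using window half-widths of my own choosing (0.4, 0.35 and 0.3, rather than 1/3, 1/5, …, so that the drop in F(0) is large enough to see at double precision): F(0) stays at exactly 1/2 while the half-widths sum to at most 1, and falls below it on the step that pushes the sum past 1.

```python
import numpy as np

# Repeated moving averages of a rectangular pulse (height 1/2, half-width 1).
# Window half-widths are my own choice: after 0.4 and 0.35 they sum to
# 0.75 <= 1, so F(0) is untouched; adding 0.3 pushes the sum to 1.05 > 1.
dx = 1e-3
x = np.arange(-3.0, 3.0, dx)
F = np.where(np.abs(x) <= 1, 0.5, 0.0)
center = np.argmin(np.abs(x))  # grid index of x = 0

f0_history = []
for w in (0.4, 0.35, 0.3):
    window = np.ones(int(round(2 * w / dx)) + 1)
    window /= window.sum()                  # normalized boxcar = moving average
    F = np.convolve(F, window, mode="same")
    f0_history.append(F[center])
    print(f"after window of half-width {w}: F(0) = {F[center]:.8f}")
```

The same mechanism with windows 1/3, 1/5, …, 1/15 produces the Borwein failure, but there the drop is around 10⁻¹¹ and gets lost in discretization error, which is why the example uses larger windows.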

OK, none of this is difficult to understand, but why should we care?

Because this is how Hanspeter Schmid explained the infamous Borwein integrals:

∫sin(t)/t dt = π/2
∫sin(t/3)/(t/3) × sin(t)/t dt = π/2
∫sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2

∫sin(t/13)/(t/13) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2

But then the pattern is broken:

∫sin(t/15)/(t/15) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2

Here these integrals are from t=0 to t=∞. And Schmid came up with an even more persistent pattern of his own:

∫2 cos(t) sin(t)/t dt = π/2
∫2 cos(t) sin(t/3)/(t/3) × sin(t)/t dt = π/2
∫2 cos(t) sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2

∫2 cos(t) sin(t/111)/(t/111) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2

But:

∫2 cos(t) sin(t/113)/(t/113) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2

The first set of integrals, due to Borwein, correspond to taking the Fourier transforms of our sequence of ever-smoother pulses and then evaluating F(0). The Fourier transform of the sinc function:

sinc(w t) = sin(w t)/(w t)

is proportional to a rectangular pulse of half-width w, and the Fourier transform of a product of sinc functions is the convolution of their transforms, which in the case of a rectangular pulse just amounts to taking a moving average.

Schmid’s integrals come from adding a clever twist: the extra factor of 2 cos(t) shifts the integral from the zero-frequency Fourier component to the sum of its components at angular frequencies –1 and 1, and hence the result depends on F(–1)+F(1)=1/2, which as we have seen persists for much longer than F(0)=1/2.

• Hanspeter Schmid, Two curious integrals and a graphic proof, Elem. Math. 69 (2014) 11–17.

I asked Greg if we could generalize these results to give even longer sequences of identities that eventually fail, and he showed me how: you can just take the Borwein integrals and replace the numbers 1, 1/3, 1/5, 1/7, … by some sequence of positive numbers

$1, a_1, a_2, a_3 \dots$

The integral

$\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(a_1 x)}{a_1 x} \, \frac{\sin(a_2 x)}{a_2 x} \cdots \frac{\sin(a_n x)}{a_n x} \, dx }$

will then equal $\pi/2$ as long as $a_1 + \cdots + a_n \le 1,$ but not when it exceeds 1. You can see a full explanation on Wikipedia:

• Wikipedia, Borwein integral: general formula.

As an example, I chose the integral

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt }$

which equals $\pi/2$ if and only if

$\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} \le 1 }$

Thus, the identity holds if

$\displaystyle{ \sum_{k=1}^n \frac{1}{100 k} \le 1 }$

However,

$\displaystyle{ \sum_{k=1}^n \frac{1}{k} \le 1 + \ln n }$

so the identity holds if

$\displaystyle{ \frac{1}{100} (1 + \ln n) \le 1 }$

or

$\ln n \le 99$

or

$n \le e^{99} \approx 9.8 \cdot 10^{42}$

On the other hand, the identity fails if

$\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} > 1 }$

so it fails if

$\displaystyle{ \sum_{k=1}^n \frac{1}{101 k} > 1 }$

However,

$\displaystyle{ \sum_{k=1}^n \frac{1}{k} \ge \ln n }$

so the identity fails if

$\displaystyle{ \frac{1}{101} \ln n > 1 }$

or

$\displaystyle{ \ln n > 101}$

or

$\displaystyle{n > e^{101} \approx 7.4 \cdot 10^{43} }$

With a little work one could sharpen these estimates considerably, though it would take more work to find the exact value of $n$ at which

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }$

first fails.
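As a sketch of one such sharpening (this assumes only the standard asymptotics $H_n \approx \ln n + \gamma$, with $\gamma$ the Euler–Mascheroni constant, and gives an estimate rather than the exact answer), one can locate the crossing point of the sum numerically:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

# Use the exact identity 1/(100k+1) = 1/(100k) - 1/(100k(100k+1)):
# sum_{k=1}^n 1/(100k+1) = H_n/100 - sum_{k=1}^n 1/(100k(100k+1)).
# The correction series converges, so truncating it is harmless here.
correction = sum(1.0 / (100 * k * (100 * k + 1)) for k in range(1, 100001))

# Setting the sum equal to 1 and using H_n ~ ln(n) + gamma gives
# ln(n) = 100 * (1 + correction) - gamma, which lands safely between
# the crude bounds 99 < ln(n) < 101 derived above.
log_n_star = 100 * (1 + correction) - GAMMA
print(log_n_star)
```

This suggests the identity first fails near $n \approx e^{99.44} \approx 1.5 \cdot 10^{43}$, though pinning down the exact integer would require more careful bookkeeping of the error terms.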

September 10, 2018

Lubos Motl - string vacua and pheno

Why string theory is quantum mechanics on steroids
In many previous texts, most recently in the essay posted two blog posts ago, I expressed the idea that string theory may be interpreted as the wisdom of quantum mechanics that is taken really seriously – and that is applied to everything, including the most basic aspects of the spacetime, matter, and information.

People like me are impressed by the power of string theory because it really builds on quantum mechanics in a critical way to deduce things that would have been impossible before. On the contrary, morons typically dislike string theory because their mezzoscopic peabrains are already stretched to the limit when they think about quantum mechanics – while string theory requires the stretching to go beyond these limits. Peabrains unavoidably crack and morons, writing things that are not even wrong about their trouble with physics, end up lost in math.

Other physicists have also made the statement – usually in less colorful ways – that string theory is quantum mechanics on steroids. It may be a good idea to explain what all of us mean – why string theory depends on quantum mechanics so much and why the power of quantum mechanics is given the opportunity to achieve some new amazing things within string theory.

At the beginning, I must say that the non-experts (including many pompous fools who call themselves "experts") usually overlook the whole "beef" of string theory just like they overlook the "beef" of quantum mechanics.

They imagine that quantum mechanics "is" a new equation, Schrödinger's equation, that plays the same role as Newton's, Maxwell's, Einstein's, and other equations. But quantum mechanics is much more – and much more universal and revolutionary – than another addition to classical physics. The actual heart of quantum mechanics is that the objects in its equations are connected to the observations very differently than the classical counterparts have been.

In the same way, they imagine that string theory is a theory of a new random dynamical object, a rubber band, and they imagine either downright classical vibrating strings or quantum mechanical strings that just don't differ from other quantum mechanical objects. But this understanding doesn't go beyond the (unavoidably oversimplified) name of string theory. If you analyze the composition of the term "string theory" as a linguist, you may think it's just a "theory of some strings". But that's not really the lesson one should draw. The real lesson is that if certain operations are done well with particular things, one ends with some amazing set of equations that may explain lots of things about the Universe.

Strings are exceptionally powerful – and only exceptionally powerful – at the quantum level. And the point of string theory isn't that it's a theory of another object. The point is that string theory is special among theories that would initially look "analogous".

Why is it special? And why is the magic of string theory so intertwined with quantum mechanics?

Discrete types of Nature's building blocks

For centuries, people knew something about chemistry. Matter around us is made of compounds which are mixtures of elements – such as hydrogen, helium, lithium, and I am sure you have memorized the rest. The number of types of atoms around us is finite. If arbitrarily large nuclei were allowed or stable, it would be countably infinite. But the number would still be discrete – not continuous.

For about a century, people have known that the elements are made out of identical atoms. Each element has its own kind of atom. The concept of atoms was first promoted by Democritus in ancient Greece, but in chemistry, atoms became more specific.

Sometime in the late 19th and early 20th century, people began to understand that the atom isn't as indivisible as its Greek name suggested. It is composed of a dense nucleus and electrons that live somewhere around the nucleus. The nucleus was later found to be composed of protons and neutrons. The quantum mechanics of 1925 allowed physicists to study the quantized motion of electrons around the nuclei – and the motion of the electrons is the crucial thing that determines the energy levels of all atoms and, consequently, their chemical properties.

In the 1960s, protons and neutrons were found to be composite as well. First, matter was composed of atoms – different kinds of building blocks for every element. Later, matter was reduced to bound states of electrons, protons, and neutrons. Later still, protons and neutrons were replaced with quarks, while electrons remained and became an important example of leptons, a group of fermions that is considered "on par" with quarks. The Standard Model deals with fermions, namely quarks and leptons, and bosons, namely the gauge bosons and the Higgs boson. The bosons mediate forces between all the fermions (and between bosons themselves).

But even in this "nearly final" picture, there are still finitely many – but relatively numerous – species of elementary particles. Their number is slightly lower than the number of atoms that were considered indivisible a century earlier, but the difference isn't too big – neither qualitatively nor quantitatively. We have dozens of types of basic "atoms" or "elementary particles", and each of them must be equipped with some properties (yes, the properties of elementary particles in the Standard Model look more precise and fundamental than the properties of atoms of the elements used to). The different particle species amount to many independent assumptions about Nature that have to be added to the mix to build a viable theory.

Can we do better? Can we derive the species from a smaller number of assumptions – and from one kind of matter?

String theory – let's assume that Nature is described by a weakly-coupled heterotic string theory (closed strings only), to make it simpler – describes all elementary particles, bosons and fermions, as discrete energy eigenstates of a vibrating closed string. All interactions boil down to splitting and merging of these oscillating strings. Quantum mechanics is needed for the energy levels to be discrete – just like in the case of the energy levels of atoms. But for the first time, there is only one underlying building block in Nature, a vibrating closed string.

As in atomic and molecular physics, quantum mechanics is needed for the number of species of small bound objects to be discrete – finite or countable.

Also, the number of spacetime dimensions was always arbitrary in classical physics. When constructing a theory, you had to assume a particular number – in other words, you had to add the coordinates $$t,x,y,z$$ to your theory manually, one by one. Because the choice of the spacetime dimension was one of the first steps in the construction of any theory, there was no way to treat theories in different spacetime dimensions simultaneously, and there was consequently no conceptual way to derive the right spacetime dimension.

In string theory, it's different because even the spacetime dimensions – scalar fields on the world sheet – are "things" that contribute to various quantities (such as the conformal anomaly) and string theory is therefore capable of picking the preferred (critical) dimension of the spacetime. Even the individual spacetime dimensions are sort of made of the "same convertible stuff" within string theory. This would be unthinkable in classical physics.

Prediction of gravity and other special forces: state-operator correspondence

String theory is not only the world's only known theory that allows Einsteinian gravity in $$D\geq 4$$ to co-exist with quantum mechanics. String theory makes the Einsteinian gravity unavoidable. It predicts gravitons, spin-two particles that interact in agreement with the equivalence principle (all objects accelerate at the same rate in a gravitational field).

Why is it so? I gave an explanation e.g. in 2007. It is because a particular energy level of the vibrating closed string looks like a spin-two massless particle and it may be shown that the addition of a coherent state of such "graviton strings" into a spacetime is equivalent to the change of the classical geometry on which all other objects – all other vibrating strings – propagate. In this way, the dynamical curved geometry (or at least any finite change of it) may be literally built out of these gravitons.

(Similarly, the addition of strings in another mode, the photon mode, may have the effect that is indistinguishable from the modification of the background electromagnetic field and it is true for all other low-energy fields, too.)

Why is it so? What is the most important "miracle" or a property of string theory that allows this to work? I have picked the state-operator correspondence. And the state-operator correspondence is an entirely quantum mechanical relationship – something that wouldn't be possible in a classical world.

What is the state-operator correspondence? Consider a closed string. It has some Hilbert space. In terms of energy eigenstates, the Hilbert space has a zero mode described by the usual $$x_0,p_0$$ degrees of freedom that make the string behave as a quantum mechanical particle. And then the strings may be stretched and the amount of vibration may be increased by adding oscillators – excitations by creation operators of many quantum harmonic oscillators. So a basis vector in this energy basis of the closed string's Hilbert space is e.g. $\alpha^\kappa_{-2}\alpha^\lambda_{-3} \tilde \alpha^\mu_{-4} \tilde\alpha_{-1}^\nu \ket{0; p^\rho}.$ What is this state? It looks like a momentum eigenstate of a particle whose spacetime momentum is $$p^\rho$$. However, for a string, the "lightest" state with this momentum is just a ground state of an infinite-dimensional harmonic oscillator. We may excite that ground state with the oscillators $$\alpha$$. These excitations are vaguely analogous to the kicking of the electrons in the atoms from the ground state to higher states, e.g. from $$1s$$ to $$2p$$. Those oscillators without a tilde are left-moving waves on the string, those with a tilde are right-moving ones. The (negative) subscript labels the number of periods along the closed string (which Fourier mode we pick). The superscript $$\kappa$$ etc. labels the transverse spacetime direction in which the string's oscillation is increased.

The total squared mass is given by the level $$2+3=4+1=5$$ in some string units. The sum of the untilded subscripts must equal the sum of the tilded ones (five, in this case) for the choice of the "beginning" of the closed string to be immaterial, technically because $$L_0-\tilde L_0 = 0$$. Great. This was a basis of the closed string's Hilbert space.

But we may also discuss the linear operators on that Hilbert space. They're constructed as functionals of $$X^\kappa(\sigma)$$ and $$P^\kappa(\sigma)$$ – I am omitting some extra fields (ghosts) that are needed in some descriptions, plus I am omitting a discussion about the difference between transverse and longitudinal directions of the excitations etc. – there are numerous technicalities you have to master when you study string theory at the expert level but they don't really affect the main message I want to convey.

OK, the Hilbert space is infinite-dimensional but its dimension $$d$$ must be squared, to get $$d^2$$, if you want to quantify the dimension of the space of matrices on that space, OK? A matrix is "larger" than a column vector. The number $$d^2$$ looks much higher than $$d$$ but nevertheless, for $$d=\infty$$, as long as it is the right "stringy infinity", there exists a very natural one-to-one map between the states and the local operators. Let me immediately tell you what the operator corresponding to the state above is: $(\partial_z)^2 X^\kappa (\partial_z)^3 X^\lambda (\partial_{\bar z})^4 X^\mu (\partial_{\bar z})^1 X^\nu \exp(ip\cdot X(\sigma))$ There should be some normal ordering here. All the four operators $$X^{\kappa,\lambda,\mu,\nu}$$ are evaluated at the point of the string $$\sigma$$, too. You see that the superscripts $$\kappa,\lambda,\mu,\nu$$ were copied to natural places, and the subscripts $$2,3,4,1$$ were translated to powers of the world sheet derivative with respect to $$z$$ or $$\bar z$$, the holomorphic or antiholomorphic complex coordinates on the Euclideanized worldsheet. Untilded and tilded oscillators were translated to the holomorphic and antiholomorphic derivatives, respectively. An exponential of the $$X^\rho$$ operator was inserted to encode the ordinary "zero mode", the particle-like total momentum of the string. And the total operator looks like some very general product of a function of $$X^\rho$$ – the imaginary exponentials are a good basis, ask Mr Fourier why it is so – and its derivatives (of arbitrarily high orders). By the combination of the "Fourier basis wisdom" and a simple decomposition to monomials, every function of $$X^\rho$$ and its worldsheet derivatives may be expanded as a sum of such terms.
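Schematically – suppressing the normal ordering and normalization constants that this summary glosses over – the dictionary just described acts factor by factor as $\alpha^\kappa_{-n} \leftrightarrow (\partial_z)^n X^\kappa, \quad \tilde\alpha^\mu_{-m} \leftrightarrow (\partial_{\bar z})^m X^\mu, \quad \ket{0; p^\rho} \leftrightarrow \exp(ip\cdot X(\sigma)).$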

The map between operators and states isn't quite one-to-one. We only considered "local operators at point $$\sigma$$ of the string" where the value of $$\sigma$$ remains unspecified. But the "number of possible values of $$\sigma$$" looks like a smaller factor than the factor $$d$$ that distinguishes $$d,d^2$$, the dimension of the Hilbert space and the space of operators, so the state-operator correspondence is "almost" a one-to-one map.

Such a map would be unthinkable in classical physics. In classical physics, a pure state would be a point in the phase space. On the other hand, the observable of classical physics is any coordinate on the phase space – such as $$x$$ or $$p$$ or $$ax^2+bp^2$$. Is there a canonical way to assign a coordinate on the phase space – a scalar function on the phase space – to a particular point $$(x,p)$$ on that space? There's clearly none. These mathematical objects carry completely different information – and the choice of the coordinate depends on much more information. You would have a chance to map a probability distribution (another scalar function) on the phase space to a general coordinate on the phase space – except that the former is non-negative. But that map wouldn't be shocking in quantum mechanics, either, because the probability distribution is upgraded to a density matrix which is a similar matrix as the observables. The magic of string theory is that there is a dictionary between pure states and operators.

This state-operator correspondence is important – it is a part of the most conceptual proof of the string theory's prediction of the Einsteinian gravity. Why does the state-operator correspondence exist? What is the recipe underlying this magic?

Well, you can prove the state-operator correspondence by considering a path integral on an infinite cylinder. By conformal transformations – symmetries of the world sheet theory – the infinite cylinder may be mapped to the plane with the origin removed. The boundary conditions on the tiny removed circle at the origin (boundary conditions rephrased as a linear insertion in the path integral) correspond to a pure state; but the specification of these boundary conditions must also be equivalent to a linear action at the origin, i.e. a local operator.

Another "magic player" that appeared in the previous paragraph – a link in the chain of my explanations – is the conformal symmetry. A solution to the world sheet theory works even if you conformally transform it (a conformal transformation is a diffeomorphism that doesn't change the angles even if you keep the old metric tensor field). Conformal symmetries exist even in purely classical field theories. Lots of the self-similar or scale-invariant "critical" behavior exhibits the conformal symmetry in one way or another. But what's cool about the combination of conformal symmetry and quantum mechanics is that a particular, fully specified pure state (e.g. the ground state of a string or another object, or the spacetime vacuum) may be equivalent to a particular state of the self-similar fog.

The combination of quantum mechanics and conformal symmetry is therefore responsible for many nontrivial abilities of string theory such as the state-operator correspondence (see above) or holography in the AdS/CFT correspondence. At the classical level, the conformal symmetry of the boundary theory is already isomorphic to the isometry group of the AdS bulk. But that alone wouldn't be enough for the equivalence between "field theories" in spacetimes of different dimensions. Holography, i.e. the ability to remove the holographic dimension in quantum gravity, may only exist when the conformal symmetry exists within a quantum mechanical framework.

Dualities, unexpected enhanced symmetries, unexpected numerous descriptions

The first quantum mechanical X-factor of string theory is the state-operator correspondence and its consequences – either on the world sheet (including the prediction of forces mediated by string modes) or in the boundary CFT of the holographic AdS/CFT correspondence.

To make the basic skeleton of this blog post simple, I will only discuss the second class of stringy quantum muscles as one package – the unexpected symmetries, enhanced symmetries, and numerous descriptions. For some discussion of the enhanced symmetries, try e.g. this 2012 blog post.

In theoretical physicists' jargon, dualities are relationships between seemingly different descriptions that shouldn't represent the same physics but for some deep, nontrivial, and surprising reasons, the physical behavior is completely equivalent, including the quantitative properties such as the mass spectrum of some bound states etc.

The enhanced symmetries such as the $$SU(2)$$ gauge group of the compactification on a self-dual circle (under T-duality) are a special example of dualities, too. The action of this $$SU(2)$$, except for the simple $$U(1)$$ subgroup, looks like some weird mixing of states with different winding numbers etc. Nothing like that could be a symmetry in classical physics. In particular, we need quantum mechanics to make the momenta quantized – just like the winding numbers (the integer saying how many times a string is wound around a non-contractible circle in the spacetime) are quantized – if we want to exchange momenta and windings as in T-duality. But within string theory, those symmetries become possible.
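To see the momentum-winding exchange concretely, recall the standard textbook spectrum of a closed bosonic string on a circle of radius $$R$$ (this formula is imported from the textbooks, not derived here): $M^2 = \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha'^2} + \frac{2}{\alpha'}\left(N + \tilde N - 2\right),$ with the level-matching condition $$N - \tilde N = nw$$. The spectrum is manifestly invariant under $$R\to \alpha'/R$$ combined with $$n\leftrightarrow w$$, and at the self-dual radius $$R=\sqrt{\alpha'}$$, extra states with $$n,w=\pm 1$$ and $$N+\tilde N=1$$ become massless and enhance the generic $$U(1)\times U(1)$$ gauge symmetry to $$SU(2)\times SU(2)$$.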

Many stringy vacua have larger symmetry groups than expected classically. You may identify 16+16 fermions on the heterotic string's world sheet and figure out that the theory will have an $$SO(16)\times SO(16)$$ symmetry. But if you look carefully, the group is actually enhanced to an $$E_8\times E_8$$. Similarly, a string theory on the Leech lattice could be expected to have a Conway group of symmetries – the isometry of such a lattice – but instead, you get a much cooler, larger, and sexier monster group of symmetries, the largest sporadic finite group.

Two fermions on the world sheet may be bosonized – they are equivalent to one boson. This is also a simple example of a "stringy duality" between two seemingly very different theories. The conformal symmetry and/or the relative scarcity of the number of possible conformal field theories may be used in a proof of this equivalence. Wess-Zumino-Witten models involving strings propagating on group manifolds are equivalent to other "simple" theories, too.

I don't want to elaborate on all the examples – their number is really huge and I have discussed many of them in the past. They may often be found in different chapters of string theory textbooks. Here, I want to emphasize their general spirit and where this spirit comes from. Quantum mechanics is absolutely essential for this phenomenon.

Why is it so? Why don't we see almost any of these enhanced symmetries, dualities, and equivalences between descriptions in classical physics? An easy answer is unlikely to be a rigorous proof but it may be rather apt, anyway. My simplest explanation would be: You don't see dualities and other things in classical physics because classical physics allows you the "infinite sharpness and resolution" which means that if two things look different, they almost certainly are different.

(Well, some symmetries do exist classically. For example, Maxwell's equations – with added magnetic monopoles or subtracted electric charges – have the symmetry of exchanging the electric fields with the magnetic fields, $$\vec E\to \vec B$$, $$\vec B\to -\vec E$$. This is a classical seed of the stringy S-dualities – and of stringy T-dualities if the electromagnetic duality is performed on a world sheet. But quantum mechanics is needed for the electromagnetic duality to work in the presence of particles with well-defined non-zero charges in the S-duality case; and in the presence of quantized stringy winding charges in the T-duality example because the T-dual momenta have to be quantized as well.)

On the other hand, quantum mechanics brings you the uncertainty principle which introduces some fog and fuzziness. The objects don't have sharp boundaries and shapes given by ordinary classical functions. Instead, the boundaries are fuzzy and may be interpreted in various ways. It doesn't mean that the whole theory is ill-defined. Quantum mechanics is completely quantitative and allows an arbitrarily high precision.

Instead, the quantum mechanical description often leads to a discrete spectrum and allows you to describe all the "invariant" properties of an energy-like operator by its discrete spectrum – by several or countably many eigenvalues. And there are many classical models whose quantization may yield the same spectrum. The spectrum – perhaps with an extra information package that is still relatively small – may capture all the physically measurable, invariant properties of the physical theory.

We may see the seed of this multiplicity of descriptions in basic quantum mechanics. The multiplicity exists because there are many – and many clever – unitary transformations on the Hilbert space, and many bases, including clever ones, that we may pick. The Fourier-like transformation from one basis to another makes the theory look very different than before. Such integral transformations would be very unnatural in classical physics because they would map a local theory to a non-local one. But in quantum mechanics, both descriptions may often be equally local.

OK, so string theory, due to its being a special theory that maximizes the number of clever ways in which the novel features of quantum mechanics are exploited, is the world champion in predicting things that were believed to be "irreducible assumptions whose 'why' questions could never be answered by science" and in allowing new perspectives on the same physical phenomena. String theory allows one to derive the spacetime dimension and the spectrum of elementary particles (given some discrete information about the choice of the compactification, a vacuum solution of the stringy equations), and it allows you to describe the same physics by bosonized or fermionized descriptions, descriptions related by S-dualities, T-dualities (including mirror symmetries), U-dualities, string-string dualities which exhibit enhanced gauge symmetries, holography as in the AdS/CFT correspondence, the matrix model description representing any system as a state of bound D-branes with off-diagonal matrix entries for each coordinate, the ER-EPR correspondence for black holes, and many other things.

If you feel why quantum mechanics smells like progress relative to classical physics, string theory should smell like progress relative to the previous quantum mechanical theories because the "quantum mechanical thinking" is applied even to things that were envisioned as independent classical assumptions. That's why string theory is quantum mechanics squared, quantum mechanics with an X-factor, or quantum mechanics on steroids. Deep thinkers who have loved the quantum revolution and who have looked into string theory carefully are likely to end up loving string theory, and those who have had psychological problems with quantum mechanics must have even worse problems with string theory.

Throughout the text above, I have repeatedly said that "quantum mechanics is applied to new properties and objects" within string theory. When I was proofreading my remarks, I felt uneasy about these formulations because the comment about the "application" indicates that we just wanted to use quantum mechanics more universally and seriously, and it was guaranteed that we could have done so. But this isn't the case. The existence of string theory (where the deeper derivations of seemingly irreducible classical assumptions about the world may arise) is a sort of miracle, much like the existence of quantum mechanics itself. (Well, a miracle squared.) Before 1925, people didn't know quantum mechanics. They didn't know it was possible. But it was possible. Quantum mechanics was discovered as a highly constrained, qualitatively different replacement for classical physics that nevertheless agrees with the empirical data – and allows us to derive many more things correctly. In the same way, string theory is a replacement for local quantum field theories that works in almost the same way but not quite. Just like quantum mechanics allows us to derive the spectrum and states of atoms from a deeper point, string theory allows us to derive the properties of elementary particles and even the spacetime dimension and other things from a deeper, more fundamental starting point. Like quantum mechanics itself, string theory feels like something important that wasn't invented or constructed by humans. It pre-existed and it was discovered.

September 04, 2018

Clifford V. Johnson - Asymptotia

Beach Scene…

The working title for this was "when you forget to bring your camera on holiday..." but I know you won't believe that's why I drew it! (This was actually a quick sketch done at the beach on Sunday, with a few tweaks added over dinner and some shadows added using the iPad.)

I'm working toward doing finish work on a commissioned illustration for a magazine (I'll tell you more about it when I can - check instagram, etc., for updates/peeks), and am finding my drawing skills very rusty – so opportunities to do sketches, whenever I can find them, are very welcome.

The post Beach Scene… appeared first on Asymptotia.

August 13, 2018

Andrew Jaffe - Leaves on the Line

Planck: Demographics and Diversity

Another aspect of Planck’s legacy bears examining.

A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012).

Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.

Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)

I also understand that the PIs will use a portion of their award to create a fund for all members of the collaboration to draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work, and those of us who will also receive a financial portion of the award will also be encouraged to do so (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back).

This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)

However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.

This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.

Axel Maas - Looking Inside the Standard Model

Fostering an idea with experience
In the previous entry I wrote about how hard it is to establish a new idea if the only existing option for experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This actually sounds easier than it is. There are three issues to take care of.

The first two have to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment, or calculate when we play around with models. An observation is always the outcome when we set up something initially and then look at it some time later. The theory should give a description of how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to explain it. To this comes the additional modern idea of physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we need so far just two: the standard model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we are doing a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe. Not to mention all possible observations of all theories. And it is here that the problem starts. The older ideas still exist because they are not bad; rather, they explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we not only need to find a suitable theory. We also need to find a suitable observation that shows the disagreement.

And now enters the third problem: we actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master's thesis. And sometimes even much more. Only after the calculation is complete do we know whether the observation and theory we chose were a good choice. Because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance of frustration. Such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory). This I have not yet specified. And just guessing would indeed lead to a lot of frustration.

What helps us hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us how theories function. I.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers. And it will not always be right. But if our ideas are correct, it will usually work out. In fact, if our estimates regularly failed, that should prompt us to reevaluate our ideas. And it is our experience which helps us get from insights to estimates.

This defines our process for testing our ideas. And this process can actually be traced quite well in our research. E.g. in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were confirmed as well as possible given the amount of computer time we had. This we reported in another paper. It gives us hope that we are on the right track.

So, the next step is to enlarge our testbed. For this, we have already come up with some first new ideas. However, these will be even more challenging to test. But it is possible. And so the cycle continues.

July 26, 2018

Sean Carroll - Preposterous Universe

Mindscape Podcast

For anyone who hasn’t been following along on other social media, the big news is that I’ve started a podcast, called Mindscape. It’s still young, but early returns are promising!

I won’t be posting each new episode here; the podcast has a “blog” of its own, and episodes and associated show notes will be published there. You can subscribe by RSS as usual, or there is also an email list you can sign up for. For podcast aficionados, Mindscape should be available wherever finer podcasts are served, including iTunes, Google Play, Stitcher, Spotify, and so on.

As explained at the welcome post, the format will be fairly conventional: me talking to smart people about interesting ideas. It won’t be all, or even primarily, about physics; much of my personal motivation is to get the opportunity to talk about all sorts of other interesting things. I’m expecting there will be occasional solo episodes that just have me rambling on about one thing or another.

And there are more exciting episodes on the way. Enjoy, and spread the word!

July 20, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Summer days, academics and technological universities

The heatwave in the northern hemisphere may (or may not) be an ominous portent of things to come, but it’s certainly making for an enjoyable summer here in Ireland. I usually find it quite difficult to do any meaningful research when the sun is out, but things are a bit different when the good weather is regular.  Most days, I have breakfast in the village, a swim in the sea before work, a swim after work and a game of tennis to round off the evening. Tough life, eh.

Counsellor’s Strand in Dunmore East

So far, I’ve got one conference proceeding written, one historical paper revamped and two articles refereed (I really enjoy the latter process; it’s so easy for academics to become isolated). Next week I hope to get back to that book I never seem to finish.

However, it would be misleading to portray a cosy image of a college full of academics beavering away over the summer. This simply isn’t the case around here – while a few researchers can be found in college this summer, the majority of lecturing staff decamped on June 20th and will not return until September 1st.

And why wouldn’t they? Isn’t that their right under the Institute of Technology contracts, especially given the heavy teaching loads during the semester? Sure – but I think it’s important to acknowledge that this is a very different set-up to the modern university sector, and doesn’t quite square with the move towards technological universities.

This week, the Irish newspapers are full of articles depicting the opening of Ireland’s first technological university, and apparently, the Prime Minister is anxious our own college should get a move on. Hmm. No mention of the prospect of a change in teaching duties, or increased facilities/time for research, as far as I can tell (I’d give a lot for an office that was fit for purpose).  So will the new designation just amount to a name change? And this is not to mention the scary business of the merging of different institutes of technology. Those who raise questions about this now tend to get dismissed as resisters of progress. Yet the history of merging large organisations in Ireland hardly inspires confidence, not least because of a tendency for increased layers of bureaucracy to appear out of nowhere – HSE anyone?

July 19, 2018

Andrew Jaffe - Leaves on the Line

(Almost) The end of Planck

This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.

Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.

And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).

(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)

Planck 2018: the science

So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, unfortunately, which is probably Planck’s most significant limitation.

The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe.

All together, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous (or infamous) theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10²⁰. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.)

Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again: (The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model.

As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).
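Why a dedicated low-𝓁 likelihood is needed can be illustrated with a toy full-sky, noise-free, temperature-only case (a standard textbook idealization, not the actual Planck likelihood code): the estimated spectrum is χ²-distributed with 2𝓁+1 degrees of freedom around the theory spectrum, and at low 𝓁 this likelihood is noticeably skewed compared with a Gaussian approximation.

```python
import math

def exact_m2lnL(cl_theory, cl_hat, ell):
    # Full-sky, noise-free temperature likelihood: the measured spectrum
    # Cl_hat is chi^2-distributed with 2*ell+1 degrees of freedom,
    # giving -2 ln L = (2l+1) * (Cl_hat/Cl + ln Cl) up to a constant.
    nu = 2 * ell + 1
    return nu * (cl_hat / cl_theory + math.log(cl_theory))

def gauss_m2lnL(cl_theory, cl_hat, ell, cl_fid=1.0):
    # Gaussian approximation with a fixed fiducial cosmic-variance
    # error bar, sigma^2 = 2*cl_fid^2/(2l+1), as is adequate at high ell.
    nu = 2 * ell + 1
    return (cl_hat - cl_theory) ** 2 * nu / (2.0 * cl_fid ** 2)

ell, cl_hat = 2, 1.0
# Penalty for a theory 20% below vs 20% above the measured spectrum:
lo = exact_m2lnL(0.8, cl_hat, ell) - exact_m2lnL(1.0, cl_hat, ell)
hi = exact_m2lnL(1.2, cl_hat, ell) - exact_m2lnL(1.0, cl_hat, ell)
g_lo = gauss_m2lnL(0.8, cl_hat, ell) - gauss_m2lnL(1.0, cl_hat, ell)
g_hi = gauss_m2lnL(1.2, cl_hat, ell) - gauss_m2lnL(1.0, cl_hat, ell)
# The exact likelihood is skewed (lo > hi); the Gaussian is symmetric.
print(lo, hi, g_lo, g_hi)
```

For deviations of fixed statistical significance the skew shrinks roughly like 1/√(2𝓁+1), which is why the Gaussian-like treatment suffices at small angular scales.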

Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units tell you how much faster something is moving away from us, in km/s, for each megaparsec (Mpc) of distance), whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to suspect that one or more of the experiments has some unaccounted-for source of error.
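Treating the two quoted error bars as independent Gaussians, the size of the discrepancy can be estimated in the usual back-of-the-envelope way (a rough number, not the full statistical analysis):

```python
import math

# Quoted values from the text, in km/s/Mpc.
h0_local, err_local = 73.52, 1.62    # local distance-ladder measurement
h0_planck, err_planck = 67.27, 0.60  # Planck, inferred within the model

# Difference in units of the combined (quadrature) uncertainty.
tension = (h0_local - h0_planck) / math.hypot(err_local, err_planck)
print(f"{tension:.1f} sigma")  # ~3.6 sigma
```

This is the sense in which the Hubble-constant "tension" is significant: well beyond the level at which statistical flukes are common, but not so far beyond that an unrecognised systematic error is implausible.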

The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.

(If you were looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)

Planck 2018: lessons learned

So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).

But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.

Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.

That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more of engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.

Andrew Jaffe - Leaves on the Line

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that the “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm in the evening, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.

July 16, 2018

Tommaso Dorigo - Scientificblogging

A Beautiful New Spectroscopy Measurement
What is spectroscopy ?
(A) the observation of ghosts by infrared visors or other optical devices
(B) the study of excited states of matter through observation of energy emissions

If you answered (A), you are probably using a lousy internet search engine; and btw, you are rather dumb. Ghosts do not exist.

Otherwise you are welcome to read on. We are, in fact, about to discuss a cutting-edge spectroscopy measurement, performed by the CMS experiment using lots of proton-proton collisions by the CERN Large Hadron Collider (LHC).

July 12, 2018

Matt Strassler - Of Particular Significance

“Seeing” Double: Neutrinos and Photons Observed from the Same Cosmic Source

There has long been a question as to what types of events and processes are responsible for the highest-energy neutrinos coming from space and observed by scientists.  Another question, probably related, is what creates the majority of high-energy cosmic rays — the particles, mostly protons, that are constantly raining down upon the Earth.

As scientists’ ability to detect high-energy neutrinos (particles that are hugely abundant, electrically neutral, very light-weight, and very difficult to observe) and high-energy photons (particles of light, though not necessarily of visible light) has become more powerful and precise, there’s been considerable hope of getting an answer to these questions.  One of the things we’ve been awaiting (and been disappointed a couple of times) is a violent explosion out in the universe that produces both high-energy photons and neutrinos at the same time, at a high enough rate that both types of particles can be observed at the same time coming from the same direction.

In recent years, there has been some indirect evidence that blazars — narrow jets of particles, pointed in our general direction like the barrel of a gun, and created as material swirls near and almost into giant black holes in the centers of very distant galaxies — may be responsible for the high-energy neutrinos.  Strong direct evidence in favor of this hypothesis has just been presented today.   Last year, one of these blazars flared brightly, and the flare created both high-energy neutrinos and high-energy photons that were observed within the same period, coming from the same place in the sky.

I have written about the IceCube neutrino observatory before; it’s a cubic kilometer of ice under the South Pole, instrumented with light detectors, and it’s ideal for observing neutrinos whose motion-energy far exceeds that of the protons in the Large Hadron Collider, where the Higgs particle was discovered.  These neutrinos mostly pass through IceCube undetected, but one in 100,000 hits something, and debris from the collision produces visible light that IceCube’s detectors can record.   IceCube has already made important discoveries, detecting a new class of high-energy neutrinos.

On Sept 22 of last year, one of these very high-energy neutrinos was observed at IceCube. More precisely, a muon created underground by the collision of this neutrino with an atomic nucleus was observed in IceCube.  To create the observed muon, the neutrino must have had a motion-energy tens of thousands of times larger than the motion-energy of each proton at the Large Hadron Collider (LHC).  And the direction of the neutrino’s motion is known too; it’s essentially the same as that of the observed muon.  So IceCube’s scientists knew where, on the sky, this neutrino had come from.

(This doesn’t work for typical cosmic rays; protons, for instance, travel in curved paths because they are deflected by cosmic magnetic fields, so even if you measure their travel direction at their arrival to Earth, you don’t then know where they came from. Neutrinos, being electrically neutral, aren’t affected by magnetic fields and travel in a straight line, just as photons do.)
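To get a feel for why proton directions scramble, one can estimate a cosmic-ray proton's Larmor (gyration) radius in the galactic magnetic field; the numbers below (a 10¹⁸ eV proton, a 3 microgauss field) are illustrative assumptions, not values from the text.

```python
# Larmor radius of an ultrarelativistic proton: r_L = p/(qB) ~ E/(qBc).
E_eV = 1e18        # proton energy in eV (an ultra-high-energy cosmic ray)
B_tesla = 3e-10    # typical galactic magnetic field, ~3 microgauss
q = 1.602e-19      # proton charge in coulombs
c = 2.998e8        # speed of light, m/s
J_per_eV = 1.602e-19

r_metres = E_eV * J_per_eV / (q * B_tesla * c)
r_kpc = r_metres / 3.086e19  # 1 kiloparsec in metres
print(f"Larmor radius ~ {r_kpc:.2f} kpc")
```

The result, a fraction of a kiloparsec, is far smaller than galactic distances, so even this extremely energetic proton corkscrews many times before reaching us and its arrival direction carries no memory of its source; a neutrino of any energy flies straight.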

Very close to that direction is a well-known blazar (TXS-0506), four billion light years away (a good fraction of the distance across the visible universe).

The IceCube scientists immediately reported their neutrino observation to scientists with high-energy photon detectors.  (I’ve also written about some of the detectors used to study the very high-energy photons that we find in the sky: in particular, the Fermi/LAT satellite played a role in this latest discovery.) Fermi/LAT, which continuously monitors the sky, was already detecting high-energy photons coming from the same direction.   Within a few days the Fermi scientists had confirmed that TXS-0506 was indeed flaring at the time — already starting in April 2017 in fact, six times as bright as normal.  With this news from IceCube and Fermi/LAT, many other telescopes (including the MAGIC cosmic ray detector telescopes among others) then followed suit and studied the blazar, learning more about the properties of its flare.

Now, just a single neutrino on its own isn’t entirely convincing; is it possible that this was all just a coincidence?  So the IceCube folks went back to their older data to snoop around.  There they discovered, in their 2014-2015 data, a dramatic flare in neutrinos — more than a dozen neutrinos, seen over 150 days, had come from the same direction in the sky where TXS-0506 is sitting.  (More precisely, nearly 20 from this direction were seen, in a time period where normally there’d just be 6 or 7 by random chance.)  This confirms that this blazar is indeed a source of neutrinos.  And from the energies of the neutrinos in this flare, yet more can be learned about this blazar, and how it makes  high-energy photons and neutrinos at the same time.  Interestingly, so far at least, there’s no strong evidence for this 2014 flare in photons, except perhaps an increase in the number of the highest-energy photons… but not in the total brightness of the source.
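A rough sense of how unlikely the 2014-2015 excess would be as a chance fluctuation comes from a simple Poisson calculation with the numbers quoted above (taking the expected background as 6.5 events; the collaboration's own significance estimate is more sophisticated, accounting for trials factors and the neutrino energies):

```python
import math

expected = 6.5   # background neutrinos expected from this direction
observed = 20    # roughly what was seen during the 150-day flare

# P(X >= observed) for X ~ Poisson(expected): one minus the CDF.
p_value = 1.0 - sum(
    math.exp(-expected) * expected**k / math.factorial(k)
    for k in range(observed)
)
print(f"p ~ {p_value:.1e}")
```

A chance probability at the 10⁻⁵ level is why the archival flare, rather than the single 2017 neutrino, is what makes the blazar identification convincing.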

The full picture, still emerging, tends to support the idea that the blazar arises from a supermassive black hole, acting as a natural particle accelerator, making a narrow spray of particles, including protons, at extremely high energy.  These protons, millions of times more energetic than those at the Large Hadron Collider, then collide with more ordinary particles that are just wandering around, such as visible-light photons from starlight or infrared photons from the ambient heat of the universe.  The collisions produce particles called pions, made from quarks and anti-quarks and gluons (just as protons are), which in turn decay either to photons or to (among other things) neutrinos.  And it’s those resulting photons and neutrinos which have now been jointly observed.

Since cosmic rays, the mysterious high energy particles from outer space that are constantly raining down on our planet, are mostly protons, this is evidence that many, perhaps most, of the highest energy cosmic rays are created in the natural particle accelerators associated with blazars. Many scientists have suspected that the most extreme cosmic rays are associated with the most active black holes at the centers of galaxies, and now we have evidence and more details in favor of this idea.  It now appears likely that this question will be answerable over time, as more blazar flares are observed and studied.

The announcement of this important discovery was made at the National Science Foundation by Francis Halzen, the IceCube principal investigator, Olga Botner, former IceCube spokesperson, Regina Caputo, the Fermi-LAT analysis coordinator, and Razmik Mirzoyan, MAGIC spokesperson.

The fact that both photons and neutrinos have been observed from the same source is an example of what people are now calling “multi-messenger astronomy”; a previous example was the observation in gravitational waves, and in photons of many different energies, of two merging neutron stars.  Of course, something like this already happened in 1987, when a supernova was seen by eye, and also observed in neutrinos.  But in this case, the neutrinos and photons have energies millions and billions of times larger!

July 08, 2018

Marco Frasca - The Gauge Connection

ICHEP 2018

The great high-energy physics conference ICHEP 2018 is over and, as usual, I will spend some words on it. The big collaborations at CERN presented their latest results. I think the most relevant of these is the evidence ($3\sigma$) that the Standard Model is at odds with the measurement of spin correlation between top-antitop quark pairs. More is given in the ATLAS communication. As expected, increasing precision proves to be rewarding.

About the Higgs particle, after the important announcement of the observation of the ttH process, both ATLAS and CMS are pushing their precision further. For the signal strength they give the following results. For ATLAS (see here)

$\mu=1.13\pm 0.05({\rm stat.})\pm 0.05({\rm exp.})^{+0.05}_{-0.04}({\rm sig. th.})\pm 0.03({\rm bkg. th})$

and CMS (see here)

$\mu=1.17\pm 0.06({\rm stat.})^{+0.06}_{-0.05}({\rm sig. th.})\pm 0.06({\rm other syst.}).$

The news is that the errors have shrunk and the two measurements agree. They each show a small tension, 13% and 17% above the Standard Model value respectively, but the overall result is consistent with it.
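As a quick sanity check on those quoted tensions, one can symmetrise the asymmetric error components and add everything in quadrature (a rough treatment that ignores correlations between the components):

```python
import math

def significance(mu, errors):
    # Deviation of the signal strength mu from the SM value mu = 1,
    # in units of the quadrature sum of the quoted error components.
    total = math.sqrt(sum(e * e for e in errors))
    return (mu - 1.0) / total

# ATLAS: 1.13 +-0.05 (stat) +-0.05 (exp) +0.05/-0.04 (sig th) +-0.03 (bkg th)
atlas = significance(1.13, [0.05, 0.05, 0.045, 0.03])
# CMS: 1.17 +-0.06 (stat) +0.06/-0.05 (sig th) +-0.06 (other syst)
cms = significance(1.17, [0.06, 0.055, 0.06])
print(f"ATLAS ~ {atlas:.1f} sigma, CMS ~ {cms:.1f} sigma")
```

Both deviations come out well below $2\sigma$, which is why the results count as consistent with the Standard Model despite the central values sitting above 1.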

When the signal strength is unpacked into the contributions from the different decay processes, CMS claims some tension in the WW decay that should be kept under scrutiny in the future (see here). They presented results from $35.9\,{\rm fb}^{-1}$ of data and so, for the moment, there is no significant improvement with respect to the Moriond conference this year. The situation is rather better for the ZZ decay, where no tension appears and the agreement with the Standard Model is there in all its glory (see here). Things are quite different, but not too much, for ATLAS: they observe some tensions, but these are all below $2\sigma$ (see here). For the WW decay, ATLAS does not see anything above $1\sigma$ (see here).

So, there is something to keep an eye on as the data grow, reaching $100\,{\rm fb}^{-1}$ this year, but the Standard Model is in good health as far as the Higgs sector is concerned, even if a lot remains to be answered and precision measurements are the main tool. The spin correlation in the $t\bar{t}$ pair is absolutely promising and we should hope it will be confirmed as a discovery.

July 04, 2018

Tommaso Dorigo - Scientificblogging

Chasing The Higgs Self Coupling: New CMS Results
Happy Birthday Higgs boson! The discovery of the last fundamental particle of the Standard Model was announced exactly 6 years ago at CERN (well, plus one day, since I decided to postpone to July 5 the publication of this post...).

In the Standard Model, the theory of fundamental interactions among elementary particles which enshrines our current understanding of the subnuclear world, particles that constitute matter are fermionic: they have a half-integer value of a quantity we call spin; and particles that mediate interactions between those fermions, keeping them together and governing their behaviour, are bosonic: they have an integer value of spin.

June 25, 2018

Sean Carroll - Preposterous Universe

On Civility

Alex Wong/Getty Images

White House Press Secretary Sarah Sanders went to have dinner at a local restaurant the other day. The owner, who is adamantly opposed to the policies of the Trump administration, politely asked her to leave, and she did. Now (who says human behavior is hard to predict?) an intense discussion has broken out concerning the role of civility in public discourse and our daily life. The Washington Post editorial board, in particular, called for public officials to be allowed to eat in peace, and people have responded in volume.

I don’t have a tweet-length response to this, as I think the issue is more complex than people want to make it out to be. I am pretty far out to one extreme when it comes to the importance of engaging constructively with people with whom we disagree. We live in a liberal democracy, and we should value the importance of getting along even in the face of fundamentally different values, much less specific political stances. Not everyone is worth talking to, but I prefer to err on the side of trying to listen to and speak with as wide a spectrum of people as I can. Hell, maybe I am even wrong and could learn something.

On the other hand, there is a limit. At some point, people become so odious and morally reprehensible that they are just monsters, not respected opponents. It’s important to keep in our list of available actions the ability to simply oppose those who are irredeemably dangerous/evil/wrong. You don’t have to let Hitler eat in your restaurant.

This raises two issues that are not so easy to adjudicate. First, where do we draw the line? What are the criteria by which we can judge someone to have crossed over from “disagreed with” to “shunned”? I honestly don’t know. I tend to err on the side of not shunning people (in public spaces) until it becomes absolutely necessary, but I’m willing to have my mind changed about this. I also think the worry that this particular administration exhibits authoritarian tendencies that could lead to a catastrophe is not a completely silly one, and is at least worth considering seriously.

More importantly, if the argument is “moral monsters should just be shunned, not reasoned with or dealt with constructively,” we have to be prepared to be shunned ourselves by those who think that we’re moral monsters (and those people are out there).  There are those who think, for what they take to be good moral reasons, that abortion and homosexuality are unforgivable sins. If we think it’s okay for restaurant owners who oppose Trump to refuse service to members of his administration, we have to allow staunch opponents of e.g. abortion rights to refuse service to politicians or judges who protect those rights.

The issue becomes especially tricky when the category of “people who are considered to be morally reprehensible” coincides with an entire class of humans who have long been discriminated against, e.g. gays or transgender people. In my view it is bigoted and wrong to discriminate against those groups, but there exist people who find it a moral imperative to do so. A sensible distinction can probably be made between groups that we as a society have decided are worthy of protection and equal treatment regardless of an individual’s moral code, so it’s at least consistent to allow restaurant owners to refuse to serve specific people they think are moral monsters because of some policy they advocate, while still requiring that they serve members of groups whose behaviors they find objectionable.

The only alternative, as I see it, is to give up on the values of liberal toleration, and to simply declare that our personal moral views are unquestionably the right ones, and everyone should be judged by them. That sounds wrong, although we do in fact enshrine certain moral judgments in our legal codes (murder is bad) while leaving others up to individual conscience (whether you want to eat meat is up to you). But it’s probably best to keep that moral core that we codify into law as minimal and widely-agreed-upon as possible, if we want to live in a diverse society.

This would all be simpler if we didn’t have an administration in power that actively works to demonize immigrants and non-straight-white-Americans more generally. Tolerating the intolerant is one of the hardest tasks in a democracy.

June 24, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

7th Robert Boyle Summer School

This weekend saw the 7th Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s one of my favourite conferences – a select number of talks on the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.

The Irish-born scientist and aristocrat Robert Boyle

Lismore Castle in Co. Waterford, the birthplace of Robert Boyle

Born in Lismore into a wealthy landowning family, Robert Boyle became one of the most important figures in the Scientific Revolution. A contemporary of Isaac Newton and Robert Hooke, he is recognized the world over for his scientific discoveries, his role in the rise of the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.

This year, the theme of the conference was ‘What do we know – and how do we know it?’. There were many interesting talks, such as Boyle’s Theory of Knowledge by Dr William Eaton, Associate Professor of Early Modern Philosophy at Georgia Southern University; The How, Who & What of Scientific Discovery by Paul Strathern, author of a great many books on scientists and philosophers, including the well-known Philosophers in 90 Minutes series; Scientific Enquiry and Brain State: Understanding the Nature of Knowledge by Professor William T. O’Connor, Head of Teaching and Research in Physiology at the University of Limerick Graduate Entry Medical School; and The Promise and Peril of Big Data by Timandra Harkness, well-known media presenter, comedian and writer. For physicists, there was a welcome opportunity to hear the well-known American philosopher of physics Robert P. Crease present the talk Science Denial: will any knowledge do? The full programme for the conference can be found here.

All in all, a hugely enjoyable summer school, culminating in a garden party in the grounds of Lismore castle, Boyle’s ancestral home. My own contribution was to provide the music for the garden party – a flute, violin and cello trio, playing the music of Boyle’s contemporaries, from Johann Sebastian Bach to Turlough O’ Carolan. In my view, the latter was a baroque composer of great importance whose music should be much better known outside Ireland.

Images from the garden party in the grounds of Lismore Castle

June 22, 2018

Jester - Resonaances

Both g-2 anomalies
Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving a relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant:
Ry = α^2 m_e c^2 / 2, that is α = √(2 Ry / m_e c^2), where the electron mass m_e = (m_e/m_Cs) × m_Cs follows from the measured cesium mass and the precisely known electron-to-cesium mass ratio.
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).
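As a sanity check, the chain from the measured cesium mass to α can be run numerically. This is only a sketch with CODATA-style input values (the real analysis propagates correlated uncertainties, and the exact Berkeley inputs differ in the last digits):

```python
import math

# CODATA-style inputs (illustrative, not the exact Berkeley numbers)
h = 6.62607015e-34          # Planck constant, J*s
c = 2.99792458e8            # speed of light, m/s
Ry_inf = 10973731.568160    # Rydberg constant in wavenumber form, 1/m
u = 1.66053906660e-27       # atomic mass unit, kg

m_Cs = 132.905451961 * u                           # the measured cesium-133 mass
m_e_over_m_Cs = 5.48579909065e-4 / 132.905451961   # precisely known mass ratio
m_e = m_e_over_m_Cs * m_Cs                         # electron mass via cesium

# Wavenumber form of the same formula: Ry_inf = alpha^2 * m_e * c / (2 h)
alpha = math.sqrt(2 * Ry_inf * h / (m_e * c))
print(1 / alpha)  # ~137.036
```

The cesium mass and the mass ratio are the two measured inputs; everything else is known to much better than 0.2 ppb, which is why the α uncertainty is dominated by the cesium measurement.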

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes  α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is in calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have
ge = 2(1 + ae), where the anomalous magnetic moment ae = α/2π + ... collects the quantum corrections.
Experimentally, ge is one of the most precisely determined quantities in physics, with the most recent measurement quoting ae = 0.00115965218073(28), that is 0.0001 ppb accuracy on ge, or 0.2 ppb accuracy on ae. In the Standard Model, ge is calculable as a function of α and other parameters. In the classical approximation ge=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty for the Standard Model prediction of ge is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on ae down to 0.2 ppb: ae = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 the previous prediction based on the α measurement with rubidium atoms.
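For orientation, the QED series can be summed numerically. The sketch below keeps only the mass-independent QED coefficients through four loops (values from the literature); muon/tau loops and hadronic/electroweak pieces are dropped, so the last digits are not exact:

```python
import math

alpha = 1 / 137.035999046   # the Berkeley value quoted above
x = alpha / math.pi

# Mass-independent QED coefficients of (alpha/pi)^n, n = 1..4
# (Schwinger; Petermann/Sommerfield; Laporta-Remiddi; Laporta).
# Heavier-lepton, hadronic and electroweak contributions are neglected here.
C = [0.5, -0.328478965579, 1.181241456, -1.9122457649]
a_e = sum(c * x**(n + 1) for n, c in enumerate(C))
g_e = 2 * (1 + a_e)
print(a_e)  # ~0.0011596522, agreeing with experiment to ~10 digits
```

Even this truncated series reproduces the measured ae = 0.00115965218073(28) to about ten significant digits, which is the "13th digit" agreement (on ge) celebrated below.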

At the spiritual level, the comparison between theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable theoretical accuracy in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which ge is calculated, and could shift the observed value of ae away from the Standard Model predictions. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you compare the experimental and theoretical values of ae carefully beyond the 10th significant digit, you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:

If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model to fit both of them at the same time. Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit the ae measurement well, which favors a new negative contribution. In fact, the ae measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having flavor non-universal interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.

More excitement on this front is in store. The muon g-2 experiment in Fermilab should soon deliver first results which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six loop QED corrections...

June 16, 2018

Tommaso Dorigo - Scientificblogging

On The Residual Brightness Of Eclipsed Jovian Moons
While preparing for another evening of observation of Jupiter's atmosphere with my faithful 16" dobsonian scope, I found out that the satellite Io will disappear behind the Jovian shadow tonight. This is a quite common phenomenon and not a very spectacular one, but still quite interesting to look forward to during a visual observation - the moon takes some time to fully disappear, so it is fun to follow the event.
This however got me thinking. A fully eclipsed Jovian moon should still be able to reflect back some light picked up from the other, still sunlit satellites - so it should not, after all, appear completely dark. Can a calculation be made of the effect? Of course - and it's not that difficult.
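The post stops before the estimate itself, but the flavor of it can be sketched. Below is my own back-of-the-envelope version, not Dorigo's calculation: treat a neighboring moon as a full-phase reflecting disk with geometric albedo p, so the flux it sends to the eclipsed moon, relative to direct sunlight, is roughly p(R/d)^2. All input numbers are rough assumptions (the Io-Ganymede distance in particular varies a lot):

```python
import math

# Rough, assumed inputs for one illustrative geometry
R_gan = 2634e3        # Ganymede radius, m
d = 6.5e8             # assumed Io-Ganymede distance at the moment, m
p_gan = 0.43          # Ganymede geometric albedo, approximate
m_io_sunlit = 5.0     # Io's V magnitude near opposition, approximate

# Flux at Io from a full-phase Ganymede relative to direct sunlight.
# Ignores Ganymede's phase as seen from Io, the chance that the other
# moons are in Jupiter's shadow too, and sunlight refracted by Jupiter.
ratio = p_gan * (R_gan / d) ** 2
dmag = -2.5 * math.log10(ratio)
m_eclipsed = m_io_sunlit + dmag
print(ratio, m_eclipsed)  # ~7e-6, i.e. the eclipsed moon near 18th magnitude
```

So moonshine alone leaves the eclipsed satellite some 13 magnitudes fainter than usual - detectable with a large telescope, but far below naked-eye or small-scope visibility.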

June 12, 2018

Axel Maas - Looking Inside the Standard Model

How to test an idea
As you may have guessed from reading through the blog, our work is centered around a change of paradigm: That there is a very intriguing structure of the Higgs and the W/Z bosons. And that what we observe in the experiments is actually more complicated than what we usually assume. That they are not just essentially point-like objects.

This is a very bold claim, as it touches upon very basic things in the standard model of particle physics. And the interpretation of experiments. However, it is at the same time a necessary consequence if one takes the underlying more formal theoretical foundation seriously. The reason that there is not a huge clash is that the standard model is very special. Because of this, both pictures give almost the same prediction for experiments. This can also be understood quantitatively. That is what I have written a review about. It can be imagined in this way:

Thus, the actual particle which we observe and call the Higgs is actually a complicated object made from two Higgs particles. However, one of those is so much eclipsed by the other that it looks like just a single one, plus a very tiny correction.

So far, this does not seem to be something to worry about.

However, there are many and good reasons to believe that the standard model is not the end of particle physics. There are many, many blogs out there, which explain the reasons for this much better than I do. However, our research provides hints that what works so nicely in the standard model, may work much less so in some extensions of the standard model. That there the composite nature makes huge differences for experiments. This was what came out of our numerical simulations. Of course, these are not perfect. And, after all, unfortunately we did not yet discover anything beyond the standard model in experiments. So we cannot test our ideas against actual experiments, which would be the best thing to do. And without experimental support such an enormous shift in paradigm seems to be a bit far fetched. Even if our numerical simulations, which are far from perfect, support the idea. Formal ideas supported by numerical simulations is just not as convincing as experimental confirmation.

So, is this hopeless? Do we have to wait for new physics to make its appearance?

Well, not yet. In the figure above, there was 'something'. So, the ideas also make a statement that even within the standard model there should be a difference. The only question is, what is really the value of this 'little bit'? So far, experiments did not show any deviations from the usual picture. So the 'little bit' needs indeed to be rather small. But we have a calculation prescription for this 'little bit' in the standard model. So, at the very least, what we can do is calculate this 'little bit' in the standard model. We should then see whether its value may already be so large that the basic idea is ruled out, because we are in conflict with experiment. If this is the case, this would raise a lot of questions about the basic theory, but well, experiment rules. And thus, we would need to go back to the drawing board, and get a better understanding of the theory.

Or, we get something which is in agreement with current experiment, because it is smaller than the current experimental precision. But then we can make a statement about how much better the experimental precision needs to become to see the difference. Hopefully the answer will not be so demanding that a test becomes impossible within the next couple of decades. But this we will see at the end of the calculation. And then we can decide whether we will get an experimental test.

Doing the calculations is actually not so simple. On the one hand, they are technically challenging, even though our method for them is rather well under control. It will also not yield perfect results, but hopefully good enough ones. Also, how simple the calculations are depends strongly on the type of experiment. We did a first few steps, though for a type of experiment not (yet) available, but hopefully available in about twenty years. There we saw that not only the type of experiment, but also the type of measurement matters. For some measurements the effect will be much smaller than for others. But we are not yet able to predict this before doing the calculation. There we still need a much better understanding of the underlying mathematics. That we will hopefully gain by doing more of these calculations. This is a project I am currently pursuing with a number of master students, for various measurements and at various levels. Hopefully, in the end we get a clear set of predictions. And then we can ask our colleagues at the experiments to please check these predictions. So, stay tuned.

By the way: This is the standard cycle for testing new ideas and theories. Have an idea. Check that it fits with all existing experiments. And yes, these may be very, very many. If your idea passes this test: Great! There is actually a chance that it can be right. If not, you have to understand why it does not fit. If it can be fixed, fix it, and start again. And, at any rate, if it cannot be fixed, have a new idea. When you have got an idea which works with everything we know, use it to make a prediction where you get a difference from our current theories. By this you provide an experimental test, which can decide whether your idea is the better one. If yes: Great! You have just rewritten our understanding of nature. If not: Well, go back to fix it or have a new idea. Of course, it is best if we already have an experiment which does not fit with our current theories. But of those we are at this stage a little short. That may change again. If your theory has no predictions which can be tested experimentally in any foreseeable future - well, how to deal with that is a good question, and there is not yet a consensus on how to proceed.

June 10, 2018

Tommaso Dorigo - Scientificblogging

Modeling Issues Or New Physics ? Surprises From Top Quark Kinematics Study
Simulation, noun:
1. Imitation or enactment
2. The act or process of pretending; feigning.
3. An assumption or imitation of a particular appearance or form; counterfeit; sham.

Well, high-energy physics is all about simulations.

We have a theoretical model that predicts the outcome of the very energetic particle collisions we create in the core of our giant detectors, but we only have approximate descriptions of the inputs to the theoretical model, so we need simulations.

June 09, 2018

Jester - Resonaances

Dark Matter goes sub-GeV
It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.
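The kinematic point in the last sentence is easy to make quantitative. For elastic scattering, the maximum recoil energy of a nucleus of mass m_N hit by a dark matter particle of galactic velocity v ~ 10^-3 c is E_R ≈ 2μ²v²/m_N, with μ the reduced mass. A quick sketch with approximate masses:

```python
# Maximum nuclear recoil energy for elastic dark-matter-nucleus scattering,
# E_R^max = 2 mu^2 v^2 / m_N, in natural units (masses in GeV, v in units of c).
def max_recoil_keV(m_dm_GeV, m_N_GeV, v=1e-3):
    mu = m_dm_GeV * m_N_GeV / (m_dm_GeV + m_N_GeV)  # reduced mass
    return 2 * mu**2 * v**2 / m_N_GeV * 1e6          # convert GeV -> keV

m_xe = 122.0  # xenon nucleus mass, roughly 131 * 0.93 GeV
r_wimp = max_recoil_keV(100.0, m_xe)   # ~50 keV: a classic WIMP signal
r_light = max_recoil_keV(0.1, m_xe)    # ~1e-4 keV: hopeless for sub-GeV DM
print(r_wimp, r_light)
```

A 100 GeV WIMP can deposit tens of keV in a xenon nucleus, comfortably above threshold, while a 100 MeV particle deposits a fraction of an eV - which is why the sub-GeV searches below target electrons instead.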

Sometimes progress consists in realizing that you know nothing, Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we had better search in many places. If anything, the small-scale problems of the 𝞚CDM cosmological model can be interpreted as a hint against the boring WIMPs and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear size) self-interactions, that can only be realized with sub-GeV particles.

It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing  developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T.  In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck:  ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However,  this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10.

It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 collaboration recast its own data in the same manner, excluding another chunk of the parameter space). Nevertheless, dedicated experiments will soon be taking over. Recently, two collaborations published first results from their prototype detectors: one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of a silicon semiconductor. Both are sensitive to eV energy depositions, thanks to which they can extend the search to lower dark matter masses, and set novel limits in the virgin territory between 0.5 and 5 MeV. A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.

Should we be restless waiting for these results? Well, for any single experiment the chance of finding nothing is immensely larger than that of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

June 08, 2018

Jester - Resonaances

Massive Gravity, or You Only Live Twice
Proving Einstein wrong is the ultimate ambition of every crackpot and physicist alike. In particular, Einstein's theory of gravitation - general relativity - has been a victim of constant harassment. That is to say, it is trivial to modify gravity at large energies (short distances), for example by embedding it in string theory, but it is notoriously difficult to change its long-distance behavior. At the same time, the motivations to keep trying go beyond intellectual gymnastics. For example, the accelerated expansion of the universe may be a manifestation of modified gravity (rather than of a small cosmological constant).

In Einstein's general relativity, gravitational interactions are mediated by a massless spin-2 particle - the so-called graviton. This is what gives it its hallmark properties: the long range and the universality. One obvious way to screw with Einstein is to add mass to the graviton, as entertained already in 1939 by Fierz and Pauli. The Particle Data Group quotes the constraint m ≤ 6*10^−32 eV, so we are talking about a De Broglie wavelength comparable to the size of the observable universe. Yet even that teeny mass may cause massive troubles. In 1970 the Fierz-Pauli theory was killed by the van Dam-Veltman-Zakharov (vDVZ) discontinuity. The problem stems from the fact that a massive spin-2 particle has 5 polarization states (0,±1,±2) unlike a massless one which has only two (±2). It turns out that the polarization-0 state couples to matter with similar strength as the usual polarization-±2 modes, even in the limit where the mass goes to zero, and thus mediates an additional force which differs from the usual gravity. One finds that, in massive gravity, light bending would be 25% smaller, in conflict with the very precise observations of the deflection of starlight by the Sun. vDVZ concluded that "the graviton has rigorously zero mass". Dead for the first time...

The second coming was heralded soon after by Vainshtein, who noticed that the troublesome polarization-0 mode can be shut off in the proximity of stars and planets. This can happen in the presence of graviton self-interactions of a certain type. Technically, what happens is that the polarization-0 mode develops a background value around massive sources which, through the derivative self-interactions, renormalizes its kinetic term and effectively diminishes its interaction strength with matter. See here for a nice review and more technical details. Thanks to the Vainshtein mechanism, the usual predictions of general relativity are recovered around large massive sources, which is exactly where we can best measure gravitational effects. The possible self-interactions leading to a healthy theory without ghosts have been classified, and go under the name of dRGT massive gravity.

There is however one inevitable consequence of the Vainshtein mechanism. The graviton self-interaction strength grows with energy, and at some point becomes inconsistent with the unitarity limits that every quantum theory should obey. This means that massive gravity is necessarily an effective theory with a limited validity range and has to be replaced by a more fundamental theory at some cutoff scale 𝞚. This is of course nothing new for gravity: the usual Einstein gravity is also an effective theory valid at most up to the Planck scale MPl～10^19 GeV.  But for massive gravity the cutoff depends on the graviton mass and is much smaller for realistic theories. At best, it is 𝞚max ~ (m^2 MPl)^(1/3), which for the experimentally allowed graviton mass comes out around 10^-12 eV.
So the massive gravity theory in its usual form cannot be used at distance scales shorter than ～300 km. For particle physicists that would be a disaster, but for cosmologists this is fine, as one can still predict the behavior of galaxies, stars, and planets. While the theory certainly cannot be used to describe the results of table top experiments,  it is relevant for the  movement of celestial bodies in the Solar System. Indeed, lunar laser ranging experiments or precision studies of Jupiter's orbit are interesting probes of the graviton mass.

Now comes the latest twist in the story. Some time ago this paper showed that not everything is allowed  in effective theories.  Assuming the full theory is unitary, causal and local implies non-trivial constraints on the possible interactions in the low-energy effective theory. These techniques are suitable to constrain, via dispersion relations, derivative interactions of the kind required by the Vainshtein mechanism. Applying them to the dRGT gravity one finds that it is inconsistent to assume the theory is valid all the way up to 𝞚max. Instead, it must be replaced by a more fundamental theory already at a much lower cutoff scale,  parameterized as 𝞚 = g*^1/3 𝞚max (the parameter g* is interpreted as the coupling strength of the more fundamental theory). The allowed parameter space in the g*-m plane is showed in this plot:

Massive gravity must live in the lower left corner, outside the gray area excluded theoretically and where the graviton mass satisfies the experimental upper limit m～10^−32 eV. This implies g* ≼ 10^-10, and thus the validity range of the theory is some 3 orders of magnitude lower than 𝞚max. In other words, massive gravity is not a consistent effective theory at distance scales below ～1 million km, and thus cannot be used to describe the motion of falling apples, GPS satellites or even the Moon. In this sense, it's not much of a competition to, say, Newton. Dead for the second time.
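The scales quoted in this post follow from simple natural-units arithmetic, sketched below with round-number inputs (m ~ 10^-32 eV, MPl ~ 10^19 GeV, g* ~ 10^-10):

```python
# Natural-units arithmetic for the cutoff of massive gravity.
hbar_c = 1.9733e-7   # eV * m, converts an energy scale to a length

m_g = 1e-32          # graviton mass, eV (order of the experimental bound)
M_Pl = 1e28          # Planck scale, eV (~10^19 GeV)

# Naive cutoff: Lambda_max ~ (m^2 * M_Pl)^(1/3) ~ 1e-12 eV
lam_max = (m_g**2 * M_Pl) ** (1 / 3.0)
L_max = hbar_c / lam_max          # corresponding length, a few hundred km

# Dispersion-relation bound: Lambda = g*^(1/3) * Lambda_max with g* <~ 1e-10
g_star = 1e-10
lam = g_star ** (1 / 3.0) * lam_max
L = hbar_c / lam                  # pushed out to roughly a million km
print(L_max / 1e3, L / 1e3)       # both lengths in km
```

The factor g*^(1/3) ≈ 5*10^-4 is the "3 orders of magnitude" in the text, turning a ~300 km breakdown scale into one closer to the Earth-Moon distance and beyond.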

Is this the end of the story? For the third coming we would need a more general theory with additional light particles beyond the massive graviton, which is consistent theoretically in a larger energy range, realizes the Vainshtein mechanism, and is in agreement with the current experimental observations. This is hard but not impossible to imagine. Whatever the outcome, what I like in this story is the role of theory in driving the progress, which is rarely seen these days. In the process, we have understood a lot of interesting physics whose relevance goes well beyond one specific theory. So the trip was certainly worth it, even if we find ourselves back at the departure point.

June 07, 2018

Jester - Resonaances

Can MiniBooNE be right?
The experimental situation in neutrino physics is confusing. On one hand, a host of neutrino experiments has established a consistent picture in which the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.

This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe ν̄μ → ν̄e antineutrino oscillations with 3.8σ significance. This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμ → νe oscillations should be unobservable in short-baseline (L ≼ km) experiments. The MiniBooNE experiment in Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E~1 GeV is sent to a detector at a distance L~500 meters away. In general, neutrinos can change their flavor with the probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies with a similar shape as the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.
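In the experimentalist's units (Δm² in eV², L in km, E in GeV) the two-flavor appearance probability reads P = sin²2θ · sin²(1.267 Δm² L/E). A small sketch shows that LSND and MiniBooNE indeed sit at a similar L/E; the parameter values are just the ballpark numbers quoted in these posts:

```python
import math

# Two-flavor appearance probability:
# P = sin^2(2 theta) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])
def p_appear(sin2_2theta, dm2_eV2, L_km, E_GeV):
    return sin2_2theta * math.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# Sterile-neutrino-like parameters in the LSND/MiniBooNE ballpark
dm2, s22 = 0.5, 0.01
print(p_appear(s22, dm2, 0.5, 1.0))    # MiniBooNE: L ~ 0.5 km, E ~ 1 GeV
print(p_appear(s22, dm2, 0.03, 0.05))  # LSND: L ~ 30 m, E ~ 50 MeV
```

With L/E ≈ 0.5-0.6 km/GeV in both cases, the oscillation phases are comparable, which is why a single Δm² could in principle explain both excesses.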

In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with a mass in the eV ballpark, in which case MiniBooNE would be observing the νμ → νs → νe oscillation chain. With the recent MiniBooNE update the evidence for electron neutrino appearance increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012. The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.

What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA), which did not see any νe excess in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually adds more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles... Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2 ~ 0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01.

Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid it, theorists need to deploy a battery of  tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.

But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is

P(νμ→νe) ≈ 4 |Uμ4|^2 |Ue4|^2 sin^2(Δm^2 L/4E),

where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability,

P(νμ→νμ) ≈ 1 − 4 |Uμ4|^2 (1 − |Uμ4|^2) sin^2(Δm^2 L/4E).
The νμ disappearance data from MINOS and IceCube imply |Uμ4| ≲ 0.1, while |Ue4| ≲ 0.25 from solar neutrino observations. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) controlling the νμ→νs→νe oscillation must be much smaller than the 0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data had already existed before, but was actually made worse by the MiniBooNE update.
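The arithmetic of this tension is easy to verify: in the 3+1 scheme the appearance amplitude is governed by sin^2(2θ_eff) = 4 |Uμ4|^2 |Ue4|^2, so plugging in the disappearance bounds quoted above caps the effective angle well below the ~0.01 that fits MiniBooNE. A minimal sketch:

```python
def sin2_2theta_eff(U_mu4, U_e4):
    """Effective νμ→νe appearance mixing angle in the 3+1 scheme:
    sin^2(2θ_eff) = 4 |U_μ4|^2 |U_e4|^2."""
    return 4 * abs(U_mu4) ** 2 * abs(U_e4) ** 2

# Upper bounds quoted in the text: |Uμ4| ≲ 0.1, |Ue4| ≲ 0.25
limit = sin2_2theta_eff(0.1, 0.25)
print(limit)  # ≈ 0.0025, a factor of a few below the ~0.01 needed by MiniBooNE
```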
So the hypothesis of a 4th sterile neutrino does not stand up to scrutiny as an explanation of the MiniBooNE anomaly. That does not mean there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program at Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.

June 01, 2018

Jester - Resonaances

WIMPs after XENON1T
After today's update from the XENON1T experiment, the situation on the front of direct detection of WIMP dark matter is as follows

WIMP can be loosely defined as a dark matter particle with mass in the 1 GeV - 10 TeV range and significant interactions with ordinary matter. Historically, WIMP searches have stimulated enormous interest because this type of dark matter can be easily realized in models with low scale supersymmetry. Now that we are older and wiser, many physicists would rather put their money on other realizations, such as axions, MeV dark matter, or primordial black holes. Nevertheless, WIMPs remain a viable possibility that should be further explored.

To detect WIMPs heavier than a few GeV, currently the most successful strategy is to use huge detectors filled with xenon atoms, hoping one of them is hit by a passing dark matter particle. XENON1T beats the competition from the LUX and PandaX experiments because it has a bigger tank. Technologically speaking, we have come a long way in the last 30 years. XENON1T is now sensitive to 40 GeV WIMPs interacting with nucleons with a cross section of 40 yoctobarn (1 yb = 10^-12 pb = 10^-48 cm^2). This is 6 orders of magnitude better than what the first direct detection experiment in the Homestake mine could achieve back in the 80s. Compared to last year, the limit is better by a factor of two at the most sensitive mass point. At high mass the improvement is somewhat smaller than expected due to a small excess of events observed by XENON1T, which is probably just a 1 sigma upward fluctuation of the background.
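For readers who want to double-check the unit gymnastics: a yoctobarn is 10^-24 barn and a barn is 10^-24 cm^2, so the quoted 40 yb sensitivity corresponds to 4×10^-47 cm^2. A trivial sketch of the conversion:

```python
BARN_CM2 = 1e-24   # 1 barn in cm^2
YOCTO = 1e-24      # SI prefix yocto

def yb_to_cm2(x_yb):
    """Convert a cross section from yoctobarn to cm^2."""
    return x_yb * YOCTO * BARN_CM2

print(yb_to_cm2(40))  # ≈ 4e-47 cm^2, the quoted XENON1T sensitivity
```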

What we are learning about WIMPs is how they can (or cannot) interact with us. Of course, at this point in the game we don't see qualitative progress, but rather incremental quantitative improvements. One possible scenario is that WIMPs experience one of the Standard Model forces, such as the weak or the Higgs force. The former option is strongly constrained by now. If WIMPs interacted in the same way as our neutrinos do, that is by exchanging a Z boson, they would have been found already in the Homestake experiment. XENON1T is probing models where the dark matter coupling to the Z boson is suppressed by a factor cχ ~ 10^-3 - 10^-4 compared to that of an active neutrino. On the other hand, dark matter could participate in weak interactions only by exchanging W bosons, which can happen for example when it is part of an SU(2) triplet. In the plot you can see that XENON1T is approaching but not yet excluding this interesting possibility. As for models using the Higgs force, XENON1T is probing the (subjectively) most natural parameter space, where WIMPs couple with order one strength to the Higgs field.

And the arms race continues. The search in XENON1T will go on until the end of this year, although at this point a discovery is extremely unlikely. Further progress is expected on a timescale of a few years thanks to the next-generation xenon detectors XENONnT and LUX-ZEPLIN, which should achieve yoctobarn sensitivity. DARWIN may be the ultimate experiment along these lines, in the sense that there is no prefix smaller than yocto: it will reach the irreducible background from atmospheric neutrinos, after which new detection techniques will be needed. For dark matter masses closer to 1 GeV, several orders of magnitude of pristine parameter space will be covered by the SuperCDMS experiment. Until then we are kept in suspense. Is dark matter made of WIMPs? And if yes, does it stick out above the neutrino sea?